SlowGuess committed on
Commit b68cbf2 (verified) · Parent: a171af3

Add Batch 6de36c21-2e37-4de0-947e-b64c0a1d4806

This view is limited to 50 files because the commit contains more changes.

Files changed (50)
  1. abstractrationalestanceajointmodelforscientificclaimverification/e1bac4cb-6fd1-408c-9ad9-35a1e88700eb_content_list.json +3 -0
  2. abstractrationalestanceajointmodelforscientificclaimverification/e1bac4cb-6fd1-408c-9ad9-35a1e88700eb_model.json +3 -0
  3. abstractrationalestanceajointmodelforscientificclaimverification/e1bac4cb-6fd1-408c-9ad9-35a1e88700eb_origin.pdf +3 -0
  4. abstractrationalestanceajointmodelforscientificclaimverification/full.md +169 -0
  5. abstractrationalestanceajointmodelforscientificclaimverification/images.zip +3 -0
  6. abstractrationalestanceajointmodelforscientificclaimverification/layout.json +3 -0
  7. achievingmodelrobustnessthroughdiscreteadversarialtraining/fe2e693d-68c9-4f6e-979a-12bdc97cf1ca_content_list.json +3 -0
  8. achievingmodelrobustnessthroughdiscreteadversarialtraining/fe2e693d-68c9-4f6e-979a-12bdc97cf1ca_model.json +3 -0
  9. achievingmodelrobustnessthroughdiscreteadversarialtraining/fe2e693d-68c9-4f6e-979a-12bdc97cf1ca_origin.pdf +3 -0
  10. achievingmodelrobustnessthroughdiscreteadversarialtraining/full.md +382 -0
  11. achievingmodelrobustnessthroughdiscreteadversarialtraining/images.zip +3 -0
  12. achievingmodelrobustnessthroughdiscreteadversarialtraining/layout.json +3 -0
  13. activeeaactivelearningforneuralentityalignment/6c276802-f88b-4712-a9bd-2af9ec2f28f3_content_list.json +3 -0
  14. activeeaactivelearningforneuralentityalignment/6c276802-f88b-4712-a9bd-2af9ec2f28f3_model.json +3 -0
  15. activeeaactivelearningforneuralentityalignment/6c276802-f88b-4712-a9bd-2af9ec2f28f3_origin.pdf +3 -0
  16. activeeaactivelearningforneuralentityalignment/full.md +388 -0
  17. activeeaactivelearningforneuralentityalignment/images.zip +3 -0
  18. activeeaactivelearningforneuralentityalignment/layout.json +3 -0
  19. activelearningbyacquiringcontrastiveexamples/37c9239e-15e2-421b-87ff-13f279cd75ca_content_list.json +3 -0
  20. activelearningbyacquiringcontrastiveexamples/37c9239e-15e2-421b-87ff-13f279cd75ca_model.json +3 -0
  21. activelearningbyacquiringcontrastiveexamples/37c9239e-15e2-421b-87ff-13f279cd75ca_origin.pdf +3 -0
  22. activelearningbyacquiringcontrastiveexamples/full.md +335 -0
  23. activelearningbyacquiringcontrastiveexamples/images.zip +3 -0
  24. activelearningbyacquiringcontrastiveexamples/layout.json +3 -0
  25. adapterdropontheefficiencyofadaptersintransformers/536dfedc-2365-49a3-845c-b13dc83a8e29_content_list.json +3 -0
  26. adapterdropontheefficiencyofadaptersintransformers/536dfedc-2365-49a3-845c-b13dc83a8e29_model.json +3 -0
  27. adapterdropontheefficiencyofadaptersintransformers/536dfedc-2365-49a3-845c-b13dc83a8e29_origin.pdf +3 -0
  28. adapterdropontheefficiencyofadaptersintransformers/full.md +394 -0
  29. adapterdropontheefficiencyofadaptersintransformers/images.zip +3 -0
  30. adapterdropontheefficiencyofadaptersintransformers/layout.json +3 -0
  31. adaptivebridgebetweentrainingandinferencefordialoguegeneration/6fa3fe0e-62de-45f0-ba81-df46f9573b91_content_list.json +3 -0
  32. adaptivebridgebetweentrainingandinferencefordialoguegeneration/6fa3fe0e-62de-45f0-ba81-df46f9573b91_model.json +3 -0
  33. adaptivebridgebetweentrainingandinferencefordialoguegeneration/6fa3fe0e-62de-45f0-ba81-df46f9573b91_origin.pdf +3 -0
  34. adaptivebridgebetweentrainingandinferencefordialoguegeneration/full.md +297 -0
  35. adaptivebridgebetweentrainingandinferencefordialoguegeneration/images.zip +3 -0
  36. adaptivebridgebetweentrainingandinferencefordialoguegeneration/layout.json +3 -0
  37. adaptiveinformationseekingforopendomainquestionanswering/f4da35df-c204-42f1-83d2-a4b164644b84_content_list.json +3 -0
  38. adaptiveinformationseekingforopendomainquestionanswering/f4da35df-c204-42f1-83d2-a4b164644b84_model.json +3 -0
  39. adaptiveinformationseekingforopendomainquestionanswering/f4da35df-c204-42f1-83d2-a4b164644b84_origin.pdf +3 -0
  40. adaptiveinformationseekingforopendomainquestionanswering/full.md +412 -0
  41. adaptiveinformationseekingforopendomainquestionanswering/images.zip +3 -0
  42. adaptiveinformationseekingforopendomainquestionanswering/layout.json +3 -0
  43. adaptiveproposalgenerationnetworkfortemporalsentencelocalizationinvideos/51a8b223-472d-4ecb-986e-b1002054a829_content_list.json +3 -0
  44. adaptiveproposalgenerationnetworkfortemporalsentencelocalizationinvideos/51a8b223-472d-4ecb-986e-b1002054a829_model.json +3 -0
  45. adaptiveproposalgenerationnetworkfortemporalsentencelocalizationinvideos/51a8b223-472d-4ecb-986e-b1002054a829_origin.pdf +3 -0
  46. adaptiveproposalgenerationnetworkfortemporalsentencelocalizationinvideos/full.md +338 -0
  47. adaptiveproposalgenerationnetworkfortemporalsentencelocalizationinvideos/images.zip +3 -0
  48. adaptiveproposalgenerationnetworkfortemporalsentencelocalizationinvideos/layout.json +3 -0
  49. adversarialattackagainstcrosslingualknowledgegraphalignment/e36b4ac8-9a38-4219-858d-047e5ed2caa4_content_list.json +3 -0
  50. adversarialattackagainstcrosslingualknowledgegraphalignment/e36b4ac8-9a38-4219-858d-047e5ed2caa4_model.json +3 -0
abstractrationalestanceajointmodelforscientificclaimverification/e1bac4cb-6fd1-408c-9ad9-35a1e88700eb_content_list.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7320209a3db6c5c40b15cc5a1201833fbb5f9f5615a6fc7371eb762b5548593a
size 47901
abstractrationalestanceajointmodelforscientificclaimverification/e1bac4cb-6fd1-408c-9ad9-35a1e88700eb_model.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e8329d1931c4bc65e82c037375125a463443f2f783b123fda121bcfe0f327594
size 56602
abstractrationalestanceajointmodelforscientificclaimverification/e1bac4cb-6fd1-408c-9ad9-35a1e88700eb_origin.pdf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:269dc5f6cedcd7af70b19a8c319620dbdf06c6f212ab7b05d1715b66ac271114
size 874521
abstractrationalestanceajointmodelforscientificclaimverification/full.md ADDED
@@ -0,0 +1,169 @@
# Abstract, Rationale, Stance: A Joint Model for Scientific Claim Verification

Zhiwei Zhang $^{1,2}$ , Jiyi Li $^{2*}$ , Fumiyo Fukumoto $^{2}$ and Yanming Ye $^{1}$

$^{1}$ Hangzhou Dianzi University, Hangzhou, China

$^{2}$ University of Yamanashi, Kofu, Japan

hdluxiaozhi97@gmail.com, {jyli, fukumoto}@yamanashi.ac.jp
yeym@hdu.edu.cn

# Abstract

Scientific claim verification helps researchers find, for a given claim, the target scientific papers together with sentence-level evidence from a large corpus. Some existing works propose pipeline models for the three tasks of abstract retrieval, rationale selection and stance prediction. Such works suffer from error propagation among the modules in the pipeline and cannot share valuable information across modules. We thus propose an approach, named ARSJOINT, that jointly learns the modules for the three tasks within a machine reading comprehension framework by including the claim information. In addition, we enhance the information exchange and constraints among the tasks by proposing a regularization term between the sentence attention scores of abstract retrieval and the estimated outputs of rationale selection. The experimental results on the benchmark dataset SCIFACT show that our approach outperforms the existing works.

# 1 Introduction

A system for scientific claim verification can help researchers easily find, for a given claim, the target scientific papers together with sentence-level evidence from a large corpus. To address this issue, Wadden et al. (2020) introduced scientific claim verification, which consists of three tasks. As illustrated in Figure 1, for a given claim, the system finds the abstracts related to the claim in a scholarly document corpus (abstract retrieval task); it selects the sentences in the abstract that serve as evidence for the claim (rationale selection task); and it classifies whether the abstract/sentences support or refute the claim (stance prediction task). Wadden et al. (2020) also provided a dataset called SCIFACT.

Most of the existing works on general claim verification are based on pipeline models (Soleimani et al., 2020; Alonso-Reina et al., 2019; Liu et al.,

![](images/582f6611664416a1d4893e5d3dff788ff02194fb574ffce9eda15fb901958fb5.jpg)
Figure 1: An example of scientific claim verification.

2020; Zhou et al., 2019; Nie et al., 2019; Lee et al., 2020b); some works utilize joint optimization strategies (Lu and Li, 2020; Yin and Roth, 2018; Hidey et al., 2020). These models attempted to jointly optimize rationale selection and stance prediction, but did not directly link the two modules (Li et al., 2020). In the case of scientific claim verification, Wadden et al. (2020) proposed a baseline model VERISCI based on a pipeline of three components for the three tasks. Pradeep et al. (2021) proposed a pipeline model called VERT5ERINI which utilized the pre-trained language model T5 (Raffel et al., 2020) and adapted a pre-trained sequence-to-sequence model. Li et al. (2020) jointly trained the two tasks of rationale selection and stance prediction, and pipelined the abstract retrieval task with the joint module.

The above existing works on scientific claim verification are fully or partially pipeline solutions. One problem with these works is error propagation among the modules in the pipeline. Another is that modules trained independently in a pipeline cannot share and leverage valuable information with each other. We therefore propose an approach, named ARSJOINT, which jointly learns the three modules for the three tasks. It has a Machine Reading Comprehension (MRC) framework which uses the claim content as the query to learn additional information. In addition, we assume that the abstract retrieval module should have good interpretability and tend to assign high sentence-level attention scores to the evidence sentences that influence the retrieval results; this is consistent with the goal of the rationale selection module. We thus enhance the information exchange and constraints among the tasks by proposing a regularization term based on a symmetric divergence to bridge these two modules.

The experimental results on the benchmark dataset SCIFACT show that the proposed approach outperforms the existing works. The main contributions of this paper can be summarized as follows. (1) We propose a scientific claim verification approach which jointly trains the three tasks in an MRC framework. (2) We propose a regularization based on the divergence between the sentence attention of abstract retrieval and the outputs of rationale selection.

# 2 Our Approach

# 2.1 Notation and Definitions

We denote the query claim as $q$ and an abstract of a scientific paper as $a \in \mathcal{A}$ . We denote the set of sentences in abstract $a$ as $S = \{s_i\}_{i=1}^l$ , where the word sequence of $s_i$ is $[s_{i1}, \ldots, s_{in_i}]$ . The title of the paper $t \in \mathcal{T}$ is used as auxiliary information; the word sequence of $t$ is $[t_1, \ldots, t_{n_t}]$ . Here, $S$ , $s_i$ and $t$ refer to $a$ by default, and we omit the subscript '$a$' in the notation. The purpose of the abstract retrieval task is to detect the set of abstracts related to $q$ ; it assigns a relevance label $y^b \in \{0,1\}$ to a candidate abstract $a$ . The rationale selection task is to detect the decisive rationale sentences $S^r \subseteq S$ of $a$ relevant to the claim $q$ ; it assigns an evidence label $y_i^r \in \{0,1\}$ to each sentence $s_i \in S$ . The stance prediction task classifies $a$ into a stance label $y^e \in$ {SUPPORTS=0, REFUTES=1, NOINFO=2}. The sentences in $a$ share the same stance label.

# 2.2 Pre-processing

As there is a huge number of papers in the corpus, applying all of them to the proposed model is time-consuming. Therefore, similar to the existing works on this topic (Wadden et al., 2020; Pradeep et al., 2021; Li et al., 2020), we also utilize a lightweight method to first roughly select a set of candidate papers. We use BioSentVec (Chen et al., 2019; Pagliardini et al., 2018) to obtain the embeddings of the claim and of each scientific paper (based on its title and abstract), and compute the cosine similarity between the claim and the paper. The papers with the top- $k$ similarities are used as the candidates.
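The candidate-selection step above can be sketched as follows. This is a minimal illustration that assumes the claim and paper embeddings (e.g., from BioSentVec) are already available as dense vectors; the function name `top_k_candidates` is ours, not from the authors' code.

```python
import numpy as np

def top_k_candidates(claim_emb, paper_embs, k):
    """Return indices of the k papers most cosine-similar to the claim.

    claim_emb:  (d,) embedding of the claim.
    paper_embs: (n, d) embeddings of title+abstract for each paper.
    """
    # Normalize so the dot product equals cosine similarity.
    c = claim_emb / np.linalg.norm(claim_emb)
    p = paper_embs / np.linalg.norm(paper_embs, axis=1, keepdims=True)
    sims = p @ c
    # Indices of the top-k similarities, highest first.
    return np.argsort(-sims)[:k]
```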

# 2.3 Joint Abstract, Rationale, Stance Model

The input sequence of our model is defined as $seq = [[\mathrm{CLS}]q[\mathrm{SEP}]t \cdots [\mathrm{SEP}]s_i[\mathrm{SEP}] \cdots]$ , which is obtained by concatenating the claim $q$ , title $t$ and abstract $a$ . We compute the list of word representations $\mathbf{H}_{seq}$ of the input sequence with a pre-trained language model (e.g., BioBERT (Lee et al., 2020a)). We obtain the word representations of the claim $\mathbf{H}_q = [\mathbf{h}_{q_1}, \dots, \mathbf{h}_{q_{n_q}}]$ , the title $\mathbf{H}_t = [\mathbf{h}_{t_1}, \dots, \mathbf{h}_{t_{n_t}}]$ , each sentence $\mathbf{H}_{s_i} = [\mathbf{h}_{s_{i1}}, \dots, \mathbf{h}_{s_{in_i}}]$ , and the abstract $\mathbf{H}_S = \mathbf{H}_a = [\dots, \mathbf{H}_{s_i}, \dots]$ from $\mathbf{H}_{seq}$ and use them in our ARSJOINT model. Figure 2 shows the framework of our model with three modules for the three tasks.

In all three modules, we use an attention layer (denoted as $g(\cdot)$ ) on word (sentence) representations to compute a sentence (document) representation. A document can be a claim, title, abstract, or their combinations. The computation is as follows (refer to (Li et al., 2020)), where the $*$ in $\mathbf{H}_{*}$ represents any type of sentence (claim $q$ , title $t$ or a sentence $s$ in an abstract), the $\star$ in $\mathbf{H}_{\star}$ represents any type of document, and $\mathbf{W}$ and $\mathbf{b}$ are trainable parameters.

$$
\begin{array}{c} g(\mathbf{H}_{*}) = \sum_{i} \mathbf{u}_{*i} \boldsymbol{\alpha}_{*i}, \quad \boldsymbol{\alpha}_{*i} = \frac{\exp(\mathbf{W}_{w_2} \mathbf{u}_{*i} + \mathbf{b}_{w_2})}{\sum_{j} \exp(\mathbf{W}_{w_2} \mathbf{u}_{*j} + \mathbf{b}_{w_2})}, \\ \mathbf{u}_{*j} = \tanh(\mathbf{W}_{w_1} \mathbf{h}_{*j} + \mathbf{b}_{w_1}) \ \text{for word-level attention}, \end{array}
$$

$$
\begin{array}{c} g(\mathbf{H}_{\star}) = \sum_{i} \mathbf{U}_{\star i} \boldsymbol{\alpha}_{\star i}, \quad \boldsymbol{\alpha}_{\star i} = \frac{\exp(\mathbf{W}_{c_2} \mathbf{U}_{\star i} + \mathbf{b}_{c_2})}{\sum_{j} \exp(\mathbf{W}_{c_2} \mathbf{U}_{\star j} + \mathbf{b}_{c_2})}, \\ \mathbf{U}_{\star j} = \tanh(\mathbf{W}_{c_1} \mathbf{H}_{\star j} + \mathbf{b}_{c_1}) \ \text{for sentence-level attention.} \end{array} \tag{1}
$$

Abstract Retrieval: In this task, the title can be regarded as an auxiliary sentence that may contain information relating the abstract to the claim; we thus use the title together with the sentences in the abstract. We build a document $ta = [t, a]$ and concatenate the word representations of $t$ and $a$ into $\mathbf{H}_{ta} = [\mathbf{H}_t, \mathbf{H}_a]$ as the input to this module. We use a hierarchical attention network (HAN) (Yang et al., 2016) to compute the document representation $\mathbf{h}_{ta} \in \mathbb{R}^d$ , $\mathbf{h}_{ta} = \mathrm{HAN}(\mathbf{H}_{ta})$ . HAN is suitable for document classification because it considers the hierarchical document structure (a document has sentences, a sentence has words). We also compute the sentence representation of the claim $\mathbf{h}_q \in \mathbb{R}^d$ with a word-level attention layer (denoted as $g(\cdot)$ ), $\mathbf{h}_q = g(\mathbf{H}_q)$ . To compute the relevance between $\mathbf{h}_{ta}$ and $\mathbf{h}_q$ , we apply their Hadamard product to a Multi-Layer Perceptron (MLP, denoted as $f(\cdot)$ ) with Softmax (denoted as $\sigma(\cdot)$ ); the outputs

![](images/87e32f8b5d7a37c014df2b8b8fbdf00ea9f3e3e66fbe989c93afa385a329c6d0.jpg)
Figure 2: Framework of our ARSJOINT model which jointly learns three modules and has rationale regularization.

are the probabilities of whether the abstract is relevant to the claim, $[p_0^b,p_1^b ] = \sigma (f(\mathbf{h}_q\circ \mathbf{h}_{ta}))$ . A cross-entropy loss $\mathcal{L}_{ret}$ is used for training.
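The relevance head can be sketched as follows. The two-layer MLP matches the setup described in the experiments, but the hidden size, the tanh activation, and the function names are our own assumptions for illustration.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def relevance_probs(h_q, h_ta, W1, b1, W2, b2):
    """[p0_b, p1_b] = softmax(MLP(h_q ∘ h_ta)) with a two-layer MLP."""
    x = h_q * h_ta                  # Hadamard product of claim and document vectors
    hidden = np.tanh(W1 @ x + b1)   # first MLP layer (activation assumed)
    logits = W2 @ hidden + b2       # two output logits: irrelevant / relevant
    return softmax(logits)
```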

Rationale Selection: This task judges whether each sentence in the abstract is a rationale sentence or not. The sentences in an abstract share the same title information but have different rationale labels; hence, when judging each sentence, using the title may not positively influence the performance. We thus use the word representations $\mathbf{H}_a$ of the abstract as input. We compute the sentence representation $\mathbf{h}_{s_i}$ with a word-level attention layer, and use an MLP with Softmax to estimate the probabilities $p_{i1}^r$ and $p_{i0}^r$ of whether $s_i$ is an evidence sentence of the abstract or not. The cross-entropy loss is $\mathcal{L}_{rat}$ .

Stance Prediction: This module first computes the sentence representations $\mathbf{h}_{s_i}$ in the same way as in rationale selection. After that, it keeps only the sentences $S^r$ with the true evidence label $y_i^r = 1$ or the estimated evidence probability $p_{i1}^r > p_{i0}^r$ ; whether to use the true label or the estimated label is decided by a scheduled sampling scheme which will be introduced later. We then compute the estimated stance label with a sentence-level attention layer and an MLP with Softmax, $\mathbf{h}_{S^r} = g(\mathbf{H}_{S^r})$ and $[p_0^e, p_1^e, p_2^e] = \sigma(f(\mathbf{h}_q \circ \mathbf{h}_{S^r}))$ , where $S^r = \{s_i \in S \mid y_i^r = 1 \text{ or } p_{i1}^r > p_{i0}^r\}$ . The cross-entropy loss is $\mathcal{L}_{sta}$ .

Scheduled Sampling: Since the rationale sentences $S^r$ are used in stance prediction, errors of the rationale selection module propagate to the stance prediction module. To alleviate this problem, following (Li et al., 2020), we use a scheduled sampling method (Bengio et al., 2015), which is to

<table><tr><td></td><td>SUPPORT</td><td>NOINFO</td><td>REFUTES</td><td>ALL</td></tr><tr><td>Train</td><td>332 / 370</td><td>304 / 220</td><td>173 / 194</td><td>809</td></tr><tr><td>Dev.</td><td>124 / 138</td><td>112 / 114</td><td>64 / 71</td><td>300</td></tr><tr><td>ALL</td><td>456 / 508</td><td>416 / 444</td><td>237 / 265</td><td>1109</td></tr></table>

Table 1: Statistics of the SCIFACT dataset. The numbers are "number of claims / number of relevant abstracts".

feed the sentences with the true evidence label $y_i^r = 1$ to the stance prediction module at the beginning, and then gradually increase the proportion of sentences selected by the estimated evidence probability $p_{i1}^{r} > p_{i0}^{r}$ , until eventually all sentences in $S^r$ are based on the estimated evidence. We set the sampling probability of using the estimated evidence to $p_{\text{sample}} = \sin \left( \frac{\pi}{2} \times \frac{\text{current\_epoch} - 1}{\text{total\_epoch} - 1} \right)$ .
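The sine schedule above can be written directly as a small helper (function name ours):

```python
import math

def p_sample(current_epoch, total_epoch):
    """Probability of forming S^r from the model's own rationale predictions
    (rather than the gold labels) for stance prediction.
    Rises from 0 at epoch 1 to 1 at the final epoch along a sine curve."""
    return math.sin(math.pi / 2 * (current_epoch - 1) / (total_epoch - 1))
```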

Rationale Regularization (RR): Attention scores have been used for interpretability in NLP tasks (Serrano and Smith, 2019; Wiegreffe and Pinter, 2019; Sun and Lu, 2020). We assume that the abstract retrieval module should have good interpretability and tend to assign high sentence-level attention scores to the evidence sentences that influence the retrieval results; this is consistent with the goal of the rationale selection module. We thus enhance the information exchange and constraints among the tasks by proposing a regularization term based on a symmetric divergence between the sentence attention scores $\boldsymbol{\alpha}$ of abstract retrieval and the estimated outputs $\mathbf{y}^r$ of rationale selection, to bridge these two modules. The detailed formula is as follows, where $\mathbf{p}$ and $\mathbf{q}$ stand for $\boldsymbol{\alpha}$ or $\mathbf{y}^r$ .

$$
\mathcal{D}(\mathbf{p} \| \mathbf{q}) = - \sum_{i=1}^{l} \left( \mathbf{p}_i \log(\mathbf{q}_i) + (1 - \mathbf{p}_i) \log(1 - \mathbf{q}_i) \right),
$$

$$
\mathcal{L}_{RR} = \mathcal{D}(\boldsymbol{\alpha} \| \mathbf{y}^r) + \mathcal{D}(\mathbf{y}^r \| \boldsymbol{\alpha}). \tag{2}
$$

Joint Training: We jointly train our model on abstract retrieval, rationale selection and stance prediction. The joint loss with our RR is $\mathcal{L} = \lambda_1\mathcal{L}_{ret} + \lambda_2\mathcal{L}_{rat} + \lambda_3\mathcal{L}_{sta} + \gamma \mathcal{L}_{RR}$ , where $\lambda_{1},\lambda_{2},\lambda_{3}$ and $\gamma$ are hyperparameters.

# 3 Experiments

# 3.1 Experimental Settings

Dataset: We utilize the benchmark dataset SCIFACT<sup>1</sup>. It consists of 5,183 scientific papers with titles and abstracts, and 1,109 claims in the training and development sets. Table 1 presents the statistics of the dataset.

Experimental Settings: For our ARSJOINT model, we use Optuna (Akiba et al., 2019) to tune the hyperparameters $\lambda_{1},\lambda_{2},\lambda_{3}$ and $\gamma$ of the loss $\mathcal{L}$ : we train on $20\%$ of the training set and select based on the performance on another $20\%$ of the training set. We choose the optimal hyperparameters by the average F1-score of the abstract-level and sentence-level evaluations. The search range of each of these four hyperparameters is set to [0.1, 12], and the number of search trials is set to 100. Table 2 lists the selected weight hyperparameters of our model. The other hyperparameters, such as the learning rate, follow the ones used in existing work (Li et al., 2020) to make a fair comparison; they are listed in Table 3.

We implement our ARSJOINT model in PyTorch. Since the length of the input sequence $seq$ is often greater than the maximum input length of a BERT-based model, we perform a tail-truncation operation on each sentence of $seq$ that exceeds the maximum input length. For the pre-trained language model, we verify our approach by respectively using RoBERTa-large (Liu et al., 2019) and BioBERT-large (Lee et al., 2020a), the latter trained on a biomedical corpus. We fine-tune RoBERTa-large and BioBERT-large on the SCIFACT dataset. In addition, the MLP in our model has two layers.

We compare our ARSJOINT approach with Paragraph-Joint (Li et al., 2020), VERISCI (Wadden et al., 2020) and VERT5ERINI (Pradeep et al., 2021), using their publicly available code<sup>2</sup>. The "Paragraph-Joint Pre-training" model is pre-trained on the FEVER dataset (Thorne et al., 2018) and then fine-tuned on the SCIFACT dataset. The "Paragraph-Joint SCIFACT-only" model is not pre-trained on other datasets.

<table><tr><td>Model</td><td>λ1</td><td>λ2</td><td>λ3</td><td>γ</td></tr><tr><td>ARSJOINT w/o RR (RoBERTa)</td><td>2.7</td><td>11.7</td><td>2.2</td><td>-</td></tr><tr><td>ARSJOINT (RoBERTa)</td><td>0.9</td><td>11.1</td><td>2.6</td><td>2.2</td></tr><tr><td>ARSJOINT w/o RR (BioBERT)</td><td>0.1</td><td>10.8</td><td>4.7</td><td>-</td></tr><tr><td>ARSJOINT (BioBERT)</td><td>0.2</td><td>12.0</td><td>1.1</td><td>1.9</td></tr></table>

Table 2: Hyperparameters selected by Optuna for different variants of our model. "w/o RR" means the model does not utilize rationale regularization.

<table><tr><td>Name</td><td>Value</td><td>Name</td><td>Value</td><td>Name</td><td>Value</td></tr><tr><td>ktra</td><td>12</td><td>lr1</td><td>1 × 10-5</td><td>Batch size</td><td>1</td></tr><tr><td>kret</td><td>30</td><td>lr2</td><td>5 × 10-6</td><td>Dropout</td><td>0</td></tr></table>

Table 3: Hyperparameter settings following the existing work. $k_{tra}$ and $k_{ret}$ are the numbers of candidate abstracts for each claim in the training and testing stages. $lr_1$ and $lr_2$ are the learning rates of the BERT-based model and the other modules of the proposed model.

Evaluation: We evaluate the methods using the abstract-level and sentence-level evaluation criteria given in SCIFACT<sup>1</sup>. Abstract-level evaluation: It evaluates the performance of a model on detecting the abstracts which support or refute the claims. For the "Label-Only" evaluation, given a claim $q$ , the classification result of an abstract $a$ is correct if the estimated relevance label $\hat{y}^b$ is correct and the estimated stance label $\hat{y}^e$ is correct. For the "Label+Rationale" evaluation, the abstract is correct if, in addition, the estimated rationale sentences contain a gold rationale. Sentence-level evaluation: It evaluates the performance of a model on detecting rationale sentences. For the "Selection-Only" evaluation, an estimated rationale sentence $s_i$ of an abstract $a$ is correctly selected if the estimated rationale label $\hat{y}_i^r$ is correct and the estimated stance label $\hat{y}^e$ is not "NOINFO". In particular, if multiple consecutive sentences are gold rationales, then all of these sentences should be estimated as rationales. For the "Selection+Label" evaluation, the estimated rationale sentences are correct if, in addition, the estimated stance label $\hat{y}^e$ of the abstract is correct. The evaluation metrics F1-score (F1), Precision (P), and Recall (R) are used. We train the model using all training data; since Wadden et al. (2020) do not publish the labels of the test set, we evaluate the approaches on the development set following (Li et al., 2020).

# 3.2 Experimental Results

<table><tr><td rowspan="3">Models</td><td colspan="6">Sentence-level</td><td colspan="6">Abstract-level</td></tr><tr><td colspan="3">Selection-Only</td><td colspan="3">Selection+Label</td><td colspan="3">Label-Only</td><td colspan="3">Label+Rationale</td></tr><tr><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td></tr><tr><td>VERISCI</td><td>54.3</td><td>43.4</td><td>48.3</td><td>48.5</td><td>38.8</td><td>43.1</td><td>56.4</td><td>48.3</td><td>52.1</td><td>54.2</td><td>46.4</td><td>50.0</td></tr><tr><td>Paragraph-Joint SCIFACT-only</td><td>69.3</td><td>50.0</td><td>58.1</td><td>59.8</td><td>43.2</td><td>50.2</td><td>69.9</td><td>52.1</td><td>59.7</td><td>64.7</td><td>48.3</td><td>55.3</td></tr><tr><td>Paragraph-Joint Pre-training</td><td>74.2</td><td>57.4</td><td>64.7</td><td>63.3</td><td>48.9</td><td>55.2</td><td>71.4</td><td>59.8</td><td>65.1</td><td>65.7</td><td>55.0</td><td>59.9</td></tr><tr><td>VERT5ERINI (BM25)</td><td>67.7</td><td>53.8</td><td>60.0</td><td>63.9</td><td>50.8</td><td>56.6</td><td>70.9</td><td>61.7</td><td>66.0</td><td>67.0</td><td>58.4</td><td>62.4</td></tr><tr><td>VERT5ERINI (T5)</td><td>64.8</td><td>57.4</td><td>60.9</td><td>60.8</td><td>53.8</td><td>57.1</td><td>65.1</td><td>65.1</td><td>65.1</td><td>61.7</td><td>61.7</td><td>61.7</td></tr><tr><td>ARSJOINT w/o RR (RoBERTa)</td><td>70.9</td><td>56.6</td><td>62.9</td><td>56.8</td><td>45.4</td><td>50.5</td><td>66.1</td><td>56.0</td><td>60.6</td><td>61.0</td><td>51.7</td><td>56.0</td></tr><tr><td>ARSJOINT (RoBERTa)</td><td>67.9</td><td>57.1</td><td>62.0</td><td>55.5</td><td>46.7</td><td>50.7</td><td>64.5</td><td>57.4</td><td>60.8</td><td>59.1</td><td>52.6</td><td>55.7</td></tr><tr><td>ARSJOINT w/o RR (BioBERT)</td><td>75.4</td><td>57.7</td><td>65.3</td><td>63.6</td><td>48.6</td><td>55.1</td><td>72.7</td><td>57.4</td><td>64.2</td><td>67.9</td><td>53.6</td><td>59.9</td></tr><tr><td>ARSJOINT (BioBERT)</td><td>76.2</td><td>58.5</td><td>66.2</td><td>66.5</td><td>51.1</td><td>57.8</td><td>75.3</td><td>59.8</td><td>66.7</td><td>70.5</td><td>56.0</td><td>62.4</td></tr></table>

Table 4: Main experimental results.

Table 4 shows the main experimental results. First, the proposed method ARSJOINT (BioBERT) outperforms the existing works, which are fully or partially pipeline-based: VERISCI and VERT5ERINI are pipeline models, and Paragraph-Joint is a partially pipeline model with a joint module for two of the tasks. This shows that the proposed model, which jointly learns the three tasks, effectively improves the performance.

Second, when using the same pre-trained model RoBERTa-large, both ARSJOINT (RoBERTa) and ARSJOINT w/o RR (RoBERTa) perform better than "Paragraph-Joint SCIFACT-only", especially on Recall. This shows that jointly learning with the abstract retrieval task can improve performance. For the Paragraph-Joint method, "Paragraph-Joint Pre-training", which is pre-trained on the FEVER dataset, performs much better than "Paragraph-Joint SCIFACT-only" without pre-training on other datasets. Similarly, when we replace RoBERTa-large with BioBERT-large, which contains biological knowledge, ARSJOINT (BioBERT) achieves better performance than "Paragraph-Joint Pre-training".

Third, as an ablation study of the proposed RR, in the case of BioBERT-large there is a significant difference between the models with and without RR. Although the difference is small in the case of RoBERTa-large, there is still an improvement on Recall. This indicates that rationale regularization can effectively improve the performance of the model. Table 5 shows an example of the results with RR: it lists a claim and the sentences from an abstract. The attention scores of the sentences in the abstract retrieval task are consistent with the true rationale labels (as well as the estimated rationale labels). The abstract retrieval module thus has good interpretability.

<table><tr><td>Claim: Ly6C hi monocytes have a lower inflammatory capacity than Ly6C lo monocytes.</td><td>$\alpha_i$</td><td>$\hat{y}_i^r$</td><td>$y_i^r$</td></tr><tr><td>Blood monocytes are well-characterized precursors for macrophages and dendritic cells.</td><td>0.0745</td><td>0</td><td>0</td></tr><tr><td colspan="4">......</td></tr><tr><td>Under inflammatory conditions elicited either by acute infection with Listeria monocytogenes or chronic infection with Leishmania major, there was a significant increase in immature Ly-6C(high) monocytes, resembling the inflammatory left shift of granulocytes.</td><td>0.0936</td><td>1</td><td>1</td></tr><tr><td>In addition, acute peritoneal inflammation recruited preferentially Ly-6C(med-high) monocytes.</td><td>0.1613</td><td>1</td><td>1</td></tr><tr><td>Taken together, these data identify distinct subpopulations of mouse blood monocytes that differ in maturation stage and capacity to become recruited to inflammatory sites.</td><td>0.0745</td><td>0</td><td>0</td></tr></table>

Table 5: Result example of Rationale Regularization. Given a claim, it lists the sentences from an abstract. $\alpha_{i}$ is the sentence attention score in the abstract retrieval task; $\hat{y}_i^r$ is the estimated rationale label; $y_i^r$ is the true rationale label.
+
134
+ # 4 Conclusion
135
+
136
+ In this paper, we propose a joint model named ARSJOINT for the three tasks of abstract retrieval, rationale selection and stance prediction in scientific claim verification, cast in a machine reading comprehension (MRC) framework by including the claim. We also propose a regularization term based on the divergence between the sentence attention of the abstract retrieval task and the outputs of the rationale selection task. The experimental results illustrate that our method achieves better results on the benchmark dataset SCIFACT. In future work, we will try to pre-train the model on other general claim verification datasets such as FEVER (Thorne et al., 2018) to further improve performance.
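As an illustration of the kind of divergence term involved, the following is a generic KL-style sketch, not necessarily the paper's exact formulation: the rationale-selection probabilities are normalized into a distribution over the abstract's sentences and compared against the abstract-retrieval attention scores.

```python
import math

def rationale_regularizer(attn, rationale_probs, eps=1e-8):
    """Illustrative KL-style sketch (not necessarily the paper's exact
    formulation): normalize the rationale-selection probabilities into a
    distribution over the abstract's sentences and measure its divergence
    from the abstract-retrieval sentence attention `attn`."""
    z = sum(rationale_probs) + eps
    target = [p / z for p in rationale_probs]
    return sum(t * math.log((t + eps) / (a + eps)) for t, a in zip(target, attn))

attn = [0.7, 0.2, 0.1]
aligned = rationale_regularizer(attn, [0.7, 0.2, 0.1])
misaligned = rationale_regularizer(attn, [0.1, 0.2, 0.7])
print(aligned < misaligned)  # agreement between the two tasks is penalized less
```

Minimizing such a term pushes the attention of the retrieval module toward the sentences the rationale-selection module marks as evidence, which is consistent with the interpretability observed in Table 5.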
137
+
138
+ # Acknowledgments
139
+
140
+ This work was partially supported by KDDI Foundation Research Grant Program.
141
+
142
+ # References
143
+
144
+ Takuya Akiba, Shotaro Sano, Toshihiko Yanase, Takeru Ohta, and Masanori Koyama. 2019. Optuna: A next-generation hyperparameter optimization framework. In Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery & data mining, pages 2623-2631.
145
+ Aimée Alonso-Reina, Robert Sepulveda-Torres, Estela Saquete, and Manuel Palomar. 2019. Team gplsi: approach for automated fact checking. In Proceedings of the Second Workshop on Fact Extraction and VERIFICATION (FEVER), pages 110-114.
146
+ Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled sampling for sequence prediction with recurrent neural networks. In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 1, NIPS'15, page 1171-1179, Cambridge, MA, USA. MIT Press.
147
+ Qingyu Chen, Yifan Peng, and Zhiyong Lu. 2019. Biosentvec: creating sentence embeddings for biomedical texts. In 2019 IEEE International Conference on Healthcare Informatics (ICHI), pages 1-5. IEEE.
148
+ Christopher Hidey, Tuhin Chakrabarty, Tariq Alhindi, Siddharth Varia, Kriste Krstovski, Mona Diab, and Smaranda Muresan. 2020. DeSePtion: Dual sequence prediction and adversarial examples for improved fact-checking. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8593-8606, Online. Association for Computational Linguistics.
149
+ Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020a. Biobert: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240.
150
+ Nayeon Lee, Yejin Bang, Andrea Madotto, and Pascale Fung. 2020b. Misinformation has high perplexity. arXiv preprint arXiv:2006.04666.
151
+ Xiangci Li, Gully Burns, and Nanyun Peng. 2020. A paragraph-level multi-task learning model for scientific fact-verification. arXiv preprint arXiv:2012.14500.
152
+ Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.
153
+ Zhenghao Liu, Chenyan Xiong, Maosong Sun, and Zhiyuan Liu. 2020. Fine-grained fact verification with kernel graph attention network. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7342-7351.
154
+
155
+ Yi-Ju Lu and Cheng-Te Li. 2020. GCAN: Graph-aware co-attention networks for explainable fake news detection on social media. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 505-514, Online. Association for Computational Linguistics.
156
+ Yixin Nie, Haonan Chen, and Mohit Bansal. 2019. Combining fact extraction and verification with neural semantic matching networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6859-6866.
157
+ Matteo Pagliardini, Prakhar Gupta, and Martin Jaggi. 2018. Unsupervised learning of sentence embeddings using compositional n-gram features. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 528-540, New Orleans, Louisiana. Association for Computational Linguistics.
158
+ Ronak Pradeep, Xueguang Ma, Rodrigo Nogueira, and Jimmy Lin. 2021. Scientific claim verification with VerT5erini. In Proceedings of the 12th International Workshop on Health Text Mining and Information Analysis, pages 94-103, online. Association for Computational Linguistics.
159
+ Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67.
160
+ Sofia Serrano and Noah A. Smith. 2019. Is attention interpretable? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2931-2951, Florence, Italy. Association for Computational Linguistics.
161
+ Amir Soleimani, Christof Monz, and Marcel Worring. 2020. Bert for evidence retrieval and claim verification. In European Conference on Information Retrieval, pages 359-366. Springer.
162
+ Xiaobing Sun and Wei Lu. 2020. Understanding attention for text classification. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3418-3428, Online. Association for Computational Linguistics.
163
+ James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. FEVER: a large-scale dataset for fact extraction and VERIFICATION. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 809-819, New Orleans, Louisiana. Association for Computational Linguistics.
164
+
165
+ David Wadden, Shanchuan Lin, Kyle Lo, Lucy Lu Wang, Madeleine van Zuylen, Arman Cohan, and Hannaneh Hajishirzi. 2020. Fact or fiction: Verifying scientific claims. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7534-7550, Online. Association for Computational Linguistics.
166
+ Sarah Wiegreffe and Yuval Pinter. 2019. Attention is not not explanation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 11-20, Hong Kong, China. Association for Computational Linguistics.
167
+ Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the 2016 conference of the North American chapter of the association for computational linguistics: human language technologies, pages 1480-1489.
168
+ Wenpeng Yin and Dan Roth. 2018. TwoWingOS: A two-wing optimization strategy for evidential claim verification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 105-114, Brussels, Belgium. Association for Computational Linguistics.
169
+ Jie Zhou, Xu Han, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, and Maosong Sun. 2019. Gear: Graph-based evidence aggregating and reasoning for fact verification. In Proceedings of ACL 2019.
abstractrationalestanceajointmodelforscientificclaimverification/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:257ef4207f60c4a2cf0e0082c6012f811d690295fcc51b88c70711de8f7eb253
3
+ size 385685
abstractrationalestanceajointmodelforscientificclaimverification/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8ae082651477493561a5be5e73b0bd01013f6e05039b13a6b36866cc57e19692
3
+ size 261604
achievingmodelrobustnessthroughdiscreteadversarialtraining/fe2e693d-68c9-4f6e-979a-12bdc97cf1ca_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5615793074297e89d47da8480d03ca6b2d1e28510f832a8492666036cb8935e8
3
+ size 105954
achievingmodelrobustnessthroughdiscreteadversarialtraining/fe2e693d-68c9-4f6e-979a-12bdc97cf1ca_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:907ff0eee1a28f48d2efaa38a570a7b5cbcab88ef2f6bfd1a5b61c68a0f39d4e
3
+ size 127606
achievingmodelrobustnessthroughdiscreteadversarialtraining/fe2e693d-68c9-4f6e-979a-12bdc97cf1ca_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4bb71154ac32dc9180e6c3acdf16e648231096c72af7b6e7928cf7edb152618a
3
+ size 1532404
achievingmodelrobustnessthroughdiscreteadversarialtraining/full.md ADDED
@@ -0,0 +1,382 @@
 
 
 
 
1
+ # Achieving Model Robustness through Discrete Adversarial Training
2
+
3
+ Maor Ivgi
4
+
5
+ Tel-Aviv University
6
+
7
+ maorivgi@mail.tau.ac.il
8
+
9
+ Jonathan Berant
10
+
11
+ Tel-Aviv University
12
+
13
+ The Allen Institute for AI
14
+
15
+ joberant@cs.tau.ac.il
16
+
17
+ # Abstract
18
+
19
+ Discrete adversarial attacks are symbolic perturbations to a language input that preserve the output label but lead to a prediction error. While such attacks have been extensively explored for the purpose of evaluating model robustness, their utility for improving robustness has been limited to offline augmentation only. Concretely, given a trained model, attacks are used to generate perturbed (adversarial) examples, and the model is re-trained exactly once. In this work, we address this gap and leverage discrete attacks for online augmentation, where adversarial examples are generated at every training step, adapting to the changing nature of the model. We propose (i) a new discrete attack, based on best-first search, and (ii) random sampling attacks that unlike prior work are not based on expensive search-based procedures. Surprisingly, we find that random sampling leads to impressive gains in robustness, outperforming the commonly-used offline augmentation, while leading to a speedup at training time of $\sim 10\mathrm{x}$ . Furthermore, online augmentation with search-based attacks justifies the higher training cost, significantly improving robustness on three datasets. Last, we show that our new attack substantially improves robustness compared to prior methods.
20
+
21
+ # 1 Introduction
22
+
23
+ Adversarial examples are inputs that are slightly, but intentionally, perturbed to create a new example that is misclassified by a model (Szegedy et al., 2014). Adversarial examples have attracted immense attention in machine learning (Goodfellow et al., 2015; Carlini and Wagner, 2017; Papernot et al., 2017) for two important, but separate, reasons. First, they are useful for evaluating model robustness, and have revealed that current models are over-sensitive to minor perturbations. Second, adversarial examples can improve robustness: training on adversarial examples reduces the brittleness and over-sensitivity of deep learning models to
24
+
25
+ ![](images/8810663b71d1f0f9bc9c289a722a9c85db7c1b2479f4e2170802de80651e87b0.jpg)
26
+
27
+ ![](images/937baa9587da996922ebde48f2106651a7ba83d76a75b08429fd69d3a9a80178.jpg)
28
+
29
+ ![](images/d64334038a3a97c7837e7cca41b01ee2edb919f86aa45a2a7fd873f669e8aa4e.jpg)
30
+ Figure 1: Robust accuracy vs. slowdown in training time, comparing different methods to Baseline (purple pentagon); x-axis in logarithmic scale. The popular ADVOFF (blue squares, offline augmentation with adversarial examples) is $10\mathrm{x}$ slower than our simple augmentation with 4 (8) random samples (triangles, RANDOFF-4, RANDOFF-8) and achieves similar or worse robust accuracy. Our online augmentation with adversarial examples (ADVON, yellow circles) significantly improves robust accuracy, but is expensive to train.
31
+
32
+ such perturbations (Alzantot et al., 2018; Jin et al., 2020; Li et al., 2020; Lei et al., 2019; Wallace et al., 2019; Zhang et al., 2020; Garg and Ramakrishnan, 2020; Si et al., 2020a; Goel et al., 2021).
33
+
34
+ Training and evaluating models with adversarial examples has had considerable success in computer vision, with gradient-based techniques like FGSM (Goodfellow et al., 2015) and PGD (Madry et al., 2018). In computer vision, adversarial examples can be constructed by considering a continuous space of imperceptible perturbations around image pixels. Conversely, language is discrete, and any perturbation is perceptible. Thus, robust models must be invariant to input modifications that preserve semantics, such as synonym substitutions (Alzantot et al., 2018; Jin et al., 2020), paraphrasing (Tan et al., 2020), or typos (Huang et al., 2019).
35
+
36
+ Due to this property of language, ample work has been dedicated to developing discrete attacks that generate adversarial examples through combinatorial optimization (Alzantot et al., 2018; Ren et al., 2019; Jin et al., 2020; Zhou et al., 2020; Zang et al., 2020). For example, in sentiment analysis, it is common to consider the space of all synonym substitutions, where an adversarial example for an input "Such an amazing movie!" might be "Such an extraordinary film" (Fig. 2). This body of work has mostly focused on evaluating robustness, rather than improving it, which naturally led to the development of complex combinatorial search algorithms, whose goal is to find adversarial examples in the exponential space of perturbations.
37
+
38
+ In this work, we address a major research gap in current literature around improving robustness with discrete attacks. Specifically, past work (Alzantot et al., 2018; Ren et al., 2019; Jin et al., 2020) only considered offline augmentation, where a discrete attack is used to generate adversarial examples and the model is re-trained exactly once with those examples. This ignores online augmentation, which had success in computer vision (Kurakin et al., 2017; Perez and Wang, 2017; Madry et al., 2018), where adversarial examples are generated in each training step, adapting to the changing model. Moreover, simple data augmentation techniques, such as randomly sampling from the space of synonym substitutions and adding the generated samples to the training data have not been investigated and compared to offline adversarial augmentation. We address this lacuna and systematically compare online augmentation to offline augmentation, as well as to simple random sampling techniques. To our knowledge, we are the first to evaluate online augmentation with discrete attacks on a wide range of NLP tasks. Our results show that online augmentation leads to significant improvement in robustness compared to prior work and that simple random augmentation achieves comparable results to the common offline augmentation at a fraction of the complexity and training time.
39
+
40
+ Moreover, we present a new search algorithm for finding adversarial examples, Best-First search over a Factorized graph (BFF), which alleviates the greedy nature of previously-proposed algorithms. BFF improves search by incorporating backtracking, and allowing to re-visit previously-discarded search paths, once the current one is revealed to be sub-optimal.
41
+
42
+ ![](images/0a77b426876b36d98ceeb67a39b10c4b726b1a70d8fcfbb14c897a334655f20d.jpg)
43
+ Expected: Positive
44
+ Figure 2: Given a movie review $x$ , the model $A$ is robust to a set of perturbations, while $A'$ is not.
45
+
46
+ We evaluate model robustness on three datasets: BoolQ (Clark et al., 2019), IMDB (Maas et al., 2011), and SST-2 (Socher et al., 2013), which vary in terms of the target task (question answering and sentiment analysis) and input length. Surprisingly, we find across different tasks (Fig. 1) that augmenting each training example with 4-8 random samples from the synonym substitution space performs as well as (or better than) the commonly used offline augmentation, while being simpler and $10\mathrm{x}$ faster to train. Conversely, online augmentation makes better use of the extra computational cost, and substantially improves robust accuracy compared to offline augmentation. Additionally, our proposed discrete attack algorithm, BFF, outperforms prior work by a wide margin. Our data and code are available at https://github.com/Mivg/robust_transformers.
47
+
48
+ # 2 Problem Setup and Background
49
+
50
+ Problem setup We focus in this work on the supervised classification setup, where given a training set $\{(x_{j},y_{j})\}_{j = 1}^{N}$ sampled from $\mathcal{X}\times \mathcal{Y}$, our goal is to learn a mapping $A:\mathcal{X}\to \mathcal{Y}$ that achieves high accuracy on held-out data sampled from the same distribution. Moreover, we want the model $A$ to be robust, i.e., invariant to a set of pre-defined label-preserving perturbations to $x$, such as synonym substitutions. Formally, for any natural language input $x$, a discrete attack space of label-preserving perturbations $S(x)\subset \mathcal{X}$ is defined. Given a labeled example $(x,y)$, a model $A$ is robust w.r.t. $x$ if $A(x) = y$ and for any $\bar{x}\in S(x)$, the output $A(\bar{x}) = A(x)$. An example $\bar{x}\in S(x)$ such that $A(\bar{x})\neq A(x)$ is called an adversarial example. We assume $A$ provides not only a prediction but a distribution $p_A(x)\in \Delta^{|\mathcal{Y}|}$ over the possible classes, where $\Delta$ is the simplex, and denote the probability $A$ assigns to the gold label by $[p_A(x)]_y$. Fig. 2 shows an example from sentiment analysis,
51
+
52
+ where a model $A$ is robust, while $A^{\prime}$ is not w.r.t $x$ .
53
+
54
+ Robustness is evaluated with robust accuracy (Tsipras et al., 2019), i.e., the fraction of examples a model is robust to over some held-out data. Typically, the size of the attack space $S(x)$ is exponential in the size of $x$, and it is not feasible to enumerate all perturbations. Instead, an upper bound is estimated by searching for a set of adversarial attacks, i.e., "hard" examples in $S(x)$ for every $x$, and estimating robust accuracy w.r.t. that set.
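The estimate described above can be sketched as follows; `predict` and `attack` are toy stand-ins for the attacked model and the search-based attack, not the paper's implementations.

```python
def robust_accuracy(predict, attack, examples):
    """Upper-bound estimate of robust accuracy: an example counts as robust
    only if the prediction is correct on the original input AND on every
    adversarial candidate the attack finds."""
    robust = 0
    for x, y in examples:
        if predict(x) != y:
            continue  # already misclassified, hence not robust
        if all(predict(x_adv) == y for x_adv in attack(x, y)):
            robust += 1
    return robust / len(examples)

predict = lambda x: int("amazing" in x)                        # toy sentiment model
attack = lambda x, y: [x.replace("amazing", "extraordinary")]  # toy one-substitution attack
examples = [("such an amazing movie", 1), ("a dull movie", 0)]
print(robust_accuracy(predict, attack, examples))  # 0.5: the first example is attackable
```

Because the attack only explores part of $S(x)$, the quantity computed this way upper-bounds the true robust accuracy.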
55
+
56
+ Improving robustness with discrete attacks Since language is discrete, a typical approach for evaluating robustness is to use combinatorial optimization methods to search for adversarial examples in the attack space $S(x)$ . This has been repeatedly shown to be an effective attack method on pre-trained models (Alzantot et al., 2018; Lei et al., 2019; Ren et al., 2019; Li et al., 2020; Jin et al., 2020; Zang et al., 2020). However, in terms of improving robustness, discrete attacks have thus far been mostly used with offline augmentation (defined below) and have led to limited robustness gains. In this work, we examine the more costly but potentially more beneficial online augmentation.
57
+
58
+ Offline vs. online augmentation Data augmentation is a common approach for improving generalization and robustness, where variants of training examples are automatically generated and added to the training data (Simard et al., 1998). Here, discrete attacks can be used to generate these examples. We consider both offline and online data augmentation and focus on improving robustness with adversarial examples.
59
+
60
+ Given a training set $\{(x_j, y_j)\}_{j=1}^N$, offline data augmentation involves (a) training a model $A$ over the training data, (b) for each training example $(x_j, y_j)$, generating a perturbation w.r.t. $A$ (using some discrete attack) and labeling it with $y_j$, and (c) training a new model over the union of the original training set and the generated examples. This is termed offline augmentation because examples are generated with respect to a fixed model $A$.
61
+
62
+ Online data augmentation In this setup, examples are generated at training time w.r.t. the current model $A$. This is more computationally expensive, as examples must be generated during training rather than as pre-processing, but the examples can adapt to the model over time. In each step, half the batch contains examples from the training set, and half are adversarial examples generated by some discrete
63
+
64
+ attack w.r.t. the model's current state.
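A minimal sketch of how such a batch could be assembled, assuming a stand-in `attack(model, x, y)` that returns a label-preserving perturbation w.r.t. the current model:

```python
def online_augmented_batch(batch, attack, model):
    """Assemble one training step's inputs under online augmentation: half
    clean examples, half adversarial examples generated w.r.t. the *current*
    model, so the perturbations adapt as the model changes. `attack` is a
    stand-in for any discrete attack; labels are assumed preserved."""
    half = len(batch) // 2
    clean = batch[:half]
    adversarial = [(attack(model, x, y), y) for x, y in batch[half:]]
    return clean + adversarial

# Toy attack: appends a marker instead of substituting synonyms.
attack = lambda model, x, y: x + " [perturbed]"
batch = [("a", 0), ("b", 1), ("c", 0), ("d", 1)]
print(online_augmented_batch(batch, attack, model=None))
```

In offline augmentation, by contrast, the `attack` calls would all happen once, against a fixed model, before training the final model.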
65
+
66
+ Online augmentation has been used to improve robustness in NLP with gradient-based approaches (Jia et al., 2019; Shi et al., 2020; Zhou et al., 2020), but to the best of our knowledge has been overlooked in the context of discrete attacks. In this work, we are the first to propose model-agnostic online augmentation training, which uses automatically generated discrete adversarial attacks to boost overall robustness in NLP models.
67
+
68
+ # 3 The Attack Space
69
+
70
+ An attack space for an input with respect to a classification task can be intuitively defined as the set of label-preserving perturbations over the input. A popular attack space $S(x)$ , which we adopt, is the space of synonym substitutions (Alzantot et al., 2018; Ren et al., 2019). Given a synonym dictionary that provides a set of synonyms $\operatorname{Syn}(w)$ for any word $w$ , the attack space $S_{\operatorname{syn}}(x)$ for an utterance $x = (w_1, \dots, w_n)$ contains all utterances that can be obtained by replacing a word $w_i$ (and possibly multiple words) with one of their synonyms. Typically, the number of words from $x$ allowed to be substituted is limited to be no more than $D = \lceil d \cdot |x| \rceil$ , where $d \in \{0.1, 0.2\}$ is a common choice.
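As an illustration, sampling a single utterance from $S_{\mathrm{syn}}(x)$ under the bound $D = \lceil d \cdot |x| \rceil$ might look like the following; the synonym dictionary here is a toy stand-in for the paper's synonym resource.

```python
import math
import random

def sample_substitution(x_words, synonyms, d=0.1, rng=random):
    """Draw one utterance from the synonym-substitution space S_syn(x):
    substitute at most D = ceil(d * |x|) words, each with one of its
    dictionary synonyms. No model calls are needed, which is what makes
    random sampling from this space cheap."""
    D = math.ceil(d * len(x_words))
    out = list(x_words)
    candidates = [i for i, w in enumerate(x_words) if synonyms.get(w)]
    for i in rng.sample(candidates, min(D, len(candidates))):
        out[i] = rng.choice(synonyms[out[i]])
    return out

synonyms = {"amazing": ["extraordinary"], "movie": ["film"]}
x = "such an amazing movie".split()
print(sample_substitution(x, synonyms, d=0.25, rng=random.Random(0)))
```

With $d = 0.25$ and a 4-word input, at most one word is substituted per sample.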
71
+
72
+ Synonym substitutions are context-sensitive, i.e., substitutions might only be appropriate in certain contexts. For example, in Fig. 3, replacing the word "like" with its synonym "similar" (red box) is invalid, since "like" is a verb in this context. Consequently, past work (Ren et al., 2019; Jin et al., 2020) filtered $S_{\text{syn}}(x)$ using a context-sensitive filtering function $\Phi_x(w_i, \bar{w}_i) \in \{0, 1\}$, which determines whether substituting a word $w_i$ from the original utterance $x$ with its synonym $\bar{w}_i$ is valid in a particular context. For instance, an external model can check whether the substitution maintains the part-of-speech, and whether the overall semantics is maintained. We define the filtered synonym substitution space $S_{\Phi}(x)$ as the set that includes all utterances $\bar{x}$ that can be generated through a sequence of no more than $D$ single-word substitutions from the original utterance that are valid according to $\Phi(\cdot, \cdot)$. In §5.2, we describe the details of the synonym dictionary and function $\Phi$.
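The interface of such a filter can be sketched as follows; the word-list "tagger" below is a toy stand-in (the actual filter, detailed in §5.2, relies on external models for part-of-speech and semantics).

```python
def make_filter(pos_tag):
    """Phi_x(w_i, w_bar): accept a substitution only if it preserves the
    part-of-speech tag at position i. `pos_tag` is a stand-in for an
    external tagger; the real filter also checks that the overall
    semantics is maintained."""
    def phi(x_words, i, w_bar):
        return pos_tag(x_words, i) == pos_tag(x_words[:i] + [w_bar] + x_words[i + 1:], i)
    return phi

# Toy word-list "tagger" (a real tagger would use the surrounding context).
TAGS = {"like": "VERB", "enjoy": "VERB", "similar": "ADJ"}
pos_tag = lambda ws, i: TAGS.get(ws[i], "OTHER")

phi = make_filter(pos_tag)
x = "i like this movie".split()
print(phi(x, 1, "enjoy"), phi(x, 1, "similar"))  # the adjective "similar" is rejected
```

This mirrors the Fig. 3 example: substituting the verb "like" with the adjective "similar" fails the filter.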
73
+
74
+ ![](images/148154a816a7fa9accf6c69941f4129f8c63c99ed3ee9093802795152583398d.jpg)
75
+ Figure 3: Example of an attack space, and the paths taken by a greedy algorithm and best-first search. An adversarial example has a probability $p < 0.5$ for the gold positive label.
76
+
77
+ # 4 Best-first Search Over a Factorized Graph
78
+
79
+ Searching over the attack space $S_{\Phi}(x)$ can be naturally viewed as a search problem over a directed acyclic graph (DAG), $G = (\mathcal{U}, \mathcal{E})$ , where each node $u_{\bar{x}} \in \mathcal{U}$ is labeled by an utterance $\bar{x}$ , and edges $\mathcal{E}$ correspond to single-word substitutions, valid according to $\Phi(\cdot)$ . The graph is directed and acyclic, since only substitutions of words from the original utterance $x$ are allowed (see Fig. 3). Because there is a one-to-one mapping from the node $u_{\bar{x}}$ to the utterance $\bar{x}$ , we will use the latter to denote both the node and the utterance.
80
+
81
+ Discrete attacks use search algorithms to find an adversarial example in $S(x)$ . The search is guided by a heuristic scoring function $s_A(x) \coloneqq [p_A(x)]_y$ , where the underlying assumption is that utterances that give lower probability to the gold label are closer to an adversarial example. A popular choice for a search algorithm in NLP is greedy search, illustrated in Fig. 3. Specifically, one holds in step $t$ the current node $x_t$ , where $t$ words have been substituted in the source node $x_0 = x$ . Then, the model $A(\cdot)$ is run on the frontier, that is, all out-neighbor nodes $\mathcal{N}(x_t) = \{\hat{x}_{t+1} \mid (x_t, \hat{x}_{t+1}) \in \mathcal{E}\}$ , and the one that minimizes the heuristic scoring function is selected: $x_{t+1} \coloneqq \operatorname*{argmin}_{\hat{x} \in \mathcal{N}(x_t)} s_A(\hat{x})$ .
82
+
83
+ While greedy search has been used for character-flipping (Ebrahimi et al., 2018), it is ill-suited to the space of synonym substitutions. The out-degree of nodes is high: assuming $n_{\mathrm{rep}}$ words can be replaced in the text, each with $K$ possible synonyms, the out-degree is $O(n_{\mathrm{rep}} \cdot K)$. This results in an infeasible number of forward passes through the attacked model, even for a small number of search
84
+
85
+ iterations.
86
+
87
+ To enable effective search through the search space, we (a) factorize the graph such that the out-degree of nodes is lower, and (b) use a best-first search algorithm. We describe those next.
88
+
89
+ Graph factorization To reduce the out-degree of a node in the search space and thus improve its efficiency, we can split each step into two. First, choose a position to substitute in the utterance; Second, choose a substitution for that position. This reduces the number of evaluations of $A$ per step from $O(n_{\mathrm{rep}} \cdot K)$ to $O(n_{\mathrm{rep}} + K)$ . To estimate the score of a position $i$ , one can mask the word $w_i$ with a mask token $\tau$ and measure $s_A(x_{w_i \rightarrow \tau})$ where $x_{w_i \rightarrow \tau}$ is the utterance $x$ where the word in position $i$ is replaced by the mask $\tau$ .
90
+
91
+ We can describe this approach as search over a bi-partite DAG $\hat{G} = (\mathcal{U}\cup \mathcal{W},\hat{\mathcal{E}})$ . The nodes $\mathcal{U}$ are utterances like in $G$ , and the new nodes are utterances with a single mask token $\mathcal{W} = \{\bar{x}_{w_i\rightarrow \tau}\mid \bar{x}\in S(x)\wedge w_i$ is a word in $x\}$ . The edges comprise two types: $\hat{\mathcal{E}} = \mathcal{E}_1\cup \mathcal{E}_2$ . The edges $\mathcal{E}_1$ are from utterances to masked utterances: $\mathcal{E}_1 = \{(\bar{x},\bar{x}_{w_i\rightarrow \tau})\} \subset \mathcal{U}\times \mathcal{W}$ , and $\mathcal{E}_2 = \{(\bar{x}_{w_i\rightarrow \tau},\bar{x}_{w_i\rightarrow w_{syn}})\} \subset \mathcal{W}\times \mathcal{U}$ where $w_{syn}\in \operatorname {Syn}(w_i)$ . In Figure 3, the two rightmost nodes in each row would be factorized together as they substitute the same word, and the algorithm will evaluate only one of them to estimate the potential benefit of substituting "movie".
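A sketch of one factorized expansion step under these definitions; `score` stands in for $s_A$ (the gold-label probability introduced above), and the mask-based position scoring follows the two-stage scheme described here.

```python
def factorized_step(score, x_words, synonyms, mask="[MASK]"):
    """One expansion step over the factorized graph: stage 1 scores each
    replaceable position by masking it (O(n_rep) calls to `score`), stage 2
    scores only the synonyms of the most promising position (O(K) calls),
    instead of O(n_rep * K) calls over the flat graph. `score` stands in
    for s_A; lower is better for the attacker."""
    positions = [i for i, w in enumerate(x_words) if synonyms.get(w)]
    # Stage 1: pick the position whose masking hurts the gold label most.
    best_i = min(positions,
                 key=lambda i: score(x_words[:i] + [mask] + x_words[i + 1:]))
    # Stage 2: pick the best synonym at that position.
    best_w = min(synonyms[x_words[best_i]],
                 key=lambda w: score(x_words[:best_i] + [w] + x_words[best_i + 1:]))
    return x_words[:best_i] + [best_w] + x_words[best_i + 1:]

# Toy scorer: the gold (positive) label survives only if "amazing" is present.
score = lambda ws: 0.9 if "amazing" in ws else 0.3
synonyms = {"amazing": ["extraordinary"], "movie": ["film"]}
print(factorized_step(score, "such an amazing movie".split(), synonyms))
```

Here masking position 2 ("amazing") drops the gold-label score, so only that position's synonyms are evaluated in stage 2.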
92
+
93
+ Best-first search A factorized graph makes search possible by reducing the out-degree of nodes. However, greedy search is still sub-optimal. This is since it relies on the heuristic search function to be a good estimate of the distance to an adversarial example, which can often be false. Consider the example in Fig. 3. The two adversarial examples (with $p = 0.4$ or $p = 0.45$ ) are not reachable from the best node after the first step ( $p = 0.6$ ), only from the second-best ( $p = 0.65$ ).
94
+
95
+ Best-first search (Pearl, 1984) overcomes this at a negligible cost, by holding a min-heap over the nodes of the frontier of the search space (Alg. 1). In each step, we pop the next utterance, which assigns the lowest probability to the gold label, and push all neighbors into the heap. When a promising branch turns out to be sub-optimal, search can resume from an earlier node to find a better solution, as shown in the blue path in Figure 3. To bound the cost of finding a single adversarial example, we bound the number of forward passes through the model
96
+
97
+ $A$ with a budget parameter $B$. To further reduce greediness, the search can use a beam by popping more than one node in each step, expanding all their neighbors, and pushing the results back onto the heap. Our final approach uses Best-First search over a Factorized graph, and is termed BFF.
98
+
99
+ Algorithm 1: BFF
100
+ ```txt
101
+ input: model $A$, factorized graph $G$, utterance $x$
102
+ heap $\leftarrow \{(x, s_A(x))\}$;  $x^{*} \gets x$
103
+ while $|\text{heap}| > 0$ and budget $B$ not exhausted:
104
+     $\bar{x} \gets$ heap.pop();  $x^{*} \gets \operatorname{argmin}_{\hat{x} \in \{\bar{x}, x^{*}\}} s_A(\hat{x})$;  if $A(x^{*}) \neq y$: break;  for $\hat{x} \in \mathcal{N}(\bar{x})$: heap.push$(\hat{x}, s_A(\hat{x}))$
105
+ return $x^{*}$
106
+ ```
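A runnable sketch of Alg. 1 with toy stand-ins: `score` plays $s_A$ (the gold-label probability), `predict` the label of $A$, and a flat `neighbors` expansion is used in place of the factorized graph for brevity.

```python
import heapq

def bff_attack(predict, score, neighbors, x, y, budget=1000):
    """Best-first search for an adversarial example (sketch of Alg. 1).
    A min-heap keyed on the heuristic `score` lets the search backtrack
    to earlier, less greedy branches when the current path turns out to
    be sub-optimal; `budget` bounds the number of scored expansions."""
    heap = [(score(x), x)]
    best = x
    seen = {x}
    while heap and budget > 0:
        s, cur = heapq.heappop(heap)
        if s < score(best):
            best = cur
        if predict(best) != y:  # adversarial example found
            break
        for nxt in neighbors(cur):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(heap, (score(nxt), nxt))
                budget -= 1
    return best

# Toy stand-ins: single-word synonym substitutions and a brittle "model".
synonyms = {"amazing": ["extraordinary"], "movie": ["film"]}
def neighbors(x):
    ws = x.split()
    for i, w in enumerate(ws):
        for syn in synonyms.get(w, []):
            yield " ".join(ws[:i] + [syn] + ws[i + 1:])

score = lambda x: 0.9 if "amazing movie" in x else 0.2  # gold-label probability
predict = lambda x: int(score(x) > 0.5)

print(bff_attack(predict, score, neighbors, "such an amazing movie", 1))
```

Unlike greedy search, nodes popped from the heap may come from any previously expanded branch, which is what enables the backtracking behavior shown by the blue path in Figure 3.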
107
+
108
+ # 5 Experiments
109
+
110
+ We conduct a thorough empirical evaluation of model robustness across a wide range of attacks and training procedures.
111
+
112
+ # 5.1 Experimental Setup
113
+
114
+ To evaluate our approach over diverse settings, we consider three different tasks: text classification, sentiment analysis and question answering, two of which contain long passages that result in a large attack space (see Table 1).
115
+
116
+ 1. SST-2: Based on the Stanford sentiment treebank (Socher et al., 2013), SST-2 is a binary (positive/negative) classification task containing 11,855 sentences describing movie reviews. SST-2 has been frequently used for evaluating robustness.
117
+ 2. IMDB (Maas et al., 2011): A binary (positive/negative) text classification task, containing 50K reviews from IMDB. Here, passages are long and thus the attack space is large (Table 1).
118
+ 3. BoolQ (Clark et al., 2019): contains 16,000 yes/no questions over Wikipedia paragraphs. This task is perhaps the most interesting, because the attack space is large and answering requires global passage understanding. We allow word substitutions in the paragraph only and do not substitute nouns, verbs, or adjectives that appear in the question to avoid non-label-preserving perturbations. Further details can be found in App. A.2.
119
+
120
+ Models We consider a wide array of models and evaluate both their downstream accuracy and
121
+
122
+ robustness. In all models, we define a budget of $B = 1000$, which specifies the maximal number of allowed forward passes through the model for finding an adversarial example. All results are an average of 3 runs.
123
+
124
+ To demonstrate the effectiveness of BFF for both robustness evaluation as well as adversarial training, we compare it to a recent state-of-the-art discrete attack, TEXTFOOLER (Jin et al., 2020), which we denote in model names below by the prefix TxF. The models compared are:
125
+
126
+ - BASELINE: we fine-tune a pretrained language model on the training set. We use BERT-BASE (Devlin et al., 2019) for IMDB/SST-2 and ROBERTA-LARGE (Liu et al., 2019) for BoolQ. These baselines are on par with the current state of the art, so that improvements demonstrate the efficacy of our method.
127
+ - BFFOFF/TxFOFF: Offline augmentation with the BFF or TEXTFOOLER attacks.
128
+ - BFFON/TxFON: Online augmentation with the BFF or TEXTFOOLER attacks.
129
+ - RANDOFF-$L$: We compare search-based algorithms to a simple and efficient approach that does not require any forward passes through the model $A$. Specifically, we randomly sample $L$ utterances from the attack space for each example (without executing $A$) and add them to the training data.
130
+ - RANDOM: A random sampling approach that does use the model $A$ . Here, we sample $B$ random utterances, pass them through $A$ , and return the attack that resulted in lowest model probability.
131
+ - FREELB: For completeness, we also consider FREELB (Zhu et al., 2020), a popular gradient-based approach for improving robustness, which employs virtual adversarial training (see §6). This approach uses online augmentation, where examples are created by taking gradient steps w.r.t the input embeddings to maximize the model's loss. Other gradient-based approaches (e.g., certified robustness) are not suitable when using pre-trained transformers, which we further discuss in §6.
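
As a minimal sketch, the RANDOM strategy above reduces to the following loop, where `model_prob` and `sample_utterance` are hypothetical stand-ins for the task model $A$ and a sampler over the attack space:

```python
def random_attack(x, gold_label, model_prob, sample_utterance, budget=1000):
    """RANDOM baseline sketch: draw `budget` utterances from the attack
    space, score each with one forward pass, and keep the one that
    minimizes the model's probability for the gold label."""
    best_x, best_p = x, model_prob(x, gold_label)
    for _ in range(budget):
        cand = sample_utterance(x)        # one random point in the attack space
        p = model_prob(cand, gold_label)  # one forward pass through A
        if p < best_p:
            best_x, best_p = cand, p
    return best_x, best_p
```

The returned utterance is the strongest perturbation found within the budget; in online augmentation it would be added to the current training batch.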

In a parallel line of work, Garg and Ramakrishnan (2020) and Li et al. (2020) used pre-trained language models both to define an attack space and to generate high-fidelity attacks in that space. While successful, these approaches are not suitable for our setting, due to the strong coupling between the attack strategy and the attack space itself. We discuss this further in §6.

Evaluation We evaluate models on their downstream accuracy, as well as on robust accuracy, i.e., the fraction of examples against which the model is robust. Since exact robust accuracy is intractable to compute due to the exponential size of the attack space, we compute an upper bound by attacking each example with both BFF and TEXTFOOLER (TxF) with a budget of $B = 2000$. An example is robust if we cannot find an utterance for which the prediction differs from the gold label. We evaluate robust accuracy on 1000/1000/872 samples from the development sets of BoolQ/IMDB/SST-2.
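
The upper-bound computation can be sketched as follows; `predict` and the attack callables are hypothetical stand-ins for the model and the BFF/TxF attacks:

```python
def robust_accuracy(examples, predict, attacks, budget=2000):
    """Upper-bound sketch: an example counts as robust only if none of the
    attacks finds an utterance whose prediction differs from the gold label."""
    robust = 0
    for x, gold in examples:
        flipped = any(predict(attack(x, gold, budget)) != gold
                      for attack in attacks)
        if not flipped:
            robust += 1
    return robust / len(examples)
```

Because each attack only searches part of the exponential attack space, the returned number upper-bounds the true robust accuracy.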

# 5.2 Attack Space

Despite the myriad of works on discrete attacks, an attack space for synonym substitutions has not been standardized. While all past work employs a synonym dictionary combined with a filtering function $\Phi(\cdot, \cdot)$ (see §3), the particular filtering functions vary. When examining the attack space proposed in TxF, we observed that attacks result in examples that are difficult to understand or are not label-preserving. Table 6 in App. A.4 shows several examples. For instance, in sentiment classification, the attack replaced "compelling" with "unconvincing", yielding "it proves quite unconvincing as an intense, brooding character study", which alters both the meaning and the sentiment of the sentence. Therefore, we use a stricter definition of the filtering function and conduct a user study to verify that it is label-preserving.

Concretely, we use the synonym dictionary from Alzantot et al. (2018). We determine whether a word substitution is context-appropriate by computing all single-word substitutions $(n_{\mathrm{rep}} \cdot K)$ and disallowing those that change the POS tag according to spaCy (Honnibal et al.) or increase perplexity according to GPT-2 (Radford et al., 2019) by more than $25\%$. Similar to Jin et al. (2020), we also filter out synonyms that are not semantics-preserving according to the USE model (Cer et al., 2018). The attack space includes any combination of allowed single-word substitutions, where the fraction of allowed substitutions is $d = 0.1$. Implementation details are in App. A.2. We find that this ensemble of filters reduces the number of substitutions that do not preserve semantics yet pass the filtering function.
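
A minimal sketch of this per-substitution filter, assuming the POS tags and perplexity values have already been computed (by spaCy and GPT-2 in our setup, abstracted away here):

```python
def allowed_substitution(orig_pos, new_pos, orig_ppl, new_ppl,
                         max_ppl_increase=0.25):
    """A single-word substitution is kept only if it preserves the POS tag
    and does not increase language-model perplexity by more than 25%."""
    if new_pos != orig_pos:  # POS tag must be unchanged
        return False
    return new_ppl <= orig_ppl * (1.0 + max_ppl_increase)
```

The USE-based semantic filter would then be applied on top of this check.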

We check the validity of our more restrictive attack space with a user study, where we verify that our attack space is indeed label-preserving. The

<table><tr><td></td><td>|x|</td><td>n<sub>rep</sub></td><td>|Syn(w)|</td><td>|S<sub>Φ</sub>(x)|</td></tr><tr><td>SST-2</td><td>8.9</td><td>2.7</td><td>2.4</td><td>27.7</td></tr><tr><td>IMDB</td><td>242.4</td><td>97.3</td><td>3.6</td><td>2.27 × 10<sup>64</sup></td></tr><tr><td>BoolQ†</td><td>97.7</td><td>38.7</td><td>3.6</td><td>3.64 × 10<sup>25</sup></td></tr></table>

Table 1: Statistics on the datasets and the size of the attack space. We show the average number of words per utterance $|x|$, the average number of words with substitutions $n_{\mathrm{rep}}$, the average number of synonyms per replaceable word $|\mathrm{Syn}(w)|$, and an estimate of the attack-space size $|S_\Phi(x)|$.
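
The size estimates in Table 1 are consistent with treating each replaceable word independently: each of the $n_{\mathrm{rep}}$ words can stay unchanged or take one of its $|\mathrm{Syn}(w)|$ synonyms, giving roughly $(|\mathrm{Syn}(w)| + 1)^{n_{\mathrm{rep}}}$ combinations. A quick sanity check against the table's averages:

```python
import math

def attack_space_log10(n_rep, avg_synonyms):
    """Base-10 exponent of ~(avg_synonyms + 1) ** n_rep, computed in log
    space to avoid overflow for long passages."""
    return n_rep * math.log10(avg_synonyms + 1)

print(round(10 ** attack_space_log10(2.7, 2.4)))  # SST-2: ~27 (Table 1: 27.7)
print(round(attack_space_log10(97.3, 3.6)))       # IMDB exponent: ~64 (Table 1: 2.27e64)
print(round(attack_space_log10(38.7, 3.6)))       # BoolQ exponent: ~26 (Table 1: 3.64e25)
```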

details of the user study are in §5.6.

# 5.3 Robustness Results

Table 2 shows accuracy on the development set, robust accuracy, and slowdown compared to BASELINE for all models and datasets. For downstream accuracy, training for robustness either maintains or slightly increases downstream accuracy. This is not the focus of this work, but is a welcome side effect. For robust accuracy, discrete attacks substantially improve robustness: $80.5 \rightarrow 85.3$ on SST-2, $41.2 \rightarrow 78.9$ on IMDB, and $50.0 \rightarrow 68.7$ on BoolQ, closing roughly half the gap from downstream accuracy.

Comparing different attacks, online augmentation (BFFON), which has been overlooked in the context of discrete attacks, leads to dramatic robustness gains compared to other methods, but is slow to train: 20-270x slower than BASELINE. This shows the importance of continuous adaptation to the current vulnerabilities of the model.

Interestingly, adding offline random samples (RANDOFF-$L$) consistently improves robust accuracy, and using $L = 12$ leads to impressive robustness gains without executing $A$ at all, outperforming BFFOFF in robust accuracy while being ~5x faster on IMDB and BoolQ. Moreover, random sampling is trivial to implement and independent of the attack strategy. Hence, the common practice of using offline augmentation with search-based attacks, such as BFFOFF, seems misguided, and a better solution is to use random sampling. Online random augmentation (RANDOM) obtains impressive results, not far from BFFON, without applying any search procedure, but is very slow, since it uses the entire budget $B$ on every example.

Comparing BFF to TxF, we observe that BFF, which uses best-first search, outperforms TxF in both the online and offline settings. Lastly, FREELB, which is based on virtual adversarial training, improves robust accuracy at a low computational cost, but is dramatically outperformed by discrete search-

<table><tr><td rowspan="2">Model</td><td colspan="3">Accuracy</td><td colspan="3">Robust Accuracy</td><td colspan="3">Slowdown</td></tr><tr><td>SST-2</td><td>IMDB</td><td>BoolQ</td><td>SST-2</td><td>IMDB</td><td>BoolQ</td><td>SST-2</td><td>IMDB</td><td>BoolQ</td></tr><tr><td>BASELINE</td><td>91.9</td><td>93.4</td><td>84.5</td><td>80.5</td><td>41.2</td><td>50.0</td><td>×1</td><td>×1</td><td>×1</td></tr><tr><td>FREELB</td><td>92.5</td><td>93.9</td><td>85.5</td><td>82.1</td><td>62.5</td><td>55.8</td><td>×1.8</td><td>×1.8</td><td>×3.9</td></tr><tr><td>RANDOFF-1</td><td>91.9</td><td>93.5</td><td>85.6</td><td>83.5</td><td>50.3</td><td>52.2</td><td>×1.9</td><td>×1.5</td><td>×2.1</td></tr><tr><td>RANDOFF-4</td><td>91.6</td><td>93.7</td><td>85.5</td><td>83.6</td><td>57.0</td><td>58.4</td><td>×3.8</td><td>×4.5</td><td>×5.1</td></tr><tr><td>RANDOFF-8</td><td>91.1</td><td>93.8</td><td>86.1</td><td>83.3</td><td>60.9</td><td>61.3</td><td>×5.4</td><td>×8.0</td><td>×9.3</td></tr><tr><td>RANDOFF-12</td><td>91.5</td><td>93.7</td><td>85.8</td><td>84.2</td><td>60.1</td><td>63.0</td><td>×6.3</td><td>×11.5</td><td>×13.2</td></tr><tr><td>TxFOFF</td><td>91.2</td><td>93.4</td><td>86.5</td><td>83.5</td><td>49.0</td><td>61.5</td><td>×3.0</td><td>×56.1</td><td>×8.6</td></tr><tr><td>BFFOFF</td><td>91.8</td><td>93.7</td><td>85.8</td><td>84.6</td><td>54.3</td><td>62.3</td><td>×5.4</td><td>×60.0</td><td>×63.2</td></tr><tr><td>RANDOM</td><td>91.7</td><td>94.1</td><td>85.6</td><td>84.9</td><td>68.5</td><td>66.0</td><td>×14.8</td><td>×249.3</td><td>×280.4</td></tr><tr><td>TxFON</td><td>91.3</td><td>93.8</td><td>86.0</td><td>84.0</td><td>67.4</td><td>65.3</td><td>×3.9</td><td>×58.0</td><td>×28.1</td></tr><tr><td>BFFON</td><td>91.7</td><td>94.2</td><td>86.5</td><td>85.3</td><td>78.9</td><td>68.7</td><td>×21.1</td><td>×270.7</td><td>×215.9</td></tr></table>

Table 2: Accuracy on the development set, robust accuracy, and slowdown in model training for all datasets.

<table><tr><td rowspan="2">Model</td><td colspan="4">IMDB</td><td colspan="4">BoolQ</td></tr><tr><td>Rand</td><td>TxF</td><td>BFF</td><td>Gen</td><td>Rand</td><td>TxF</td><td>BFF</td><td>Gen</td></tr><tr><td>BASELINE</td><td>73.1</td><td>70.2</td><td>49.9</td><td>54.1</td><td>62.1</td><td>67.7</td><td>50.2</td><td>52.0</td></tr><tr><td>RND-OA</td><td>74.8</td><td>74.7</td><td>52.9</td><td>59.1</td><td>70.9</td><td>72.0</td><td>59.4</td><td>62.0</td></tr><tr><td>TxFOFF</td><td>67.7</td><td>77.5</td><td>52.5</td><td>56.7</td><td>71.0</td><td>75.0</td><td>61.5</td><td>63.4</td></tr><tr><td>BFFOFF</td><td>75.4</td><td>76.9</td><td>58.6</td><td>64.1</td><td>70.9</td><td>74.8</td><td>64.7</td><td>65.2</td></tr><tr><td>RANDOM</td><td>87.0</td><td>76.4</td><td>68.5</td><td>79.6</td><td>71.5</td><td>72.6</td><td>60.1</td><td>67.5</td></tr><tr><td>TxFON</td><td>81.1</td><td>84.2</td><td>69.7</td><td>73.7</td><td>73.4</td><td>74.8</td><td>65.3</td><td>67.4</td></tr><tr><td>BFFON</td><td>87.0</td><td>84.9</td><td>79.0</td><td>81.9</td><td>75.1</td><td>76.1</td><td>69.0</td><td>70.3</td></tr></table>

Table 3: Robust accuracy of different robust models w.r.t. particular discrete attacks. RND-OA is offline augmentation with a random attack and $B = 1000$. Gen is our implementation of the Genetic Attack by Alzantot et al. (2018).

based attacks, including BFF.

To summarize, random sampling leads to significant robustness gains at a small cost, outperforming the commonly used offline augmentation. Online augmentation leads to the best robustness, but is more expensive to train.

# 5.4 Robustness across Attack Strategies

A natural question is whether a model trained for robustness with one attack (e.g., BFF) is robust w.r.t. examples generated by other attacks, which are potentially uncorrelated with it. To answer this, we evaluate the robustness of our models to attacks generated by BFF, TxF, and random sampling. Moreover, we evaluate robustness to a genetic attack, which should not be correlated with BFF and TxF: we re-implement the genetic attack algorithm from Alzantot et al. (2018) (details in App. A.3) and examine the robustness of our models to this attack. All attacks use a budget of $B = 2000$.

Table 3 shows the result of this evaluation. We observe that BFFON obtains the highest robust accuracy w.r.t. all attacks: BFF, TxF, random sampling, and the genetic attack. In offline augmentation, we observe again that BFFOFF obtains good robust accuracy, higher than or comparable to all other offline models for any attack strategy. This result highlights the generality of BFF for improving model robustness.

# 5.5 Success Rate Results

To compare the different attacks proposed in §4, we analyze their success rate against BASELINE, i.e., the proportion of examples for which an attack finds an adversarial example, as a function of the budget $B$.

Fig. 4 compares the success rate of different attacks. We observe that BFF-based attacks have the highest success rate after a few hundred executions. TEXTFOOLER performs well at first, finding adversarial examples for many inputs, but then its success plateaus. Similarly, a random approach, which ignores the graph structure, starts with a relatively high success rate, as it explores far regions of the graph, but fails to properly utilize its budget and eventually falls behind.

BFF combines backtracking with graph factorization. When removing backtracking, i.e., running greedy search over the factorized graph, the success rate decreases, especially on BoolQ. Greedy search without graph factorization leads to a low success rate, due to the large number of neighbors of each node, which quickly exhausts the budget. Moreover, BFF with beam size 2 (popping 2 items from the heap in each step) achieves lower performance when the budget is $B \leq 2000$, as executions are expended on less promising utterances, but could improve the success rate given a larger budget.
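
A sketch of this best-first search: a min-heap keyed by the model's probability for the gold label means each step expands the most promising utterance found so far, so backtracking to an earlier node happens for free whenever the current branch stops improving. `model_prob` and `neighbors` (single-word substitutions in the factorized graph) are hypothetical stand-ins:

```python
import heapq

def bff_attack(x, gold, model_prob, neighbors, budget=1000):
    """Best-first search sketch over the substitution graph."""
    best_p = model_prob(x, gold)
    best_x = x
    heap = [(best_p, x)]   # min-heap keyed by gold-label probability
    seen = {x}
    calls = 0
    while heap and calls < budget:
        p, node = heapq.heappop(heap)     # most promising utterance so far
        if p < 0.5:                       # below 0.5 => flipped (binary task)
            return node, p
        for cand in neighbors(node):      # single-word substitutions
            if cand in seen:
                continue
            seen.add(cand)
            cp = model_prob(cand, gold)   # one forward pass through the model
            calls += 1
            if cp < best_p:
                best_x, best_p = cand, cp
            heapq.heappush(heap, (cp, cand))
            if calls >= budget:
                break
    return best_x, best_p
```

A greedy variant would keep only the best child at each step; the global heap is what allows the search to jump back to an older, more promising node.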

Lastly, due to our stricter definition of the attack space (§5.2), the success rates of BFF and TxF are lower compared to Jin et al. (2020). To verify the correctness of our attacks,

![](images/930b717c723a8f4dd3ad31c739564b2b64569ea166c140fe977e93b0f7dfce1b.jpg)

![](images/d2273dd718e5b0a8ec3951e84c8a35cadbcbe593bf6b009c255097cb6928b486.jpg)

Figure 4: Success rate of different attacks against the BoolQ/IMDB BASELINE as a function of the budget.

![](images/dd5d8068ca9d062b2b9861ae21ed26f9e0d86eae5b684634fff34381417302c1.jpg)

<table><tr><td></td><td>Original</td><td>Random</td><td>BFF</td></tr><tr><td>IMDB</td><td>98.0</td><td>98.0</td><td>96.0</td></tr><tr><td>BoolQ</td><td>89.0</td><td>91.5</td><td>83.5</td></tr><tr><td>SST-2</td><td>97.0</td><td>96.0</td><td>94.4</td></tr></table>

Table 4: Evaluating attack-space validity. We show human performance on original examples, random examples, and examples generated with BFF.

we run BFF and TxF in their attack space, which uses a larger synonym dictionary and a more permissive function $\Phi$, and does not limit the number of substitutions $D$ or the budget $B$. We obtain a similar success rate, close to $100\%$. Nevertheless, we argue that our attack space, validated by users to be label-preserving, is preferable, and leave the standardization of attack spaces through a broad user study to future work.

# 5.6 User Study

Since a model is considered not robust even if a single adversarial sample flips its output label, the validity of adversarial examples in the attack space is crucial. When we examined attacks generated based on prior works, we found many attacks that were not label-preserving. This was especially noticeable when using BFF attacks on tasks not evaluated in prior works (see examples in App. A.4). In this work, our focus is on evaluating different methods for increasing model robustness, and thus over-constraining the attack space to guarantee its validity was acceptable. We stress that our attack search space is more conservative than prior work, and is a strict subset of prior attack spaces (see App. A.2), leading to higher validity of adversarial examples.

We evaluate the validity of our attack space and the generated adversarial samples with a user study. We sample 100/100/50 examples from SST-2/BoolQ/IMDB respectively, and for each example create two adversarial examples: (a) by random sampling, and (b) using a BFF attack. We ask 25 NLP graduate students to annotate both the original example and the two adversarial ones. Each example is annotated by two annotators, and each annotator sees only one version of an example. If human performance on random and adversarial examples is similar to the original task, this indicates the attack space is label-preserving.

Table 4 shows the results. Human performance on random examples is similar to that on the original utterances. Human performance on examples generated with BFF is only mildly lower than on the original utterances, overall confirming that the attack space is label-preserving.

Ideally, the validity of adversarial examples should be as high as that of the original examples. However, a small degradation of random vs. original is expected, since the search space is not perfect, and similarly for BFF, since it is targeted at finding adversarial examples. Nevertheless, the observed drops are small, showing the advantage in validity compared to prior work. The minor irregularity on BoolQ between random and original is indicative of noise in the dataset.

# 6 Related Work

Adversarial attacks and robustness have attracted tremendous attention. We discuss here work beyond improving robustness through adversarial attacks.

Certified Robustness is a class of methods that provide a mathematical certificate for robustness (Dvijotham et al., 2018; Gowal et al., 2018; Jia et al., 2019; Huang et al., 2019; Shi et al., 2020). The model is trained to minimize an upper bound on the loss of the worst-case attack. When this upper bound is low, we get a certificate of robustness against all attacks. While this approach has had success, it struggles when applied to transformers, since upper bounds are propagated through many layers and become too loose to be practical.

Gradient-based methods In a white-box setting, adversarial examples can be generated by performing gradient ascent with respect to the input representation. Gradient-based methods (Goodfellow et al., 2015; Madry et al., 2018) have been empirically successful (Gowal et al., 2018; Ebrahimi et al., 2018), but suffer from a few shortcomings: (a) they assume access to gradients; (b) they lose their effectiveness when combined with sub-word tokenization, since one cannot substitute words that have a different number of sub-words; and (c) they can generate noisy examples that do not preserve the output label. In parallel to our work, Guo et al. (2021) proposed a gradient-based approach that finds a distribution over the attack space at the token level, resulting in an efficient attack.

Virtual adversarial training In this approach, one does not generate explicit adversarial examples (Zhu et al., 2020; Jiang et al., 2020; Li and Qiu, 2020; Pereira et al., 2021). Instead, embeddings in an $\epsilon$-sphere around the input (which do not correspond to actual words) are sampled, and continuous optimization is used to train for robustness. These works were shown to improve downstream accuracy, but did not result in better robust accuracy. Recently, Zhou et al. (2020) proposed a method that does improve robustness, but like other gradient-based methods, it is white-box, does not work well with transformers over sub-words, and leads to noisy samples. A similar approach was taken by Si et al. (2020b), who generate virtual attacks during training by interpolating offline-generated attacks.

Defense layers This approach adds normalization layers to the input before propagating it to the model, so that different input variations are mapped to the same representation (Wang et al., 2019; Mozes et al., 2020; Jones et al., 2020). While successful, this approach requires manual engineering and reduces model expressivity, as the input space is significantly reduced. In a similar vein, Zhou et al. (2019) identify adversarial inputs and predict the original unperturbed input.

Pretrained language models as attacks In this work, we decouple the definition of the attack space from the attack strategy itself, which is cast as a search algorithm. This allows us to systematically compare different attack strategies and methods for improving robustness in the same setting. An orthogonal approach was proposed by Garg and Ramakrishnan (2020) and Li et al. (2020), who used the fact that BERT was trained with the masked language modeling objective to predict semantics-preserving adversarial perturbations of the input tokens, thereby coupling the definition of the attack space with the attack strategy. While this approach shows great promise in efficiently generating valid adversarial examples, it does not permit any external constraint on the attack space, and is thus not comparable to the attacks in this work. Future work can test whether robustness transfers across attack spaces and attack strategies by either (a) evaluating the robustness of models trained in this work against the aforementioned attacks (in their attack space), or (b) combining such attacks with online augmentation to train robust models and comparing them to the attacks proposed in our work.

# 7 Conclusions

We examine achieving robustness through discrete adversarial attacks. We find that the popular approach of offline augmentation is sub-optimal in both speed and accuracy compared to random sampling, and that online augmentation leads to impressive gains. Furthermore, we propose BFF, a new discrete attack based on best-first search, and show that it outperforms past work both in terms of robustness improvement and attack success rate.

Together, our contributions highlight the key factors for success in achieving robustness through adversarial attacks, and open the door to future work on better and more efficient methods for achieving robustness in natural language understanding.

# Acknowledgements

We thank Mor Geva, Tomer Wolfson, Jonathan Herzig, Inbar Oren, Yuval Kirstain, Uri Shaham, and Omer Levy for their useful comments. This research was partially supported by The Yandex Initiative for Machine Learning and the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant ERC DELPHI 802800).

# References

Moustafa Alzantot, Yash Sharma, Supriyo Chakraborty, Huan Zhang, Cho-Jui Hsieh, and Mani B. Srivastava. 2019. GenAttack: Practical black-box attacks with gradient-free optimization. In Proceedings of the Genetic and Evolutionary Computation Conference, pages 1111-1119.
Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang. 2018. Generating natural language adversarial examples. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2890-2896, Brussels, Belgium. Association for Computational Linguistics.
Nicholas Carlini and David Wagner. 2017. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pages 39-57.
Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, et al. 2018. Universal sentence encoder. arXiv preprint arXiv:1803.11175.
Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. 2019. BoolQ: Exploring the surprising difficulty of natural yes/no questions. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2924-2936, Minneapolis, Minnesota. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Krishnamurthy Dvijotham, Sven Gowal, Robert Stanforth, Relja Arandjelovic, Brendan O'Donoghue, Jonathan Uesato, and Pushmeet Kohli. 2018. Training verified learners with learned verifiers. arXiv preprint arXiv:1805.10265.
Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2018. HotFlip: White-box adversarial examples for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 31-36, Melbourne, Australia. Association for Computational Linguistics.
Siddhant Garg and Goutham Ramakrishnan. 2020. BAE: BERT-based adversarial examples for text classification. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6174-6181, Online. Association for Computational Linguistics.

Karan Goel, Nazneen Rajani, Jesse Vig, Samson Tan, Jason Wu, Stephan Zheng, Caiming Xiong, Mohit Bansal, and Christopher Re. 2021. Robustness Gym: Unifying the NLP evaluation landscape. arXiv preprint arXiv:2101.04840.
Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversarial examples. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
Sven Gowal, Krishnamurthy Dvijotham, Robert Stanforth, Rudy Bunel, Chongli Qin, Jonathan Uesato, Relja Arandjelovic, Timothy Mann, and Pushmeet Kohli. 2018. On the effectiveness of interval bound propagation for training verifiably robust models. arXiv preprint arXiv:1810.12715.
Chuan Guo, Alexandre Sablayrolles, Hervé Jégou, and Douwe Kiela. 2021. Gradient-based adversarial attacks against text transformers. arXiv preprint arXiv:2104.13733.
Matthew Honnibal, Ines Montani, Sofie Van Landeghem, and Adriane Boyd. spaCy: Industrial-strength Natural Language Processing in Python.
Po-Sen Huang, Robert Stanforth, Johannes Welbl, Chris Dyer, Dani Yogatama, Sven Gowal, Krishnamurthy Dvijotham, and Pushmeet Kohli. 2019. Achieving verified robustness to symbol substitutions via interval bound propagation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4083-4093, Hong Kong, China. Association for Computational Linguistics.
Robin Jia, Aditi Raghunathan, Kerem Goksel, and Percy Liang. 2019. Certified robustness to adversarial word substitutions. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4129-4142, Hong Kong, China. Association for Computational Linguistics.
Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Tuo Zhao. 2020. SMART: Robust and efficient fine-tuning for pre-trained natural language models through principled regularized optimization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2177-2190, Online. Association for Computational Linguistics.
Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2020. Is BERT really robust? A strong baseline for natural language attack on text classification and entailment. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 8018-8025. AAAI Press.
Erik Jones, Robin Jia, Aditi Raghunathan, and Percy Liang. 2020. Robust encodings: A framework for combating adversarial typos. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2752-2765, Online. Association for Computational Linguistics.
Alexey Kurakin, Ian J. Goodfellow, and Samy Bengio. 2017. Adversarial machine learning at scale. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.
Qi Lei, Lingfei Wu, Pin-Yu Chen, Alex Dimakis, Inderjit S. Dhillon, and Michael J. Witbrock. 2019. Discrete adversarial attacks and submodular optimization with applications to text classification. In Proceedings of Machine Learning and Systems 2019, MLSys 2019, Stanford, CA, USA, March 31 - April 2, 2019. mlsys.org.
Linyang Li, Ruotian Ma, Qipeng Guo, Xiangyang Xue, and Xipeng Qiu. 2020. BERT-ATTACK: Adversarial attack against BERT using BERT. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6193-6202, Online. Association for Computational Linguistics.
Linyang Li and Xipeng Qiu. 2020. TAVAT: Token-aware virtual adversarial training for language understanding. arXiv: Computation and Language.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142-150, Portland, Oregon, USA. Association for Computational Linguistics.
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2018. Towards deep learning models resistant to adversarial attacks. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net.
Maximilian Mozes, Pontus Stenetorp, Bennett Kleinberg, and Lewis D. Griffin. 2020. Frequency-guided word substitutions for detecting textual adversarial examples. arXiv preprint arXiv:2004.05887.
Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z. Berkay Celik, and Ananthram Swami. 2017. Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, pages 506-519.
Judea Pearl. 1984. Heuristics: Intelligent Search Strategies for Computer Problem Solving, page 48. Addison-Wesley Longman Publishing Co., Inc., USA.
Lis Pereira, Xiaodong Liu, Hao Cheng, Hoifung Poon, Jianfeng Gao, and Ichiro Kobayashi. 2021. Targeted adversarial training for natural language understanding. arXiv preprint arXiv:2104.05847.
Luis Perez and Jason Wang. 2017. The effectiveness of data augmentation in image classification using deep learning. arXiv preprint arXiv:1712.04621.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.
Shuhuai Ren, Yihe Deng, Kun He, and Wanxiang Che. 2019. Generating natural language adversarial examples through probability weighted word saliency. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1085-1097, Florence, Italy. Association for Computational Linguistics.
Zhouxing Shi, Huan Zhang, Kai-Wei Chang, Minlie Huang, and Cho-Jui Hsieh. 2020. Robustness verification for transformers. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
Chenglei Si, Ziqing Yang, Yiming Cui, Wentao Ma, Ting Liu, and Shijin Wang. 2020a. Benchmarking robustness of machine reading comprehension models. arXiv preprint arXiv:2004.14004.
Chenglei Si, Zhengyan Zhang, Fanchao Qi, Zhiyuan Liu, Yasheng Wang, Qun Liu, and Maosong Sun. 2020b. Better robustness by more coverage: Adversarial training with mixup augmentation for robust fine-tuning. arXiv preprint arXiv:2012.15699.
Patrice Y. Simard, Yann A. LeCun, John S. Denker, and Bernard Victorri. 1998. Transformation invariance in pattern recognition: tangent distance and tangent propagation. In Neural Networks: Tricks of the Trade, pages 239-274. Springer.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642, Seattle, Washington, USA. Association for Computational Linguistics.
289
+ Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus. 2014. Intriguing properties of neural networks. In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings.
290
+ Samson Tan, Shafiq Joty, Min-Yen Kan, and Richard Socher. 2020. It's morphin' time! Combating linguistic discrimination with inflectional perturbations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2920-2935, Online. Association for Computational Linguistics.
291
+ Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, and Aleksander Madry. 2019. Robustness may be at odds with accuracy. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.
292
+ Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. 2019. Universal adversarial triggers for attacking and analyzing NLP. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2153-2162, Hong Kong, China. Association for Computational Linguistics.
293
+ Xiaosen Wang, Hao Jin, and Kun He. 2019. Natural language adversarial attacks and defenses in word level. arXiv preprint arXiv:1909.06723.
294
+ Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
295
+ Yuan Zang, Fanchao Qi, Chenghao Yang, Zhiyuan Liu, Meng Zhang, Qun Liu, and Maosong Sun. 2020. Word-level textual adversarial attacking as combinatorial optimization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6066-6080, Online. Association for Computational Linguistics.
296
+ Jingfeng Zhang, Xilie Xu, Bo Han, Gang Niu, Lizhen Cui, Masashi Sugiyama, and Mohan Kankanhalli. 2020. Attacks which do not kill training make adversarial learning stronger. arXiv preprint arXiv:2002.11242.
297
+ Yi Zhou, Xiaoqing Zheng, Cho-Jui Hsieh, Kai-Wei Chang, and Xuanjing Huang. 2020. Defense against adversarial attacks in NLP via Dirichlet neighborhood ensemble. arXiv preprint arXiv:2006.11627.
298
+
299
+
300
+ Yichao Zhou, Jyun-Yu Jiang, Kai-Wei Chang, and Wei Wang. 2019. Learning to discriminate perturbations for blocking adversarial attacks in text classification. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4904-4913, Hong Kong, China. Association for Computational Linguistics.
301
+ Chen Zhu, Yu Cheng, Zhe Gan, Siqi Sun, Tom Goldstein, and Jingjing Liu. 2020. FreeLB: Enhanced adversarial training for natural language understanding. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
302
+
303
+ # A Appendix
304
+
305
+ # A.1 Experimental Details
306
+
307
+ All of the code was written in Python and is available at https://github.com/Mivg/robust_transformers. The models are trained with the transformers library (Wolf et al., 2020). Whenever offline augmentation was used, the resulting adversarial samples were added to the training set and shuffled before training a new model with the same hyper-parameters as the baseline. Thus, the model is trained on $N \times L$ samples, where $N$ is the original number of samples and $L$ is the number of augmentations added per sample. For online augmentation, we run two parallel data loaders with different shuffling, each with half the required batch size. We then attack the samples in one batch and concatenate the most successful attacks to the other batch. The model is fed the newly constructed batch, weighting both halves identically. Here, we consider a full epoch to be when every sample has passed through the model both as a perturbed and as an unperturbed sample. As such, the model is trained on $2N$ samples. For each dataset, we use the default train-dev split as described in the paper, and report the accuracy on the development set. We train with the hyper-parameters described below:
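+ The online-augmentation batch construction described above can be sketched as follows. This is a minimal illustration, not the released implementation; `attack` is a hypothetical stand-in for selecting the most successful adversarial perturbation:

```python
import random

def attack(sample):
    # Hypothetical stand-in for the adversarial attack: here it merely
    # tags the sample; the real attack searches the synonym space for
    # the most successful perturbation within the budget.
    return f"perturbed({sample})"

def online_augmentation_batches(dataset, batch_size, seed1=0, seed2=1):
    """Two parallel loaders with different shuffling each supply half the
    batch: one half is attacked, then concatenated with the clean half."""
    half = batch_size // 2
    loader_a = random.Random(seed1).sample(dataset, len(dataset))
    loader_b = random.Random(seed2).sample(dataset, len(dataset))
    for i in range(0, len(dataset) - half + 1, half):
        clean_half = loader_a[i:i + half]
        attacked_half = [attack(s) for s in loader_b[i:i + half]]
        # Both halves are weighted identically in the loss.
        yield clean_half + attacked_half
```

+ Over a full pass, every sample is seen once unperturbed and once perturbed, matching the $2N$ accounting above.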
308
+
309
+ SST-2: We fine-tuned a pre-trained cased BERT-BASE (Devlin et al., 2019) with max seq length $= 128$ on an Nvidia Titan XP GPU for three epochs with a batch size of 32 and a learning rate of $2e - 5$ .
310
+
311
+ IMDB: We fine-tuned a pre-trained cased BERT-BASE (Devlin et al., 2019) with max seq length $= 480$ on an Nvidia Titan XP GPU for three epochs with a batch size of 48 and a learning rate of $2e - 5$ .
312
+
313
+ BoolQ: We fine-tuned a pre-trained RoBERTa-LARGE (Liu et al., 2019) with max seq length $= 480$ on an Nvidia RTX 3090 GPU for three epochs with a batch size of 48 and a learning rate of $1e - 5$ .
314
+
315
+ For each parameter choice reported in Table 2, we ran three experiments with different random initializations and report the mean results. The respective standard deviations are given in Table 5. To fine-tune the models using the FreeLB (Zhu et al., 2020) method, we adapted the implementation from https://github.com/zhuchen03/FreeLB and used the following parameters:
316
+
317
+ SST-2: init-magnitude $= 0.6$ , adversarial-steps $= 12$ , adversarial-learning-rate $= 0.1$ and $l_{2}$ norm with no limit on the norm.
318
+
319
+ IMDB: init-magnitude $= 0.2$ , adversarial-steps $= 4$ , adversarial-learning-rate $= 0.2$ and $l_{2}$ norm with no limit on the norm.
320
+
321
+ BoolQ: init-magnitude $= 0.2$ , adversarial-steps $= 4$ , adversarial-learning-rate $= 0.2$ and $l_{2}$ norm with no limit on the norm.
322
+
323
+ <table><tr><td rowspan="2">Model</td><td colspan="3">Accuracy</td><td colspan="3">Robust Accuracy</td></tr><tr><td>SST-2</td><td>IMDB</td><td>BoolQ</td><td>SST-2</td><td>IMDB</td><td>BoolQ</td></tr><tr><td>Baseline</td><td>±0.1</td><td>±0.1</td><td>±1.3</td><td>±0.4</td><td>±0.6</td><td>±0.9</td></tr><tr><td>FREELB</td><td>±0.2</td><td>±0.1</td><td>±0.4</td><td>±0.5</td><td>±1.0</td><td>±1.1</td></tr><tr><td>RANDOFF-1</td><td>±0.3</td><td>±0.1</td><td>±1.8</td><td>±0.5</td><td>±1.4</td><td>±1.8</td></tr><tr><td>RANDOFF-4</td><td>±0.7</td><td>±0.1</td><td>±0.5</td><td>±0.6</td><td>±1.9</td><td>±0.5</td></tr><tr><td>RANDOFF-8</td><td>±0.2</td><td>±0.1</td><td>±0.8</td><td>±0.7</td><td>±2.1</td><td>±0.8</td></tr><tr><td>RANDOFF-12</td><td>±0.6</td><td>±0.1</td><td>±1.0</td><td>±0.5</td><td>±1.4</td><td>±1.0</td></tr><tr><td>TxFOFF</td><td>±0.6</td><td>-</td><td>-</td><td>±0.3</td><td>-</td><td>-</td></tr><tr><td>BFFOFF</td><td>±0.3</td><td>-</td><td>±0.3</td><td>±0.3</td><td>-</td><td>±1.8</td></tr><tr><td>RANDOM</td><td>±0.1</td><td>-</td><td>-</td><td>±0.3</td><td>-</td><td>-</td></tr><tr><td>TxFON</td><td>±0.0</td><td>-</td><td>-</td><td>±0.3</td><td>-</td><td>-</td></tr><tr><td>BFFON</td><td>±0.5</td><td>-</td><td>-</td><td>±0.6</td><td>-</td><td>-</td></tr></table>
324
+
325
+ Table 5: Standard deviation on the experiments reported in Table 2. Missing cells indicate a single run was used due to the long training time.
326
+
327
+ ![](images/c0d5ed21beb70f4719e9b697381a74769e704474a2c230ed9b519301db260520.jpg)
328
+ Figure 5: Success rate of different attacks against BoolQ/IMDB BASELINE as a function of the budget.
329
+
330
+
331
+
332
+ BFF implementation For the factorization phase of BFF, we use $\tau \sim \operatorname{Syn}(w)$ with uniform sampling. We find that while using an out-of-vocabulary masking token is useful for computing word salience, it is less suitable here, as we are interested in the model's over-sensitivity to perturbations in the exact phrasing of the word. Also, in contrast to TxF, which is optimistic and factorizes the attack space only once, BFF factorizes the space after every step. Namely, optimistic greedy search plans the entire search path by evaluating all permissible single-word substitutions. Let $x_{w_i \to w}$ denote the utterance $x$ where the word $w_i$ is replaced with a synonym $w \in \operatorname{Syn}(w_i)$ . The optimistic greedy algorithm scores each word $w_i$ in the utterance with $s(w_i) := \min_{w \in \operatorname{Syn}(w_i)} s_A(x_{w_i \to w})$ , that is, the score of a word is the score of its best substitution, and it also stores this substitution. Then, it sorts utterance positions by $s(w_i)$ in ascending order, which defines the entire search path: in each step, the algorithm moves to the next position in the sorted list and uses the best substitution stored for that position. Fig. 5 shows the benefit from each of those modifications.
333
+
334
+
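+ A minimal Python sketch of this optimistic greedy planning (an illustration, not the authors' code); `synonyms` and `score_model` stand in for $\operatorname{Syn}(\cdot)$ and the attacked model's score $s_A$:

```python
def optimistic_greedy_order(words, synonyms, score_model):
    """Plan the whole search path up front: score every permissible
    single-word substitution once, keep the best substitution per
    position, then sort positions by that score (ascending)."""
    best = []  # (score of best substitution, position, substitution)
    for i, w in enumerate(words):
        candidates = synonyms.get(w, [])
        if not candidates:
            continue
        scored = [(score_model(words[:i] + [c] + words[i + 1:]), c)
                  for c in candidates]
        s, c = min(scored)  # optimistic: a word's score is its best substitution
        best.append((s, i, c))
    best.sort()  # ascending model score = most promising positions first
    return [(i, c) for _, i, c in best]
```

+ The returned list fixes the order in which positions are perturbed and which substitution each position uses.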
335
+
336
+ Budget Effect Intuitively, higher budgets better approximate an exhaustive search, and thus the robustness evaluation, as an upper bound, should approach the true value. However, due to the lack of backtracking in some of the attack strategies, they may plateau early on. In this work, we used $B = 1000$ for all training phases and $B = 2000$ for the robustness evaluation. Empirically, this gives a good estimate of the upper bound on the model's robust accuracy, while constraining the computational power needed for the experiments. A natural question is how much tighter the bounds may become if a larger budget is given. Fig. 6 depicts an evaluation of the strategies' success rates over the same models as in Fig. 4 with a larger budget. As can be seen, while the RANDOM attack and TxF plateau, the BFF variants as well as GENATTACK are able to exploit the larger budget to fool the model in more cases. This is especially true on IMDB, where the search space is considerably larger. We expect this trend of tighter bounds to continue with ever larger budgets, though we note that the rate of improvement decreases with budget and that the ranking between strategies remains unchanged. Therefore, we conclude that a budget of 2,000 suffices for comparing strategies and for drawing conclusions about robustness improvements.
337
+
338
+ # A.2 Attack Space Implementation Details
339
+
340
+ As described in §5.2, we use the synonyms dictionary defined by Alzantot et al. (2018). In particular, we use the pre-computed set of those synonyms given by Jia et al. (2019) as our basis for $\operatorname{Syn}(w)$ . We pre-process the entire development and training data and store, for each utterance, the set $\operatorname{Syn}_{\Phi}(w)$ , avoiding the need to employ large language models during training and robustness evaluation. For every word in an utterance $w_i \in x$ , and for every $\bar{w}_i \in \operatorname{Syn}(w_i)$ , we evaluate $\Phi(w_i, \bar{w}_i)$ as follows:
341
+
342
+ ![](images/afce736e7a74e6da5c686525f66c7bf7efc65181707a6127e17e677063133e6a.jpg)
343
+ Figure 6: Success rate of different attacks against BoolQ/IMDB BASELINE as a function of the budget.
344
+
345
+
346
+
347
+ 1. With the same sequences as above, we validate that $POS(w_{i}) \equiv POS(\bar{w}_{i})$ according to spaCy's (Honnibal et al.) POS tagger.
348
+ 2. With a window of size 101, we validate that $\mathrm{PPL}(x) / \mathrm{PPL}(\bar{x})\geq 0.8$ , where $\mathrm{PPL}(\cdot)$ is the perplexity of the sequence as given by a pre-trained GPT-2 model (Radford et al., 2019).
349
+ 3. For BoolQ only, we also use spaCy's POS tagger to tag all content words (namely NOUN, PROPN, ADV, and ADJ) in the question. We then restrict all those words from being perturbed in the passage.
350
+ 4. Following Jin et al. (2020), we take a window of size 15 around the word, and validate with USE (Cer et al., 2018) that the semantic similarity between the unperturbed sequence $(w_{i-7},\ldots,w_i,\ldots,w_{i+7})$ and the perturbed sequence $(w_{i-7},\ldots,\bar{w}_i,\ldots,w_{i+7})$ is at least 0.7.
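+ The four checks above can be wired together as a single filter. Below is a sketch under stated assumptions: `pos_tag`, `perplexity` and `use_similarity` are hypothetical stand-ins for spaCy's POS tagger, GPT-2 perplexity (computed over a window of 101 in the real setup) and USE similarity:

```python
def make_phi(pos_tag, perplexity, use_similarity, question_content_words=None):
    """Compose the four filters described above. Thresholds (0.8 for the
    perplexity ratio, 0.7 for USE similarity) follow the text; the three
    scorer arguments are hypothetical stand-ins for the real models."""
    def phi(sentence, i, substitute):
        w = sentence[i]
        # 1. The POS tag must be preserved.
        if pos_tag(w) != pos_tag(substitute):
            return False
        # 2. Perplexity must not degrade too much: PPL(x)/PPL(x_bar) >= 0.8.
        perturbed = sentence[:i] + [substitute] + sentence[i + 1:]
        if perplexity(sentence) / perplexity(perturbed) < 0.8:
            return False
        # 3. (BoolQ only) content words of the question may not be perturbed.
        if question_content_words and w in question_content_words:
            return False
        # 4. Local semantic similarity over a window of 15 words (USE >= 0.7).
        lo, hi = max(0, i - 7), i + 8
        if use_similarity(sentence[lo:hi], perturbed[lo:hi]) < 0.7:
            return False
        return True
    return phi
```

+ In the paper's pipeline these checks are run once, offline, to pre-compute $\operatorname{Syn}_{\Phi}(w)$ for every utterance.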
351
+
352
+ # A.3 Genetic Attack Implementation Details
353
+
354
+ Our implementation of Gen-Attack, presented by Alzantot et al. (2018), was based on https://github.com/nesl/nlp_adversarial_examples/blob/master/attacks.py and used our attack space rather than the original attack space presented there. For evaluation, we used the hyperparameters defined in the paper, namely population-size $p := 20$ , maximum generations $g := 100$ and softmax-temperature $= 0.3$ . Note we did not need to limit the number of candidate synonyms considered, as this was already done during attack space construction. However, we made the following modifications to the original algorithm in order to adapt it to our setting.
355
+
356
+
357
+
358
+ Maximal modification constraints While the original algorithm presented by Alzantot et al. (2019) contained a clipping phase where mutated samples were clipped to match a maximal norm constraint, the adapted version for discrete attacks presented in Alzantot et al. (2018) did not. As we wish to limit the allowed number of perturbations for any single input utterance, and the crossover phase followed by the perturb sub-routine can easily overstep this limit, we added a post-perturb phase. Namely, when creating each generation, after the crossover and mutation (i.e., perturb) sub-routines create a candidate child, if the total number of perturbed words exceeds the limit, we revert perturbed words uniformly at random until the limit is reached. This step introduces another level of randomness into the process. We experimented with reverting based on the replacement probability used in the perturb sub-routine, but this produced sub-par results.
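+ The post-perturb reversion can be sketched as below (a minimal illustration; utterances are word lists, and the seeded `rng` is only for reproducibility, not part of the original algorithm):

```python
import random

def revert_to_limit(original, child, limit, rng=None):
    """If a candidate child perturbs more positions than allowed, revert
    uniformly random perturbed positions back to the original words
    until the modification limit is respected."""
    rng = rng or random.Random(0)
    child = list(child)
    perturbed = [i for i, (a, b) in enumerate(zip(original, child)) if a != b]
    while len(perturbed) > limit:
        i = perturbed.pop(rng.randrange(len(perturbed)))
        child[i] = original[i]  # revert this perturbation
    return child
```

+ The randomness of which positions are reverted is what adds the extra level of randomness mentioned above.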
359
+
360
+ Improved Efficiency In addition to estimating the fitness function of each child in a generation, which requires a forward pass through the attacked model, Alzantot et al. (2018) also used a greedy step in the perturb sub-routine to estimate the fitness of each synonym mutation for a chosen position. This results in an extremely high number of forward passes through the model, specifically $\mathcal{O}(g\cdot p\cdot (k + 1))$ , which is orders of magnitude larger than our allowed budget of 2000. However, many of the passes are redundant, so by caching previous results, the attack strategy can better utilize its allocated budget, resulting in a significantly better success rate with better efficiency.
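+ A sketch of such caching (an assumed structure, not the actual implementation): the scoring function is wrapped so that repeated queries for the same utterance do not consume the forward-pass budget:

```python
def make_cached_scorer(model_score, budget):
    """Wrap a scoring function (hypothetical `model_score`, one forward
    pass per call) so that repeated queries for the same utterance are
    served from a cache instead of consuming the attack budget."""
    cache, calls = {}, {"model": 0}

    def score(utterance):
        key = tuple(utterance)
        if key not in cache:
            if calls["model"] >= budget:
                raise RuntimeError("attack budget exhausted")
            calls["model"] += 1  # a genuine forward pass
            cache[key] = model_score(utterance)
        return cache[key]

    return score, calls
```

+ With this wrapper, only distinct utterances count against the budget, so the genetic attack's many repeated fitness queries become essentially free.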
361
+
362
+ # A.4 Attack Space in Prior Work
363
+
364
+ Examining the attack space proposed in Jin et al. (2020), which includes a larger synonym dictionary and a different filtering function $\Phi(\cdot)$ , we observe that many adversarial examples are difficult to understand or are not label-preserving. Table 6 shows examples from an implementation of the attack space of the recent TEXTFOOLER (Jin et al., 2020). We observe that while in IMDB the labels remain mostly unchanged, many passages are difficult to understand. Moreover, we observe frequent label flips in datasets such as SST-2, as well as perturbations in BoolQ that leave the question unanswerable.
365
+
366
+
367
+
368
+ <table><tr><td>Passage: Table of prime factors – The number 1 is called a unit. It has no incipient [prime] factors and is neither first [prime] nor composite.
369
+ Question: is 1 a prime factor of every number
370
+ Answer: False</td></tr><tr><td>Passage: Panama Canal – The nouvelle [new] locks commences [opened] for commercial vehicular [traffic] on 26 June 2016, and the first yacht [ship] to intersecting [cross] the canal using the third set of locks was a modern New Panamax vessel, the Chinese-owned container warships [ship] Cosco Shipping Panama. The original locks, now over 100 centuries [years] old, give [allow] engineer [engineers] best [greater] access for maintenance, and are hoped [projected] to continue workplace [operating] indefinitely.
371
+ Question: is the old panama canal still in use
372
+ Answer: True</td></tr><tr><td>Passage: Chevrolet Avalanche – The Chevrolet Avalanche is a four-door, five or eight [six] commuter [passenger] harvest [pickup] trucking [truck] stocks [sharing] GM&#x27;s long-wheelbase frame [chassis] used on the Chevrolet Suburban and Cadillac Escalade ESV. Breaking with a long-standing tradition, the Avalanche was not affordable [available] as a GMC, but only as a Chevrolet.
373
+ Question: is there a gmc version of the avalanche
374
+ Answer: False</td></tr><tr><td>Sentence: I&#x27;ve been waiting for this movie for SO many years! The best part is that it decedent [lives] up to my visions! This is a MUST SEE for any Tenacious D or true Jack Black fan. It&#x27;s just once [so] great to see JB, KG and Lee on the big screen! It&#x27;s not a authentic [true] story, but who cares. The D is the greatest band on earth! I had the soundtrack to the movie last week and heeded [listed] to it non-stop. To see the movie was unadulterated [pure] bliss for me and my hubby. We&#x27;ve both met Jack and Kyle after 2 different Tenacious D concerts and also saw them when they toured with Weezer. We left that concert after the D was done playing. Nobody can top their show! Long live the D!!! :D
375
+ Answer: True</td></tr><tr><td>Sentence: Sweet, kidding [entertaining] tale of a young 17 1/2 year old boy, controlled by an overbearing religious mother and withdrawn father, and how he finds himself through his work with a retired, eccentric and tragic actress. Very better [well] acted, especially by Julie Walters. Rupert Grint plays the role of the teenage boy well, showing his talent will last longer than the Harry Potter series of films. Laura Linney plays his ruthlessly strict mother without a hint of redemption, so there&#x27;s no room to like her at all. But the film is a awfully [very] antics [entertaining] film, made well by the British in the style of the likes of Keeping Mum and Calendar Girls.
376
+ Answer: True</td></tr><tr><td>Sentence: Enormous adjourned [suspension] of disbelief is required where Will&#x27;s &quot;genius&quot; is concerned. Not just in math-he is also very well reads [read] in economic history, able to out-shrink several shrinks, etc etc. No, no, no. I don&#x27;t buy it. While they&#x27;re at it, they might as well have him wearing a big &quot;S&quot; on his chest, flying faster than a jet plane and stopping bullets.&lt;br / &gt; &lt;br / &gt;Among other problems...real genius (shelving for the moment the problem of what it really is, and whether it deserves such mindless homage) doesn&#x27;t simply appear /ex nihil/o/. It isn&#x27;t ever so multi-faceted. And it is very virtually [rarely] appreciates [appreciated] by contemporaries.&lt;br / &gt;Better to have made Will a basketball prodigy. Except that Damon&#x27;s too short.
377
+ Answer: False</td></tr><tr><td>Sentence: it proves quite unconvincing [compelling] as an intense , brooding character study .
378
+ Answer: True</td></tr><tr><td>Sentence: an sensible [unwise] amalgam of broadcast news and vibes . an sensible amalgam of broadcast news and vibes .
379
+ Answer: False</td></tr><tr><td>Sentence: if you dig on david mamet &#x27;s mind tricks ... rent this movie and iike [enjoy] !
380
+ Answer: True</td></tr></table>
381
+
382
+ Table 6: Examples of adversarial examples found for BoolQ/IMDB/SST-2 with the attack space from Jin et al. (2020), which are difficult to understand or are not label-preserving. In **bold** are the substituting words; the original words are shown in brackets.
achievingmodelrobustnessthroughdiscreteadversarialtraining/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:91633a02bc0b41581d4c4ae4ce8724d75371d4ff39d9ca5f451df4ed8e034ef7
3
+ size 782682
achievingmodelrobustnessthroughdiscreteadversarialtraining/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a6d1f969d58f11f530737771f1d9311f9f0d1b3904916f33dbc1fb1bc27e91bd
3
+ size 543331
activeeaactivelearningforneuralentityalignment/6c276802-f88b-4712-a9bd-2af9ec2f28f3_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:cbebeeb3a46bd05e2c51ba901ae4a98c45c5cea22c278299368616e9869e22ab
3
+ size 85239
activeeaactivelearningforneuralentityalignment/6c276802-f88b-4712-a9bd-2af9ec2f28f3_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:61cd3704b5abe2b3b6303d5f15c75364f2adc2e574d3b5f345e3ec4ff82b929e
3
+ size 102449
activeeaactivelearningforneuralentityalignment/6c276802-f88b-4712-a9bd-2af9ec2f28f3_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b77bf05440b0d21ec56e3611a2f320f6e7c86483f464ec429ee862f58918065b
3
+ size 586182
activeeaactivelearningforneuralentityalignment/full.md ADDED
@@ -0,0 +1,388 @@
 
 
 
 
 
 
 
 
 
1
+ # ActiveEA: Active Learning for Neural Entity Alignment
2
+
3
+ Bing Liu<sup>1,2</sup>, Harrison Scells<sup>1</sup>, Guido Zuccon<sup>1</sup>, Wen Hua<sup>1</sup>, Genghong Zhao<sup>2</sup>
4
+
5
+ <sup>1</sup>The University of Queensland, Australia
6
+
7
+ $^{2}$ Neusoft Research of Intelligent Healthcare Technology, Co. Ltd., China
8
+
9
+ {bing.liu, h.scells, g.zuccon, w.hua}@uq.edu.au
10
+
11
+ zhaogenghong@neusoft.com
12
+
13
+ # Abstract
14
+
15
+ Entity Alignment (EA) aims to match equivalent entities across different Knowledge Graphs (KGs) and is an essential step of KG fusion. Current mainstream methods – neural EA models – rely on training with seed alignment, i.e., a set of pre-aligned entity pairs which are very costly to annotate. In this paper, we devise a novel Active Learning (AL) framework for neural EA, aiming to create highly informative seed alignment to obtain more effective EA models with less annotation cost. Our framework tackles two main challenges encountered when applying AL to EA:
16
+
17
+ (1) How to exploit dependencies between entities within the AL strategy. Most AL strategies assume that the data instances to sample are independent and identically distributed. However, entities in KGs are related. To address this challenge, we propose a structure-aware uncertainty sampling strategy that can measure the uncertainty of each entity as well as its impact on its neighbour entities in the KG.
18
+ (2) How to recognise entities that appear in one KG but not in the other KG (i.e., bachelors). Identifying bachelors would likely save annotation budget. To address this challenge, we devise a bachelor recognizer that pays particular attention to alleviating the effect of sampling bias.
19
+
20
+ Empirical results show that our proposed AL strategy can significantly improve sampling quality with good generality across different datasets, EA models and amount of bachelors.
21
+
22
+ # 1 Introduction
23
+
24
+ Knowledge Graphs (KGs) store entities and their relationships with a graph structure and are used as knowledge drivers in many applications (Ji et al., 2020). Existing KGs are often incomplete but complementary to each other. A popular approach used to tackle this problem is KG fusion, which attempts to combine several KGs into a single, comprehensive one. Entity Alignment (EA) is an essential step for KG fusion: it identifies equivalent entities across different KGs, supporting the unification of their complementary knowledge. For example, in Fig. 1 Donald Trump and US in the first KG correspond to D.J. Trump and America respectively in the second KG. By aligning them, the political and business knowledge about Donald Trump can be integrated within one KG.
25
+
26
+ ![](images/893f6bd1c1033f7f2447eb80ffc34b5377173614f88c09d605a0429fff520a78.jpg)
27
+ Figure 1: An example of Entity Alignment.
28
+
29
+
30
+
31
+ Neural models (Chen et al., 2017, 2018; Wang et al., 2018; Cao et al., 2019) are the current state-of-the-art in EA and are capable of matching entities in an end-to-end manner. Typically, these neural EA models rely on a seed alignment as training data which is very labour-intensive to annotate. However, previous EA research has assumed the availability of such seed alignment and ignored the cost involved with their annotation. In this paper, we seek to reduce the cost of annotating seed alignment data, by investigating methods capable of selecting the most informative entities for labelling so as to obtain the best EA model with the least annotation cost: we do so using Active Learning. Active Learning (AL) (Aggarwal et al., 2014) is a Machine Learning (ML) paradigm where the annotation of data and the training of a model are performed iteratively so that the sampled data is highly informative for training the model. Though many general AL strategies have been proposed (Settles, 2012; Ren et al., 2020), there are some unique challenges in applying AL to EA.
32
+
33
+ The first challenge is how to exploit the dependencies between entities. In the EA task, neighbouring entities (context) in the KGs naturally affect each other. For example, in the two KGs of Fig. 1, we can infer US corresponds to America if we already know that Donald Trump and D.J. Trump refer to the same person: this is because a single person can only be the president of one country. Therefore, when we estimate the value of annotating an entity, we should consider its impact on its context in the KG. Most AL strategies assume data instances are independent and identically distributed, and thus cannot capture dependencies between entities (Aggarwal et al., 2014). In addition, neural EA models exploit the structure of KGs in different and implicit ways (Sun et al., 2020b). It is not easy to find a general way of measuring the effect of entities on others.
34
+
35
+ The second challenge is how to recognize the entities in a KG that do not have a counterpart in the other KG (i.e., bachelors). In the first KG of Fig. 1, Donald Trump and US are matchable entities while New York City and Republican Party are bachelors. Selecting bachelors to annotate will not lead to any aligned entity pair. The impacts of recognizing bachelors are twofold:
36
+
37
+ 1. From the perspective of data annotation, recognizing bachelors would automatically save annotation budget (because annotators will try to seek a corresponding entity for some time before giving up) and allow annotators to put their effort in labelling matchable entities. This is particularly important for the existing neural EA models, which only consider matchable entities for training: thus selecting bachelors in these cases is a waste of annotation budget.
38
+ 2. From the perspective of EA, bachelor recognition remedies the limitation of existing EA models that assume all entities to align are matchable, and would enable them to be better used in practice (i.e., real-life KGs where bachelors are popular).
39
+
40
+ To address these challenges, we propose a novel AL framework for EA. Our framework follows the typical AL process: entities are sampled iteratively, and in each iteration a batch of entities with the highest acquisition scores is selected. Our novel acquisition function consists of two components: a structure-aware uncertainty measurement module and a bachelor recognizer. The structure-aware uncertainty can reflect the uncertainty of a single entity as well as the influence of that entity in the context of the KG, i.e., how many uncertainties it can help its neighbours eliminate. In addition, we design a bachelor recognizer based on Graph Convolutional Networks (GCNs). Because the bachelor recognizer is trained on the sampled data and used to predict the remaining data, it may suffer from the bias between these two groups of data (introduced by the preferences of the sampling strategy). We apply model ensembling to alleviate this problem.
41
+
42
+
43
+
44
+ Our major contributions in this paper are:
45
+
46
+ 1. A novel AL framework for neural EA, which can produce more informative data for training EA models while reducing the labour cost involved in annotation. To our knowledge, this is the first AL framework for neural EA.
47
+ 2. A structure-aware uncertainty sampling strategy, which models uncertainty sampling and the relation between entities in a single AL strategy.
48
+ 3. An investigation of bachelor recognition, which can reduce the cost of data annotation and remedy the defect of existing EA models.
49
+ 4. Extensive experimental results that show our proposed AL strategy can significantly improve the quality of data sampling and has good generality across different datasets, EA models, and bachelor quantities.
50
+
51
+ # 2 Background
52
+
53
+ # 2.1 Entity Alignment
54
+
55
+ Entity alignment is typically performed between two KGs $\mathcal{G}^1$ and $\mathcal{G}^2$ , whose entity sets are denoted as $\mathcal{E}^1$ and $\mathcal{E}^2$ respectively. The goal of EA is to find the equivalent entity pairs $\mathcal{A} = \{(e^1, e^2) \in \mathcal{E}^1 \times \mathcal{E}^2 | e^1 \sim e^2\}$ , where $\sim$ denotes an equivalence relationship and is usually assumed to be a one-to-one mapping. In supervised and semi-supervised models, a subset of the alignment $\mathcal{A}^{seed} \subset \mathcal{A}$ , called the seed alignment, is annotated manually beforehand and used as training data. The remaining alignments form the test set $\mathcal{A}^{test} = \mathcal{A} \setminus \mathcal{A}^{seed}$ . The core of an EA model $F$ is a scoring function $F(e^1, e^2)$ , which takes two entities as input and returns a score for how likely they are to match. The effectiveness of an EA model is essentially determined by $\mathcal{A}^{seed}$ , and we thus denote it as $m(\mathcal{A}^{seed})$ .
56
+
57
+ # 2.2 Active Learning
58
+
59
+ An AL framework consists of two components: (1) an oracle (annotation expert), which provides labels for the queries (data instances to label), and (2) a query system, which selects the most informative data instances as queries. In the pool-based scenario, there is a pool of unlabelled data $\mathcal{U}$ . Given a budget $B$ , some instances $\mathcal{U}_{\pi,B}$ are selected from the pool following a strategy $\pi$ and sent to the experts to annotate, who produce a training set $\mathcal{L}_{\pi,B}$ . We train the model on $\mathcal{L}_{\pi,B}$ and the effectiveness $m(\mathcal{L}_{\pi,B})$ of the obtained model reflects how good the strategy $\pi$ is. The goal is to design an optimal strategy $\pi_*$ such that $\pi_* = \operatorname{argmax}_{\pi} m(\mathcal{L}_{\pi,B})$ .
60
+
61
+ ![](images/4732199d79631663c8673e11223dfee9c11f318cfe08719c484e4b85821f926d.jpg)
62
+ Figure 2: Overview of ActiveEA.
63
+
64
+
65
# 3 ActiveEA: Active Entity Alignment

# 3.1 Problem Definition

Given two KGs $\mathcal{G}^1, \mathcal{G}^2$ with entity sets $\mathcal{E}^1, \mathcal{E}^2$, an EA model $F$, and a budget $B$, an AL strategy $\pi$ is applied to select a set of entities $\mathcal{U}_{\pi,B}$, whose counterpart entities are then labelled by the annotators to obtain the labelled data $\mathcal{L}_{\pi,B}$. $\mathcal{L}_{\pi,B}$ consists of the annotations of matchable entities $\mathcal{L}_{\pi,B}^{+}$, which form the seed alignment $\mathcal{A}_{\pi,B}^{seed}$, and those of bachelors $\mathcal{L}_{\pi,B}^{-}$. We measure the effectiveness $m(\mathcal{A}_{\pi,B}^{seed})$ of the AL strategy $\pi$ by training the EA model on $\mathcal{A}_{\pi,B}^{seed}$ and then evaluating it with $\mathcal{A}_{\pi,B}^{test} = \mathcal{A} \setminus \mathcal{A}_{\pi,B}^{seed}$. Our goal is to design an optimal entity sampling strategy $\pi_*$ such that $\pi_* = \operatorname{argmax}_{\pi} m(\mathcal{A}_{\pi,B}^{seed})$.

In our annotation setting, we select entities from one KG and let the annotators identify their counterparts in the other KG. Under this setting, the pool of unlabelled entities is initialized as $\mathcal{U} = \mathcal{E}^1$. The labelled data then take the form $\mathcal{L}_{\pi,B}^{+} = \{(e^{1} \in \mathcal{E}^{1}, e^{2} \in \mathcal{E}^{2})\}$ and $\mathcal{L}_{\pi,B}^{-} = \{(e^{1} \in \mathcal{E}^{1}, null)\}$.
# 3.2 Framework Overview

The whole annotation process, as shown in Fig. 2, is carried out iteratively. In each iteration, the query system selects $N$ entities from $\mathcal{U}$ and sends them to the annotators. The query system includes (1) a structure-aware uncertainty measurement module $f^{su}$, which combines uncertainty sampling with the structure information of the KGs, and (2) a bachelor recognizer $f^b$, which helps avoid selecting bachelor entities. The final acquisition function $f^{\pi}$, used to select which entities to annotate, is obtained by combining the outputs of these two modules. After the annotators assign the ground-truth counterparts to the selected entities, the new annotations are added to the labelled data $\mathcal{L}$. With the updated $\mathcal{L}$, the query system updates the EA model and the bachelor recognizer. This process repeats until no budget remains. To simplify the presentation, we omit the sampling iteration index when explaining the details.
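The iterative annotation process described above can be sketched as a plain Python loop. This is a minimal illustration, not the paper's implementation: the helper names (`select`, `annotate`, `retrain`) and the callback interface are assumptions for the sake of the sketch.

```python
def active_ea_loop(pool, budget, batch_size, select, annotate, retrain):
    """Iterative annotation loop of Sec. 3.2 (hypothetical helper names).

    pool       -- unlabelled entities from G1
    budget     -- total number of entities that may be annotated
    batch_size -- N, the number of entities queried per iteration
    select     -- acquisition function: (pool, state, n) -> n entities
    annotate   -- oracle: entity -> counterpart, or None for a bachelor
    retrain    -- rebuilds the EA model and bachelor recognizer from labels
    """
    labelled, state = [], None
    while budget > 0 and pool:
        n = min(batch_size, budget, len(pool))
        batch = select(pool, state, n)
        for e in batch:
            pool.remove(e)
            labelled.append((e, annotate(e)))  # None marks a bachelor
        state = retrain(labelled)  # refresh f_su and f_b for the next round
        budget -= n
    return labelled
```

The loop stops either when the budget is exhausted or when the pool is empty, matching the "repeat until no budget remains" description.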
# 3.3 Structure-aware Uncertainty Sampling

We define the influence of an entity on its context as the amount of uncertainty it can help its neighbours remove. As such, we formulate the structure-aware uncertainty $f^{su}$ as

$$
f^{su}(e_i^1) = \alpha \sum_{e_i^1 \to e_j^1,\, e_j^1 \in \mathcal{N}_i^{out}} w_{ij}\, f^{su}(e_j^1) + (1 - \alpha)\, \frac{f^u(e_i^1)}{\sum_{e^1 \in \mathcal{E}^1} f^u(e^1)}, \tag{1}
$$

where $\mathcal{N}_i^{out}$ is the set of outbound neighbours of entity $e_i^1$ (i.e. the entities referred to by $e_i^1$) and $w_{ij}$ measures the extent to which $e_i^1$ can help $e_j^1$ eliminate uncertainty. The parameter $\alpha$ controls the trade-off between the impact of entity $e_i^1$ on its context (first term) and its normalized uncertainty (second term). The function $f^u(e^1)$ is the margin-based uncertainty of an entity. For each entity $e^1$, the EA model returns matching scores $F(e^1, e^2)$ for all unaligned entities $e^2$ in $\mathcal{G}^2$. Since these scores are not probabilities in existing models, we adopt the margin-based uncertainty measure in Eq. 2:
$$
f^u(e^1) = -\left(F(e^1, e_*^2) - F(e^1, e_{**}^2)\right) \tag{2}
$$

where $F(e^{1}, e_{*}^{2})$ and $F(e^{1}, e_{**}^{2})$ are the highest and second highest matching scores respectively. A large margin corresponds to a small uncertainty.
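The margin measure of Eq. 2 is straightforward to compute from one entity's row of matching scores; a minimal sketch:

```python
def margin_uncertainty(scores):
    """Eq. 2: negative gap between the best and second-best matching
    scores of one G1 entity against all G2 candidates."""
    top1, top2 = sorted(scores, reverse=True)[:2]
    return -(top1 - top2)
```

A confidently matched entity (a clear winner) receives a strongly negative value, while a near-tie between the top two candidates approaches zero, i.e. maximal uncertainty.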
For each entity $e_j^1$, we assume its inbound neighbours can help it clear all of its uncertainty. Then we have $\sum_{e_i^1 \to e_j^1, e_i^1 \in \mathcal{N}_j^{in}} w_{ij} = 1$, where $\mathcal{N}_j^{in}$ is the inbound neighbour set of $e_j^1$. In this work, we assume all inbound neighbours have the same impact on $e_j^1$, in which case $w_{ij} = \frac{1}{\mathrm{degree}(e_j^1)}$, where $\mathrm{degree}(\cdot)$ returns the in-degree of an entity.

Using matrix notation, Eq. 1 can be rewritten as

$$
\mathbf{f}^{su} = \alpha \mathbf{W} \mathbf{f}^{su} + (1 - \alpha) \frac{\mathbf{f}^u}{|\mathbf{f}^u|},
$$

where $\mathbf{f}^{su}$ is the vector of structure-aware uncertainties, $\mathbf{f}^u$ is the vector of uncertainties, and $\mathbf{W}$ is a matrix encoding the influence between entities, i.e., $w_{ij} > 0$ if $e_i^1$ is linked to $e_j^1$, and 0 otherwise.
As $\mathbf{W}$ is a stochastic matrix (Gagniuc, 2017), we solve Eq. 1 iteratively, which can be viewed as the power iteration method (Franceschet, 2011), similar to PageRank (Brin and Page, 1998). Specifically, we initialize the structure-aware uncertainty vector as $\mathbf{f}_0^{su} = \mathbf{f}^u$. Then we update $\mathbf{f}_t^{su}$ iteratively:

$$
\mathbf{f}_t^{su} = \alpha \mathbf{W} \mathbf{f}_{t-1}^{su} + (1 - \alpha) \frac{\mathbf{f}^u}{|\mathbf{f}^u|}, \quad t = 1, 2, 3, \ldots
$$

The computation ends when $|\mathbf{f}_t^{su} - \mathbf{f}_{t-1}^{su}| < \epsilon$.
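The fixed-point iteration can be sketched in plain Python. This is a toy version under stated assumptions: a dense list-of-lists $\mathbf{W}$, non-negative uncertainty values (so the normalizing sum is positive), and an illustrative function name.

```python
def structure_aware_uncertainty(W, f_u, alpha=0.1, eps=1e-6, max_iter=1000):
    """Power iteration for Eq. 1 (toy sketch).

    W   -- W[i][j] = w_ij, the weight of edge e_i -> e_j; for every node j
           the weights from its inbound neighbours sum to 1
    f_u -- per-entity uncertainty values (assumed non-negative here)
    """
    n = len(f_u)
    total = sum(f_u)
    base = [(1 - alpha) * u / total for u in f_u]  # normalized uncertainty term
    f = list(f_u)  # f_0^su = f^u
    for _ in range(max_iter):
        new = [alpha * sum(W[i][j] * f[j] for j in range(n)) + base[i]
               for i in range(n)]
        if sum(abs(a - b) for a, b in zip(new, f)) < eps:  # |f_t - f_{t-1}| < eps
            return new
        f = new
    return f
```

Because the update is a contraction for $\alpha < 1$, the iteration converges geometrically, just as in PageRank with damping.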
# 3.4 Bachelor Recognizer

The bachelor recognizer is formulated as a binary classifier, which is trained on the labelled data and then used to classify the unlabelled data. One challenge here is the bias between the labelled and the unlabelled data caused by the sampling strategy (since it is not random sampling). We alleviate this issue with a model ensemble.

# 3.4.1 Model Structure

We apply two GCNs (Kipf and Welling, 2017; Hamilton et al., 2017) as encoders to obtain the entity embeddings $\mathbf{H}^{1} = \mathbf{GCN}^{1}(\mathcal{G}^{1})$ and $\mathbf{H}^{2} = \mathbf{GCN}^{2}(\mathcal{G}^{2})$, where each row in $\mathbf{H}^1$ or $\mathbf{H}^2$ is the vector representation of a particular entity. The two GCN encoders share the same structure but have separate parameters. In each GCN encoder, each entity $e_i$ is first assigned a vector representation $\mathbf{h}_i^{(0)}$. Then contextual features of each entity are extracted:

$$
\mathbf{h}_i^{(l)} = \operatorname{norm}\Big(\sigma\Big(\sum_{j \in \mathcal{N}_i \cup \{i\}} \mathbf{V}^{(l)} \mathbf{h}_j^{(l-1)} + \mathbf{b}^{(l)}\Big)\Big),
$$

where $l$ is the layer index, $\mathcal{N}_i$ is the set of neighbouring entities of $e_i$, $\sigma$ is the activation function, $\operatorname{norm}(\cdot)$ is a normalization function, and $\mathbf{V}^{(l)}, \mathbf{b}^{(l)}$ are the parameters of the $l$-th layer. The representations of each entity $e_i$ from all GCN layers are concatenated into a single representation $\mathbf{h}_i = \operatorname{concat}(\mathbf{h}_i^{(0)}, \mathbf{h}_i^{(1)}, \dots, \mathbf{h}_i^{(L)})$, where $L$ is the number of GCN layers.

After obtaining the entity representations, we compute the similarities of each entity in $\mathcal{E}^1$ with all entities in $\mathcal{E}^2$ ($\mathbf{S} = \mathbf{H}^1 \cdot \mathbf{H}^{2T}$) and take the maximum matching score $f^{s}(e_{i}^{1}) = \max(\mathbf{S}_{i,:})$. An entity $e_i^1$ whose maximum matching score is greater than a threshold $\gamma$ is considered matchable, i.e. $f^{b}(e_{i}^{1}) = \mathbb{1}_{f^{s}(e_{i}^{1}) > \gamma}$; otherwise it is considered a bachelor.
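A toy version of this scoring-and-thresholding step, with plain lists of floats standing in for the embedding matrices (the function name is illustrative):

```python
def bachelor_predictions(H1, H2, gamma):
    """Sec. 3.4.1 decision rule: dot-product similarity of each G1 entity
    against every G2 entity, keep the maximum, and threshold at gamma."""
    preds = []
    for h1 in H1:
        f_s = max(sum(a * b for a, b in zip(h1, h2)) for h2 in H2)
        preds.append(1 if f_s > gamma else 0)  # 1 = matchable, 0 = bachelor
    return preds
```

In practice the similarity matrix $\mathbf{S}$ would be computed with one dense matrix product rather than a double loop; the loop form only makes the per-entity maximum explicit.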
# 3.4.2 Learning

In each sampling iteration, we train the bachelor recognizer on the existing annotated data $\mathcal{L}$, which contains matchable entities $\mathcal{L}^{+}$ and bachelors $\mathcal{L}^{-}$. Furthermore, $\mathcal{L}$ is divided into a training set $\mathcal{L}^t$ and a validation set $\mathcal{L}^v$.

We optimize the parameters, i.e. $\{\mathbf{V}^{(l)},\mathbf{b}^{(l)}\}_{1\leq l\leq L}$ of each GCN encoder and the threshold $\gamma$, in two phases, sharing a similar idea with supervised contrastive learning (Khosla et al., 2020). In the first phase, we optimize the scoring function $f^s$ by minimizing the contrastive loss in Eq. 3:

$$
\operatorname{loss} = \sum_{(e_i^1, e_j^2) \in \mathcal{L}^{t,+}} \|\mathbf{h}_i^1 - \mathbf{h}_j^2\| + \beta \sum_{(e_{i'}^1, e_{j'}^2) \in \mathcal{L}^{t,neg}} \left[\lambda - \|\mathbf{h}_{i'}^1 - \mathbf{h}_{j'}^2\|\right]_+ \tag{3}
$$

Here, $\beta$ is a balance factor, $[\cdot]_{+}$ denotes $\max(0, \cdot)$, and $\mathcal{L}^{t,neg}$ is the set of negative samples generated by negative sampling (Sun et al., 2018): for each pre-aligned entity pair in $\mathcal{L}^{+}$, each entity in the pair is substituted $N^{neg}$ times. The distance between negative samples is expected to be larger than the margin $\lambda$. In the second phase, we freeze the trained $f^{s}$ and optimize $\gamma$ for $f^{b}$. It is easy to optimize $\gamma$, e.g. by simple grid search, so that $f^{b}$ achieves the highest performance on $\mathcal{L}^{v}$ (denoted as $q(f^{s}, \gamma, \mathcal{L}^{v})$):

$$
\gamma^* = \operatorname{argmax}_{\gamma} q(f^s, \gamma, \mathcal{L}^v).
$$
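The second phase is a one-dimensional search. A sketch, with classification accuracy standing in for the quality measure $q$ (the paper does not pin $q$ down at this point, so accuracy here is an assumption):

```python
def fit_threshold(scores, labels, candidates):
    """Grid-search gamma over candidate values, keeping the one that
    maximises validation quality q (accuracy in this sketch).

    scores -- f_s value for each validation entity
    labels -- 1 if the entity is matchable, 0 if it is a bachelor
    """
    def quality(gamma):
        hits = sum((s > gamma) == bool(y) for s, y in zip(scores, labels))
        return hits / len(labels)
    return max(candidates, key=quality)
```

Since $f^s$ is frozen, each candidate threshold only needs one pass over the cached validation scores, which is why a simple grid search suffices.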
# 3.4.3 Model Ensemble for Sampling Bias

The sampled data may be biased, since they have been preferred by the sampling strategy rather than selected randomly. As a result, even a bachelor recognizer that is well trained on the sampled data may perform poorly on the data yet to be sampled. We apply a model ensemble to alleviate this problem. Specifically, we divide $\mathcal{L}$ evenly into $K$ subsets. Then we apply $K$-fold cross-validation to train $K$ scoring functions $\{f_1^s,\dots,f_K^s\}$, each time using $K - 1$ subsets as the training set and the left-out portion as the validation set. Afterwards, we search for an effective threshold $\gamma$:

$$
\gamma^* = \operatorname{argmax}_{\gamma} \frac{1}{K} \sum_{1 \leq k \leq K} q(f_k^s, \gamma, \mathcal{L}_k^v)
$$

At inference time, we ensemble by averaging the $K$ scoring functions $f_{k}^{s}$ to form the final scoring function $f^{s}$, as in Eq. 4, and base $f^{b}$ on it.

$$
f^s(e_i^1) = \frac{1}{K} \sum_{1 \leq k \leq K} f_k^s(e_i^1) \tag{4}
$$
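Eq. 4 plus the shared threshold gives the inference-time recognizer; a minimal sketch (the closure-based interface is an illustrative choice, not the paper's API):

```python
def ensemble_bachelor_recognizer(scorers, gamma):
    """Average the K fold-specific scoring functions (Eq. 4) and
    threshold the mean score at the shared gamma."""
    def f_b(entity):
        f_s = sum(f(entity) for f in scorers) / len(scorers)
        return 1 if f_s > gamma else 0  # 1 = matchable, 0 = bachelor
    return f_b
```

Averaging over folds smooths out the variance that any single biased training partition introduces, which is the point of the ensemble.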
# 3.5 Final Acquisition Function

We combine our structure-aware uncertainty sampling with the bachelor recognizer to form the final acquisition function:

$$
f^{\pi}(e_i^1) = f^{su}(e_i^1)\, f^b(e_i^1)
$$
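In each iteration the query system then ranks the pool by this product and queries the top $N$; sketched as:

```python
def select_batch(pool, f_su, f_b, n):
    """Sec. 3.5 acquisition: rank entities by f_su(e) * f_b(e), so
    predicted bachelors (f_b = 0) receive zero acquisition score,
    and query the top-n."""
    return sorted(pool, key=lambda e: f_su(e) * f_b(e), reverse=True)[:n]
```

Because $f^b$ is binary, the product acts as a hard mask over the structure-aware uncertainty scores.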
# 4 Experimental Setup

# 4.1 Sampling Strategies

We construct several baselines for comparison:

rand performs random sampling, as used in existing EA work.

degree selects entities with high degrees.

pagerank (Brin and Page, 1998) measures the centrality of entities by considering their degrees as well as the importance of their neighbours.

betweenness (Freeman, 1977) scores an entity by the number of shortest paths passing through it.

uncertainty sampling selects entities that the current EA model cannot predict with confidence. Note that in this work we measure uncertainty using Eq. 2 for a fair comparison.

degree, pagerank and betweenness are purely topology-based and do not consider the current EA model. On the contrary, uncertainty sampling is fully based on the current EA model and cannot capture the structure information of the KG. We compare both our structure-aware uncertainty sampling (struct_uncert) and the full ActiveEA framework with the baselines listed above. We also examine the effect of the Bayesian Transformation, which aims to make deep neural models represent uncertainty more accurately (Gal et al., 2017).
# 4.2 EA Models

We apply our ActiveEA framework to three different EA models, which represent a spread of neural EA models and vary in KG encoding, the information considered, and the training method (Liu et al., 2020; Sun et al., 2018):

BootEA (Sun et al., 2018) encodes the KGs with a translation model (Bordes et al., 2013), exploits the structure of the KGs, and uses self-training.

AliNet (Sun et al., 2020a) also exploits the structure of the KGs but with a GCN-based KG encoder, and is trained in a supervised manner.

RDGCN (Wu et al., 2019) trains a GCN in a supervised manner, like AliNet, but can also incorporate entity attributes.

Our implementations and parameter settings of the models rely on OpenEA$^1$ (Sun et al., 2020b).
# 4.3 Datasets

We use three different datasets: D-W-15K V1 (DW), EN-DE-15K V1 (ENDE), and EN-FR-100K V1 (ENFR), obtained from OpenEA (Sun et al., 2020b). Each dataset contains two KGs and the equivalent entity pairs between them. The KGs in these datasets were sampled from real KGs, i.e. DBpedia (Lehmann et al., 2015), Wikidata (Vrandecic and Krötzsch, 2014), and YAGO (Rebele et al., 2016), which are widely used in the EA community. These datasets differ in terms of KG sources, languages, sizes, etc. We refer the reader to Sun et al. (2020b) for more details.

Existing work on EA assumes all entities in the KGs are matchable, thus only sampling entities with counterparts when producing the datasets. To investigate the influence of bachelors on AL strategies, we synthetically modify the datasets by excluding a portion of entities from the second KG.
# 4.4 Evaluation Metrics

We use Hit@1 as the primary evaluation measure of the EA models. To get an overall evaluation of an AL strategy across different budget sizes, we plot the curve of an EA model's effectiveness with respect to the proportion of annotated entities, and calculate the Area Under the Curve (AUC).
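Under the assumption that the curve is evaluated at a discrete set of annotation proportions, the AUC can be computed with the trapezoidal rule:

```python
def auc(proportions, hit_at_1):
    """Area under the Hit@1 vs. annotation-proportion curve, one
    trapezoid per pair of consecutive evaluation points."""
    area = 0.0
    for i in range(1, len(proportions)):
        width = proportions[i] - proportions[i - 1]
        area += width * (hit_at_1[i] + hit_at_1[i - 1]) / 2
    return area
```

Restricting `proportions` to the range $[0, 0.5]$ yields the AUC@0.5 variant reported in Tab. 1.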
# 4.5 Parameter Settings

We set $\alpha = 0.1$ and $\epsilon = 10^{-6}$ for the structure-aware uncertainty. We use $L = 1$ GCN layer for our bachelor recognizer, with 500 input and 400 output dimensions. We set $K = 5$ for its model ensemble and $\lambda = 1.5$, $\beta = 0.1$, $N^{neg} = 10$ for its training. The sampling batch size is set to $N = 100$ for the 15K data and $N = 1000$ for the 100K data.
# 4.6 Reproducibility Details

Our experiments are run on a GPU cluster. We allocate 50GB of memory and one 32GB NVIDIA Tesla V100 GPU for each job on the 15K data, and 100GB of memory for each job on the 100K data. The training and evaluation of ActiveEA take approximately 3h with AliNet on 15K data, 10h with BootEA on 15K data, 10h with RDGCN on 15K data, and 48h with AliNet on 100K data. Most baseline strategies take less time than ActiveEA on the same dataset, except betweenness on 100K data, which takes more than 48h. We apply grid search to set $\alpha$ and $N$ (shown in Sec. 5.4). Hyper-parameters of the bachelor recognizer are chosen by referring to the settings of OpenEA and our manual trials. Code and datasets are available at https://github.com/UQ-Neusoft-Health-Data-Science/ActiveEA.

![](images/0e7b85123ba3222c91749a6774968a6894d6d99f9ef692b56df5b82ef5241184.jpg)
![](images/bda3231bc48686b818588655bbac0af2ffe2529ef1321e249b137baece035b17.jpg)
![](images/4cba5c7c3a390a14420afc22a2b0d32883b969322b4a28eb2bf83578732c4b0e.jpg)
![](images/5f8b7e8b1185c95dc3a76e2183d3db4119a0971a7936afb10216edf738e25d3f.jpg)
![](images/eb82474157c1146c818f53fb636c20e48f147570d158224bc8e00dc2c914fdda.jpg)
![](images/63395ed3d8db6727ed940e5f916f965349cd8f49b0a455919c0bc022ddad1ebb.jpg)
![](images/2bd8dd0338d5659a228e24a21c15c625e3086152554ee0370a7a40fc4d42c051.jpg)
![](images/d72d1bc0ec5d158ed7ee6f7bb7369fb011ed72de8274bf06ef2c12024889d6d5.jpg)
![](images/119a128161229ef8303dc9603898baa832b3ed4ea2b98f7a48afb8a5d9543303.jpg)
![](images/a11a2ef80b4a2d1a019311ab0a0fe2778e02206ca1abca1068bda8d55a9a9508.jpg)
![](images/ddf54fb954690ae239b2902a8db12a18e9ed2167066d76cf18b23000e71bdc83.jpg)
![](images/efd2acd2e93799c93aa121be77ede3faa6917be363c4ec1aa3a0a056c7798925.jpg)
![](images/ce06a2faaf344939251edcae8bf1567644f575aa255bcc269cfe4aee6f2c1152.jpg)
Figure 3: HIT@1 of sampling strategies for all EA models on DW and ENDE, as the annotation portion increases. The top row shows experiments that do not include bachelors; the bottom row shows experiments that include $30\%$ bachelors. ActiveEA is equivalent to struct_uncert in the absence of bachelors, and is thus shown only for the second row.

![](images/eefb87dab4d0e2eb89e0f3b0a04de6500fdc917f8835d63675b257089f8b5203.jpg)
![](images/c92c1be31c865089f47ff6b2bdfe5802a8cb3b9c7d0b564e40708d2ddb356a40.jpg)
Figure 4: Hit@1 for all sampling strategies with the AliNet EA model on ENFR. Left: experiments without bachelors; right: with $30\%$ bachelors.
# 5 Experimental Results

# 5.1 Comparison with Baselines

Fig. 3 presents the overall performance of each strategy with three EA models on two datasets, each of which we also synthetically modify to include $30\%$ bachelors. We also report the AUC@0.5 values of these curves in Tab. 1. ActiveEA degenerates into struct_uncert when there are no bachelors.

Random Sampling. Random sampling usually performs poorly when the annotation proportion is small, becoming more competitive as the amount of annotations increases. But for most annotation proportions, random sampling exhibits a large gap in performance compared to the best method. This observation highlights the need to investigate data selection for EA.

Topology-based Strategies. The topology-based strategies are effective when few annotations are provided, e.g., $< 20\%$. However, once annotations increase, their effectiveness is often worse than that of random sampling. This may be because these strategies suffer more from the bias between the training set and the test set. Therefore, considering only the structural information of the KGs has considerable drawbacks for EA.
<table><tr><td rowspan="2">Strategy</td><td colspan="4">BootEA</td><td colspan="4">AliNet</td><td colspan="4">RDGCN</td></tr><tr><td>DW (0%)</td><td>DW (30%)</td><td>ENDE (0%)</td><td>ENDE (30%)</td><td>DW (0%)</td><td>DW (30%)</td><td>ENDE (0%)</td><td>ENDE (30%)</td><td>DW (0%)</td><td>DW (30%)</td><td>ENDE (0%)</td><td>ENDE (30%)</td></tr><tr><td>rand</td><td>23.5<sup>n</sup></td><td>17.0</td><td>28.1</td><td>21.3</td><td>19.4</td><td>16.7</td><td>26.0</td><td>23.7</td><td>25.8</td><td>25.0</td><td>41.3<sup>n</sup></td><td>41.0</td></tr><tr><td>degree</td><td>19.5</td><td>16.0</td><td>24.0</td><td>20.0</td><td>17.1</td><td>15.2</td><td>22.2</td><td>20.5</td><td>23.3</td><td>22.9</td><td>39.1</td><td>39.4</td></tr><tr><td>pagerank</td><td>22.3</td><td>18.3</td><td>27.6</td><td>23.0</td><td>19.9</td><td>17.3</td><td>25.8</td><td>24.1</td><td>24.5</td><td>23.9</td><td>40.5</td><td>40.6</td></tr><tr><td>betweenness</td><td>20.5</td><td>16.3</td><td>26.1</td><td>21.1</td><td>17.8</td><td>15.6</td><td>23.7</td><td>22.3</td><td>23.2</td><td>22.7</td><td>40.2</td><td>40.3</td></tr><tr><td>uncertainty</td><td>23.9</td><td>16.1</td><td>29.8</td><td>21.2</td><td>21.6</td><td>15.4</td><td>28.2</td><td>22.2</td><td>24.7</td><td>23.9</td><td>40.9<sup>n</sup></td><td>40.5</td></tr><tr><td>struct_uncert</td><td rowspan="2">26.3</td><td>20.8</td><td>33.6</td><td>27.4</td><td rowspan="2">23.1</td><td>19.1</td><td rowspan="2">30.6</td><td>26.8</td><td rowspan="2">26.5</td><td>25.6</td><td rowspan="2">41.9</td><td>41.0</td></tr><tr><td>ActiveEA</td><td>26.7</td><td>31.5</td><td>31.5</td><td>25.7</td><td>32.8</td><td>28.1</td><td>42.3</td></tr></table>

Table 1: Overall performance (AUC@0.5 $(\%)$) for each sampling strategy. The highest performing strategy in each column is indicated in bold. We run each strategy 5 times; most results for ActiveEA show statistically significant differences over the other methods (paired t-test with Bonferroni correction, $p < 0.05$), except the few cells indicated by $^n$.

Uncertainty Sampling. On the contrary, the uncertainty sampling strategy performs poorly when the proportion of annotations is small but improves after several annotations have been accumulated. One reason for this is that neural EA models cannot learn useful patterns from a small number of annotations. On datasets with bachelors, uncertainty sampling always performs worse than random sampling. Thus, it is clear that uncertainty sampling cannot be applied directly to EA.
+
268
+ Structure-aware Uncertainty Sampling. Structure-aware uncertainty is effective across all annotation proportions. One reason for this is that it combines the advantages of both topology-based strategies and uncertainty sampling. This is essential for AL as it is impossible to predict the amount of annotations required for new datasets.
269
+
270
+ ActiveEA. ActiveEA, which enhances structure-aware sampling with a bachelor recognizer, greatly improves EA when KGs contain bachelors.
271
+
272
+ # 5.1.1 Generality
273
+
274
+ The structure-aware uncertainty sampling mostly outperforms the baselines, while ActiveEA performs even better in almost all cases. ActiveEA also demonstrates generality across datasets, EA models, and bachelor proportions.
275
+
276
+ When the dataset has no bachelors, our uncertainty-aware sampling is exceeded by uncertainty sampling in few large-budget cases. However, the real-world datasets always have bachelors. In this case, our structure-aware uncertainty shows more obvious advantages.
277
+
278
+ In addition, the strategies are less distinguishable when applied to RDGCN. The reason is that RDGCN exploits the name of entities for prealignment and thus all strategies achieve good performance from the start.
279
![](images/ee12a998a10876ac19a930b1a49930a5c67c749108b2556ac6d482f2c.jpg)
Figure 5: Comparison demonstrating the effect of bachelors $(0\%-40\%)$ on the BootEA and AliNet models.

![](images/78137d991cac7e7efc12b9593da6d175dfd7a19664f7d77dd7ae35b8d767ac31.jpg)
![](images/fd59491837423d98762fab5598daeb1493ad2d7b8da8c33715b08045342b4326.jpg)
Figure 6: Comparison demonstrating the effectiveness of the bachelor recognizer and the effect of the model ensemble (ME) on BootEA and AliNet.

![](images/4bb588c85ac62023bd713b637a9da7bbedb60e242e2a37d25075b8335b189464.jpg)

To assess generality across datasets of different sizes, we evaluate the sampling strategies with AliNet on ENFR (100K entities), which is larger than DW and ENDE (15K entities). We choose AliNet because it is more scalable than BootEA and RDGCN (Zhao et al., 2020). Fig. 4 presents results comparable to those on the 15K datasets.
# 5.2 Effect of Bachelors

![](images/3aff202d4c18040458c2673090a3d0e7ca9f27c772d16a2830a13de3826.jpg)
Figure 7: Comparison demonstrating the effects different parameters have on our sampling strategies.

To investigate the effect of bachelors, we removed different amounts of entities randomly (each larger sample contains the subset from the earlier samples) from $\mathcal{G}^2$, so that $\mathcal{G}^1$ had different percentages of bachelors. Fig. 5 shows the results of applying all strategies to these datasets. We further make the following four observations:
1. The performance of all strategies except ActiveEA decreases as the number of bachelors increases. How to avoid selecting bachelors is therefore an important issue in designing AL strategies for EA.
2. Among all strategies, uncertainty sampling is affected the most, while topology-based methods are only marginally affected.
3. Our structure-aware uncertainty sampling outperforms the baselines at all tested bachelor proportions.
4. ActiveEA increases its performance as the proportion of bachelors increases. The reason is that if $\mathcal{G}^1$ is fixed and the bachelors can be recognized successfully, a given budget leads to a larger ratio of annotated matchable entities in datasets with more bachelors than in those with fewer bachelors.
# 5.3 Effectiveness of Bachelor Recognizer

Fig. 6 shows the effectiveness of our bachelor recognizer during the sampling process and the effect of the model ensemble. The green curve shows the Micro-F1 score of our bachelor recognizer using the model ensemble. Our bachelor recognizer achieves high effectiveness from the start of sampling, when there are few annotations. Each red dot represents the performance of the bachelor recognizer trained on a particular data partition without the model ensemble. This performance varies because of the bias problem. The model ensemble therefore gives the trained model high and stable performance.
# 5.4 Sensitivity of Parameters

To investigate the sensitivity of the parameters, we ran our strategy with AliNet and BootEA on two DW variants with bachelor proportions of $0\%$ and $30\%$.
![](images/a19c5eaea573c006a2fbc0682e66dc9f3a97679192dfb1fc608a38aa34698ec8.jpg)
Figure 8: Effect of Bayesian Transformation on uncertainty and ActiveEA across the DW and ENDE datasets and different bachelor percentages.

The sensitivity w.r.t. $\alpha$ is shown in the top row of Fig. 7. We observe that our method is not overly sensitive to $\alpha$: the effectiveness fluctuates when $\alpha < 0.5$ and decreases when $\alpha > 0.5$. This indicates that uncertainty is more informative than structural information. When $\alpha = 0$, our struct_uncert degenerates to uncertainty sampling (Eq. 2). In the upper left plot, we show the corresponding performance with dotted lines. Under most settings of $\alpha$, struct_uncert is much better than uncertainty sampling, which means that introducing structure information is beneficial.

The bottom row of Fig. 7 shows the effect of the sampling batch size $N$. The overall trend is that larger batch sizes decrease performance. This observation confirms the intuition that more frequent updates to the EA model lead to more precise uncertainty estimates. The choice of sampling batch size is therefore a trade-off between computation cost and sampling quality.
# 5.5 Examination of Bayesian Transformation

We enhanced uncertainty sampling and ActiveEA with the Bayesian Transformation, implemented with Monte Carlo (MC) dropout, and applied them to AliNet and RDGCN on DW and ENDE as in Sec. 5.1. Fig. 8 shows the improvements under different settings of the MC dropout rate. We find that (1) the variation of the effect on uncertainty sampling is greater than that on ActiveEA, and (2) the Bayesian Transformation with a small dropout rate (e.g., 0.05) results in slight improvements to ActiveEA in most cases.
# 6 Related Work

Entity Alignment. Entity Alignment refers to the matching of entities across different KGs that refer to the same real-world object. Compared with Entity Resolution (Mudgal et al., 2018), which matches duplicate entities in relational data, EA deals with graph data and emphasizes exploiting the structure of the KGs. Neural models (Chen et al., 2017, 2018; Wang et al., 2018; Cao et al., 2019) have replaced conventional approaches (Jiménez-Ruiz and Grau, 2011; Suchanek et al., 2011) as the core methods in recent years. Typically they rely on seed alignment as training data, which is expensive to annotate. Iterative training (i.e., self-training) has been applied to improve EA models by generating more training data automatically (Sun et al., 2018; Mao et al., 2020). These works concern better training methods given the annotated data; the problem of reducing the cost of annotation has been neglected. Berrendorf et al. (2021) were the first to explore AL strategies for the EA task. They compared several types of AL heuristics, including node centrality, uncertainty, graph coverage, and unmatched entities, and empirically showed the impact of sampling strategies on the creation of the seed alignment. In our work, we highlight the limitations of single heuristics and propose an AL framework that considers structure information, uncertainty sampling, and unmatched entities at the same time. In addition, existing neural models assume all KG entities have counterparts, which is a very strong assumption in reality (Zhao et al., 2020). We provide a solution for recognizing bachelor entities, which is complementary to the existing models.

Active Learning. Active Learning is a general framework for selecting the most informative data to annotate when training Machine Learning models (Aggarwal et al., 2014). The pool-based sampling scenario is a popular AL setting where a base pool of unlabelled instances is available to query from (Settles, 2012; Aggarwal et al., 2014). Our proposed AL framework follows this scenario. Numerous AL strategies have been proposed in the general domain (Aggarwal et al., 2014). Uncertainty sampling is the most widely used because of its ease of implementation and its robust effectiveness (Lewis, 1995; Cohn et al., 1996). However, there are key challenges that general AL strategies cannot solve when applying AL to EA. Most AL strategies are designed under the assumption that the data are independent and identically distributed, whereas KG entities in the EA task are correlated, as in other graph-based tasks, e.g., node classification (Bilgic et al., 2010) and link prediction (Ostapuk et al., 2019). In addition, bachelor entities cause a very particular issue in EA: they may have low informativeness but high uncertainty. We design an AL strategy to solve these special challenges. A few existing works (Qian et al., 2017; Malmi et al., 2017) have applied AL to conventional EA but do not consider neural EA models, which have now become of widespread use. Only Berrendorf et al. (2021) empirically explored general AL strategies for neural EA, but they did not solve the aforementioned challenges.
# 7 Conclusion

Entity Alignment is an essential step in KG fusion. The current mainstream methods for EA are neural models, which rely on seed alignment. The cost of labelling the seed alignment is often high, but the question of how to reduce this cost has been neglected. In this work, we proposed an Active Learning framework (named ActiveEA) that aims to produce the best EA model at the least annotation cost. Specifically, we tackled two key challenges affecting EA that general AL strategies cannot deal with. First, we proposed structure-aware uncertainty sampling, which combines uncertainty sampling with the structure information of the KGs. Second, we designed a bachelor recognizer, which reduces the annotation budget by avoiding the selection of bachelors and, notably, can tolerate sampling bias. Extensive experiments showed that ActiveEA is more effective than the considered baselines and generalizes well across different datasets, EA models, and bachelor percentages.

In the future, we plan to explore combining active learning and self-training, which we believe are complementary approaches. Self-training can generate extra training data automatically but suffers from incorrectly labelled data. This can be addressed by amending incorrectly labelled data using AL strategies.
# Acknowledgements

This research is supported by the Shenyang Science and Technology Plan Fund (No. 20-201-4-10) and the Member Program of Neusoft Research of Intelligent Healthcare Technology, Co. Ltd. (No. NRMP001901). Dr Wen Hua is the recipient of an Australian Research Council DECRA Research Fellowship (DE210100160). Dr Guido Zuccon is the recipient of an Australian Research Council DECRA Research Fellowship (DE180101579).

# References
+ Charu C. Aggarwal, Xiangnan Kong, Quanquan Gu, Jiawei Han, and Philip S. Yu. 2014. Active learning: A survey. In Charu C. Aggarwal, editor, Data Classification: Algorithms and Applications, pages 571-606. CRC Press.
350
+ Max Berrendorf, Evgeniy Faerman, and Volker Tresp. 2021. Active learning for entity alignment. In Advances in Information Retrieval - 43rd European Conference on IR Research, ECIR 2021, Virtual Event, March 28 - April 1, 2021, Proceedings, Part I, volume 12656 of Lecture Notes in Computer Science, pages 48-62. Springer.
351
+ Mustafa Bilgic, Lilyana Mihalkova, and Lise Getoor. 2010. Active learning for networked data. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), June 21-24, 2010, Haifa, Israel, pages 79-86. Omnipress.
352
+ Antoine Bordes, Nicolas Usunier, Alberto García-Durán, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States, pages 2787-2795.
353
+ Sergey Brin and Lawrence Page. 1998. The anatomy of a large-scale hypertextual web search engine. Comput. Networks, 30(1-7):107-117.
354
+ Yixin Cao, Zhiyuan Liu, Chengjiang Li, Zhiyuan Liu, Juanzi Li, and Tat-Seng Chua. 2019. Multi-channel graph neural network for entity alignment. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 1452-1461. Association for Computational Linguistics.
355
+ Muhao Chen, Yingtao Tian, Kai-Wei Chang, Steven Skiena, and Carlo Zaniolo. 2018. Co-training embeddings of knowledge graphs and entity descriptions for cross-lingual entity alignment. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden, pages 3998-4004. ijcai.org.
356
+ Muhao Chen, Yingtao Tian, Mohan Yang, and Carlo Zaniolo. 2017. Multilingual knowledge graph embeddings for cross-lingual knowledge alignment. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017, Melbourne, Australia, August 19-25, 2017, pages 1511-1517. ijcai.org.
357
+ David A. Cohn, Zoubin Ghahramani, and Michael I. Jordan. 1996. Active learning with statistical models. J. Artif. Intell. Res., 4:129-145.
358
+
359
+ Massimo Franceschet. 2011. Pagerank: standing on the shoulders of giants. Commun. ACM, 54(6):92-101.
360
+ Linton C Freeman. 1977. A set of measures of centrality based on betweenness. Sociometry, pages 35-41.
361
+ Paul A Gagniuc. 2017. Markov chains: from theory to implementation and experimentation. John Wiley & Sons.
362
+ Yarin Gal, Riashat Islam, and Zoubin Ghahramani. 2017. Deep bayesian active learning with image data. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, pages 1183-1192. PMLR.
363
+ William L. Hamilton, Zhitao Ying, and Jure Leskovec. 2017. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 1024-1034.
364
+ Shaoxiong Ji, Shirui Pan, Erik Cambria, Pekka Marttinen, and Philip S. Yu. 2020. A survey on knowledge graphs: Representation, acquisition and applications. CoRR, abs/2002.00388.
365
+ Ernesto Jiménez-Ruiz and Bernardo Cuenca Grau. 2011. Logmap: Logic-based and scalable ontology matching. In The Semantic Web - ISWC 2011 - 10th International Semantic Web Conference, Bonn, Germany, October 23-27, 2011, Proceedings, Part I, volume 7031 of Lecture Notes in Computer Science, pages 273-288. Springer.
366
+ Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. 2020. Supervised contrastive learning. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
367
+ Thomas N. Kipf and Max Welling. 2017. Semi-supervised classification with graph convolutional networks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.
368
+ Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N. Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick van Kleef, Sören Auer, and Christian Bizer. 2015. DBpedia - A large-scale, multilingual knowledge base extracted from Wikipedia. Semantic Web, 6(2):167-195.
369
+ David D. Lewis. 1995. A sequential algorithm for training text classifiers: Corrigendum and additional data. SIGIR Forum, 29(2):13-19.
370
+
371
+ Zhiyuan Liu, Yixin Cao, Liangming Pan, Juanzi Li, and Tat-Seng Chua. 2020. Exploring and evaluating attributes, values, and structures for entity alignment. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 6355-6364. Association for Computational Linguistics.
372
+ Eric Malmi, Aristides Gionis, and Evimaria Terzi. 2017. Active network alignment: A matching-based approach. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, CIKM 2017, Singapore, November 06 - 10, 2017, pages 1687-1696. ACM.
373
+ Xin Mao, Wenting Wang, Huimin Xu, Man Lan, and Yuanbin Wu. 2020. MRAEA: an efficient and robust entity alignment approach for cross-lingual knowledge graph. In WSDM '20: The Thirteenth ACM International Conference on Web Search and Data Mining, Houston, TX, USA, February 3-7, 2020, pages 420-428. ACM.
374
+ Sidharth Mudgal, Han Li, Theodoros Rekatsinas, AnHai Doan, Youngchoon Park, Ganesh Krishnan, Rohit Deep, Esteban Arcaute, and Vijay Raghavendra. 2018. Deep learning for entity matching: A design space exploration. In Proceedings of the 2018 International Conference on Management of Data, SIGMOD Conference 2018, Houston, TX, USA, June 10-15, 2018, pages 19-34. ACM.
375
+ Natalia Ostapuk, Jie Yang, and Philippe Cudré-Mauroux. 2019. Activelink: Deep active learning for link prediction in knowledge graphs. In The World Wide Web Conference, WWW 2019, San Francisco, CA, USA, May 13-17, 2019, pages 1398-1408. ACM.
376
+ Kun Qian, Lucian Popa, and Prithviraj Sen. 2017. Active learning for large-scale entity resolution. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, CIKM 2017, Singapore, November 06 - 10, 2017, pages 1379-1388. ACM.
377
+ Thomas Rebele, Fabian M. Suchanek, Johannes Hoffart, Joanna Biega, Erdal Kuzey, and Gerhard Weikum. 2016. YAGO: A multilingual knowledge base from wikipedia, wordnet, and geonames. In The Semantic Web - ISWC 2016 - 15th International Semantic Web Conference, Kobe, Japan, October 17-21, 2016, Proceedings, Part II, volume 9982 of Lecture Notes in Computer Science, pages 177-185.
378
+ Pengzhen Ren, Yun Xiao, Xiaojun Chang, Po-Yao Huang, Zhihui Li, Xiaojiang Chen, and Xin Wang. 2020. A survey of deep active learning. CoRR, abs/2009.00236.
379
+ Burr Settles. 2012. Active Learning. Synthesis Lectures on Artificial Intelligence and Machine Learning. Morgan & Claypool Publishers.
380
+
381
+ Fabian M. Suchanek, Serge Abiteboul, and Pierre Senellart. 2011. PARIS: probabilistic alignment of relations, instances, and schema. Proc. VLDB Endow., 5(3):157-168.
382
+ Zequn Sun, Wei Hu, Qingheng Zhang, and Yuzhong Qu. 2018. Bootstrapping entity alignment with knowledge graph embedding. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden, pages 4396-4402. ijcai.org.
383
+ Zequn Sun, Chengming Wang, Wei Hu, Muhao Chen, Jian Dai, Wei Zhang, and Yuzhong Qu. 2020a. Knowledge graph alignment network with gated multi-hop neighborhood aggregation. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 222-229. AAAI Press.
384
+ Zequn Sun, Qingheng Zhang, Wei Hu, Chengming Wang, Muhao Chen, Farahnaz Akrami, and Chengkai Li. 2020b. A benchmarking study of embedding-based entity alignment for knowledge graphs. Proc. VLDB Endow., 13(11):2326-2340.
385
+ Denny Vrandecic and Markus Krötzsch. 2014. Wikidata: a free collaborative knowledgebase. Commun. ACM, 57(10):78-85.
386
+ Zhichun Wang, Qingsong Lv, Xiaohan Lan, and Yu Zhang. 2018. Cross-lingual knowledge graph alignment via graph convolutional networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 349-357. Association for Computational Linguistics.
387
+ Yuting Wu, Xiao Liu, Yansong Feng, Zheng Wang, Rui Yan, and Dongyan Zhao. 2019. Relation-aware entity alignment for heterogeneous knowledge graphs. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 10-16, 2019, pages 5278-5284. ijcai.org.
388
+ Xiang Zhao, Weixin Zeng, Jiuyang Tang, Wei Wang, and Fabian Suchanek. 2020. An experimental study of state-of-the-art entity alignment approaches. IEEE Transactions on Knowledge and Data Engineering.
activeeaactivelearningforneuralentityalignment/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f23d50d8d3ba8ce63c0ba1951bba075dad81acd606997004bde5ae83c0e70a05
3
+ size 501999
activeeaactivelearningforneuralentityalignment/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7473a1f42a18f3cdb608f987cd14e2cd265ba10ae63762eedd88475a5d31bc20
3
+ size 488918
activelearningbyacquiringcontrastiveexamples/37c9239e-15e2-421b-87ff-13f279cd75ca_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:61f5e8cebfa9d96d304abebb79a5ec4ebda6d38f6177bacffbd01a82a01b5243
3
+ size 96958
activelearningbyacquiringcontrastiveexamples/37c9239e-15e2-421b-87ff-13f279cd75ca_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:688e89c21402ac35230dd3865d26f2dae39d552e85c0da7f55629d8ed06a14e8
3
+ size 117787
activelearningbyacquiringcontrastiveexamples/37c9239e-15e2-421b-87ff-13f279cd75ca_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:105db2dffcce78184264b4626a2d2fd2111689f25539626d342a8572d54f806a
3
+ size 2146918
activelearningbyacquiringcontrastiveexamples/full.md ADDED
@@ -0,0 +1,335 @@
 
 
 
 
1
+ # Active Learning by Acquiring Contrastive Examples
2
+
3
+ Katerina Margatina† Giorgos Vernikos‡* Loïc Barrault† Nikolaos Aletras† †University of Sheffield ‡EPFL *HEIG-VD
4
+
5
+ {k.margatina, l.barrault, n.aletras}@sheffield.ac.uk georgios.vernikos@epfl.ch
6
+
7
+ # Abstract
8
+
9
+ Common acquisition functions for active learning use either uncertainty or diversity sampling, aiming to select difficult and diverse data points from the pool of unlabeled data, respectively. In this work, leveraging the best of both worlds, we propose an acquisition function that selects contrastive examples, i.e. data points that are similar in the model feature space yet receive maximally different predictive likelihoods from the model. We compare our approach, CAL (Contrastive Active Learning), with a diverse set of acquisition functions in four natural language understanding tasks and seven datasets. Our experiments show that CAL performs consistently better than or on par with the best-performing baseline across all tasks, on both in-domain and out-of-domain data. We also conduct an extensive ablation study of our method and further analyze all actively acquired datasets, showing that CAL achieves a better trade-off between uncertainty and diversity compared to other strategies.
10
+
11
+ # 1 Introduction
12
+
13
+ Active learning (AL) is a machine learning paradigm for efficiently acquiring data for annotation from a (typically large) pool of unlabeled data (Lewis and Catlett, 1994; Cohn et al., 1996; Settles, 2009). Its goal is to concentrate the human labeling effort on the most informative data points that will benefit model performance the most and thus reducing data annotation cost.
14
+
15
+ The most widely used approaches to acquiring data for AL are based on uncertainty and diversity, often described as the "two faces of AL" (Dasgupta, 2011). While uncertainty-based methods leverage the model predictive confidence to select difficult examples for annotation (Lewis and Gale, 1994; Cohn et al., 1996), diversity sampling exploits heterogeneity in the feature space by typically performing clustering (Brinker, 2003; Bodó
16
+
17
+ ![](images/e20917383a30fd61867accfe94810f8cb75d3fe9001ed6cef8f08042c5819b45.jpg)
18
+ Figure 1: Illustrative example of our proposed method CAL. The solid line (model decision boundary) separates data points from two different classes (blue and orange), the coloured data points represent the labeled data and the rest are the unlabeled data of the pool.
19
+
20
+ et al., 2011). Still, both approaches have core limitations that may lead to acquiring redundant data points. Algorithms based on uncertainty may end up choosing uncertain yet uninformative repetitive data, while diversity-based methods may tend to select diverse yet easy examples for the model (Roy and McCallum, 2001). The two approaches are orthogonal to each other, since uncertainty sampling is usually based on the model's output, while diversity exploits information from the input (i.e. feature) space. Hybrid data acquisition functions that combine uncertainty and diversity sampling have also been proposed (Shen et al., 2004; Zhu et al., 2008; Ducoffe and Precioso, 2018; Ash et al., 2020; Yuan et al., 2020; Ru et al., 2020).
21
+
22
+ In this work, we aim to leverage characteristics from hybrid data acquisition. We hypothesize that data points that are close in the model feature space (i.e. share similar or related vocabulary, or similar model encodings) but the model produces different predictive likelihoods, should be good candidates for data acquisition. We define such examples as contrastive (see example in Figure 1). For that purpose, we propose a new acquisition function that searches for contrastive examples in the pool of unlabeled data. Specifically, our method, Contrastive Active Learning (CAL) selects unlabeled
23
+
24
+ data points from the pool, whose predictive likelihoods diverge the most from their neighbors in the training set. This way, CAL shares similarities with diversity sampling, but instead of performing clustering it uses the feature space to create neighborhoods. CAL also leverages uncertainty, by using predictive likelihoods to rank the unlabeled data.
25
+
26
+ We evaluate our approach in seven datasets from four tasks including sentiment analysis, topic classification, natural language inference and paraphrase detection. We compare CAL against a full suite of baseline acquisition functions that are based on uncertainty, diversity or both. We also examine robustness by evaluating on out-of-domain data, apart from in-domain held-out sets. Our contributions are the following:
27
+
28
+ 1. We propose CAL, a new acquisition function for active learning that acquires contrastive examples from the pool of unlabeled data (§2);
29
+ 2. We show that CAL performs consistently better than or on par with all baselines in all tasks when evaluated on in-domain and out-of-domain settings (§4);
30
+ 3. We conduct a thorough analysis of our method showing that CAL achieves a better trade-off between diversity and uncertainty compared to the baselines (§6).
31
+
32
+ We release our code online ${}^{1}$ .
33
+
34
+ # 2 Contrastive Active Learning
35
+
36
+ In this section we present in detail our proposed method, CAL: Contrastive Active Learning. First, we provide a definition for contrastive examples and how they are related to finding data points that are close to the decision boundary of the model (§2.1). We next describe an active learning loop using our proposed acquisition function (§2.2).
37
+
38
+ # 2.1 Contrastive Examples
39
+
40
+ In the context of active learning, we aim to formulate an acquisition function that selects contrastive examples from a pool of unlabeled data for annotation. We draw inspiration from the contrastive learning framework, that leverages the similarity between data points to push those from the same class closer together and examples from different classes further apart during training (Mikolov et al.,
41
+
42
+ 2013; Sohn, 2016; van den Oord et al., 2019; Chen et al., 2020; Gunel et al., 2021).
43
+
44
+ In this work, we define two data points as contrastive if their model encodings are similar but their model predictions are very different (maximally disagreeing predictive likelihoods).
45
+
46
+ Formally, data points $x_{i}$ and $x_{j}$ should first satisfy a similarity criterion:
47
+
48
+ $$
49
+ d\left(\Phi(x_{i}), \Phi(x_{j})\right) < \epsilon \tag{1}
50
+ $$
51
+
52
+ where $\Phi(.)\in \mathbb{R}^{d'}$ is an encoder that maps $x_{i},x_{j}$ in a shared feature space, $d(.)$ is a distance metric and $\epsilon$ is a small distance value.
53
+
54
+ A second criterion, based on model uncertainty, is to evaluate that the predictive probability distributions of the model $p(y|x_i)$ and $p(y|x_j)$ for the inputs $x_i$ and $x_j$ should maximally diverge:
55
+
56
+ $$
57
+ \mathrm{KL}\left(p(y|x_{i})\,||\,p(y|x_{j})\right) \rightarrow \infty \tag{2}
58
+ $$
59
+
60
+ where KL is the Kullback-Leibler divergence between two probability distributions ${}^{2}$ .
61
+
62
+ For example, in a binary classification problem, given a reference example $x_{1}$ with output probability distribution (0.8, 0.2)<sup>3</sup> and similar candidate examples $x_{2}$ with (0.7, 0.3) and $x_{3}$ with (0.6, 0.4), we would consider as contrastive examples the pair $(x_{1}, x_{3})$ . However, if another example $x_{4}$ (similar to $x_{1}$ in the model feature space) had a probability distribution (0.4, 0.6), then the most contrastive pair would be $(x_{1}, x_{4})$ .
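The ranking in this worked example can be checked directly. A minimal sketch in plain Python, reusing the probabilities from the text above (`kl` is a hypothetical helper, not from the paper's code):

```python
import math

def kl(p, q):
    """KL divergence between two discrete distributions (natural log)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

reference = (0.8, 0.2)  # x1
candidates = {"x2": (0.7, 0.3), "x3": (0.6, 0.4), "x4": (0.4, 0.6)}

# Score each candidate by its divergence from the reference example x1
scores = {name: kl(reference, q) for name, q in candidates.items()}
most_contrastive = max(scores, key=scores.get)  # x4, as in the text
```

As expected, x4 diverges most from x1 even though x2 and x3 are also "similar" candidates, matching the pair $(x_{1}, x_{4})$ chosen in the example.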
63
+
64
+ Figure 1 provides an illustration of contrastive examples for a binary classification case. All data points inside the circle (dotted line) are similar in the model feature space, satisfying Eq. 1. Intuitively, if the divergence of the output probabilities of the model for the gray and blue shaded data points is high, then Eq. 2 should also hold and we should consider them as contrastive.
65
+
66
+ From a different perspective, data points with similar model encodings (Eq. 1) and dissimilar model outputs (Eq. 2), should be close to the model's decision boundary (Figure 1). Hence, we hypothesize that our proposed approach to select
67
+
68
+ # Algorithm 1 Single iteration of CAL
69
+
70
+ Input: labeled data $\mathcal{D}_{\mathrm{lab}}$ , unlabeled data $\mathcal{D}_{\mathrm{pool}}$ , acquisition size $b$ , model $\mathcal{M}$ , number of neighbours $k$ , model representation (encoding) function $\Phi(.)$
71
+
72
+ 1 for $x_{p}$ in $\mathcal{D}_{\mathrm{pool}}$ do
73
+ 2 $\left\{(x_{l}^{(i)},y_{l}^{(i)})\right\}, i = 1,\dots,k \gets \mathrm{KNN}\bigl(\Phi(x_{p}),\Phi(\mathcal{D}_{\mathrm{lab}}),k\bigr)$ find neighbours in $\mathcal{D}_{\mathrm{lab}}$
74
+ 3 $p(y|x_l^{(i)})\gets \mathcal{M}(x_l^{(i)}),i = 1,\ldots ,k$ compute probabilities
75
+ 4 $p(y|x_{p})\gets \mathcal{M}(x_{p})$
76
+ 5 $\mathrm{KL}\big(p(y|x_l^{(i)})||p(y|x_p)\big), i = 1,\dots,k$ compute divergence
77
+ 6 $s_{x_p} = \frac{1}{k}\sum_{i = 1}^{k}\mathrm{KL}\bigl (p(y|x_l^{(i)})||p(y|x_p)\bigr)$
78
+
79
+ 7 end
80
+
81
+ 8 $Q = \operatorname{argmax}_{x_p \in \mathcal{D}_{\mathrm{pool}}} s_{x_p}, |Q| = b$ select batch
82
+
83
+ Output: $Q$
84
+
85
+ contrastive examples is related to acquiring difficult examples near the decision boundary of the model. Under this formulation, CAL does not guarantee that the contrastive examples lie near the model's decision boundary, because our definition is not strict. In order to ensure that a pair of contrastive examples lie on the boundary, the second criterion should require that the model classifies the two examples in different classes (i.e. different predictions). However, calculating the distance between an example and the model decision boundary is intractable and approximations that use adversarial examples are computationally expensive (Ducoffe and Precioso, 2018).
86
+
87
+ # 2.2 Active Learning Loop
88
+
89
+ Assuming a multi-class classification problem with $C$ classes, labeled data for training $\mathcal{D}_{\mathrm{lab}}$ and a pool of unlabeled data $\mathcal{D}_{\mathrm{pool}}$, we perform AL for $T$ iterations. At each iteration, we train a model on $\mathcal{D}_{\mathrm{lab}}$ and then use our proposed acquisition function, CAL (Algorithm 1), to acquire a batch $Q$ consisting of $b$ examples from $\mathcal{D}_{\mathrm{pool}}$. The acquired examples are then labeled<sup>4</sup>, removed from the pool $\mathcal{D}_{\mathrm{pool}}$ and added to the labeled dataset $\mathcal{D}_{\mathrm{lab}}$, which serves as the training set for the model in the next AL iteration. In our experiments, we use a pretrained BERT model $\mathcal{M}$ (Devlin et al., 2019), which we fine-tune at each AL iteration using the current $\mathcal{D}_{\mathrm{lab}}$. We begin the AL loop by training a model $\mathcal{M}$ on an initial labeled dataset $\mathcal{D}_{\mathrm{lab}}$<sup>5</sup>.
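The pool-based loop just described can be sketched generically. This is a schematic, not the paper's BERT training code: `train` and `acquire` are stand-in callables (any acquisition function, e.g. Algorithm 1, fits the `acquire` slot), and the "examples" here are plain integers:

```python
import random

def active_learning_loop(d_lab, d_pool, train, acquire, T, b):
    """At each of T iterations: train on the labelled set, acquire a batch Q
    of b pool examples, then move the (now labelled) batch from the pool
    d_pool into the labelled set d_lab."""
    for _ in range(T):
        model = train(d_lab)
        batch = acquire(model, d_lab, d_pool, b)
        for x in batch:          # oracle labels the batch
            d_pool.remove(x)
            d_lab.append(x)
    return train(d_lab)          # final model trained on all acquired data

# Toy run with random acquisition over integer "examples"
random.seed(0)
d_lab, d_pool = [0, 1], list(range(2, 50))
model = active_learning_loop(
    d_lab, d_pool,
    train=lambda lab: set(lab),                         # stand-in "model"
    acquire=lambda m, lab, pool, b: random.sample(pool, b),
    T=3, b=4,
)
```

After the run, `d_lab` has grown by `T * b` examples and `d_pool` has shrunk by the same amount, mirroring the loop in the text.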
90
+
91
+ Find Nearest Neighbors for Unlabeled Candidates The first step of our contrastive acquisition function (cf. line 2) is to find examples that are similar in the model feature space (Eq. 1). Specifically, we use the [CLS] token embedding of BERT as our encoder $\Phi(.)$ to represent all data points in $\mathcal{D}_{\mathrm{lab}}$ and $\mathcal{D}_{\mathrm{pool}}$ . We use a K-Nearest-Neighbors (KNN) implementation using the labeled data $\mathcal{D}_{\mathrm{lab}}$ in order to query similar examples $x_{l} \in \mathcal{D}_{\mathrm{lab}}$ for each candidate $x_{p} \in \mathcal{D}_{\mathrm{pool}}$ . Our distance metric $d(.)$ is Euclidean distance. To find the most similar data points in $\mathcal{D}_{\mathrm{lab}}$ for each $x_{p}$ , we select the top $k$ instead of selecting a predefined threshold $\epsilon$ (Eq. 1)<sup>6</sup>. This way, we create a neighborhood $N_{x_{p}} = \{x_{p}, x_{l}^{(1)}, \ldots, x_{l}^{(k)}\}$ that consists of the unlabeled data point $x_{p}$ and its $k$ closest examples $x_{l}$ in $\mathcal{D}_{\mathrm{lab}}$ (Figure 1).
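The neighbourhood construction above can be sketched with a brute-force KNN over stand-in embeddings. A real run would use the BERT [CLS] vectors for $\Phi(.)$; here the matrices are random and the sizes are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
X_lab = rng.normal(size=(100, 8))    # stand-in for Phi(D_lab) embeddings
X_pool = rng.normal(size=(500, 8))   # stand-in for Phi(D_pool) embeddings
k = 10

# Pairwise squared Euclidean distances pool -> labelled, then keep the
# k closest labelled examples of every candidate: the neighbourhood N_{x_p}
d2 = ((X_pool[:, None, :] - X_lab[None, :, :]) ** 2).sum(axis=-1)
neighbour_idx = np.argsort(d2, axis=1)[:, :k]   # shape (|D_pool|, k)
```

Selecting the top $k$ neighbours (rather than thresholding at $\epsilon$) matches the choice described in the text.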
92
+
93
+ Compute Contrastive Score between Unlabeled Candidates and Neighbors In the second step, we compute the divergence in the model predictive probabilities for the members of the neighborhood (Eq. 2). Using the current trained model $\mathcal{M}$ to obtain the output probabilities for all data points in $N_{x_p}$ (cf. lines 3-4), we then compute the Kullback-Leibler divergence (KL) between the output probabilities of $x_p$ and all $x_l \in N_{x_p}$ (cf. line 5). To obtain a score $s_{x_p}$ for a candidate $x_p$ , we take the average of all divergence scores (cf. line 6).
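This scoring step (lines 3-6 of Algorithm 1) can be sketched as follows, assuming the neighbour and candidate predictive distributions have already been computed; the array shapes and example probabilities are illustrative:

```python
import numpy as np

def contrastive_score(p_neighbours, p_candidate, eps=1e-12):
    """Mean KL(p(y|x_l) || p(y|x_p)) over the k labelled neighbours,
    i.e. the score s_{x_p} of Algorithm 1, line 6."""
    p = np.clip(np.asarray(p_neighbours), eps, 1.0)   # (k, C)
    q = np.clip(np.asarray(p_candidate), eps, 1.0)    # (C,)
    return float(np.mean((p * np.log(p / q)).sum(axis=1)))

# A candidate disagreeing with its neighbours scores higher than one agreeing
neigh = np.array([[0.9, 0.1], [0.8, 0.2], [0.85, 0.15]])
s_agree = contrastive_score(neigh, np.array([0.85, 0.15]))
s_disagree = contrastive_score(neigh, np.array([0.2, 0.8]))
```

The `eps` clipping is a numerical-stability choice on our part; the paper's algorithm states only the averaged KL.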
94
+
95
+ Rank Unlabeled Candidates and Select Batch We apply these steps to all candidate examples $x_{p}\in \mathcal{D}_{\mathrm{pool}}$ and obtain a score $s_{x_p}$ for each. With
96
+
97
+ <table><tr><td>DATASET</td><td>TASK</td><td>DOMAIN</td><td>OOD DATASET</td><td>TRAIN</td><td>VAL</td><td>TEST</td><td>CLASSES</td></tr><tr><td>IMDB</td><td>Sentiment Analysis</td><td>Movie Reviews</td><td>SST-2</td><td>22.5K</td><td>2.5K</td><td>25K</td><td>2</td></tr><tr><td>SST-2</td><td>Sentiment Analysis</td><td>Movie Reviews</td><td>IMDB</td><td>60.6K</td><td>6.7K</td><td>871</td><td>2</td></tr><tr><td>AGNEWS</td><td>Topic Classification</td><td>News</td><td>-</td><td>114K</td><td>6K</td><td>7.6K</td><td>4</td></tr><tr><td>DBPEDIA</td><td>Topic Classification</td><td>News</td><td>-</td><td>20K</td><td>2K</td><td>70K</td><td>14</td></tr><tr><td>PUBMED</td><td>Topic Classification</td><td>Medical</td><td>-</td><td>180K</td><td>30.2K</td><td>30.1K</td><td>5</td></tr><tr><td>QNLI</td><td>Natural Language Inference</td><td>Wikipedia</td><td>-</td><td>99.5K</td><td>5.2K</td><td>5.5K</td><td>2</td></tr><tr><td>QQP</td><td>Paraphrase Detection</td><td>Social QA Questions</td><td>TWITTERPPDB</td><td>327K</td><td>36.4K</td><td>80.8K</td><td>2</td></tr></table>
98
+
99
+ Table 1: Dataset statistics.
100
+
101
+ our scoring function we define as contrastive examples the unlabeled data $x_{p}$ that have the highest score $s_{x_p}$ . A high $s_{x_p}$ score indicates that the unlabeled data point $x_{p}$ has a high divergence in model predicted probabilities compared to its neighbors in the training set (Eq. 1, 2), suggesting that it may lie near the model's decision boundary. To this end, our acquisition function selects the top $b$ examples from the pool that have the highest score $s_{x_p}$ (cf. line 8), that form the acquired batch $Q$ .
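The final selection step then reduces to a top-$b$ over the scores (Algorithm 1, line 8); a tiny sketch with made-up scores:

```python
import numpy as np

def select_batch(scores, b):
    """Return indices of the b pool examples with the highest score s_{x_p}."""
    return np.argsort(-np.asarray(scores))[:b]

q = select_batch([0.1, 0.9, 0.3, 0.7, 0.05], b=2)  # picks indices 1 and 3
```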
102
+
103
+ # 3 Experimental Setup
104
+
105
+ # 3.1 Tasks & Datasets
106
+
107
+ We conduct experiments on sentiment analysis, topic classification, natural language inference and paraphrase detection tasks. We provide details for the datasets in Table 1. We follow Yuan et al. (2020) and use IMDB (Maas et al., 2011), SST-2 (Socher et al., 2013), PUBMED (Dernoncourt and Lee, 2017) and AGNEWS (Zhang et al., 2015), from which we also take DBPEDIA. We experiment with tasks requiring pairs of input sequences, using QQP and QNLI from GLUE (Wang et al., 2019). To evaluate robustness on out-of-distribution (OOD) data, we follow Hendrycks et al. (2020) and use SST-2 as the OOD dataset for IMDB and vice versa. We finally use TWITTERPPDB (Lan et al., 2017) as OOD data for QQP, as in Desai and Durrett (2020).
108
+
109
+ # 3.2 Baselines
110
+
111
+ We compare CAL against five baseline acquisition functions. The first, ENTROPY, is the most commonly used uncertainty-based baseline; it acquires the data points for which the model has the highest predictive entropy. As a diversity-based baseline, following Yuan et al. (2020), we use BERTKM, which applies k-means clustering to the $l_{2}$-normalized BERT output embeddings of the fine-tuned model to select $b$ data points. We compare against BADGE (Ash et al., 2020), an acquisition function that aims to combine diversity and
112
+
113
+ uncertainty sampling, by computing gradient embeddings $g_{x}$ for every candidate data point $x$ in $\mathcal{D}_{\mathrm{pool}}$ and then using clustering to select a batch. Each $g_{x}$ is computed as the gradient of the cross-entropy loss with respect to the parameters of the model's last layer, aiming to be the component that incorporates uncertainty in the acquisition function<sup>7</sup>. We also evaluate a recently introduced cold-start acquisition function called ALPS (Yuan et al., 2020) that uses the masked language model (MLM) loss of BERT as a proxy for model uncertainty in the downstream classification task. Specifically, aiming to leverage both uncertainty and diversity, ALPS forms a surprisal embedding $s_{x}$ for each $x$ , by passing the unmasked input $x$ through the BERT MLM head to compute the cross-entropy loss for a random 15% subsample of tokens against the target labels. ALPS clusters these embeddings to sample $b$ sentences for each AL iteration. Lastly, we include RANDOM, that samples data from the pool from a uniform distribution.
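As a reference point, the ENTROPY baseline described above amounts to ranking the pool by predictive entropy; a minimal sketch (the pool probabilities are made up):

```python
import numpy as np

def entropy_acquire(probs, b, eps=1e-12):
    """ENTROPY baseline: pick the b pool examples with the highest
    predictive entropy H(p) = -sum_c p_c log p_c."""
    p = np.clip(np.asarray(probs), eps, 1.0)   # (|D_pool|, C)
    h = -(p * np.log(p)).sum(axis=1)
    return np.argsort(-h)[:b]

pool_probs = [[0.99, 0.01], [0.5, 0.5], [0.7, 0.3], [0.55, 0.45]]
picked = entropy_acquire(pool_probs, b=2)  # the two most uncertain examples
```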
114
+
115
+ # 3.3 Implementation Details
116
+
117
+ We use BERT-BASE (Devlin et al., 2019) adding a task-specific classification layer, using the implementation from the HuggingFace library (Wolf et al., 2020). We evaluate the model 5 times per epoch on the development set following Dodge et al. (2020) and keep the checkpoint with the lowest validation loss. We use the standard splits provided for all datasets, if available; otherwise we randomly sample a validation set from the training set. We test all models on a held-out test set. We repeat all experiments with five different random seeds, resulting in different initializations of the parameters of the model's extra task-specific output feedfor
118
+
119
+ ![](images/5b2df3228a291818aa46ff04b2661bda6d677314d7130ccca9ac35f24634b287.jpg)
120
+
121
+ ![](images/eaafb10928e52016e0657c606265a8b02200a063981921f1f5249e5a0786c07b.jpg)
122
+
123
+ ![](images/880321a7fb0eab7e64b7827f562f345719cb4cbd0088069209a281681d15cf21.jpg)
124
+
125
+ ![](images/4eb3c4197a2fd736ff05ccf9515849f2e35e5b0179687bd8930cccc834a586c6.jpg)
126
+
127
+ ![](images/c396d86fb3a97930c0a92ba995b5f52615300a2bd8d20a0b44fba3c9cf02b204.jpg)
128
+ Figure 2: In-domain (ID) test accuracy during AL iterations for different acquisition functions.
129
+
130
+ ![](images/48cfcbc0143605d3926fc8e4ca6bd1cdd5c5ae2bb13ea913aa66c6e12d4d2ea3.jpg)
131
+
132
+ ![](images/77e2733be909b6d63f6ddeb2082d877c8f6b4805123d9a3902670d62d0a1e011.jpg)
133
+
134
+ ![](images/a19b8122efb885800531fe47d2426b2bc9b447424a049accf7980fa1f5d9ca73.jpg)
135
+
136
+ ward layer and the initial $\mathcal{D}_{\mathrm{lab}}$ . For all datasets we use as budget the $15\%$ of $\mathcal{D}_{\mathrm{pool}}$ , initial training set $1\%$ and acquisition size $b = 2\%$ . Each experiment is run on a single Nvidia Tesla V100 GPU. More details are provided in the Appendix A.1.
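Under these settings, the number of acquisition rounds follows by simple arithmetic, assuming (our assumption, not stated explicitly above) that the 1% seed set counts toward the 15% budget:

```python
# Fractions of D_pool, as stated in the text above
budget, seed, b = 0.15, 0.01, 0.02

# Rounds of b-sized acquisition needed to grow the seed set to the budget
iterations = round((budget - seed) / b)  # -> 7 acquisition rounds
```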
137
+
138
+ # 4 Results
139
+
140
+ # 4.1 In-domain Performance
141
+
142
+ We present results for in-domain test accuracy across all datasets and acquisition functions in Figure 2. We observe that CAL is consistently the top-performing method, especially on the DBPEDIA, PUBMED and AGNEWS datasets.
143
+
144
+ CAL performs slightly better than ENTROPY in IMDB, QNLI and QQP, while in SST-2 most methods yield similar results. ENTROPY is the second best acquisition function overall, consistently performing better than diversity-based or hybrid baselines. This corroborates recent findings from Desai and Durrett (2020) that BERT is sufficiently calibrated (i.e. produces good uncertainty estimates), making it a tough baseline to beat in AL.
145
+
146
+ BERTKM is a competitive baseline (e.g. SST-2, QNLI) but always underperforms compared to CAL and ENTROPY, suggesting that uncertainty is the most important signal in the data selection
147
+
148
+ process. An interesting future direction would be to investigate in depth which representations (i.e. from which layer) of current pretrained language models work best with similarity search algorithms and clustering.
149
+
150
+ Similarly, we can see that BADGE, despite using both uncertainty and diversity, also achieves low performance, indicating that clustering the constructed gradient embeddings does not benefit data acquisition. Finally, we observe that ALPS generally underperforms and is close to RANDOM. We can conclude that this heterogeneous approach to uncertainty, i.e. using the pretrained language model as proxy for the downstream task, is beneficial only in the first few iterations, as shown in Yuan et al. (2020).
151
+
152
+ Surprisingly, we observe that for the SST-2 dataset ALPS performs similarly to the highest-performing acquisition functions, CAL and ENTROPY. We hypothesize that, due to the informal textual style of the SST-2 reviews (noisy social media data), the pretrained BERT model can be used as a signal to query linguistically hard examples that benefit the downstream sentiment analysis task. This is an interesting finding, and a future research direction would be to investigate the correlation between the difficulty of an example in a downstream task and its perplexity (loss) under the pretrained language model.
+
+ <table><tr><td>TRAIN (ID)</td><td>SST-2</td><td>IMDB</td><td>QQP</td></tr><tr><td>TEST (OOD)</td><td>IMDB</td><td>SST-2</td><td>TWITTERPPDB</td></tr><tr><td>RANDOM</td><td>76.28 ± 0.72</td><td>82.50 ± 3.61</td><td>85.86 ± 0.48</td></tr><tr><td>BERTKM</td><td>75.99 ± 1.01</td><td>84.98 ± 1.22</td><td>-</td></tr><tr><td>ENTROPY</td><td>75.38 ± 2.04</td><td>85.54 ± 2.52</td><td>85.06 ± 1.96</td></tr><tr><td>ALPS</td><td>77.06 ± 0.78</td><td>83.65 ± 3.17</td><td>84.79 ± 0.49</td></tr><tr><td>BADGE</td><td>76.41 ± 0.92</td><td>85.19 ± 3.01</td><td>-</td></tr><tr><td>CAL</td><td>79.00 ± 1.39</td><td>84.96 ± 2.36</td><td>86.20 ± 0.22</td></tr></table>
+
+ Table 2: Out-of-domain (OOD) accuracy of models trained with the actively acquired datasets created with different AL acquisition strategies.
159
+
160
+ # 4.2 Out-of-domain Performance
161
+
162
+ We also evaluate the out-of-domain (OOD) robustness of the models trained with the actively acquired datasets of the last iteration (i.e. $15\%$ of $\mathcal{D}_{\mathrm{pool}}$, or $100\%$ of the AL budget) using the different acquisition strategies. We present the OOD results for SST-2, IMDB and QQP in Table 2. When we test the models trained with SST-2 on IMDB (first column), we observe that CAL achieves the highest performance compared to the other methods by a large margin, indicating that acquiring contrastive examples can improve OOD generalization. In the opposite scenario (second column), we find that the highest accuracy is obtained with ENTROPY. However, similarly to the ID results for SST-2 (Figure 2), all models trained on different subsets of the IMDB dataset result in comparable performance when tested on the small SST-2 test set (the mean accuracies lie inside the standard deviations across models). We hypothesize that this is because SST-2 is not a challenging OOD dataset for the different IMDB models. This is also evident from the high OOD accuracy, $85\%$ on average, which is close to the $91\%$ SST-2 ID accuracy of the full model (i.e. trained on $100\%$ of the ID data). Finally, we observe that CAL obtains the highest OOD accuracy for QQP compared to RANDOM, ENTROPY and ALPS. Overall, our empirical results show that the models trained on the datasets actively acquired with CAL obtain consistently similar or better performance than all other approaches when tested on OOD data.
163
+
164
+ # 5 Ablation Study
165
+
166
+ We conduct an extensive ablation study in order to provide insights into the behavior of every component of CAL. We present all AL experiments on the AGNEWS dataset in Figure 3.
167
+
168
+ ![](images/cc90505eb66b26fcd58eec3b4a7f8ed59d8f867fa48511072f7509ef1fa246ed.jpg)
169
+ Figure 3: In-domain (ID) test accuracy with different variants of CAL (ablation).
170
+
171
+ Decision Boundary We first aim to evaluate our hypothesis that CAL acquires difficult examples that lie close to the model's decision boundary. Specifically, to validate that the ranking of the constructed neighborhoods is meaningful, we run an experiment where we acquire candidate examples that have the minimum divergence from their neighbors, the opposite of CAL (i.e. we replace $\mathrm{argmax}(.)$ with $\mathrm{argmin}(.)$ in line 8 of Algorithm 1). We observe (Fig. 3 - CAL opposite) that even after acquiring $15\%$ of the unlabeled data, the performance remains unchanged, or even degrades, compared to the initial model (of the first iteration). In effect, this finding shows that CAL does select informative data points.
172
+
173
+ Neighborhood Next, we experiment with changing the way we construct the neighborhoods, aiming to improve computational efficiency. We thus modify our algorithm to create a neighborhood for each labeled example (instead of each unlabeled one). This way, we compute a divergence score only for the neighbors of the training data points. However, we find that this approach slightly underperforms (Fig. 3 - CAL per labeled example), possibly because only a small fraction of the pool is considered and thus the uncertainty of all the unlabeled data points is not taken into account.
174
+
175
+ Scoring function We also experiment with several approaches for constructing our scoring function (cf. line 6 in Algorithm 1). Instead of computing the KL divergence between the predicted probabilities of each candidate example and its labeled neighbors (cf. line 5), we use the cross entropy between the output probability distribution and the gold labels of the labeled data. The intuition is to evaluate whether information from the actual label is more useful than the model's predictive probability distribution. We observe that this scoring function results in a slight drop in performance (Fig. 3 - Cross Entropy). We also experimented with various pooling operations to aggregate the KL divergence scores for each candidate data point. We found maximum and median (Fig. 3 - Max/Median) to perform similarly to the average (Fig. 3 - CAL), which is the pooling operation we keep in our proposed algorithm.
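As a concrete illustration, the divergence-based scoring step can be sketched as follows. This is a simplified reconstruction, not the authors' code: it assumes the KNN neighborhoods have already been built, and that divergence is computed as KL(neighbor || candidate) and pooled over the neighborhood.

```python
import math

def kl(p, q, eps=1e-12):
    """KL divergence KL(p || q) between two discrete distributions."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def cal_score(candidate_probs, neighbor_probs, pool="mean"):
    """Score one unlabeled candidate by the divergence between its predicted
    class distribution and those of its labeled neighbors."""
    divs = [kl(p_n, candidate_probs) for p_n in neighbor_probs]
    if pool == "max":
        return max(divs)
    if pool == "median":
        return sorted(divs)[len(divs) // 2]
    return sum(divs) / len(divs)  # mean pooling, as kept in CAL

# A candidate that contradicts its labeled neighbors (a contrastive example)
# scores higher than one that agrees with them.
neighbors = [[0.9, 0.1], [0.8, 0.2]]
agree = cal_score([0.9, 0.1], neighbors)
disagree = cal_score([0.1, 0.9], neighbors)
```

With argmax selection over such scores we recover CAL's acquisition rule; swapping in argmin gives the "CAL opposite" ablation above.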
176
+
177
+ Feature Space Since our approach is related to acquiring data near the model's decision boundary, this effectively translates into using the [CLS] output embedding of BERT. Still, we opted to cover several possible alternative representations, i.e. feature spaces, that can be used to find the neighbors with KNN. We divide our exploration into two categories: intrinsic representations from the current fine-tuned model and extrinsic representations from different methods. For the first category, we examine representing each example with the mean embedding layer of BERT (Fig. 3 - Mean embedding) or the mean output embedding (Fig. 3 - Mean output). We find both alternatives to perform worse than using the [CLS] token (Fig. 3 - CAL). The motivation for the second category is to evaluate whether acquiring contrastive examples in the input feature space, i.e. representing the raw text, is meaningful (Gardner et al., 2020)<sup>9</sup>. We thus examine contextual representations from a pretrained BERT language model (Fig. 3 - BERT-pr [CLS]) (not fine-tuned on the task or domain) and non-contextualized TF-IDF vectors (Fig. 3 - TF-IDF). We find that both approaches, along with Mean embedding, largely underperform compared to our approach that acquires ambiguous data near the model's decision boundary.
178
+
179
+ 9This can be interpreted as comparing the effectiveness of selecting data near the model decision boundary vs. the task decision boundary, i.e. data that are similar for the task itself or for the humans (in terms of having the same raw input/vocabulary), but are from different classes.
180
+
181
+ # 6 Analysis
182
+
183
+ Finally, we further investigate CAL and all of the acquisition functions considered (baselines) in terms of diversity, representativeness and uncertainty. Our aim is to provide insights into what data each method tends to select and what the uncertainty-diversity trade-off of each approach is. Table 3 shows the results of our analysis averaged across datasets. We denote with $L$ the labeled set, $U$ the unlabeled pool and $Q$ an acquired batch of data points from $U$<sup>10</sup>.
184
+
185
+ # 6.1 Diversity & Uncertainty Metrics
186
+
187
+ Diversity in input space (DIV.-I) We first evaluate the diversity of the actively acquired data in the input feature space, i.e. raw text, by measuring the overlap between tokens in the sampled sentences $Q$ and tokens from the rest of the data pool $U$ . Following Yuan et al. (2020), we compute DIV.-I as the Jaccard similarity between the set of tokens from the sampled sentences $Q$ , $\mathcal{V}_{\mathcal{Q}}$ , and the set of tokens from the unsampled sentences $\mathcal{U} \backslash \mathcal{Q}$ , $\mathcal{V}_{\mathcal{Q}'}$ , $\mathcal{J}(\mathcal{V}_{\mathcal{Q}}, \mathcal{V}_{\mathcal{Q}'}) = \frac{|\mathcal{V}_{\mathcal{Q}} \cap \mathcal{V}_{\mathcal{Q}'}|}{|\mathcal{V}_{\mathcal{Q}} \cup \mathcal{V}_{\mathcal{Q}'}|}$ . A high DIV.-I value indicates high diversity because the sampled and unsampled sentences have many tokens in common.
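The DIV.-I computation is straightforward to sketch. Whitespace tokenization and the function names below are our own simplifications:

```python
def jaccard(a, b):
    """Jaccard similarity between two sets of tokens."""
    return len(a & b) / len(a | b)

def div_i(sampled, unsampled):
    """DIV.-I: token overlap between the acquired batch Q
    and the rest of the pool."""
    v_q = {tok for sent in sampled for tok in sent.split()}
    v_rest = {tok for sent in unsampled for tok in sent.split()}
    return jaccard(v_q, v_rest)
```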
188
+
189
+ Diversity in feature space (DIV.-F) We next evaluate diversity in the (model) feature space, using the [CLS] representations of a trained BERT model<sup>11</sup>. Following Zhdanov (2019) and Ein-Dor et al. (2020), we compute DIV.-F of a set $Q$ as $\left(\frac{1}{|U|}\sum_{x_i\in U}\min_{x_j\in Q}d(\Phi (x_i),\Phi (x_j))\right)^{-1}$, where $\Phi (x_{i})$ denotes the [CLS] output token of example $x_{i}$ obtained by the model which was trained using $L$, and $d(\Phi (x_i),\Phi (x_j))$ denotes the Euclidean distance between $x_{i}$ and $x_{j}$ in the feature space.
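Under this definition, DIV.-F can be sketched with brute-force Euclidean distances (the feature vectors Φ are assumed precomputed; names are ours):

```python
import math

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def div_f(pool_feats, batch_feats):
    """DIV.-F: inverse of the mean distance from each pool point to its
    nearest acquired point; a batch Q that covers the pool U scores higher."""
    mean_min = sum(
        min(euclidean(x, q) for q in batch_feats) for x in pool_feats
    ) / len(pool_feats)
    return 1.0 / mean_min
```

Note the degenerate case: if every pool point also appears in the batch, the mean minimum distance is zero and the inverse is undefined.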
190
+
191
+ Uncertainty (UNC.) To measure uncertainty, we use the model $\mathcal{M}_f$ trained on the entire training dataset (Figure 2 - Full supervision). As in Yuan et al. (2020), we use the logits from the fully trained model to estimate the uncertainty of an example, since this provides a reliable estimate (due to the model's high performance after training on many examples) while offering a fair comparison across all acquisition strategies. First, we compute the predictive entropy of an input $x$ when evaluated by model $\mathcal{M}_f$, and then we take the average over all sentences in a sampled batch $Q$. We use the average predictive entropy, $-\frac{1}{|Q|} \sum_{x \in Q} \sum_{c=1}^{C} p(y = c|x) \log p(y = c|x)$, to estimate the uncertainty of the acquired batch $Q$ for each method. As the sampled batch $Q$ we use the full actively acquired dataset after completing our AL iterations (i.e. $15\%$ of the data).
+
+ <table><tr><td></td><td>DIV.-I</td><td>DIV.-F</td><td>UNC.</td><td>REPR.</td></tr><tr><td>RANDOM</td><td>0.766</td><td>0.356</td><td>0.132</td><td>1.848</td></tr><tr><td>BERTKM</td><td>0.717</td><td>0.363</td><td>0.145</td><td>2.062</td></tr><tr><td>ENTROPY</td><td>0.754</td><td>0.323</td><td>0.240</td><td>2.442</td></tr><tr><td>ALPS</td><td>0.771</td><td>0.360</td><td>0.126</td><td>2.038</td></tr><tr><td>BADGE</td><td>0.655</td><td>0.339</td><td>0.123</td><td>2.013</td></tr><tr><td>CAL</td><td>0.768</td><td>0.335</td><td>0.231</td><td>2.693</td></tr></table>
+
+ Table 3: Uncertainty and diversity metrics across acquisition functions, averaged for all datasets.
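The UNC. metric reduces to average predictive entropy over the batch; a minimal sketch (function names are ours):

```python
import math

def predictive_entropy(probs):
    """Entropy of one predicted class distribution p(y|x)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def batch_uncertainty(batch_probs):
    """UNC.: average predictive entropy over the acquired batch Q,
    with probabilities taken from the fully trained model."""
    return sum(predictive_entropy(p) for p in batch_probs) / len(batch_probs)
```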
198
+
199
+ Representativeness (REPR.) Finally, we analyze the representativeness of the acquired data as in Ein-Dor et al. (2020). We aim to study whether AL strategies tend to select outlier examples that do not properly represent the overall data distribution. We rely on the KNN-density measure proposed by Zhu et al. (2008), where the density of an example is quantified as one over the average distance between the example and its K most similar examples (i.e., its K nearest neighbors) within $U$, based on the [CLS] representations as in DIV.-F. An example with a high density degree is less likely to be an outlier. We define the representativeness of a batch $Q$ as the average KNN-density of its instances, using the Euclidean distance with $K = 10$.
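The KNN-density and batch representativeness can be sketched with brute-force distances (illustrative only; assumes K does not exceed the number of available neighbors):

```python
import math

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def knn_density(x, pool, k):
    """KNN-density (Zhu et al., 2008): one over the average distance from x
    to its K nearest neighbors within the pool (x itself excluded)."""
    dists = sorted(euclidean(x, p) for p in pool if p is not x)
    return k / sum(dists[:k])

def representativeness(batch, pool, k=10):
    """REPR.: average KNN-density of the instances in the acquired batch Q."""
    return sum(knn_density(x, pool, k) for x in batch) / len(batch)
```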
200
+
201
+ # 6.2 Discussion
202
+
203
+ We first observe in Table 3 that ALPS acquires the most diverse data across all approaches. This is intuitive since ALPS is the most linguistically-informed method, as it essentially acquires data that are difficult for the language modeling task, thus favoring data with a more diverse vocabulary. All other methods acquire similarly diverse data, except BADGE, which has the lowest score. Interestingly, we observe a different pattern when evaluating diversity in the model feature space (using the [CLS] representations). BERTKM has the highest DIV.-F score, as expected, while CAL and ENTROPY have the lowest. This supports our hypothesis that uncertainty sampling tends to acquire uncertain but similar examples, while CAL by definition constrains its search to similar examples in the feature space that lie close to the decision boundary (contrastive examples). As for uncertainty, we observe that ENTROPY and CAL acquire the most uncertain examples, with average entropy almost twice as high as all other methods. Finally, regarding the representativeness of the acquired batches, we see that CAL obtains the highest score, followed by ENTROPY, while the rest of the AL strategies acquire less representative data.
206
+
207
+ Overall, our analysis validates our assumptions about the properties of the data expected to be selected by the various acquisition functions. Our findings show that diversity in the raw text does not necessarily correlate with diversity in the feature space. In other words, low DIV.-F does not translate to low diversity in the distribution of acquired tokens (DIV.-I), suggesting that CAL can acquire similar examples in the feature space that have sufficiently diverse inputs. Furthermore, combining the results of our AL experiments (Figure 2) and our analysis (Table 3), we conclude that the strong performance of CAL, followed by ENTROPY, is due to acquiring uncertain data. We observe that the most notable difference between these two approaches and the rest, in terms of selected data, is uncertainty (UNC.), suggesting the superiority of uncertainty over diversity sampling. We show that CAL improves over ENTROPY because our algorithm "guides" the focus of uncertainty sampling by not considering redundant uncertain data that lie away from the decision boundary, thereby improving representativeness. We finally find that RANDOM is evidently the worst approach, as on average it selects the least diverse and least uncertain data of all methods.
208
+
209
+ # 7 Related Work
210
+
211
+ Uncertainty Sampling Uncertainty-based acquisition for AL focuses on selecting data points that the model predicts with low confidence. A simple uncertainty-based acquisition function is least confidence (Lewis and Gale, 1994), which sorts the data in the pool in descending order by the probability of not predicting the most confident class. Another approach is to select samples that maximize the predictive entropy. Houlsby et al. (2011) propose Bayesian Active Learning by Disagreement (BALD), a method that chooses data points that maximize the mutual information between predictions and the model's posterior probabilities. Gal et al. (2017) applied BALD to deep neural models using Monte Carlo dropout (Gal and Ghahramani, 2016) to acquire multiple uncertainty estimates for each candidate example. Least confidence, entropy and BALD acquisition functions have been applied to a variety of text classification and sequence labeling tasks and have been shown to substantially improve data efficiency (Shen et al., 2017; Siddhant and Lipton, 2018; Lowell and Lipton, 2019; Kirsch et al., 2019; Shelmanov et al., 2021; Margatina et al., 2021).
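For reference, the two classic uncertainty scores mentioned above can be sketched in a few lines (illustrative, not taken from any specific library; the example probabilities are hypothetical):

```python
import math

def entropy_score(probs):
    """Predictive entropy: higher means more uncertain."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def least_confidence(probs):
    """One minus the probability of the most confident class."""
    return 1.0 - max(probs)

def acquire(pool_probs, score, k):
    """Indices of the k most uncertain pool examples under `score`."""
    ranked = sorted(range(len(pool_probs)),
                    key=lambda i: score(pool_probs[i]), reverse=True)
    return ranked[:k]

pool = [[0.9, 0.1], [0.5, 0.5], [0.6, 0.4]]
print(acquire(pool, entropy_score, 2))     # [1, 2]
print(acquire(pool, least_confidence, 2))  # [1, 2]
```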
214
+
215
+ On the other hand, diversity or representative sampling selects batches of unlabeled examples that are representative of the unlabeled pool, based on the intuition that a representative set of examples, once labeled, can act as a surrogate for the full data available. In the context of deep learning, Geifman and El-Yaniv (2017) and Sener and Savarese (2018) select representative examples based on core-set construction, a fundamental problem in computational geometry. Inspired by generative adversarial learning, Gissin and Shalev-Shwartz (2019) define AL as a binary classification task with an adversarial classifier trained so that it cannot discriminate data from the training set from data in the pool. Other approaches based on adversarial active learning use out-of-the-box models to perform adversarial attacks on the training data in order to approximate the distance from the decision boundary of the model (Ducoffe and Precioso, 2018; Ru et al., 2020).
216
+
217
+ Hybrid There are several existing approaches that combine representative and uncertainty sampling. Such approaches include active learning algorithms that use meta-learning (Baram et al., 2004; Hsu and Lin, 2015) and reinforcement learning (Fang et al., 2017; Liu et al., 2018), aiming to learn a policy for switching between a diversity-based and an uncertainty-based criterion at each iteration. Recently, Ash et al. (2020) proposed Batch Active learning by Diverse Gradient Embeddings (BADGE), and Yuan et al. (2020) proposed Active Learning by Processing Surprisal (ALPS), a cold-start acquisition function designed for pretrained language models. Both methods construct representations for the unlabeled data based on uncertainty and then use them for clustering, hence combining both uncertainty and diversity sampling. The effectiveness of AL in a variety of NLP tasks with pretrained language models, e.g. BERT (Devlin et al., 2019), has recently been evaluated empirically by Ein-Dor et al. (2020), showing substantial improvements over random sampling.
220
+
221
+ # 8 Conclusion & Future Work
222
+
223
+ We present CAL, a novel acquisition function for AL that acquires contrastive examples: data points that are similar in the model feature space, yet for which the model outputs maximally different class probabilities. Our approach uses information from the feature space to create neighborhoods for each unlabeled example, and the predictive likelihood for ranking the candidate examples. Empirical experiments on various in-domain and out-of-domain scenarios demonstrate that CAL performs better than other acquisition functions in the majority of cases. After analyzing the actively acquired datasets obtained with all of the methods considered, we conclude that entropy is the hardest baseline to beat, but our approach improves on it by guiding uncertainty sampling toward regions near the decision boundary with more informative data.
224
+
225
+ Still, our empirical results and analysis show that no single acquisition function consistently outperforms all others by a large margin. This demonstrates that there is still room for improvement in the AL field.
226
+
227
+ Furthermore, recent findings show that in specific tasks, such as Visual Question Answering (VQA), complex acquisition functions might not outperform random sampling because they tend to select collective outliers that hurt model performance (Karamcheti et al., 2021). We believe that taking a step back and analyzing the behavior of standard acquisition functions, e.g. with Dataset Maps (Swayamdipta et al., 2020), might be beneficial, especially if similar behavior appears in other NLP tasks too.
228
+
229
+ Another interesting future direction for CAL, related to interpretability, would be to evaluate whether acquiring contrastive examples for the task (Kaushik et al., 2020; Gardner et al., 2020) is more beneficial than contrastive examples for the model, as we do in CAL.
230
+
231
+ # Acknowledgments
232
+
233
+ KM and NA are supported by Amazon through the Alexa Fellowship scheme.
234
+
235
+ # References
236
+
237
+ Jordan T. Ash, Chicheng Zhang, Akshay Krishnamurthy, John Langford, and Alekh Agarwal. 2020. Deep batch active learning by diverse, uncertain gradient lower bounds. In Proceedings of the International Conference on Learning Representations.
238
+ Yoram Baram, Ran El-Yaniv, and Kobi Luz. 2004. Online choice of active learning algorithms. Journal of Machine Learning Research, 5:255-291.
239
+ Zalán Bodó, Zsolt Minier, and Lehel Csató. 2011. Active learning with clustering. In Proceedings of the Active Learning and Experimental Design workshop In conjunction with AISTATS 2010, volume 16, pages 127-139.
240
+ Klaus Brinker. 2003. Incorporating diversity in active learning with support vector machines. In Proceedings of the International Conference on Machine Learning, pages 59-66.
241
+ Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020. A simple framework for contrastive learning of visual representations. In Proceedings of the International Conference on Machine Learning, volume 119, pages 1597-1607.
242
+ David A. Cohn, Zoubin Ghahramani, and Michael I. Jordan. 1996. Active learning with statistical models. Journal of Artificial Intelligence Research, 4(1):129-145.
243
+ Sanjoy Dasgupta. 2011. Two faces of active learning. Theoretical Computer Science, 412(19):1767-1781. Algorithmic Learning Theory (ALT 2009).
244
+ Franck Dernoncourt and Ji Young Lee. 2017. PubMed 200k RCT: a dataset for sequential sentence classification in medical abstracts. In Proceedings of the Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 308-313.
245
+ Shrey Desai and Greg Durrett. 2020. Calibration of pre-trained transformers. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 295-302.
246
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4171-4186.
247
+ Jesse Dodge, Gabriel Ilharco, Roy Schwartz, Ali Farhadi, Hannaneh Hajishirzi, and Noah A. Smith. 2020. Fine-tuning pretrained language models: Weight initializations, data orders, and early stopping. ArXiv.
248
+ Melanie Ducoffe and Frederic Precioso. 2018. Adversarial active learning for deep networks: a margin based approach.
249
+
250
+ Liat Ein-Dor, Alon Halfon, Ariel Gera, Eyal Shnarch, Lena Dankin, Leshem Choshen, Marina Danilevsky, Ranit Aharonov, Yoav Katz, and Noam Slonim. 2020. Active learning for BERT: An empirical study. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 7949-7962.
251
+ Meng Fang, Yuan Li, and Trevor Cohn. 2017. Learning how to active learn: A deep reinforcement learning approach. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 595-605.
252
+ Yarin Gal and Zoubin Ghahramani. 2016. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In Proceedings of the International Conference on Machine Learning, volume 48, pages 1050-1059.
253
+ Yarin Gal, Riashat Islam, and Zoubin Ghahramani. 2017. Deep Bayesian active learning with image data. In Proceedings of the International Conference on Machine Learning, volume 70, pages 1183-1192.
254
+ Matt Gardner, Yoav Artzi, Victoria Basmov, Jonathan Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi, Dheeru Dua, Yanai Elazar, Ananth Gottumukkala, Nitish Gupta, Hannaneh Hajishirzi, Gabriel Ilharco, Daniel Khashabi, Kevin Lin, Jiangming Liu, Nelson F. Liu, Phoebe Mulcaire, Qiang Ning, Sameer Singh, Noah A. Smith, Sanjay Subramanian, Reut Tsarfaty, Eric Wallace, Ally Zhang, and Ben Zhou. 2020. Evaluating models' local decision boundaries via contrast sets. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1307-1323.
255
+ Yonatan Geifman and Ran El-Yaniv. 2017. Deep active learning over the long tail. CoRR, abs/1711.00941.
256
+ Daniel Gissin and Shai Shalev-Shwartz. 2019. Discriminative active learning. CoRR, abs/1907.06347.
257
+ Beliz Gunel, Jingfei Du, Alexis Conneau, and Veselin Stoyanov. 2021. Supervised contrastive learning for pre-trained language model fine-tuning. In Proceedings of the International Conference on Learning Representations.
258
+ Dan Hendrycks, Xiaoyuan Liu, Eric Wallace, Adam Dziedzic, Rishabh Krishnan, and Dawn Song. 2020. Pretrained transformers improve out-of-distribution robustness. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 2744-2751.
259
+ Neil Houlsby, Ferenc Huszár, Zoubin Ghahramani, and Máté Lengyel. 2011. Bayesian active learning for classification and preference learning. ArXiv.
260
+ Wei-Ning Hsu and Hsuan-Tien Lin. 2015. Active learning by learning. In Proceedings of the Conference of the Association for the Advancement of Artificial Intelligence, pages 2659-2665.
261
+
262
+ Siddharth Karamcheti, Ranjay Krishna, Li Fei-Fei, and Christopher Manning. 2021. Mind your outliers! investigating the negative impact of outliers on active learning for visual question answering. In Proceedings of the Annual Meeting of the Association for Computational Linguistics and the International Joint Conference on Natural Language Processing, pages 7265-7281.
263
+ Divyansh Kaushik, Eduard Hovy, and Zachary Lipton. 2020. Learning the difference that makes a difference with counterfactually-augmented data. In Proceedings of the International Conference on Learning Representations.
264
+ Andreas Kirsch, Joost van Amersfoort, and Yarin Gal. 2019. BatchBALD: Efficient and diverse batch acquisition for deep bayesian active learning. In Proceedings of the Conference on Neural Information Processing Systems, pages 7026-7037.
265
+ Wuwei Lan, Siyu Qiu, Hua He, and Wei Xu. 2017. A continuously growing dataset of sentential paraphrases. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1224-1234.
266
+ David D. Lewis and Jason Catlett. 1994. Heterogeneous uncertainty sampling for supervised learning. In Machine Learning Proceedings 1994, pages 148-156.
267
+ David D. Lewis and William A. Gale. 1994. A sequential algorithm for training text classifiers. In Proceedings of the Annual International ACM SIGIR Conference on Research and Development in Information Retrieval.
268
+ Ming Liu, Wray Buntine, and Gholamreza Haffari. 2018. Learning how to actively learn: A deep imitation learning approach. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1874-1883.
269
+ David Lowell and Zachary C Lipton. 2019. Practical obstacles to deploying active learning. Proceedings of the Conference on Empirical Methods in Natural Language Processing and the International Joint Conference on Natural Language Processing, pages 21-30.
270
+ Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142-150.
271
+ Katerina Margatina, Loic Barrault, and Nikolaos Aletras. 2021. Bayesian active learning with pretrained language models. CoRR, abs/2104.08320.
272
+
273
+ Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In Proceedings of the International Conference on Neural Information Processing Systems, page 3111-3119.
274
+ Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, pages 8024-8035.
275
+ Nicholas Roy and Andrew McCallum. 2001. Toward optimal active learning through sampling estimation of error reduction. In Proceedings of the International Conference on Machine Learning, page 441-448.
276
+ Dongyu Ru, Jiangtao Feng, Lin Qiu, Hao Zhou, Mingxuan Wang, Weinan Zhang, Yong Yu, and Lei Li. 2020. Active sentence learning by adversarial uncertainty sampling in discrete space. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4908-4917, Online. Association for Computational Linguistics.
277
+ Ozan Sener and Silvio Savarese. 2018. Active learning for convolutional neural networks: A core-set approach. In Proceedings of the International Conference on Learning Representations.
278
+ Burr Settles. 2009. Active learning literature survey. Computer sciences technical report.
279
+ Artem Shelmanov, Dmitri Puzyrev, Lyubov Kupriyanova, Denis Belyakov, Daniil Larionov, Nikita Khromov, Olga Kozlova, Ekaterina Artemova, Dmitry V. Dylov, and Alexander Panchenko. 2021. Active learning for sequence tagging with deep pre-trained models and Bayesian uncertainty estimates. In Proceedings of the Conference of the European Chapter of the Association for Computational Linguistics, pages 1698-1712.
280
+ Dan Shen, Jie Zhang, Jian Su, Guodong Zhou, and Chew-Lim Tan. 2004. Multi-criteria-based active learning for named entity recognition. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 589-596.
281
+ Yanyao Shen, Hyokun Yun, Zachary Lipton, Yakov Kronrod, and Animashree Anandkumar. 2017. Deep active learning for named entity recognition. In Proceedings of the Workshop on Representation Learning for NLP, pages 252-256.
282
+ Aditya Siddhant and Zachary C Lipton. 2018. Deep Bayesian active learning for natural language processing: Results of a large-scale empirical study. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 2904-2909.
285
+ Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1631-1642.
286
+ Kihyuk Sohn. 2016. Improved deep metric learning with multi-class n-pair loss objective. In Advances in Neural Information Processing Systems, volume 29. Curran Associates, Inc.
287
+ Swabha Swayamdipta, Roy Schwartz, Nicholas Lourie, Yizhong Wang, Hannaneh Hajishirzi, Noah A. Smith, and Yejin Choi. 2020. Dataset cartography: Mapping and diagnosing datasets with training dynamics. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 9275-9293.
288
+ Aaron van den Oord, Yazhe Li, and Oriol Vinyals. 2019. Representation learning with contrastive predictive coding.
289
+ Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In International Conference on Learning Representations.
290
+ Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45.
291
+ Michelle Yuan, Hsuan-Tien Lin, and Jordan Boyd-Graber. 2020. Cold-start active learning through self-supervised language modeling. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7935-7948, Online. Association for Computational Linguistics.
292
+ Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems, volume 28, pages 649-657. Curran Associates, Inc.
293
+ Fedor Zhdanov. 2019. Diverse mini-batch active learning.
294
+
295
+ Jingbo Zhu, Huizhen Wang, Tianshun Yao, and Benjamin K Tsou. 2008. Active learning with sampling by uncertainty and density for word sense disambiguation and text classification. In Proceedings of the International Conference on Computational Linguistics, pages 1137-1144.
296
+
297
+ # A Appendix
298
+
299
+ # A.1 Data & Hyperparameters
300
+
301
+ In this section we provide details of all the datasets used in this work and the hyperparameters used for training the models. For QNLI, IMDB and SST-2 we randomly sample $10\%$ of the training set to serve as the validation set, while for AGNEWS and QQP we sample $5\%$ . For the DBPEDIA dataset we undersample both the training and validation sets (from the standard splits) to facilitate our AL simulation (the original dataset consists of 560K training and 28K validation examples). For all datasets we use the standard test set, except for SST-2, QNLI and QQP, which are taken from the GLUE benchmark (Wang et al., 2019); for these we use the development set as the held-out test set and subsample a development set from the training set.
302
+
303
+ For all datasets we train BERT-BASE (Devlin et al., 2019) from the HuggingFace library (Wolf et al., 2020) in PyTorch (Paszke et al., 2019). We train all models with batch size 16, learning rate $2e - 5$ , no weight decay, and the AdamW optimizer with epsilon $1e - 8$ . For all datasets we use a maximum sequence length of 128, except for IMDB, which contains longer input texts, where we use 256. To ensure reproducibility and fair comparison between the various methods under evaluation, we run all experiments with the same five seeds, randomly selected from the range [1, 9999]. Following Dodge et al. (2020), we evaluate the model 5 times per epoch on the development set and keep the checkpoint with the lowest validation loss. We use the code provided by Yuan et al. (2020) for ALPS, BADGE and BERTKM.
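The training setup above can be collected into a small configuration sketch. All names are illustrative rather than the authors' code, and the concrete seed values are placeholders (the paper samples its five seeds from [1, 9999] but does not list them):

```python
from dataclasses import dataclass

@dataclass
class TrainConfig:
    # Hyperparameters reported in A.1; field names are ours.
    batch_size: int = 16
    learning_rate: float = 2e-5
    weight_decay: float = 0.0
    adam_epsilon: float = 1e-8
    max_seq_length: int = 128      # 256 for IMDB, which has longer inputs
    evals_per_epoch: int = 5       # keep the checkpoint with lowest validation loss
    seeds: tuple = (1, 2, 3, 4, 5)  # placeholder; actual seeds drawn from [1, 9999]

def config_for(dataset: str) -> TrainConfig:
    """Per-dataset configuration; only the maximum sequence length varies."""
    return TrainConfig(max_seq_length=256 if dataset == "IMDB" else 128)
```

This mirrors the paper's design choice of varying only the sequence length per dataset while holding all optimizer settings fixed across tasks.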
304
+
305
+ # A.2 Efficiency
306
+
307
+ In this section we compare the efficiency of the acquisition functions considered in our experiments. We denote by $m$ the number of labeled examples in $\mathcal{D}_{\mathrm{lab}}$ , $n$ the number of unlabeled examples in $\mathcal{D}_{\mathrm{pool}}$ , $C$ the number of classes in the downstream classification task, $d$ the dimension of the embeddings, $t$ the fixed number of iterations for k-MEANS, $l$ the maximum sequence length and $k$ the acquisition size. In our experiments, following Yuan et al. (2020), $k = 100$ , $d = 768$ , $t = 10$ , and $l = 128$ . $^{12}$ ALPS requires $\mathcal{O}(tknl)$ , given that the surprisal embeddings are computed. BERTKM and BADGE, the
308
+
309
+ most computationally heavy approaches, require $\mathcal{O}(knd)$ and $\mathcal{O}(Cknd)$ respectively, given that gradient embeddings are computed for BADGE $^{13}$ . On the other hand, ENTROPY only requires $n$ forward passes through the model, in order to obtain the logits for all the data in $\mathcal{D}_{\mathrm{pool}}$ . Instead, our approach, CAL, first requires $m + n$ forward passes, in order to acquire the logits and the CLS representations of the data (in $\mathcal{D}_{\mathrm{pool}}$ and $\mathcal{D}_{\mathrm{lab}}$ ), and then one iteration over all data in $\mathcal{D}_{\mathrm{pool}}$ to obtain the scores.
310
+
311
+ We present the runtimes in detail for all datasets and acquisition functions in Tables 4 and 5. We define the total acquisition time as the sum of two components: inference time and selection time. Inference time is the time required to pass all data through the model in order to acquire predictions, probability distributions or model encodings (representations). This is explicitly required for the uncertainty-based methods, like ENTROPY, and for our method CAL. The remaining time is considered selection time and is essentially the time for all computations necessary to rank and select the $b$ most important examples from $\mathcal{D}_{\mathrm{pool}}$ .
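As a concrete illustration of why ENTROPY's selection time is negligible, its selection step reduces to ranking per-example predictive entropies after the $n$ forward passes. A minimal sketch, not the authors' implementation:

```python
import numpy as np

def entropy_acquisition(probs: np.ndarray, b: int) -> np.ndarray:
    """Select the b unlabeled examples with highest predictive entropy.

    probs: (n, C) class probabilities from one forward pass over D_pool.
    The ranking below is the whole selection step, which is why its cost
    is negligible next to the n forward passes (cf. Table 4).
    """
    eps = 1e-12  # numerical stability for log(0)
    H = -(probs * np.log(probs + eps)).sum(axis=1)  # per-example entropy
    return np.argsort(-H)[:b]                       # indices of the b most uncertain
```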
312
+
313
+ We observe in Table 4 that the diversity-based functions do not require this explicit inference time, while for ENTROPY it is the only computation needed (taking the argmax of a list of uncertainty scores is negligible). CAL requires both inference and selection time. Its inference time is slightly higher than that of ENTROPY because we perform $m + n$ forward passes instead of $n$ , i.e., over both $\mathcal{D}_{\mathrm{pool}}$ and $\mathcal{D}_{\mathrm{lab}}$ instead of $\mathcal{D}_{\mathrm{pool}}$ only. The selection time for CAL corresponds to the for-loop presented in Algorithm 1. We observe that it is often less computationally expensive than the inference step (a simple forward pass through the model). Still, there is room for improvement in reducing the time complexity of this step.
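A minimal sketch of that selection loop, under our reading of Algorithm 1: each candidate in $\mathcal{D}_{\mathrm{pool}}$ is scored by the mean KL divergence between its predictive distribution and those of its $k$ nearest labeled neighbours in CLS space. Function and variable names are ours; this is an illustration of the single pass over $\mathcal{D}_{\mathrm{pool}}$, not the released code:

```python
import numpy as np

def cal_scores(pool_cls, pool_probs, lab_cls, lab_probs, k=10):
    """One O(n) pass over D_pool: score each candidate by the mean KL
    divergence between the predictive distributions of its k nearest
    labeled neighbours (in CLS space) and its own distribution."""
    eps = 1e-12
    scores = np.empty(len(pool_cls))
    for i in range(len(pool_cls)):
        # k nearest labeled neighbours of the candidate in CLS space
        d = np.linalg.norm(lab_cls - pool_cls[i], axis=1)
        nn = np.argsort(d)[:k]
        q = lab_probs[nn]          # neighbours' predictive distributions
        p = pool_probs[i]          # candidate's predictive distribution
        kl = (q * (np.log(q + eps) - np.log(p + eps))).sum(axis=1)
        scores[i] = kl.mean()      # high score = disagrees with its neighbourhood
    return scores
```

Candidates with the highest scores, i.e. those whose predictions contrast most with similar labeled examples, would then be selected for annotation.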
314
+
315
+ In Table 5 we present the total time for all datasets (ordered by increasing $\mathcal{D}_{\mathrm{pool}}$ size) and the average time for each acquisition function, as a means to rank their efficiency. Because we do not apply all acquisition functions to all datasets, we compute three different averages to ensure a fair comparison. AVG.-ALL is the average time across all 7 datasets and is used to compare RANDOM, ALPS, ENTROPY and CAL. AVG.-3 is the average time across the first 3 datasets (IMDB, SST-2 and DBPEDIA) and is used to compare all
316
+
317
+ <table><tr><td></td><td>DBPEDIA</td><td>IMDB</td><td>SST-2</td><td>QNLI</td><td>AGNEWS</td><td>PUBMED</td><td>QQP</td></tr><tr><td>RANDOM</td><td>(0,0)</td><td>(0,0)</td><td>(0,0)</td><td>(0,0)</td><td>(0,0)</td><td>(0,0)</td><td>(0,0)</td></tr><tr><td>ALPS</td><td>(0,181)</td><td>(0,222)</td><td>(0,733)</td><td>(0,1607)</td><td>(0,2309)</td><td>(0,5878)</td><td>(0,14722)</td></tr><tr><td>BERTKM</td><td>(0,467)</td><td>(0,431)</td><td>(0,4265)</td><td>(0,8138)</td><td>(0,9344)</td><td>(0,25965)</td><td>(-,-)</td></tr><tr><td>BADGE</td><td>(0,12871)</td><td>(0,3816)</td><td>(0,25640)</td><td>(-,-)</td><td>(-,-)</td><td>(-,-)</td><td>(-,-)</td></tr><tr><td>ENTROPY</td><td>(103,1)</td><td>(107,0)</td><td>(173,0)</td><td>(331,0)</td><td>(402,0)</td><td>(596,0)</td><td>(1070,0)</td></tr><tr><td>CAL</td><td>(133,49)</td><td>(212,61)</td><td>(464,244)</td><td>(528,376)</td><td>(656,628)</td><td>(1184,1445)</td><td>(1541,2857)</td></tr></table>
318
+
319
+ Table 4: Runtimes (in seconds) for all datasets and acquisition functions. In each cell of the table we present a tuple $(i,s)$ where $i$ is the inference time and $s$ the selection time. Inference time is the time for the model to perform a forward pass for all the unlabeled data in $\mathcal{D}_{\mathrm{pool}}$ and selection time is the time that each acquisition function requires to rank all candidate data points and select $b$ for annotation (for a single iteration). Since we cannot report the runtimes for every model in the AL pipeline (at each iteration the size of $\mathcal{D}_{\mathrm{pool}}$ changes), we provide the median.
320
+
321
+ <table><tr><td></td><td>DBPEDIA</td><td>IMDB</td><td>SST-2</td><td>QNLI</td><td>AGNEWS</td><td>PUBMED</td><td>QQP</td><td>AVG.-ALL</td><td>AVG.-3</td><td>AVG.-6</td></tr><tr><td>RANDOM</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td>ALPS</td><td>181</td><td>222</td><td>733</td><td>1607</td><td>2309</td><td>5878</td><td>14722</td><td>3664</td><td>378</td><td>1821</td></tr><tr><td>BERTKM</td><td>467</td><td>431</td><td>4265</td><td>8138</td><td>9344</td><td>25965</td><td>-</td><td>-</td><td>1721</td><td>8101</td></tr><tr><td>BADGE</td><td>12871</td><td>3816</td><td>25640</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>14109</td><td>-</td></tr><tr><td>ENTROPY</td><td>104</td><td>107</td><td>173</td><td>331</td><td>402</td><td>596</td><td>1070</td><td>397</td><td>128</td><td>285</td></tr><tr><td>CAL</td><td>182</td><td>273</td><td>708</td><td>904</td><td>1284</td><td>2629</td><td>4398</td><td>1482</td><td>387</td><td>996</td></tr></table>
322
+
323
+ Table 5: Runtimes (in seconds) for all datasets and acquisition functions. In each cell of the table we present the total acquisition time (inference and selection). AVG.-ALL shows the average acquisition time for each acquisition function for all datasets, AVG.-6. for all datasets except QQP and AVG.-3 for the 3 first datasets only (DBPEDIA, IMDB, SST-2).
324
+
325
+ acquisition functions. Finally, AVG.-6 is the average time across all datasets apart from QQP and is used to compare RANDOM, ALPS, BERTKM, ENTROPY and CAL.
326
+
327
+ We first observe that ENTROPY is overall the most efficient acquisition function. According to the AVG.-ALL column, CAL is the second most efficient function, followed by ALPS. According to AVG.-6 we observe the same pattern, with BERTKM being the slowest method. Finally, we compare all acquisition functions on the 3 smallest (in terms of $\mathcal{D}_{\mathrm{pool}}$ size) datasets and find that ENTROPY is the fastest method, followed by ALPS and CAL, which require almost 3 times more computation time. The other clustering methods, BERTKM and BADGE, are significantly more computationally expensive, requiring 13 and $100(!)$ times more time than ENTROPY, respectively.
328
+
329
+ Interestingly, we observe the effect of the acquisition size ( $2\%$ of $\mathcal{D}_{\mathrm{pool}}$ in our case) and the size of $\mathcal{D}_{\mathrm{pool}}$ on the clustering methods. As these parameters increase, the computation time of the corresponding acquisition function increases dramatically. For example, on the 3 smallest datasets ALPS requires similar time to CAL. However,
330
+
331
+ when we increase $b$ and $n$ (i.e. as we move from DBPEDIA with $20K$ examples in $\mathcal{D}_{\mathrm{pool}}$ to QNLI with $100K$ , etc.; see Table 1), the acquisition time of ALPS becomes twice that of CAL. For instance, on QQP with acquisition size 3270, ALPS requires 14722 seconds on average, while CAL requires 4398. This shows that even though our approach becomes more computationally expensive as the size of $\mathcal{D}_{\mathrm{pool}}$ increases, its complexity is linear, while the cost of the other hybrid methods that use clustering grows much faster.
332
+
333
+ # A.3 Reproducibility
334
+
335
+ All code for data preprocessing, model implementations, and active learning algorithms is made available at https://github.com/mourga/contrastive-active-learning. For questions regarding the implementation, please contact the first author.
activelearningbyacquiringcontrastiveexamples/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:99b4fe9c74f5b62e32008959ff7b167ff1f0cf0ce5bb31dd7d1689093c1f429f
3
+ size 374596
activelearningbyacquiringcontrastiveexamples/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:275480ecad34c022ee4611b14967171f50931cd10a645a6546462d326931114d
3
+ size 531096
adapterdropontheefficiencyofadaptersintransformers/536dfedc-2365-49a3-845c-b13dc83a8e29_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2d3c23221e0b2532d8ca53914db0944d261f020f21ab9664262b47dc60c641eb
3
+ size 105134
adapterdropontheefficiencyofadaptersintransformers/536dfedc-2365-49a3-845c-b13dc83a8e29_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:85368e47a9b999224613ba8c77fc0ce94110e76d381bd9efd0e631d824394e2e
3
+ size 123827
adapterdropontheefficiencyofadaptersintransformers/536dfedc-2365-49a3-845c-b13dc83a8e29_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:07524abce34d126484f9ef8632e1a1a2a9d65c02041e0c1abbf96920f191c51a
3
+ size 2701060
adapterdropontheefficiencyofadaptersintransformers/full.md ADDED
@@ -0,0 +1,394 @@
 
1
+ # AdapterDrop: On the Efficiency of Adapters in Transformers
2
+
3
+ Andreas Rücklé* and Gregor Geigle and Max Glockner, Tilman Beck and Jonas Pfeiffer and Nils Reimers and Iryna Gurevych Ubiquitous Knowledge Processing Lab (UKP) Department of Computer Science, Technische Universität Darmstadt www.ukp.tu-darmstadt.de
4
+
5
+ # Abstract
6
+
7
+ Transformer models are expensive to fine-tune, slow for inference, and have large storage requirements. Recent approaches tackle these shortcomings by training smaller models, dynamically reducing the model size, and by training light-weight adapters. In this paper, we propose AdapterDrop, removing adapters from lower transformer layers during training and inference, which incorporates concepts from all three directions. We show that AdapterDrop can dynamically reduce the computational overhead when performing inference over multiple tasks simultaneously, with minimal decrease in task performances. We further prune adapters from AdapterFusion, which improves the inference efficiency while maintaining the task performances entirely.
8
+
9
+ # 1 Introduction
10
+
11
+ While transfer learning has become the go-to method for solving NLP tasks (Pan and Yang, 2010; Torrey and Shavlik, 2010; Ruder, 2019; Howard and Ruder, 2018; Peters et al., 2018), transformer-based models are notoriously deep requiring millions or even billions of parameters (Radford et al., 2018; Devlin et al., 2019; Radford et al., 2019; Liu et al., 2019; Brown et al., 2020). This results in slow inference and large storage requirements.
12
+
13
+ At least three independent lines of research have recently evolved to tackle these shortcomings. (1) Smaller and faster models that are either distilled or trained from scratch (Sanh et al., 2019; Sun et al., 2020; Bai et al., 2021; Wang et al., 2020). (2) Robustly trained transformers in which the model depth can be reduced at run-time, thereby decreasing inference time dynamically (Fan et al., 2020; Elbayad et al., 2020; Xin et al., 2020; Hou et al., 2020). (3) Adapters, which, instead of fully fine-tuning the model, only train a newly introduced set of weights at every layer, thereby sharing
14
+
15
+ the majority of parameters between tasks (Houlsby et al., 2019; Bapna and Firat, 2019; Pfeiffer et al., 2020a). Adapters have been shown to work well for machine translation (Bapna and Firat, 2019), cross-lingual transfer (Pfeiffer et al., 2020b, 2021b; Üstün et al., 2020; Vidoni et al., 2020; Ansell et al., 2021), community QA (Rücklé et al., 2020), and task composition for transfer learning (Stickland and Murray, 2019; Pfeiffer et al., 2021a; Lauscher et al., 2020; Wang et al., 2021; Poth et al., 2021). Despite their recent popularity, the computational efficiency of adapters has not been explored beyond parameter efficiency.
16
+
17
+ We close this gap and establish the computational efficiency of two adapter architectures at training and inference time. We investigate different strategies to further improve the efficiency of adapter-based models by incorporating ideas from all three directions mentioned above. Our strategies rely on dropping out adapters from transformers, at training and inference time, resulting in models that are dynamically adjustable regarding the available computational resources. Our approaches are agnostic to the pre-trained transformer model (e.g., base, large), which makes them broadly applicable.
18
+
19
+ # Contributions:
20
+
21
+ 1. We are the first to establish the computational efficiency of adapters compared to full fine-tuning. We show that the training steps of adapters can be up to $60\%$ faster than full model fine-tuning with common hyperparameter choices, while being $4 - 6\%$ slower at inference. Hence, adapters are a suitable choice for researchers interested in achieving faster training times, or when requiring extensive hyperparameter tuning.
22
+
23
+ 2. We propose AdapterDrop, the efficient and dynamic removal of adapters with minimal impact on the task performances. We show that dropping adapters from lower transformer layers considerably improves the inference speed in
24
+
25
+ <table><tr><td rowspan="2">Setting</td><td rowspan="2">Adapter</td><td colspan="4">Relative speed (for Seq.Len./Batch)</td></tr><tr><td>128/16</td><td>128/32</td><td>512/16</td><td>512/32</td></tr><tr><td rowspan="2">Training</td><td>Houlsby</td><td>1.48</td><td>1.53</td><td>1.36</td><td>1.33</td></tr><tr><td>Pfeiffer</td><td>1.57</td><td>1.60</td><td>1.41</td><td>1.37</td></tr><tr><td rowspan="2">Inference</td><td>Houlsby</td><td>0.94</td><td>0.94</td><td>0.96</td><td>0.96</td></tr><tr><td>Pfeiffer</td><td>0.95</td><td>0.95</td><td>0.96</td><td>0.96</td></tr></table>
26
+
27
+ Table 1: Relative speed of adapters compared to fully fine-tuned models. For example, 1.6 for training with the Pfeiffer adapter means that we can perform 1.6 training steps with this adapter in the time of one training step with full model fine-tuning.
28
+
29
+ multi-task settings. For example, with adapters dropped from the first five layers, AdapterDrop is $39\%$ faster when performing inference on 8 tasks simultaneously. This can be beneficial for researchers working on models that need to make multiple predictions on each input.
30
+
31
+ 3. We prune adapters from adapter compositions in AdapterFusion (Pfeiffer et al., 2021a) and retain only the most important adapters after transfer learning, resulting in faster inference while maintaining the task performances entirely. This is suitable for settings with little labeled training data, where AdapterFusion can achieve ample improvements over standard single task models.
32
+
33
+ # 2 Efficiency of Adapters
34
+
35
+ We first establish the computational efficiency of adapters without AdapterDrop. As illustrated in Figure 1, significant differences exist in the forward and backward pass when fine-tuning adapters compared to fully fine-tuning the model. In the forward pass, adapters add complexity with the additional components; however, it is not necessary to backpropagate through the entire model during the backward pass. We compare the training and inference speed of full model fine-tuning against the adapter architectures of Houlsby et al. (2019) and Pfeiffer et al. (2021a) (depicted in Figure 1) using the AdapterHub.ml framework (Pfeiffer et al., 2020a). We conduct our measurements with the transformer configuration of BERT base and verify them with different GPUs. $^{1}$
36
+
37
+ We provide measurements corresponding to common experiment configurations in Table 1.
38
+
39
+ Training. Adapters can be considerably faster compared to full model fine-tuning— $60\%$ faster
40
+
41
+ in some configurations. The two adapter architectures differ only marginally in terms of training efficiency: due to its simpler architecture, training steps of the Pfeiffer adapters are slightly faster. The magnitude of the differences depends on the input size; the available CUDA cores are the primary bottleneck. $^2$ We do not observe any particular differences between adapters and full fine-tuning regarding the training convergence. $^3$
42
+
43
+ The training speedup can be explained by the decreased overhead of gradient computation. Most of the parameters are frozen when using adapters and it is not necessary to backpropagate through the first components (see Figure 1).
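The mechanism is easy to state in code: only the adapter parameters stay trainable, so autograd never needs to traverse the frozen earlier components. A hedged sketch, where the name-matching heuristic is illustrative (real adapter frameworks such as AdapterHub track the trainable modules explicitly):

```python
def freeze_non_adapter_params(named_params, adapter_key="adapter"):
    """Freeze every parameter whose name does not contain `adapter_key`.

    `named_params` is an iterable of (name, param) pairs as produced by
    PyTorch's `model.named_parameters()`; here a param only needs a
    `requires_grad` attribute. Returns the names left trainable, so the
    backward pass stops at the deepest trainable module.
    """
    trainable = []
    for name, param in named_params:
        param.requires_grad = adapter_key in name
        if param.requires_grad:
            trainable.append(name)
    return trainable
```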
44
+
45
+ Inference. The two adapter architectures are 94- $96\%$ as fast as fully fine-tuned models, which varies depending on the input size. This can have a considerable impact when deployed at scale.
46
+
47
+ # 3 AdapterDrop
48
+
49
+ We have established that adapters are more efficient in terms of training time; however, there is a persistent need for sustainable and efficient models (Strubell et al., 2019). Backpropagating through as few layers as possible would further improve the efficiency of adapter training. Inference efficiency can be improved by sharing representations at lower transformer layers when simultaneously performing inference for multiple tasks, in other words, when performing multiple independent classifications on the same input. We establish this in Table 2, finding that models are up to $8.4\%$ faster with every shared layer (16 tasks).
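The measurement behind this setup can be sketched as follows: the first $n$ shared layers are computed once, and their output is reused by every task-specific upper portion of the model. `layers` and `task_heads` are hypothetical callables standing in for transformer layers and per-task remainders, not AdapterHub API:

```python
def multitask_forward(layers, task_heads, x, n_shared):
    """Run the first n_shared layers once, then branch per task.

    layers:     list of layer callables shared by all tasks
    task_heads: dict mapping task name -> task-specific remainder of the model
    Returns a dict of per-task outputs computed from the shared hidden state.
    """
    h = x
    for layer in layers[:n_shared]:   # computed once for all tasks
        h = layer(h)
    return {task: head(h) for task, head in task_heads.items()}
```

Each additional shared layer removes one per-task layer computation, which is the source of the per-layer speedups reported in Table 2.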
50
+
51
+ Motivated by these observations, we propose AdapterDrop: Dynamically removing adapters from lower transformer layers (depicted in Figure 1). AdapterDrop is similar to dropping out entire transformer layers (Fan et al., 2020), however, specialized to adapter settings—where lower layers often have a small impact on the task performances (Houlsby et al., 2019).
52
+
53
+ We study two training methods for AdapterDrop: (1) Specialized AdapterDrop: Removing adapters from the first $n$ transformer layers, where $n$ is fixed during training. This yields separate models for each possible $n$ . (2) Robust AdapterDrop: Drawing the integer $n$ randomly from [0, 11] for each
54
+
55
+ ![](images/0f472eaacbab9297062e5d23d62fafc47b7292547e4c894a18c8a77a4f7d4dab.jpg)
56
+ Figure 1: Standard adapter fine-tuning vs. Adapter-Drop fine-tuning. The left model includes adapters at every layer whereas the right model has adapters dropped at the first layer. The arrows to the right of each model indicate the information flow for the Forward and Backward pass through the model.
57
+
58
+ <table><tr><td>Simultaneous Tasks</td><td>2</td><td>4</td><td>8</td><td>16</td></tr><tr><td>Speedup (each layer)</td><td>4.3%</td><td>6.6%</td><td>7.8%</td><td>8.4%</td></tr></table>
59
+
60
+ Table 2: Speedup for each shared transformer layer when performing inference for multiple tasks simultaneously (details are given in Appendix G.2)
61
+
62
+ training batch. $^{4}$ This yields one robust model that is applicable to a varying number of dropped layers. We study the effectiveness of AdapterDrop on the dev sets of the GLUE benchmark (Wang et al., 2018) using RoBERTa base (Liu et al., 2019). $^{5}$
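The two training variants differ only in how the number of dropped lower layers $n$ is chosen per batch, which can be sketched with an illustrative helper (not the released code):

```python
import random

def adapterdrop_mask(num_layers=12, robust=True, n_fixed=0):
    """Per-batch adapter keep-mask for AdapterDrop.

    Specialized: drop adapters from a fixed number n_fixed of lower layers.
    Robust: draw n uniformly from [0, num_layers - 1] anew for every
    training batch, so a single model handles any n at test time.
    mask[i] == True means the adapter in transformer layer i is kept.
    """
    n = random.randint(0, num_layers - 1) if robust else n_fixed
    return [i >= n for i in range(num_layers)]
```

At inference, the same mask shape lets the robust model (de)activate lower-layer adapters dynamically depending on the available compute.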
63
+
64
+ Figure 2 shows that specialized AdapterDrop maintains good results even with several dropped layers. With the first five layers dropped, specialized AdapterDrop maintains $97.1\%$ of the original performance (averaged over all eight GLUE tasks; see Table 8). Moreover, robust AdapterDrop achieves comparable results, and with five layers dropped it maintains $95.4\%$ of the original performance (on avg). The advantage of robust over specialized AdapterDrop is that the robust variant can be dynamically scaled. Based on current available computational resources, robust AdapterDrop can (de)activate layers with the same set of parameters, whereas specialized AdapterDrop needs to be trained for every setting explicitly.
65
+
66
+ The efficiency gains can be large. When performing inference for multiple tasks simultaneously, we measure inference speedups of $21 - 42\%$ with five
67
+
68
+ ![](images/2e589ffd99f21743caa53dbb20e7070c4c39aaa62b6eb1daf2980c739f732ff3.jpg)
69
+ Figure 2: Task performances in relation to dropped layers during evaluation (Figure 13 shows all tasks). 'Standard adapter' is trained with no dropped layers.
70
+
71
+ dropped layers—depending on the number of simultaneous tasks (Table 2).<sup>6</sup> Training of our robust adapters is also more efficient, which increases the speed of training steps by $26\%$ .<sup>7</sup>
72
+
73
+ # 4 Efficiency of AdapterFusion
74
+
75
+ AdapterFusion (Pfeiffer et al., 2021a) leverages the knowledge of several adapters from different tasks and learns an optimal combination of the adapters' output representations for a single target task (see Figure 3). AdapterFusion (AF) is particularly useful for small training sets where learning adequate models is difficult. Despite its effectiveness, AF is computationally expensive because all included adapters are passed through sequentially. $^{8}$
76
+
77
+ Table 3 shows that the differences can be substantial for both training and inference. For instance, compared to a fully fine-tuned model, AF with eight adapters is around $47\%$ slower at training time and $62\%$ slower at inference. $^{9}$
78
+
79
+ # 5 AdapterDrop for AdapterFusion
80
+
81
+ There exists considerable potential for improving the efficiency of AF, especially at inference time. We address this with two variants of AdapterDrop
82
+
83
+ <table><tr><td rowspan="2">Adapters</td><td colspan="2">AF vs. Full FT</td><td colspan="2">AF vs. Adapter</td></tr><tr><td>Training</td><td>Inference</td><td>Training</td><td>Inference</td></tr><tr><td>2</td><td>0.92</td><td>0.64</td><td>0.57</td><td>0.68</td></tr><tr><td>8</td><td>0.53</td><td>0.38</td><td>0.33</td><td>0.40</td></tr><tr><td>16</td><td>0.33</td><td>0.24</td><td>0.21</td><td>0.26</td></tr></table>
84
+
85
+ Table 3: Relative speed of AdapterFusion (with 2/8/16 adapters) compared to a fully fine-tuned model and compared to a single-task adapter (right). Measured with a batch size of 32, and a sequence length of 128.
86
+
87
+ ![](images/405cb206fe76e76a325be921cddd59c50b9ae6fe5e22c5a89606b558224d2bbf.jpg)
88
+ Figure 3: Standard AdapterFusion vs. AdapterFusion pruning, each with 3 adapters initially. The left model includes all adapters at every layer whereas the right model has one adapter pruned at every layer.
89
+
90
+ for AF by (1) removing entire AF layers; (2) pruning the least important adapters from AF models.
91
+
92
+ # 5.1 Removing AdapterFusion Layers
93
+
94
+ We fuse the adapters from all eight GLUE tasks and observe the largest gains of AF on RTE and CoLA. We additionally train robust AF models with the same procedure as in §3. We investigate from how many lower layers we can remove AF at test time while still outperforming the corresponding single-task adapter (without AdapterDrop).
95
+
96
+ Figure 4 shows that AF performs better than the
97
+
98
+ ![](images/2e63812e48f1761e456720b578b90800a5733a254b8c5168595c6b5eacaba391.jpg)
99
+ Figure 4: Comparison of AdapterFusion with (orange) and without (blue) AdapterDrop training during inference when omitting early AF layers.
100
+
101
+ ![](images/911ff9993454205541555b6a7a700af09e76942e24352e53722bfe9ccfeeef01.jpg)
102
+ Figure 5: Task performance of AdapterFusion Pruning. AF is trained with eight adapters, and we gradually remove the least important from the model.
103
+
104
+ single-task adapter on RTE until removing AF from the first five layers. This improves the inference efficiency by $26\%$ . On CoLA, we observe a different trend. Removing AF from the first layer results in more noticeable performance decreases, achieving lower task performances than the single-task adapter. This is in line with recent work showing that some linguistic tasks heavily rely on information from the first layers (Vulić et al., 2020). We deliberately highlight that AdapterDrop might not be suitable for all tasks. However, Figure 13 shows that CoLA represents the most extreme case. Nevertheless, our results suggest that researchers need to be cautious when removing AdapterFusion layers as there may exist a considerable performance/efficiency tradeoff.
105
+
106
+ # 5.2 AdapterFusion Pruning
107
+
108
+ The inference efficiency of AF largely depends on the number of fused adapters, see Table 3. We can, therefore, achieve efficiency improvements by pruning adapters from the trained AF models (depicted in Figure 3). Our hypothesis is that we can safely remove adapters if they are not usually activated by AF, which means that they do not contribute much to the output representations. In each fusion layer, we record the average adapter activations—their relative importance—using all instances of the respective AF training set. We then remove the adapters with lowest activations.
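A sketch of this pruning criterion, assuming the recorded fusion attention weights are available as an array; the aggregation over layers and examples is our simplification of "average adapter activations":

```python
import numpy as np

def prune_fusion_adapters(fusion_weights, keep=2):
    """Keep only the most activated adapters in an AdapterFusion model.

    fusion_weights: array of shape (num_layers, num_examples, num_adapters)
    holding the fusion attention weights recorded on the AF training set.
    Averages the activation of each adapter over layers and examples, then
    returns the indices of the `keep` most important adapters (sorted).
    """
    importance = fusion_weights.mean(axis=(0, 1))   # avg activation per adapter
    keep_idx = np.argsort(-importance)[:keep]
    return sorted(int(i) for i in keep_idx)
```

Adapters outside the returned set would be removed from every fusion layer before deployment, shrinking the sequential pass through the fused adapters.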
109
+
110
+ Figure 5 demonstrates that we can remove most adapters in AF without affecting the task performance. With two remaining adapters, we achieve comparable results to the full AF models with eight adapters and improve the inference speed by $68\%$ .
111
+
112
+ We therefore recommend performing AdapterFusion pruning before deploying these models in practice. This is a simple yet effective technique
113
+
114
+ to achieve efficiency gains even when aiming at maintaining performance entirely.
115
+
116
+ # 6 Conclusion
117
+
118
+ Adapters have emerged as a suitable alternative to full model fine-tuning, and their most widely claimed computational advantage is the small model size. In this work, we have demonstrated that the advantages of adapters go far beyond mere parameter efficiency. Even without our extensions, the training steps of two common adapter architectures are up to $60\%$ faster, although these improvements come at the cost of $4 - 6\%$ slower inference. Thus, when training efficiency matters more than inference speed, adapters can be advantageous over full model fine-tuning.
119
+
120
+ AdapterDrop expands these advantages by dropping a variable number of adapters from lower transformer layers. We dynamically reduce the computational overhead at run-time when performing inference over multiple tasks and maintain task performances to a large extent. This benefits researchers working on models that need to make multiple independent predictions on a single input.
121
+
122
+ Finally, we also investigated the computational efficiency of AdapterFusion models. We find that dropping entire AdapterFusion layers comes at a considerable performance/efficiency tradeoff, whereas pruning of the least activated adapters in each layer can improve the model efficiency while maintaining performance entirely.
123
+
124
+ We believe that our work can be widely extended and that there exist many more directions to obtain efficient adapter-based models. For instance, we could explore more efficient pre-trained adapters, $^{11}$ sharing the adapter weights across layers, $^{12}$ or pruning adapters from AdapterFusion at training time. $^{13}$ In the Appendix to this paper, we present preliminary results for several related ideas, which may serve as a starting point for future work.
125
+
126
+ # Acknowledgments
127
+
128
+ This work has received financial support from multiple sources. (1) The German Federal Ministry of
129
+
130
+ Education and Research and the Hessian Ministry of Higher Education, Research, Science and the Arts within their joint support of the National Research Center for Applied Cybersecurity ATHENE. (2) The European Regional Development Fund (ERDF) and the Hessian State Chancellery – Hessian Minister of Digital Strategy and Development under the promotional reference 20005482 (TexPrax). (3) The German Research Foundation (DFG) as part of the Research Training Group KRITIS No. GRK 2222. (4) The German Federal Ministry of Education and Research (BMBF) as part of the Software Campus program under the promotional reference 01|S17050. (5) The LOEWE initiative (Hesse, Germany) within the emergenCITY center. (6) The German Research Foundation (DFG) as part of the UKP-SQuARE project (grant GU 798/29-1). Finally, we gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan X Pascal GPU used for this research.
131
+
132
+ # References
133
+
134
+ Alan Ansell, Edoardo Maria Ponti, Jonas Pfeiffer, Sebastian Ruder, Goran Glavaš, Ivan Vulić, and Anna Korhonen. 2021. MAD-G: Multilingual Adapter Generation for Efficient Cross-Lingual Transfer. In Findings of the Association for Computational Linguistics: EMNLP 2021.
135
+ Haoli Bai, Wei Zhang, Lu Hou, Lifeng Shang, Jing Jin, Xin Jiang, Qun Liu, Michael Lyu, and Irwin King. 2021. BinaryBERT: Pushing the limit of BERT quantization. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL 2021), pages 4334-4348.
+ Ankur Bapna and Orhan Firat. 2019. Simple, Scalable Adaptation for Neural Machine Translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP 2019), pages 1538-1548.
+ Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language Models are Few-Shot Learners. arXiv preprint arXiv:2005.14165.
+
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL 2019), pages 4171-4186.
+ Maha Elbayad, Jiatao Gu, Edouard Grave, and Michael Auli. 2020. Depth-adaptive transformer. In 8th International Conference on Learning Representations (ICLR 2020).
+ Angela Fan, Edouard Grave, and Armand Joulin. 2020. Reducing Transformer Depth on Demand with Structured Dropout. In 8th International Conference on Learning Representations (ICLR 2020).
+ Lu Hou, Zhiqi Huang, Lifeng Shang, Xin Jiang, Xiao Chen, and Qun Liu. 2020. DynaBERT: Dynamic BERT with adaptive width and depth. In 34th Conference on Neural Information Processing Systems (NeurIPS 2020).
+ Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-Efficient Transfer Learning for NLP. In Proceedings of the 36th International Conference on Machine Learning (ICML 2019).
+ Jeremy Howard and Sebastian Ruder. 2018. Universal Language Model Fine-tuning for Text Classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL 2018), pages 328-339.
+ Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. In 8th International Conference on Learning Representations (ICLR 2020).
+ Anne Lauscher, Olga Majewska, Leonardo F. R. Ribeiro, Iryna Gurevych, Nikolai Rozanov, and Goran Glavaš. 2020. Common sense or world knowledge? investigating adapter-based knowledge injection into pretrained transformers. In Proceedings of Deep Learning Inside Out (DeeLIO): The First Workshop on Knowledge Extraction and Integration for Deep Learning Architectures, pages 43-49.
+ Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:1907.11692.
+ Sinno Jialin Pan and Qiang Yang. 2010. A Survey on Transfer Learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345-1359.
+
+ Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep Contextualized Word Representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL 2018), pages 2227-2237.
+ Jonas Pfeiffer, Aishwarya Kamath, Andreas Rücklé, Kyunghyun Cho, and Iryna Gurevych. 2021a. AdapterFusion: Non-Destructive Task Composition for Transfer Learning. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2021), pages 487-503.
+ Jonas Pfeiffer, Andreas Rücklé, Clifton Poth, Aishwarya Kamath, Ivan Vulić, Sebastian Ruder, Kyunghyun Cho, and Iryna Gurevych. 2020a. AdapterHub: A Framework for Adapting Transformers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020): Systems Demonstrations, pages 46-54.
+ Jonas Pfeiffer, Ivan Vulić, Iryna Gurevych, and Sebastian Ruder. 2020b. MAD-X: An Adapter-based Framework for Multi-task Cross-lingual Transfer. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020), pages 7654-7673.
+ Jonas Pfeiffer, Ivan Vulić, Iryna Gurevych, and Sebastian Ruder. 2021b. UNKs Everywhere: Adapting Multilingual Language Models to New Scripts. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021).
+ Clifton Poth, Jonas Pfeiffer, Andreas Rücklé, and Iryna Gurevych. 2021. What to Pre-Train on? Efficient Intermediate Task Selection. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021).
+ Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving Language Understanding by Generative Pre-Training. Technical report, OpenAI.
+ Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language Models are Unsupervised Multitask Learners. Technical report, OpenAI.
+ Andreas Rücklé, Jonas Pfeiffer, and Iryna Gurevych. 2020. MultiCQA: Exploring the Zero-Shot Transfer of Text Matching Models on a Massive Scale. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020), pages 2471-2486.
+ Sebastian Ruder. 2019. Neural Transfer Learning for Natural Language Processing. Ph.D. thesis, National University of Ireland, Galway.
+
+ Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.
+ Asa Cooper Stickland and Iain Murray. 2019. BERT and PALs: Projected Attention Layers for Efficient Adaptation in Multi-Task Learning. In Proceedings of the 36th International Conference on Machine Learning (ICML 2019), pages 5986-5995.
+ Emma Strubell, Ananya Ganesh, and Andrew McCallum. 2019. Energy and Policy Considerations for Deep Learning in NLP. In Proceedings of the 57th Conference of the Association for Computational Linguistics (ACL 2019), pages 3645-3650.
+ Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou. 2020. MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL 2020), pages 2158-2170.
+ Lisa Torrey and Jude Shavlik. 2010. Transfer learning. In Handbook of research on machine learning applications and trends: algorithms, methods, and techniques, pages 242-264. IGI Global.
+ Ahmet Üstün, Arianna Bisazza, Gosse Bouma, and Gertjan van Noord. 2020. UDapter: Language Adaptation for Truly Universal Dependency Parsing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020), pages 2302-2315.
+ Marko Vidoni, Ivan Vulić, and Goran Glavaš. 2020. Orthogonal language and task adapters in zero-shot cross-lingual transfer. arXiv preprint arXiv:2012.06460.
+ Ivan Vulić, Edoardo Maria Ponti, Robert Litschko, Goran Glavaš, and Anna Korhonen. 2020. Probing Pretrained Language Models for Lexical Semantics. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020), pages 7222-7240.
+ Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2018. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353-355.
+ Ruize Wang, Duyu Tang, Nan Duan, Zhongyu Wei, Xuanjing Huang, Jianshu Ji, Guihong Cao, Daxin Jiang, and Ming Zhou. 2021. K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 1405-1418.
+ Wenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, and Ming Zhou. 2020. MiniLM: Deep self-attention distillation for task-agnostic compression of pre-trained transformers. In 34th Conference on Neural Information Processing Systems (NeurIPS 2020).
+ Ji Xin, Raphael Tang, Jaejun Lee, Yaoliang Yu, and Jimmy Lin. 2020. DeeBERT: Dynamic early exiting for accelerating BERT inference. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL 2020), pages 2246-2251.
+
+ # A Measuring Computational and Task Performance
+
+ # A.1 Computational Efficiency
+
+ We use Python 3.6, PyTorch 1.5.1, and CUDA 10.1 for all measurements. We repeat them with two different GPUs: an NVIDIA Tesla V100 PCIe (32GB) and an NVIDIA Titan X Pascal (12GB). We make use of the torch.cuda.Event class and torch.cuda.synchronize to measure only the exact duration of a training (or inference) step. $^{14}$ For both inference and training, we repeat the respective step 300 times and report the median to mitigate the impact of outliers caused by GPU warmup.
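As a concrete illustration of this measurement protocol, the following sketch times one inference step with CUDA events when a GPU is available and falls back to a wall-clock timer on CPU. The model, input sizes, and repeat count are illustrative stand-ins (the experiments above use 300 repetitions and full transformer models):

```python
import statistics
import time

import torch
import torch.nn as nn

def median_step_time_ms(model, batch, repeats=30):
    """Median duration of one inference step, in milliseconds."""
    on_gpu = next(model.parameters()).is_cuda
    times = []
    with torch.no_grad():
        for _ in range(repeats):
            if on_gpu:
                start = torch.cuda.Event(enable_timing=True)
                end = torch.cuda.Event(enable_timing=True)
                start.record()
                model(batch)
                end.record()
                torch.cuda.synchronize()  # wait until the step has finished
                times.append(start.elapsed_time(end))  # milliseconds
            else:
                t0 = time.perf_counter()
                model(batch)
                times.append((time.perf_counter() - t0) * 1000)
    # The median mitigates outliers caused by GPU warmup.
    return statistics.median(times)

model = nn.Linear(64, 64)  # illustrative stand-in for the transformer
batch = torch.randn(8, 64)
ms = median_step_time_ms(model, batch)
```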
+
+ Relative speed. We define the relative speed of an adapter model compared to full model fine-tuning as $\frac{S_f}{S_a}$, where $S_a$ and $S_f$ are the time of one step with the adapter model and the fully fine-tuned model, respectively. For example, a relative speed of 1.5 means that the adapter model can perform 1.5 steps in the time the fully fine-tuned model performs one step.
+
+ Speedup. Speedup describes the positive change in relative speed of an adapter model when using AdapterDrop (or another method). A speedup of $p\%$ means that the adapter model with AdapterDrop requires only $1/(1 + p/100)$ of the runtime of the adapter model without AdapterDrop.
+
+ The speedup of AdapterDrop (and AdapterFusion) is additive: if dropping one layer results in a $p\%$ speedup, dropping two layers results in a $2p\%$ speedup, and so on.
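A small worked example of these definitions (the step times are illustrative; the per-layer speedup value is taken from Table 5):

```python
# Relative speed: steps the adapter model performs per full-model step.
t_full = 120.0     # ms per step, fully fine-tuned model (illustrative)
t_adapter = 80.0   # ms per step, adapter model (illustrative)
relative_speed = t_full / t_adapter  # 1.5

# One consistent reading of "speedup of p%": the relative speed grows by a
# factor of (1 + p/100), so the runtime shrinks to 1 / (1 + p/100).
p_per_layer = 4.7      # per dropped training layer on the V100 (cf. Table 5)
n_dropped = 5
total_speedup = n_dropped * p_per_layer  # speedups are additive per layer
t_with_drop = t_adapter / (1 + total_speedup / 100)
```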
+
+ # A.2 Task Performances
+
+ We study the task performances of adapter models on the popular GLUE benchmark (Wang et al., 2018). Following Devlin et al. (2019), we exclude WNLI because of its problematic data construction.$^{15}$ We perform our analyses using RoBERTa base (Liu et al., 2019) as our pre-trained model and report the mean and standard deviation over three runs of the best development performance, evaluated after every epoch. We train the larger data sets (SST-2, MNLI, QNLI, and QQP) for 10 epochs and the rest of the data sets for 20 epochs. We use a batch size of 32 and, if not otherwise noted, the default hyperparameters for adapter fine-tuning as in Pfeiffer et al. (2021a).
+
+ # B Adapter Initialization and Convergence
+
+ Besides measuring training and inference time, we are interested in (1) how using adapters compares to standard RoBERTa-base with regard to downstream task convergence, and (2) whether initializing adapters with pre-trained weights obtained via masked language modeling leads to faster convergence.
+
+ First, we compare RoBERTa-base with adapter models using the architecture proposed by Pfeiffer et al. (2021a). Second, we pre-train an adapter with masked language modeling (MLM) using documents from the English Wikipedia. $^{16}$ The results for both experiments are visualized in Figure 12. When comparing RoBERTa-base with randomly initialized adapters, we find that adapters do not come at the cost of requiring more training steps for convergence (1). For several of the eight GLUE tasks, we observe similar convergence behavior for the standard RoBERTa-base model and its counterpart using adapters.
+
+ Further, we observe across all tasks that initializing the adapter weights with MLM pre-training does not have a substantial impact on the downstream task convergence (compared to a randomly initialized adapter). Thus, we find no evidence that pre-training of adapters with our masked language modeling objective leads to better convergence performance in our experiments (2).
+
+ # C Detailed Results: AdapterDrop Task Performances
+
+ We plot the detailed task performances of AdapterDrop with the different training strategies in Figure 13. The relative differences of AdapterDrop to a standard adapter with no AdapterDrop are given in Table 8.
+
+ # D Adapter with Cross-Layer Parameter Sharing
+
+ We can further reduce the number of parameters required for each task by sharing the weights of the adapters across all transformer layers. This is similar to weight sharing in ALBERT (Lan et al., 2020), but is specific to adapters and can therefore be applied to a wide range of pre-trained models.
+
+ We use the Pfeiffer adapter architecture in our experiments with the same hyperparameters as in Appendix A.2. Because cross-layer parameter sharing reduces the capacity of adapter models, we study the impact of the adapter compression rate. The compression rate refers to the down-projection factor in the adapter's bottleneck layer and thus determines its capacity (the compression rate specifies by how much 'FF Down' in Figure 1 compresses the representations). The standard compression rate is 16, and smaller values result in a larger model capacity.
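A minimal sketch of the sharing scheme (module names are illustrative assumptions, not the paper's implementation). With a hidden size of 768 and compression rate 16, one shared adapter has roughly the 74k parameters reported for the shared model in Table 6:

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: 'FF Down' compresses by `compression`, 'FF Up' restores."""
    def __init__(self, hidden=768, compression=16):
        super().__init__()
        self.down = nn.Linear(hidden, hidden // compression)
        self.up = nn.Linear(hidden // compression, hidden)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))  # residual connection

n_layers, hidden = 12, 768

# Standard: one adapter per transformer layer.
separate = nn.ModuleList([Adapter(hidden) for _ in range(n_layers)])

# Cross-layer sharing: the very same adapter instance is reused in every
# layer, shrinking the per-task parameter count by a factor of n_layers.
shared_adapter = Adapter(hidden)
shared = nn.ModuleList([shared_adapter] * n_layers)

def n_params(module):
    # Deduplicate so shared weights are only counted once.
    return sum(p.numel() for p in set(module.parameters()))

assert n_params(shared) == n_params(separate) // n_layers
```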
+
+ Table 6 shows that cross-layer parameter sharing with the same compression rate of 16 largely maintains the performance compared to separate weights, with an average difference of $2.35\%$ . With a smaller compression rate of 4, we close this gap by more than $50\%$ while still requiring $66\%$ fewer parameters. $^{17}$ The resulting models are lightweight: our shared adapter with a compression rate of 16 requires only 307KB of storage space.
+
+ # E Training AdapterFusion with Dropout
+
+ We investigate the random dropout of adapters from AdapterFusion during training (using our eight task adapters as in §4) to improve the speed of training steps. Each layer randomly selects different adapters to drop out. This means that the model itself may still use the knowledge from all tasks, although not in the layers individually.
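The per-layer selection can be sketched as follows (the task names, layer count, and helper function are illustrative assumptions):

```python
import random

def fusion_dropout_plan(adapters, rate, n_layers, seed=None):
    """For every layer, keep a random (1 - rate) fraction of the adapters.

    Each layer draws its own subset, so the model as a whole may still use
    knowledge from all tasks even though no single layer sees all of them.
    """
    rng = random.Random(seed)
    n_keep = round(len(adapters) * (1 - rate))
    return [rng.sample(adapters, n_keep) for _ in range(n_layers)]

tasks = ["mnli", "qqp", "sst2", "qnli", "cola", "mrpc", "rte", "stsb"]
# A dropout rate of 75% keeps 2 of the 8 adapters in each layer.
plan = fusion_dropout_plan(tasks, rate=0.75, n_layers=12, seed=0)
```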
+
+ Table 7 shows the results for the four smallest GLUE tasks in terms of training data size. The speedup that we achieve with AdapterFusion dropout can be substantial: with a dropout rate of $75\%$ (i.e., dropping out 6 of our 8 adapters), each training step is $74\%$ faster on average (with a sequence length of 128 and a batch size of 32). We observe no clear trend in terms of task performances. Fusion dropout leads to consistent decreases on RTE and CoLA, has only a small impact on STS-B (no difference when dropping out $25\%$ of adapters), and yields improvements on MRPC.
+
+ The effectiveness of fusion dropout thus depends on the individual downstream task. Nevertheless, we believe that this method could be suitable, e.g., for resource-constrained settings.
+
+ # F Detailed Results: Removing AdapterFusion Layers
+
+ The computational overhead of AF can be reduced during inference by decreasing the number of AdapterFusion layers. We investigate how dropping AF layers impacts the performance on the four smallest GLUE tasks (MRPC, STS-B, CoLA, RTE) and visualize the results in Figure 7.
+
+ In this experiment we compare the performance of AF with and without AdapterDrop during training. For both, we use standard adapters as well as adapters trained with AdapterDrop as the basis for AF. Unsurprisingly, the performance of AF without AdapterDrop within the adapters or fusion layers drops fastest on all four datasets. Using AdapterDrop when training the adapters, applying AdapterDrop to AF, or the combination of both significantly reduces the performance drop when omitting fusion layers during inference. On RTE and MRPC, multiple AF layers can be omitted while still performing on par with or better than a single task adapter. We further find this robustness to be task dependent: even AF with AdapterDrop shows a steep fall in performance on RTE and CoLA, while remaining relatively stable on MRPC and STS-B, even with most layers omitted.
+
+ # G Detailed Efficiency Measurements
+
+ In this section, we present detailed results of our efficiency measurements for V100 and TitanX GPUs.
+
+ # G.1 Adapters
+
+ We present the efficiency results for adapters and fully fine-tuned models in Figure 6, where we plot the required time (absolute numbers) during training and inference. The relative speed of adapters compared to fully fine-tuned models is given in Table 9.
+
+ # G.2 AdapterDrop
+
+ Multi-task inference. In Figure 8, we plot the speed of adapters in a multi-task setting compared to fully fine-tuned models with sequential processing of inputs. In Table 11, we present the relative speed of adapters in this setting and show the speedup gained with AdapterDrop for each dropped layer. The average speedup in Table 2 is calculated as the average speedup over the batch sizes 16, 32, and 64 in Table 11.
+
+ Training adapters with dropped layers. Table 5 shows the speedup of AdapterDrop when training a single adapter. The average speedup for training with AdapterDrop is $4.7\%$ per layer for the V100 and $4.5\%$ for the TitanX. This is the average over batch sizes 16, 32, and 64 and sequence lengths 64, 128, 256, and 512 (see Table 5).
+
+ # G.3 AdapterFusion
+
+ We plot the speed of AdapterFusion with different numbers of included adapters in Figure 9. In Table 10, we present the relative speed of AdapterFusion compared to a fully fine-tuned model and a model with one adapter. This also shows the computational overhead (slowdown) that results from adding more adapters to AdapterFusion.
+
+ # G.4 AdapterDrop for AdapterFusion
+
+ Table 4 shows the speedup gained with AdapterDrop for AdapterFusion during training and inference. Figure 10 shows the required time as a function of the dropped layers.
+
+ # H Parallel Implementation of AdapterFusion
+
+ AdapterHub's implementation of AdapterFusion passes through each task adapter sequentially. We hypothesized that better efficiency can be achieved with parallel processing of adapters. We implement the parallel computation of the different adapters by reformulating the linear layers as two convolutions.
+
+ The first convolution has a kernel size equal to the hidden dimension of the transformer and output channels equal to the number of adapters times the down-projection dimension of the adapters. The second convolution is a grouped convolution $^{18}$ that processes the channels in blocks the size of the down-projection dimension. Its output channels equal the number of adapters times the hidden dimension.
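The following sketch expresses the same idea with pointwise (1x1) convolutions over the sequence dimension, which is one possible realization of this reformulation rather than the paper's exact implementation; all dimensions are illustrative:

```python
import torch
import torch.nn as nn

hidden, down, n_adapters = 768, 48, 8

# First convolution: computes all down-projections in one pass. Output
# channels = number of adapters x down-projection dimension.
down_proj = nn.Conv1d(hidden, n_adapters * down, kernel_size=1)

# Second convolution: grouped, with one group per adapter, so each block of
# `down` channels is up-projected independently. Output channels = number
# of adapters x hidden dimension.
up_proj = nn.Conv1d(n_adapters * down, n_adapters * hidden, kernel_size=1,
                    groups=n_adapters)

x = torch.randn(4, hidden, 128)               # (batch, channels, seq_len)
h = up_proj(torch.relu(down_proj(x)))         # all adapters in parallel
outputs = h.view(4, n_adapters, hidden, 128)  # one output per adapter
```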
+
+ <table><tr><td rowspan="3">Adapters</td><td colspan="4">Speedup (per dropped layer)</td></tr><tr><td colspan="2">Inference</td><td colspan="2">Training</td></tr><tr><td>V100</td><td>TitanX</td><td>V100</td><td>TitanX</td></tr><tr><td>2</td><td>3.0%</td><td>3.1%</td><td>6.3%</td><td>6.4%</td></tr><tr><td>4</td><td>4.0%</td><td>4.1%</td><td>6.8%</td><td>6.8%</td></tr><tr><td>8</td><td>5.2%</td><td>5.2%</td><td>7.3%</td><td>7.3%</td></tr><tr><td>16</td><td>6.3%</td><td>6.3%</td><td>7.8%</td><td>-</td></tr></table>
+
+ Table 4: The speedup for each dropped layer for AdapterFusion during training and inference. Measurements were conducted with a batch size of 32 and sequence length of 128. Missing values are due to insufficient GPU memory.
+
+ <table><tr><td rowspan="2">Batch Size</td><td rowspan="2">Seq. Len</td><td colspan="2">Speedup</td></tr><tr><td>V100</td><td>TitanX</td></tr><tr><td>16</td><td>64</td><td>4.6%</td><td>4.4%</td></tr><tr><td>16</td><td>128</td><td>4.6%</td><td>4.6%</td></tr><tr><td>16</td><td>256</td><td>4.8%</td><td>4.6%</td></tr><tr><td>16</td><td>512</td><td>4.7%</td><td>-</td></tr><tr><td>32</td><td>64</td><td>4.6%</td><td>4.5%</td></tr><tr><td>32</td><td>128</td><td>4.7%</td><td>4.5%</td></tr><tr><td>32</td><td>256</td><td>4.6%</td><td>4.7%</td></tr><tr><td>32</td><td>512</td><td>4.8%</td><td>-</td></tr><tr><td>64</td><td>64</td><td>4.7%</td><td>4.5%</td></tr><tr><td>64</td><td>128</td><td>4.6%</td><td>4.5%</td></tr><tr><td>64</td><td>256</td><td>4.7%</td><td>-</td></tr><tr><td>64</td><td>512</td><td>-</td><td>-</td></tr></table>
+
+ Table 5: Speedup for each dropped layer during training with AdapterDrop on the V100 and TitanX.
+
+ We show in Figure 11 and in Table 12 that the iterative implementation is faster than the parallel implementation for larger input sizes (e.g., larger batch sizes). This indicates that once the input can no longer be processed entirely in parallel on the GPU (due to limited CUDA cores), the iterative implementation is more efficient.
+
+ <table><tr><td></td><td>Standard</td><td colspan="3">Cross-Layer Parameter Sharing</td></tr><tr><td>Compression rate</td><td>16</td><td>1.33</td><td>4</td><td>16</td></tr><tr><td>SST-2</td><td>94.7 ±0.3</td><td>94.2 ±0.3</td><td>94.2 ±0.1</td><td>94.1 ±0.4</td></tr><tr><td>QNLI</td><td>93.0 ±0.2</td><td>92.4 ±0.1</td><td>93.1 ±0.1</td><td>90.6 ±1.4</td></tr><tr><td>MNLI</td><td>87.3 ±0.1</td><td>87.0 ±0.1</td><td>87.1 ±0.0</td><td>86.2 ±0.2</td></tr><tr><td>QQP</td><td>90.6 ±0.0</td><td>90.8 ±0.1</td><td>90.2 ±0.0</td><td>88.6 ±0.5</td></tr><tr><td>CoLA</td><td>62.6 ±0.9</td><td>60.3 ±1.6</td><td>60.8 ±0.4</td><td>57.2 ±1.0</td></tr><tr><td>MRPC</td><td>88.4 ±0.1</td><td>88.2 ±0.7</td><td>88.5 ±1.1</td><td>86.8 ±0.5</td></tr><tr><td>RTE</td><td>75.9 ±2.2</td><td>69.4 ±0.5</td><td>71.5 ±2.7</td><td>71.5 ±1.0</td></tr><tr><td>STS-B</td><td>90.3 ±0.1</td><td>89.5 ±0.1</td><td>89.7 ±0.3</td><td>89.0 ±0.7</td></tr><tr><td>Average</td><td>85.35</td><td>83.98</td><td>84.39</td><td>83.0</td></tr><tr><td>Params</td><td>884k</td><td>884k</td><td>295k</td><td>74k</td></tr></table>
+
+ Table 6: Task performance scores of the standard approach with separate adapter weights vs. cross-layer parameter sharing. The compression rate denotes the factor by which 'FF Down' in Figure 1 compresses the representations. The number of parameters is given without classification heads.
+
+ <table><tr><td></td><td colspan="4">Fusion Dropout</td></tr><tr><td></td><td>0%</td><td>25%</td><td>50%</td><td>75%</td></tr><tr><td>CoLA</td><td>63.9 ±0.6</td><td>62.9 ±0.8</td><td>62.4 ±0.7</td><td>60.4 ±0.2</td></tr><tr><td>MRPC</td><td>88.4 ±0.1</td><td>89.2 ±0.5</td><td>89.2 ±0.4</td><td>89.3 ±0.1</td></tr><tr><td>RTE</td><td>85.4 ±0.7</td><td>82.8 ±1.9</td><td>82.1 ±0.3</td><td>80.9 ±1.1</td></tr><tr><td>STS-B</td><td>90.2 ±0.1</td><td>90.2 ±0.1</td><td>90.1 ±0.1</td><td>89.9 ±0.1</td></tr><tr><td>Speedup (8)</td><td>-</td><td>15.9%</td><td>39.4%</td><td>73.7%</td></tr><tr><td>Speedup (16)</td><td>-</td><td>22.5%</td><td>58.2%</td><td>120.6%</td></tr></table>
+
+ Table 7: Development scores of AdapterFusion (compression rate 16x) with or without fusion dropout during training. Fusion dropout of $50\%$ means that each adapter has a $50\%$ chance of not being used as input to the fusion layer. The speedup depends on the total number of adapters used in AdapterFusion (8 adapters in our setting here, 16 used by Pfeiffer et al. (2021a)).
+
+ <table><tr><td rowspan="2"></td><td colspan="12">Dropped Layers</td></tr><tr><td>0</td><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td><td>6</td><td>7</td><td>8</td><td>9</td><td>10</td><td>11</td></tr><tr><td>Standard adapter</td><td>100.0</td><td>98.5</td><td>97.1</td><td>95.3</td><td>92.0</td><td>89.0</td><td>82.2</td><td>74.6</td><td>64.5</td><td>54.5</td><td>49.3</td><td>43.3</td></tr><tr><td>Specialized AdapterDrop (12 models)</td><td>100.0</td><td>99.5</td><td>98.9</td><td>98.2</td><td>97.6</td><td>97.1</td><td>95.9</td><td>95.3</td><td>95.1</td><td>94.3</td><td>92.5</td><td>82.9</td></tr><tr><td>Robust AdapterDrop</td><td>98.5</td><td>97.7</td><td>97.3</td><td>96.8</td><td>96.1</td><td>95.4</td><td>94.5</td><td>93.3</td><td>92.2</td><td>89.9</td><td>85.9</td><td>62.0</td></tr></table>
+
+ Table 8: Model performances with AdapterDrop in relation to a standard adapter with no dropped layers. We report the percentage of retained task performance compared to the standard adapter with no dropped layers during evaluation. The results are averaged over all eight GLUE tasks. A value of 97.1 for specialized AdapterDrop with five dropped layers means that the model achieves $97.1\%$ of the performance compared to the standard adapter with no dropped layers. Performance scores for each task can be found in Figure 13.
+
+ <table><tr><td rowspan="3">Sequence Len.</td><td rowspan="3">Batch Size</td><td colspan="4">V100</td><td colspan="4">TitanX</td></tr><tr><td colspan="2">Training</td><td colspan="2">Inference</td><td colspan="2">Training</td><td colspan="2">Inference</td></tr><tr><td>Houlsby</td><td>Pfeiffer</td><td>Houlsby</td><td>Pfeiffer</td><td>Houlsby</td><td>Pfeiffer</td><td>Houlsby</td><td>Pfeiffer</td></tr><tr><td>64</td><td>16</td><td>0.98</td><td>1.70</td><td>0.92</td><td>0.94</td><td>1.61</td><td>1.69</td><td>0.93</td><td>0.94</td></tr><tr><td>64</td><td>32</td><td>1.70</td><td>1.81</td><td>0.94</td><td>0.95</td><td>1.48</td><td>1.55</td><td>0.93</td><td>0.94</td></tr><tr><td>64</td><td>64</td><td>1.46</td><td>1.54</td><td>0.94</td><td>0.95</td><td>1.40</td><td>1.46</td><td>0.94</td><td>0.94</td></tr><tr><td>64</td><td>128</td><td>1.48</td><td>1.55</td><td>0.95</td><td>0.96</td><td>1.37</td><td>1.42</td><td>0.94</td><td>0.94</td></tr><tr><td>128</td><td>16</td><td>1.48</td><td>1.57</td><td>0.94</td><td>0.95</td><td>1.45</td><td>1.52</td><td>0.93</td><td>0.94</td></tr><tr><td>128</td><td>32</td><td>1.53</td><td>1.60</td><td>0.94</td><td>0.95</td><td>1.38</td><td>1.44</td><td>0.94</td><td>0.95</td></tr><tr><td>128</td><td>64</td><td>1.47</td><td>1.53</td><td>0.95</td><td>0.96</td><td>1.35</td><td>1.40</td><td>0.94</td><td>0.95</td></tr><tr><td>128</td><td>128</td><td>1.42</td><td>1.48</td><td>0.95</td><td>0.96</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>256</td><td>16</td><td>1.42</td><td>1.49</td><td>0.94</td><td>0.95</td><td>1.34</td><td>1.38</td><td>0.94</td><td>0.95</td></tr><tr><td>256</td><td>32</td><td>1.40</td><td>1.46</td><td>0.95</td><td>0.96</td><td>1.31</td><td>1.36</td><td>0.94</td><td>0.96</td></tr><tr><td>256</td><td>64</td><td>1.40</td><td>1.45</td><td>0.95</td><td>0.96</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>256</td><td>128</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>512</td><td>16</td><td>1.36</td><td>1.41</td><td>0.96</td><td>0.96</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>512</td><td>32</td><td>1.33</td><td>1.37</td><td>0.96</td><td>0.96</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>512</td><td>64</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>512</td><td>128</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr></table>
+
+ Table 9: Relative speed of adapters compared to fully fine-tuned models. Missing values are due to insufficient GPU memory.
+
+ <table><tr><td rowspan="3">Seq. Len</td><td rowspan="3">Batch Size</td><td colspan="6">V100</td><td colspan="6">TitanX</td></tr><tr><td colspan="2">vs. FF</td><td colspan="2">vs. Adap.</td><td colspan="2">Slowdown</td><td colspan="2">vs. FF</td><td colspan="2">vs. Adap.</td><td colspan="2">Slowdown</td></tr><tr><td>Tr.</td><td>Inf.</td><td>Tr.</td><td>Inf.</td><td>Tr.</td><td>Inf.</td><td>Tr.</td><td>Inf.</td><td>Tr.</td><td>Inf.</td><td>Tr.</td><td>Inf.</td></tr><tr><td>64</td><td>16</td><td>0.77</td><td>0.62</td><td>0.45</td><td>0.66</td><td>8.2%</td><td>10.6%</td><td>0.88</td><td>0.62</td><td>0.52</td><td>0.66</td><td>10.3%</td><td>10.2%</td></tr><tr><td>64</td><td>32</td><td>1.03</td><td>0.64</td><td>0.57</td><td>0.68</td><td>12.0%</td><td>11.1%</td><td>0.80</td><td>0.61</td><td>0.52</td><td>0.64</td><td>11.2%</td><td>11.0%</td></tr><tr><td>64</td><td>64</td><td>0.87</td><td>0.64</td><td>0.57</td><td>0.67</td><td>12.6%</td><td>12.0%</td><td>0.76</td><td>0.61</td><td>0.52</td><td>0.65</td><td>11.6%</td><td>11.4%</td></tr><tr><td>128</td><td>16</td><td>0.91</td><td>0.65</td><td>0.58</td><td>0.69</td><td>12.0%</td><td>11.0%</td><td>0.80</td><td>0.61</td><td>0.53</td><td>0.65</td><td>10.9%</td><td>10.8%</td></tr><tr><td>128</td><td>32</td><td>0.92</td><td>0.64</td><td>0.57</td><td>0.68</td><td>12.5%</td><td>11.8%</td><td>0.76</td><td>0.62</td><td>0.53</td><td>0.66</td><td>11.4%</td><td>11.1%</td></tr><tr><td>128</td><td>64</td><td>0.87</td><td>0.65</td><td>0.57</td><td>0.68</td><td>12.5%</td><td>11.6%</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>256</td><td>16</td><td>0.88</td><td>0.66</td><td>0.59</td><td>0.69</td><td>12.1%</td><td>11.3%</td><td>0.77</td><td>0.65</td><td>0.56</td><td>0.68</td><td>10.8%</td><td>10.4%</td></tr><tr><td>256</td><td>32</td><td>0.86</td><td>0.68</td><td>0.59</td><td>0.70</td><td>11.9%</td><td>11.3%</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>256</td><td>64</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>512</td><td>16</td><td>0.87</td><td>0.69</td><td>0.62</td><td>0.72</td><td>11.2%</td><td>10.1%</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>512</td><td>32</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>512</td><td>64</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr></table>
+
+ Table 10: Relative speed of AdapterFusion for different sequence lengths and batch sizes. We compute the training (Tr.) speed and inference (Inf.) speed with two adapters in AdapterFusion. We compare this to: FF, a fully fine-tuned model; Adap, an adapter model (Pfeiffer architecture). The slowdown denotes the computational overhead of each additional adapter composed in AdapterFusion (calculated as the average slowdown for adding one adapter to AF consisting of 2-16 adapters). Missing values are due to insufficient GPU memory.
+
+ ![](images/98800071f50f5970d5da16115fd7e9612d83ed328e68940a4611f18a1b71be7b.jpg)
+ (a) V100 Inference
+
+ ![](images/0cf129d94e38093c820c65ddfb5f7da35ce2dc87ffb8ded909b92b051974896d.jpg)
+ (b) TitanX Inference
+
+ ![](images/a88539103bf783721d63710e31f71681ad21387826d3cd991184f40b3eea1719.jpg)
+ (c) V100 Training
+ Figure 6: The absolute time for each inference or training step. We compare a transformer model without adapters and an adapter model with the Pfeiffer or Houlsby architecture. We note that for small inputs, i.e., batch size 1 or 8, the time does not increase with the sequence length because the GPU is not working at capacity. Figure (b) with batch size 1 shows the transition from working under capacity to working at capacity.
+
+ ![](images/7499f3f5c338e49b86b54bfd197ec83de6ec12d750610cd5ebe00a691a9080a6.jpg)
+ (d) TitanX Training
+
+ ![](images/d5730d01e387205e6f71f7d3dd57a4c7d3d2d0bfcc71b4af6d76402aea22114e.jpg)
+ Figure 7: Performance of AF by the number of dropped AF layers. We show the results for AF and the used adapters (both with and without AdapterDrop), and compare the performance with a standard single task adapter.
307
+
308
+ ![](images/9fbc50cdad157818478e74d12f65be39096158e7bbc59e00bde0f171dba1f342.jpg)
309
+
310
+ ![](images/8e52e86a167df85de053c97416ce92dc6268895c1a2798a97c8bc0b688acbbe6.jpg)
313
+
314
+ ![](images/cdf94ece171caf38f82883daab3aeb3901c7f1c50d2bf523cc904f6063d7f51e.jpg)
316
+
317
+ <table><tr><td>Device</td><td>Batch Size</td><td>Adapters</td><td>Inference</td><td>Speedup</td></tr><tr><td rowspan="16">V100</td><td>1</td><td>2</td><td>1.25</td><td>2.6%</td></tr><tr><td>1</td><td>4</td><td>1.97</td><td>3.7%</td></tr><tr><td>1</td><td>8</td><td>2.80</td><td>4.9%</td></tr><tr><td>1</td><td>16</td><td>2.97</td><td>6.5%</td></tr><tr><td>16</td><td>2</td><td>1.13</td><td>4.1%</td></tr><tr><td>16</td><td>4</td><td>1.14</td><td>6.5%</td></tr><tr><td>16</td><td>8</td><td>1.20</td><td>7.7%</td></tr><tr><td>16</td><td>16</td><td>1.16</td><td>8.4%</td></tr><tr><td>32</td><td>2</td><td>1.08</td><td>4.5%</td></tr><tr><td>32</td><td>4</td><td>1.14</td><td>6.6%</td></tr><tr><td>32</td><td>8</td><td>1.11</td><td>7.9%</td></tr><tr><td>32</td><td>16</td><td>1.11</td><td>8.5%</td></tr><tr><td>64</td><td>2</td><td>1.08</td><td>4.3%</td></tr><tr><td>64</td><td>4</td><td>1.05</td><td>6.7%</td></tr><tr><td>64</td><td>8</td><td>1.06</td><td>7.9%</td></tr><tr><td>64</td><td>16</td><td>1.06</td><td>8.4%</td></tr><tr><td rowspan="4">TitanX</td><td>32</td><td>2</td><td>1.07</td><td>4.4%</td></tr><tr><td>32</td><td>4</td><td>1.09</td><td>6.6%</td></tr><tr><td>32</td><td>8</td><td>1.09</td><td>7.8%</td></tr><tr><td>32</td><td>16</td><td>1.06</td><td>8.4%</td></tr><tr><td rowspan="4">CPU</td><td>1</td><td>2</td><td>0.98</td><td>4.2%</td></tr><tr><td>1</td><td>4</td><td>1.03</td><td>6.5%</td></tr><tr><td>1</td><td>8</td><td>1.05</td><td>7.7%</td></tr><tr><td>1</td><td>16</td><td>1.06</td><td>8.4%</td></tr></table>
318
+
319
+ Table 11: The relative inference speed of simultaneous processing of multiple tasks with adapters compared to sequential processing of tasks with fully fine-tuned models. Gray columns show the speedup of AdapterDrop for every additional dropped layer. All measurements use a sequence length of 128. Batch size 1 for the V100 is an outlier in both speedup and relative speed compared to the other results due to the small input size (compare with Figure 8).
320
+
321
+ <table><tr><td rowspan="2">Adapters</td><td rowspan="2">Seq. Len</td><td rowspan="2">Batch Size</td><td colspan="2">Rel. Speed</td></tr><tr><td>V100</td><td>TitanX</td></tr><tr><td>2</td><td>100</td><td>1</td><td>0.93</td><td>0.94</td></tr><tr><td>3</td><td>100</td><td>1</td><td>0.89</td><td>0.88</td></tr><tr><td>5</td><td>100</td><td>1</td><td>0.77</td><td>0.76</td></tr><tr><td>10</td><td>100</td><td>1</td><td>0.60</td><td>1.29</td></tr><tr><td>2</td><td>100</td><td>16</td><td>1.02</td><td>1.44</td></tr><tr><td>3</td><td>100</td><td>16</td><td>1.12</td><td>1.58</td></tr><tr><td>5</td><td>100</td><td>16</td><td>1.17</td><td>1.80</td></tr><tr><td>10</td><td>100</td><td>16</td><td>1.27</td><td>2.14</td></tr><tr><td>2</td><td>100</td><td>32</td><td>1.01</td><td>1.48</td></tr><tr><td>3</td><td>100</td><td>32</td><td>1.17</td><td>1.62</td></tr><tr><td>5</td><td>100</td><td>32</td><td>1.23</td><td>1.85</td></tr><tr><td>10</td><td>100</td><td>32</td><td>1.32</td><td>2.24</td></tr><tr><td>2</td><td>200</td><td>1</td><td>0.93</td><td>1.24</td></tr><tr><td>3</td><td>200</td><td>1</td><td>0.88</td><td>1.37</td></tr><tr><td>5</td><td>200</td><td>1</td><td>0.77</td><td>1.55</td></tr><tr><td>10</td><td>200</td><td>1</td><td>0.52</td><td>1.87</td></tr><tr><td>2</td><td>200</td><td>16</td><td>1.01</td><td>1.46</td></tr><tr><td>3</td><td>200</td><td>16</td><td>1.17</td><td>1.59</td></tr><tr><td>5</td><td>200</td><td>16</td><td>1.23</td><td>1.82</td></tr><tr><td>10</td><td>200</td><td>16</td><td>1.32</td><td>2.21</td></tr><tr><td>2</td><td>200</td><td>32</td><td>1.00</td><td>1.11</td></tr><tr><td>3</td><td>200</td><td>32</td><td>1.18</td><td>1.17</td></tr><tr><td>5</td><td>200</td><td>32</td><td>1.26</td><td>-</td></tr><tr><td>10</td><td>200</td><td>32</td><td>1.34</td><td>-</td></tr><tr><td>2</td><td>300</td><td>1</td><td>0.93</td><td>1.37</td></tr><tr><td>3</td><td>300</td><td>1</td><td>0.88</td><td>1.50</td></tr><tr><td>5</td><td>300</td><td>1</td><td>0.91</td><td>1.70</td></tr>
<tr><td>10</td><td>300</td><td>1</td><td>0.94</td><td>2.03</td></tr><tr><td>2</td><td>300</td><td>16</td><td>1.00</td><td>1.48</td></tr><tr><td>3</td><td>300</td><td>16</td><td>1.16</td><td>1.63</td></tr><tr><td>5</td><td>300</td><td>16</td><td>1.22</td><td>1.88</td></tr><tr><td>10</td><td>300</td><td>16</td><td>1.32</td><td>-</td></tr><tr><td>2</td><td>300</td><td>32</td><td>1.00</td><td>-</td></tr><tr><td>3</td><td>300</td><td>32</td><td>1.20</td><td>-</td></tr><tr><td>5</td><td>300</td><td>32</td><td>1.27</td><td>-</td></tr><tr><td>10</td><td>300</td><td>32</td><td>1.36</td><td>-</td></tr><tr><td>2</td><td>400</td><td>1</td><td>1.04</td><td>1.39</td></tr><tr><td>3</td><td>400</td><td>1</td><td>1.09</td><td>1.51</td></tr><tr><td>5</td><td>400</td><td>1</td><td>1.10</td><td>1.74</td></tr><tr><td>10</td><td>400</td><td>1</td><td>1.10</td><td>2.08</td></tr><tr><td>2</td><td>400</td><td>16</td><td>1.00</td><td>-</td></tr><tr><td>3</td><td>400</td><td>16</td><td>1.18</td><td>-</td></tr><tr><td>5</td><td>400</td><td>16</td><td>1.25</td><td>-</td></tr><tr><td>10</td><td>400</td><td>16</td><td>1.34</td><td>-</td></tr><tr><td>2</td><td>400</td><td>32</td><td>1.00</td><td>-</td></tr><tr><td>3</td><td>400</td><td>32</td><td>1.20</td><td>-</td></tr><tr><td>5</td><td>400</td><td>32</td><td>1.27</td><td>-</td></tr><tr><td>10</td><td>400</td><td>32</td><td>-</td><td>-</td></tr></table>
322
+
323
+ Table 12: Relative speed of AdapterFusion with the iterative implementation versus the parallel implementation with different batch sizes, sequence lengths and numbers of adapters for the V100 and TitanX. The parallel implementation is faster if the input is sufficiently small (batch size 1 or 2 adapters), as the GPU is not yet at capacity and can exploit the additional parallelism.
324
+
325
+ ![](images/11d79d54df503e9d43132c60e297316bc1b5eb909dbe7b147eae485375b3a1c0.jpg)
326
+
327
+ ![](images/d86ea30c0e28e12d9222b959e15bfeb67c5d6eee6f24f48fa36692151517ea79.jpg)
328
+
330
+ ![](images/0ed1c3818acc3d0011be8246483201d1e452aff68bb47db6d198a3f0faad50b8.jpg)
332
+
333
+ ![](images/fa301a74207234ca4dc98e2da55c8569ce08650732c4c0d43292336c6f46fa85.jpg)
334
+
335
+ Figure 8: The absolute time required for performing inference for multiple tasks on the same input. The measurements are conducted with a sequence length of 128. $N$ FF models denotes $N$ fully fine-tuned models, executed sequentially. Parallelized denotes the time required by $N$ fully fine-tuned models running fully parallelized. Batch size 1 on the V100 is an outlier compared to the other results, with a smaller speedup for each dropped layer but a higher relative speed compared to the fine-tuned models, due to the small input size.
336
+ ![](images/b6cf38d1a31287f696032dc005a652ba00a57bf7a07955f25d976802759716b3.jpg)
338
+
339
+ ![](images/7064666a453d8f7a9849ec70f0789cc5fdae8bf8f0449cea204e485438f0ff2d.jpg)
340
+
341
+ (a) V100
342
+ (b) TitanX
343
+ ![](images/8d017de02685be67c492199a6c8c203717947aa752f285918b36ee429f65d93b.jpg)
345
+
346
+ ![](images/8956b7d2682880482b85c79f636ddf0697b9816c420702697a81e2de14868162.jpg)
347
+ Figure 9: Absolute time measurements for AdapterFusion at inference (left) and training (right) as a function of the number of adapters. The measurements were conducted with a batch size of 32 (V100) and 16 (TitanX), and a sequence length of 128.
348
+
349
+ ![](images/42146d3f23f6e42d8d22d3fcaccc8f14f2a9ef6867dfa0b39784b66ea9bca6a1.jpg)
350
+
351
+ ![](images/c21711dbddd46ad491a1b4bd083ef404d5c9e2b9088208ed798ec55dcaae46e8.jpg)
352
+ Figure 10: Absolute time measurements for AdapterFusion with AdapterDrop at inference (left) and training (right) as a function of the number of dropped layers. The measurements were conducted with a batch size of 32 and a sequence length of 128. We additionally plot the time of an adapter (without AdapterDrop) and a model without adapters to provide a more thorough comparison.
353
+
354
+ ![](images/627bfb0d71d4311bf012e7d077cfa3741e7d3e246833b2850c8f2cdf3368643a.jpg)
355
+ (a) V100
356
+
357
+ ![](images/6eb0ee0e56f58066f56c653b7d8f761d2370d0fc9e92b75d7ccebc94f5a0da04.jpg)
358
+ (b) TitanX
359
+ Figure 11: The difference in inference time between iterative and parallel implementations of AdapterFusion. Negative values indicate that the iterative implementation is faster. We calculate the difference as $t_i - t_p$ , where $t_i, t_p$ are the times for the iterative and parallel implementation, respectively. In Figure (a), the parallel implementation is faster if the input is sufficiently small, as the GPU is not yet at capacity and can exploit the additional parallelism.
360
+
361
+ ![](images/b1ee77193e72a88555533411ff9474c15b2115d668889d4ac33919317b949874.jpg)
362
+
363
+ ![](images/0072dd3e5d9fe813f669e25049832d3ccc58762c38e2ec58addb4971332acb22.jpg)
364
+
365
+ ![](images/b9db4a235227346af34be0c6b92aac8df4ff39107443694bec7e8ac410006593.jpg)
366
+
367
+ ![](images/fb93917854df3e8f0bf5824d9e328177f8fe6dbce34b18edf6926ca61c67a5ef.jpg)
368
+
369
+ ![](images/5376885305bbf38a611274575e3942403a3f4d82a15f6bf7c6e7e3c5dc599e41.jpg)
370
+
371
+ ![](images/3b5ef371d7acd8226178a04c7d0fcbcc166606eeaaa3b62d10583b0922fd7ec4.jpg)
373
+
374
+ ![](images/cd70ec9eb769ac47599e1dba7ea52481394d0a6064edaf90d9fdd071a96bdf59.jpg)
375
+ Figure 12: Evaluation performance of fine-tuning RoBERTA-base in comparison with different initialization strategies for adapters (randomly initialized vs. pre-trained on masked language modeling task). Training was conducted for 10k steps with a learning rate of 5e-05 for RoBERTa-base and 0.0001 for adapters, respectively.
376
+
377
+ ![](images/4396bd99457e2c373145188c58b2356014d7ee42b4df36a534c51fb9451600d7.jpg)
378
+
379
+ ![](images/0ae730f3045190630eb5b52d16ab68a7e7a420cf31bc0a454be32db5600be53b.jpg)
380
+
381
+ ![](images/9900c0a7f03841706ec092758ab41c1bac243764fd3052134a861f8fe9bf0732.jpg)
382
+
383
+ ![](images/2849cd09ce8bbaa49edc8a28db59a79cd90656a587715d8ecb4ae8645b7dd6c5.jpg)
384
+
385
+ ![](images/ccca22990865db8bf4b6f08b95b692cb56f6ea63c7a1a53c92f6ae321ef87bd4.jpg)
386
+
387
+ ![](images/bf56f924029424492fc3fae1625e8bcbf41533587e7cc3339f691d0978035c2f.jpg)
388
+
389
+ ![](images/f3725945fd90d2bd43ba9d2d13c7615c5cd8a9f6eefa67b5ef293b7f878100a4.jpg)
390
+
391
+ ![](images/2e7023ac71c3308ef82706b9d1b31b6894522ccc162468d6254176d2be526427.jpg)
392
+ Figure 13: The AdapterDrop task performances for all eight GLUE tasks in relation to the dropped layers. '12 specialized adapters' refers to the performance of individual models trained for each AdapterDrop setting separately (i.e., 12 models); 'Standard adapter' refers to the adapter that is trained with no dropped layers; AdapterDrop training refers to the adapter that is trained with our proposed training procedure.
393
+
394
+ ![](images/7118cf81ff9ba09e2cececf713880517340c6c4f3a78dc0df3572e202b01ed9c.jpg)
adapterdropontheefficiencyofadaptersintransformers/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9f7474e5644b0dde41ef1b0a0a2b1309efaa32324db27ded234345b647b4e3e3
3
+ size 1255152
adapterdropontheefficiencyofadaptersintransformers/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:cd62f23370e01fcad41060cd7e1319ab18afe0726d62019322be9be9ef9021a8
3
+ size 467007
adaptivebridgebetweentrainingandinferencefordialoguegeneration/6fa3fe0e-62de-45f0-ba81-df46f9573b91_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7b42dd9cb1cf80be3a6245b2141f9f227a8c412f2c552421c75220bf93f7ae23
3
+ size 69251
adaptivebridgebetweentrainingandinferencefordialoguegeneration/6fa3fe0e-62de-45f0-ba81-df46f9573b91_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8dd99ffa73fb0a8ed3a9326ccebe337d99a68d40a3cd6aaebabca7f65f7a9002
3
+ size 86675
adaptivebridgebetweentrainingandinferencefordialoguegeneration/6fa3fe0e-62de-45f0-ba81-df46f9573b91_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:868c3dc1bf5f1426e4bc745fd8aa54897c486d1d006a7116436f9864e0c66dd9
3
+ size 582603
adaptivebridgebetweentrainingandinferencefordialoguegeneration/full.md ADDED
@@ -0,0 +1,297 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Adaptive Bridge between Training and Inference for Dialogue Generation
2
+
3
+ Haoran Xu $^{1,2*}$ , Hainan Zhang $^{3\dagger}$ , Yanyan Zou $^{3}$ , Hongshen Chen $^{3}$ , Zhuoye Ding $^{3}$ , Yanyan Lan $^{4}$
4
+
5
+ $^{1}$ Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
6
+
7
+ <sup>2</sup> University of Chinese Academy of Sciences, Beijing, China
8
+
9
+ <sup>3</sup>Data Science Lab, JD.com, Beijing, China
10
+
11
+ $^{4}$ Institute of AI Industry Research, Tsinghua University, Beijing, China
12
+
13
+ xuhaoran18s@ict.ac.cn,zhanghainan1990@163.com,zouyanyan6@jd.com
14
+
15
+ ac@chenhongshen.com,dingzhuoye@jd.com,langyanyan@tsinghua.edu.cn
16
+
17
+ # Abstract
18
+
19
+ Although exposure bias has been widely studied in some NLP tasks, it poses unique challenges in dialogue response generation, a representative one-to-various generation scenario. In real human dialogue, there are many appropriate responses to the same context, not only with different expressions but also on different topics. Because the gap between the various ground-truth responses and the generated synthetic response is much larger, exposure bias is more challenging in the dialogue generation task. Moreover, as MLE encourages the model to learn only the words common to the different ground-truth responses while ignoring the interesting and specific parts, exposure bias may further lead to the common response generation problem, producing replies such as "I don't know" and "HaHa?". In this paper, we propose a novel adaptive switching mechanism, which learns to automatically transition between ground-truth learning and generated learning according to word-level matching scores, such as the cosine similarity. Experimental results on both the Chinese STC dataset and the English Reddit dataset show that our adaptive method achieves a significant improvement in terms of metric-based evaluation and human evaluation, compared with state-of-the-art exposure bias approaches. Further analysis on an NMT task also shows that our model achieves a significant improvement.
20
+
21
+ # 1 Introduction
22
+
23
+ Auto-regressive models (ARMs) are widely used for natural language generation (NLG) tasks, such as machine translation (Sutskever et al., 2014; Wu et al., 2018), dialogue response generation (Li et al., 2017), image captioning (Lin et al., 2014; Vinyals et al., 2015) and video description (Donahue et al., 2015). They utilize the encoder-decoder framework to predict the next token conditioned
24
+
25
+ <table><tr><td colspan="2">Dialogue 1</td></tr><tr><td>context</td><td>听说广州已成避暑胜地 (I heard that Guangzhou has become a summer resort)</td></tr><tr><td>response1</td><td>确实,这边很凉快。(Indeed, it&#x27;s cool here.)</td></tr><tr><td>response2</td><td>晚上睡觉都没开风扇了。 (There is no need to turn on the fan at night.)</td></tr><tr><td colspan="2">Dialogue 2</td></tr><tr><td>context</td><td>哈哈,看看可爱的小猫咪(Ha ha, look at this lovely kitten)</td></tr><tr><td>response1</td><td>这是什么品种的猫哇?好可爱,我也想要 (What kind of cat is this? So cute, I want it too.)</td></tr><tr><td>response2</td><td>好想要一只这样的猫,可以陪我儿子玩 (I really want a cat like this to play with my son.)</td></tr><tr><td>response3</td><td>哇,好可爱哇,我屋也有一只这样的小猫 (Wow, It&#x27;s so cute, I have a kitten like this in my house.)</td></tr></table>
26
+
27
+ Table 1: Two dialogues from the STC dataset. The red parts of the responses in Dialogue 2 are the common words.
28
+
29
+ on the previous tokens, and minimize the cross-entropy between the generation and the ground-truths as their objective function. Specifically, at training time, the ground-truth tokens are used as the previous tokens, which directly forces the model to learn the distribution of the ground truths. But at inference, the previous tokens come from the ARM decoder itself, which differs from the input distribution at training time.
30
+
31
+ Although this discrepancy, named exposure bias, has been studied in some classic NLG tasks, such as neural machine translation (NMT) (Bengio et al., 2015; Venkatraman et al., 2015; Zhang et al., 2019), it poses unique challenges in dialogue response generation, a representative one-to-various generation scenario. In human dialogue, given the context, people can give many relevant and appropriate responses, not only with various expressions but also on different topics. Take Dialogue 1 in Table 1 as an example: given the context "I heard that Guangzhou has become a summer resort", responses 1 and 2 are on the same topic but use different tokens. In this varied-expression situation, as in the NMT task, the model distribution can fit the data distribution relatively easily, even with the exposure bias problem. However, when responses span different topics, the data distribution is often
32
+
33
+ different from the model distribution, because it is too divergent and covers the word distributions of each topic. Through our data analysis, we find that in the dialogue generation task, the various ground-truth responses and the generated sentences have a bigger gap than in NMT tasks. We calculate overlap measures at the word level and the semantic level, i.e., BLEU and cosine similarity, between the generated sentence and the ground-truth sentences. The results show that on the NMT WMT'14 dataset, the BLEU and similarity are 27.38 and 0.96, respectively, while on the dialogue Reddit dataset, they are 2.17 and 0.81, respectively. The overlap measures for dialogue generation are thus significantly lower than those for NMT, which indicates the severity of the exposure bias problem in dialogue generation.
34
+
35
+ Moreover, as Maximum Likelihood Estimation (MLE) encourages the model to learn only the words common to the different ground-truth responses while ignoring the interesting and specific parts, exposure bias may aggravate the common response problem of the generation model, due to the strict matching between the generated response and the ground-truth responses. Take Dialogue 2 in Table 1 as an example: response 1 is "What kind of cat is this? So cute, I want it too.", response 2 is "I really want a cat like this to play with my son." and response 3 is "Wow, it's so cute, I have a kitten like this in my house". If we train the model with word-level strict matching between the generated response and the ground-truth, it can only learn the common words, i.e., "So cute, I want it", but ignores the specific parts, i.e., "What kind of cat is this?". Therefore, it is beneficial to improve the strict matching mechanism for the dialogue generation task.
36
+
37
+ In this paper, we propose a novel adaptive switch mechanism as a bridge (AdapBridge), which introduces the generator distribution into the training phase and learns to automatically transition between ground-truth learning and generated learning, with respect to word-level matching scores such as the cosine similarity. Specifically, at each training step, we calculate the cosine similarity of each generated word with respect to all of its ground-truths. If the matching score exceeds the threshold, the generated word is fed to the decoder; otherwise, the ground-truth word is fed for training. The threshold increases as the training epochs grow. With this adaptive sampling scheme,
38
+
39
+ the switch mechanism can consider the generation quality of every word, i.e., the relevance between the generated word and the ground-truth, to decide whether or not to use generated learning.
40
+
41
+ We evaluate the proposed models on two public datasets, i.e., the Chinese STC dataset and the English Reddit dataset. Experimental results show that our models significantly outperform the state-of-the-art exposure bias models with respect to both metric-based evaluations and human judgments. Further analysis on an NMT task also shows that our model achieves a significant improvement.
42
+
43
+ The main contributions of this paper include:
44
+
45
+ - We study the exposure bias problem in the dialogue generation task, a representative one-to-various generation scenario, and find that exposure bias may further lead to the common response generation problem.
46
+ - We propose an adaptive switch mechanism that uses word-level matching scores to determine the training input source, in order to resolve the common response problem.
47
+ - We evaluate AdapBridge on two public dialogue datasets and conduct rigorous experiments to demonstrate the effectiveness of our proposed models. Further analysis on an NMT task also shows that our model achieves a significant improvement.
48
+
49
+ # 2 Related Work
50
+
51
+ This section briefly introduces recent research progress in the literature related to this work.
52
+
53
+ To solve the exposure bias problem in autoregressive or seq2seq models (Sutskever et al., 2014; Welleck et al., 2019; Holtzman et al., 2019), Venkatraman et al. (2015) used Data as Demonstrator (DAD) to augment the training set with the tokens predicted by the model, so as to make the training set match the test distribution. The Scheduled Sampling (SS) method proposed by Bengio et al. (2015) randomly samples previously generated words to replace the ground-truth words in the model input during training. Zhang et al. (2019) explored this method further by sampling the previous words with decay, not only from a word-level oracle but also from a sentence-level oracle with a semantic metric. The main idea of this kind of method is to introduce the model's prediction information into its input at training time, and to reduce
54
+
55
+ ![](images/60429db626b371a643b838cbd988d1f45fb804478f7766c410fcf6927aee42fa.jpg)
56
+ Figure 1: Illustration of our AdapBridge model.
57
+
58
+ the discrepancy between training and inference to alleviate the exposure bias problem. In comparison to those methods and related ideas (Qi et al., 2020; Goodman et al., 2020), our proposed method adaptively determines whether the model's input words during training are ground-truth or predicted, by scoring each generated word.
59
+
60
+ Alternative methods based on Reinforcement Learning (RL) (Williams, 1992) have been explored for generation tasks, in particular for NMT. Mixed Incremental Cross-Entropy Reinforce (MIXER) (Ranzato et al., 2016) leverages a hybrid loss function that combines cross-entropy and REINFORCE to directly optimize the metrics used at test time, such as BLEU or ROUGE. There are many other similar works (Shen et al., 2016; Wu et al., 2016; Shao et al., 2018). More recently, text generation via Generative Adversarial Networks (GANs) (Goodfellow et al., 2014), called Text GANs, has attracted the attention of researchers (Nie et al., 2019; Zhou et al., 2019; Wu et al., 2021; Scialom et al., 2020). They frame the problem under the GAN paradigm, which uses RL-based (Williams, 1992) algorithms for gradient estimation, as text generation is discrete. However, neither RL nor Text GANs can avoid the high variance of gradient estimates caused by sparse rewards, which consequently makes the training process unstable and limits improvements.
61
+
62
+ Different from traditional methods, our proposed model adaptively determines whether the current input word comes from the ground truth or from the generator, using word-level matching scores.
63
+
64
+ # 3 Proposed Method
65
+
66
+ Given a context sentence $X^{k} = \{x_{1}^{k}, x_{2}^{k}, \dots, x_{S_{k}}^{k}\}$ and a target response sentence $Y^{k} = \{y_{1}^{k}, y_{2}^{k}, \dots, y_{T_{k}}^{k}\}$ , where $S_{k}$ and $T_{k}$ are the word lengths of the context and the response, respectively, a dialogue generation model based on the sequence-to-sequence (Seq2Seq) (Sutskever et al., 2014) framework directly models the response probability:
67
+
68
+ $$
69
+ P \left(Y ^ {k} \mid X ^ {k}, \theta\right) = \prod_ {t = 1} ^ {T _ {k}} p \left(y _ {t} ^ {k} \mid y _ {< t} ^ {k}, X ^ {k}, \theta\right) \tag {1}
70
+ $$
71
+
72
+ where $\theta$ are the parameters of the model and $y_{<t}^{k}$ denotes the previous ground-truth words. Given a set of training examples $D = \{X^{k}, Y^{k}\}_{k=1}^{N}$ , the standard training objective is to minimize the negative log-likelihood of all the training data:
73
+
74
+ $$
75
+ \theta = \underset {\theta} {\operatorname {a r g m i n}} \{L (\theta) \} \tag {2}
76
+ $$
77
+
78
+ where
79
+
80
+ $$
81
+ \begin{aligned} L(\theta) &= \sum_{k=1}^{N} -\log P\left(Y^{k} \mid X^{k}, \theta\right) \tag{3} \\ &= \sum_{k=1}^{N} \sum_{t=1}^{T_{k}} -\log p\left(y_{t}^{k} \mid y_{<t}^{k}, X^{k}, \theta\right) \end{aligned}
82
+ $$
83
+
84
+ where $N$ is the number of training examples.
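As a tiny worked example of the objective in Equations 2 and 3 (a sketch, not the paper's code; `token_probs` is a hypothetical nested list holding $p(y_{t}^{k} \mid y_{<t}^{k}, X^{k}, \theta)$ under teacher forcing):

```python
import math

def nll_loss(token_probs):
    """L(theta): sum of -log p(y_t^k | y_<t^k, X^k, theta) over all
    tokens t of all training examples k (Equation 3)."""
    return sum(-math.log(p) for seq in token_probs for p in seq)

# Two toy "responses" with the probabilities the model assigns to
# each gold token: the loss is -log(1.0 * 0.5 * 0.25) = log 8.
loss = nll_loss([[1.0, 0.5], [0.25]])
```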
85
+
86
+ Different from training time, during inference the probability of each target word $p(y_{t}^{k}|y_{<t}^{k},X^{k},\theta)$ in Equation 1 is conditioned on the previously generated words $y_{<t}^{k*}$ rather than on the ground-truth words $y_{<t}^{k}$ , as the ground-truth words are not available at inference time. This discrepancy is called exposure bias.
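The discrepancy can be illustrated with a toy deterministic "model" (a hypothetical next-token rule, not the paper's architecture): under teacher forcing every prediction is conditioned on the gold prefix, while at inference the model's own outputs feed back in, so a single divergence propagates.

```python
def predict(prev_token):
    # Hypothetical toy stand-in for p(y_t | y_<t, X): a deterministic rule.
    return (prev_token * 2) % 5

gold = [1, 2, 4, 0, 3]

# Teacher forcing (training): each step sees the ground-truth prefix.
teacher_preds = [predict(t) for t in gold[:-1]]

# Free running (inference): each step sees the model's own last output.
free_preds, prev = [], gold[0]
for _ in range(len(gold) - 1):
    prev = predict(prev)
    free_preds.append(prev)

# The two trajectories agree until gold departs from the model's rule
# (gold[3] = 0), after which they diverge.
```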
87
+
88
+ # 3.1 AdapBridge
89
+
90
+ The architecture of our model is illustrated in Figure 1. We first define a sampling function:
91
+
92
+ $$
93
+ y_{<t}^{k\wedge} = \mathbf{SamFun}\left(y_{<t}^{k}, y_{<t}^{k*}\right) \tag{4}
94
+ $$
95
+
96
+ where $y_{<t}^{k}$ and $y_{<t}^{k*}$ are the inputs of the sampling function, representing the ground-truth words and the generated words, respectively, and $y_{<t}^{k\wedge} = \{y_{1}^{k\wedge}, y_{2}^{k\wedge}, \dots, y_{t-1}^{k\wedge}\}$ denotes the decoder input after sampling at the $t$-th time step, which may contain both ground-truth and generated words.
97
+
98
+ In this framework, to predict the $t$-th target word $y_{t}^{k*}$ , we follow these steps:
99
+
100
+ ![](images/1830486629dcdf783c7fe377145ef90733d7666cc711df3cb0b127480bd44e33.jpg)
101
+ Figure 2: Illustration of the sampling function AdapBridge. The word embedding shares weights with the encoder and decoder of the model shown in Figure 1. $I_{>\beta}$ is an indicator function: if the input is above $\beta$ , the output is 1; otherwise, the output is 0. Both $\alpha$ and $\beta$ increase as the training epochs grow.
102
+
103
+ - The decoder predicts the $t-1$ previous words $y_{<t}^{k*}$ as the previously generated words.
104
+ - Use the sequences $y_{<t}^{k*}$ and $y_{<t}^{k}$ (the ground-truth words) as inputs of SamFun (Equation 4), and obtain the output $y_{<t}^{k\wedge}$ of this function.
105
+ - Replace the inputs $y_{<t}^{k}$ of the model in Equation 1 with $y_{<t}^{k\wedge}$ , then predict the $t$-th word $y_{t}^{k*}$ .
106
+
107
+ SamFun can be any function, e.g., random sampling. The process replaces the corresponding ground-truth words in $y_{<t}^{k}$ with the generated words $y_{<t}^{k*}$ . We propose a novel SamFun called AdapBridge, which is shown in Figure 2.
108
+
109
+ The main idea of AdapBridge is simple: we first use the model to generate all words with Equation 1, and then compute the pairwise cosine similarity between the generated words and the ground truth. If a generated word is learned well enough (it matches the ground truth or is a synonym), its maximum cosine similarity will be close to 1 and above the threshold $\beta$ shown in Figure 2. In that case, we use this generated word to replace the ground-truth word, which introduces the generator distribution into the training phase. The algorithm is summarized in Table 2.
110
+
111
+ # 3.2 Sampling with Increase
112
+
113
+ The thresholds $\alpha$ and $\beta$ determine the frequency of the sampling function and the required similarity between generated words and the ground truth, respectively. Note that when $\alpha = 0$ , training is the same as standard teacher forcing, while when $\alpha = 1$ , the model is trained as at inference. If $\alpha$ is set too low $(\approx 0)$ , the decoder inputs will almost always be ground-truth words, and the model will not learn to cope with the unknown words it predicts at inference. On the other hand, if $\alpha$ is set too high $(\approx 1)$ at the beginning of training, the model will yield tokens randomly, because it is not yet well trained, which may lead to slow convergence. Similarly, because the model cannot produce high cosine similarity scores at the beginning, it is necessary to set $\beta$ low to ensure that part of the ground-truth words can be replaced by generated words, and to increase its value as the training steps or epochs grow. In this sense, both $\alpha$ and $\beta$ should increase as training progresses. Note that the probability $\alpha$ in our method determines whether to execute the transition mechanism, while the threshold $\beta$ determines which words in the generated sentence should be replaced, with respect to the cosine similarity score.
114
+
115
+ # Algorithm 1 AdapBridge
116
+
117
+ # Input:
118
+
119
+ Sequence of generated words $y_{<t}^{k*}$
120
+ Sequence of ground-truth words $y_{<t}^{k}$
121
+ Word embedding of model with size of shared vocabulary; Number of epoch $n$ .
122
+
123
+ # Output:
124
+
125
+ Inputs of the decoder after sampling: $y_{<t}^{k\wedge}$
1: Initialize $P\gets \{p_1,p_2,\dots ,p_{|e_{<t}^{k*}|}\}$ ; calculate $\alpha$ and $\beta$ with $n$ ; get a random number $m$ in $[0,1]$ .
126
+
127
+ 2: if $m < \alpha$ then
128
+ 3: $e_{<t}^{k*} \gets \mathbf{E}(y_{<t}^{k*}), e_{<t}^{k} \gets \mathbf{E}(y_{<t}^{k})$
129
+ 4: for $i = 1,\dots ,|e_{< t}^{k}|$ do
130
+ 5: for $j = 1,\dots ,|e_{< t}^{k}|$ do
131
+ 6: $\mathbf{SimMat}(i,j)\gets \frac{e_i^{k^*}\cdot e_j^k}{|e_i^{k^*}||e_j^k|}$
132
+ 7: end for
133
+ 8: $s_i \gets \mathbf{MaxSim}(\mathbf{SimMat}(i,j))$
134
+ 9: $p_i\gets I_{>\beta}(s_i)$
135
+ 10: end for
136
+ 11: $y_{< t}^{k}\wedge \leftarrow y_{< t}^{k}{}^{*}\otimes P + y_{< t}^{k}\otimes (1 - P)$
137
+ 12: else
138
+ 13: $y_{< t}^{k}\wedge \leftarrow y_{< t}^{k}$
139
+ 14: end if
140
+ 15: return $y_{<t}^{k\wedge}$
141
+
142
+ Table 2: Algorithm of AdapBridge
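The similarity-and-replace core of Algorithm 1 (lines 3-11) can be sketched in a few lines of NumPy; the $\alpha$ gate and the shared embedding table are simplified away, and `emb` is a hypothetical embedding matrix used only for illustration.

```python
import numpy as np

def adap_bridge(gen_ids, gold_ids, emb, beta):
    """Keep a generated token in place of the gold token at the same
    position when its best cosine similarity against any gold token
    exceeds beta; otherwise keep the gold token."""
    gen_e, gold_e = emb[gen_ids], emb[gold_ids]            # E(y^k*), E(y^k)
    gen_n = gen_e / np.linalg.norm(gen_e, axis=1, keepdims=True)
    gold_n = gold_e / np.linalg.norm(gold_e, axis=1, keepdims=True)
    sim = gen_n @ gold_n.T                                 # SimMat(i, j)
    p = sim.max(axis=1) > beta                             # p_i = I_{>beta}(s_i)
    return np.where(p, gen_ids, gold_ids)

# Token 2 is a near-synonym of gold token 1, so the generated token
# survives; token 3 is not close to any gold token, so gold token 0 is kept.
emb = np.array([[1.0, 0.0], [0.0, 1.0], [0.05, 1.0], [1.0, 1.0]])
mixed = adap_bridge(np.array([2, 3]), np.array([1, 0]), emb, beta=0.9)
```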
143
+
144
+ # 3.2.1 Increase Function of $\alpha$
145
+
146
+ We define $\alpha$ with an increasing function of the number of training epochs $n$ :
147
+
148
+ $$
149
+ \alpha = 1 - \frac {k}{k + \exp ((n - w) / k)} \tag {5}
150
+ $$
151
+
152
+ where $k \geq 1$ is a hyper-parameter that determines the speed of convergence. In addition, we add a parameter $w$ , which keeps $\alpha$ close to 0 during the first $w$ epochs of training. It is usually set to half the total number of training epochs, to ensure that the model is trained enough to generate reasonable words. The curve of this function can be seen in Figure 3.
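Equation 5 is straightforward to compute; a sketch with the $k = 5$, $w = 32$ setting used for Figure 3:

```python
import math

def alpha(n, k=5, w=32):
    """Equation 5: near 0 for the first w epochs, 1 - k/(k+1) at
    epoch n = w, then rising monotonically toward 1."""
    return 1 - k / (k + math.exp((n - w) / k))
```

At epoch 0 the value is on the order of 3e-4, so early training is almost pure teacher forcing; by epoch 60 (with these settings) it exceeds 0.98.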
153
+
154
+ # 3.2.2 Increase Function of $\beta$
155
+
156
+ At the beginning of training, the cosine similarity scores of the words generated by the model are generally low, so lowering $\beta$ appropriately helps. On the other hand, toward the end of training $\beta$ should increase, as the model is already trained well
157
+
158
+ ![](images/36c3b04c66d3b0454e9d807b4f67088e399c135923673fa5ca9b3e9881456299.jpg)
159
+ Figure 3: The curves of $\alpha$ and $\beta$ , where $k = 5$ , $w = 32$ in Equation 5 and $\gamma = 0.9$ in Equation 6.
160
+
161
+ enough to generate meaningful words after the first $w$ epochs. However, $\beta$ cannot start from zero as $\alpha$ does (Figure 3). Intuitively, even at the beginning, a certain threshold is needed to ensure that the generated words which replace ground-truth words are of high quality.
162
+
163
+ We thus propose a schedule that increases $\beta$ as a function of the $\alpha$ calculated with Equation 5:
164
+
165
+ $$
166
+ \beta = \gamma + (1 - \gamma) * \alpha \tag {6}
167
+ $$
168
+
169
+ where $\gamma$ is the lowest similarity score threshold: throughout training, a ground-truth word can be replaced only when the score of the generated word exceeds $\gamma$ . The function is strictly monotonically increasing, like Equation 5, and its curve is shown in Figure 3.
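Equation 6 ties $\beta$ to the $\alpha$ schedule; a direct transcription (with Equation 5 repeated so the sketch is self-contained, using the Figure 3 settings $k=5$, $w=32$, $\gamma=0.9$):

```python
import math

def alpha_schedule(n, k=5, w=32):
    """Equation 5 (repeated here for self-containedness)."""
    return 1 - k / (k + math.exp((n - w) / k))

def beta_schedule(n, gamma=0.9, k=5, w=32):
    """Equation 6: beta starts at roughly gamma and rises toward 1 as
    alpha grows, so only high-similarity words are ever substituted."""
    return gamma + (1 - gamma) * alpha_schedule(n, k, w)

# beta is bounded below by gamma and strictly increasing, like alpha
print(round(beta_schedule(0), 4), round(beta_schedule(32), 4))
# → 0.9 0.9167
```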
170
+
171
+ # 4 Experiments
172
+
173
+ In this section, we evaluate our proposed model on both the Chinese STC and the English Reddit datasets.
174
+
175
+ # 4.1 Experimental Settings
176
+
177
+ We first introduce the datasets, baselines, parameter settings and evaluation measures.
178
+
179
+ # 4.1.1 Datasets
180
+
181
+ We use two public one-to-many single-turn dialogue datasets in our experiments. The Chinese Weibo dataset, named STC<sup>1</sup>, consists of 4,391,266, 23,532 and 21,161 dialogue context-response pairs in the training, validation and testing sets, respectively. We remove pairs whose context contains the response or vice versa, obtaining 4,295,557, 23,039 and 20,749 pairs for the three sets. The average number of responses corresponding to each context
182
+
183
+ in STC is 19.7. The English Reddit dialogue corpus, named Reddit, is extracted from Reddit posts and comments by the script $^2$ . The original data consists of 6 million dialogues from all even months of 2011. We use the official script to tokenize, and remove duplicates and sentences shorter than 3 or longer than 64 tokens. If a context has more than 20 responses, we randomly select 20 and discard the rest; the average number of responses per context in Reddit is 6.2. Finally, we randomly split the data into training, validation and testing sets of 1,107,860, 23,183 and 12,429 pairs, respectively.
184
+
185
+ # 4.1.2 Baselines and Parameters Setting
186
+
187
+ Three baselines are used for comparison: the Transformer-based model (Vaswani et al., 2017), and Random Sampling at the word level (RS-Word) and at the sentence level (RS-Sentence) (Zhang et al., 2019).
188
+
189
+ For STC, we use Chinese words as input, with a vocabulary size of 10,599. For Reddit, context-response pairs are encoded using byte-pair encoding (BPE) (Sennrich et al., 2016) with a vocabulary of 11,527 tokens. For a fair comparison between the baselines and our model, the dimension of all word embeddings is 512, and the beam size at test time is 5. The Transformer model has 6 layers in both the encoder and the decoder, and 8 heads in multi-head attention. All parameters are initialized from the uniform distribution over $[-0.1, 0.1]$ . We adopt the Adam optimizer (Kingma and Ba, 2015) with $\beta_1 = 0.9$ , $\beta_2 = 0.98$ and $\epsilon = 10^{-8}$ . We set the learning rate to 0.0007 and the maximum number of tokens in a batch to 8192, with an update frequency of 2. We run all models on 4 Tesla P40 GPU cards with PyTorch<sup>3</sup>. The code will be released when this paper is accepted.
190
+
191
+ # 4.1.3 Evaluation Measures
192
+
193
+ Both quantitative metrics and human judgements are used in our experiments. The quantitative metrics include traditional metrics such as PPL and BLEU (Papineni et al., 2002), as well as Distinct (Li et al., 2016), a recently proposed metric that evaluates the diversity of the generated responses by counting the distinct unigrams and bigrams they contain. We also evaluate each generated response by calculating its BLEU score with
194
+
195
+ all reference responses, and use the highest BLEU score to represent the quality of the generated response. The average of these highest BLEU scores over the testing set is denoted AH-BLEU. BLEU scores are calculated with the NLTK toolkit<sup>4</sup>.
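A small self-contained sketch of the two non-traditional metrics used above, assuming tokenized responses (the paper uses NLTK for BLEU; this simplified single-reference BLEU-2, with brevity penalty and no smoothing, is for illustration only):

```python
import math
from collections import Counter

def distinct_n(sentences, n):
    """Distinct-n (Li et al., 2016): unique n-grams / total n-grams."""
    grams = Counter()
    for toks in sentences:
        grams.update(zip(*(toks[i:] for i in range(n))))
    total = sum(grams.values())
    return len(grams) / total if total else 0.0

def bleu2(hyp, ref):
    """Sentence BLEU-2 against one reference, with brevity penalty."""
    precisions = []
    for n in (1, 2):
        h = Counter(zip(*(hyp[i:] for i in range(n))))
        r = Counter(zip(*(ref[i:] for i in range(n))))
        overlap = sum(min(c, r[g]) for g, c in h.items())
        precisions.append(overlap / max(sum(h.values()), 1))
    if min(precisions) == 0:
        return 0.0
    bp = 1.0 if len(hyp) > len(ref) else math.exp(1 - len(ref) / max(len(hyp), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / 2)

def ah_bleu2(hyps_with_refs):
    """AH-BLEU-2: average over the test set of each hypothesis's highest
    BLEU-2 score against its full set of reference responses."""
    return sum(max(bleu2(h, r) for r in refs)
               for h, refs in hyps_with_refs) / len(hyps_with_refs)
```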
196
+
197
+ For human evaluation, given 300 randomly sampled contexts and the responses generated for them by the different models, three annotators (all CS major students) score each context-response pair based on the coherence of the response with respect to the context: 3, 2 and 1 denote relevant, common and irrelevant, respectively. The Mean score of a model is the average over all scores given by the three annotators to the context-response pairs it generated. To provide a point of reference for the different models, we also evaluate the ground-truth context-response pairs in the same way.
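The Mean column in Table 4 is simply the expected score under the reported 3/2/1 percentages, which can serve as a quick consistency check on the table:

```python
def mean_score(pct3, pct2, pct1):
    """Expected annotator score for a model, given the percentages of
    3 (relevant), 2 (common) and 1 (irrelevant) ratings."""
    return (3 * pct3 + 2 * pct2 + 1 * pct1) / 100

# STC rows reported in Table 4: the Mean column follows from the percentages.
assert abs(mean_score(82.23, 13.56, 4.23) - 2.78) < 0.01   # Ground Truth
assert abs(mean_score(48.67, 23.11, 28.22) - 2.20) < 0.01  # Transformer
assert abs(mean_score(59.56, 24.33, 16.11) - 2.43) < 0.01  # AdapBridge
```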
198
+
199
+ # 4.2 Experimental Results
200
+
201
+ In this section, we demonstrate our experimental results on the two public datasets.
202
+
203
+ # 4.2.1 Metric-based Evaluation
204
+
205
+ Table 3 shows the quantitative evaluation results. From this table, we can see that models with a switch mechanism, such as RS-Word, RS-Sentence and AdapBridge, outperform the traditional Transformer-based model in terms of BLEU, Distinct-1 and Distinct-2. These results show that the switch mechanism plays an important role in the dialogue generation task.
206
+
207
+ RS-Word and RS-Sentence both replace ground-truth tokens with generated tokens via randomly scheduled sampling. However, both perform worse than our proposed model, since our model considers the relevance between the generated words and the ground truth through word-level matching scores. Taking the BLEU score on the STC dataset as an example, the BLEU-4 score of AdapBridge is 2.17, better than that of RS-Word (2.05) and RS-Sentence (2.12). In particular, our model achieves the best AH-BLEU-2 score on both datasets, a significant performance gain, which shows that the responses of our model have higher quality than those of the other baselines.
208
+
209
+ The diversity of responses is evaluated by the Distinct score. As shown in Table 3, our AdapBridge achieves significant gains. Taking the Reddit results in Table 3 as an example, the proposed AdapBridge model improves the
210
+
211
+ <table><tr><td colspan="7">STC Datasets</td></tr><tr><td>Model</td><td>PPL</td><td>BLEU-2(%)</td><td>BLEU-4(%)</td><td>DIS-1(%)</td><td>DIS-2(%)</td><td>AH-BLEU-2(%)</td></tr><tr><td>Transformer</td><td>28.86</td><td>3.74</td><td>1.37</td><td>0.23</td><td>0.90</td><td>14.43</td></tr><tr><td>RS-Word Oracle</td><td>28.91</td><td>5.12</td><td>2.05</td><td>0.33</td><td>1.25</td><td>15.21</td></tr><tr><td>RS-Sentence Oracle</td><td>26.75</td><td>5.50</td><td>2.12</td><td>0.35</td><td>1.38</td><td>15.52</td></tr><tr><td>AdapBridge</td><td>29.36</td><td>5.35</td><td>2.17</td><td>0.43</td><td>1.74</td><td>16.38</td></tr></table>
212
+
213
+ <table><tr><td colspan="7">Reddit Datasets</td></tr><tr><td>Model</td><td>PPL</td><td>BLEU-2(%)</td><td>BLEU-4(%)</td><td>DIS-1(%)</td><td>DIS-2(%)</td><td>AH-BLEU-2(%)</td></tr><tr><td>Transformer</td><td>40.83</td><td>3.99</td><td>0.77</td><td>0.79</td><td>2.91</td><td>7.03</td></tr><tr><td>RS-Word Oracle</td><td>43.11</td><td>3.78</td><td>0.81</td><td>1.42</td><td>5.19</td><td>7.43</td></tr><tr><td>RS-Sentence Oracle</td><td>40.72</td><td>3.49</td><td>0.76</td><td>1.33</td><td>5.08</td><td>7.05</td></tr><tr><td>AdapBridge</td><td>48.01</td><td>3.56</td><td>0.83</td><td>1.56</td><td>5.56</td><td>7.60</td></tr></table>
214
+
215
+ Table 3: The metric based evaluation results on STC and Reddit datasets. DIS represent the distinct score and AH-BLEU-2 represent the average of all highest BLEU-2 score.
216
+
217
+ <table><tr><td colspan="5">STC Datasets</td></tr><tr><td>Model</td><td>3(%)</td><td>2(%)</td><td>1(%)</td><td>Mean</td></tr><tr><td>Ground Truth</td><td>82.23</td><td>13.56</td><td>4.23</td><td>2.78</td></tr><tr><td>Transformer</td><td>48.67</td><td>23.11</td><td>28.22</td><td>2.20</td></tr><tr><td>RS-Word</td><td>56.33</td><td>20.89</td><td>22.78</td><td>2.34</td></tr><tr><td>RS-Sentence</td><td>55.33</td><td>22.67</td><td>22.00</td><td>2.33</td></tr><tr><td>AdapBridge</td><td>59.56</td><td>24.33</td><td>16.11</td><td>2.43</td></tr><tr><td colspan="5">Reddit Datasets</td></tr><tr><td>Model</td><td>3(%)</td><td>2(%)</td><td>1(%)</td><td>Mean</td></tr><tr><td>Ground Truth</td><td>79.00</td><td>15.67</td><td>5.33</td><td>2.74</td></tr><tr><td>Transformer</td><td>49.78</td><td>21.89</td><td>28.33</td><td>2.21</td></tr><tr><td>RS-Word</td><td>52.44</td><td>23.00</td><td>24.56</td><td>2.28</td></tr><tr><td>RS-Sentence</td><td>53.11</td><td>24.67</td><td>22.22</td><td>2.31</td></tr><tr><td>AdapBridge</td><td>55.67</td><td>28.33</td><td>16.00</td><td>2.40</td></tr></table>
218
+
219
+ Table 4: The human evaluation on STC and Reddit.
220
+
221
+ Transformer, RS-Word and RS-Sentence models by 2.65, 0.37 and 0.48 Distinct-2 points, respectively. Our model also has the highest Distinct score on both the STC and Reddit datasets, which indicates that it can generate more diverse responses and avoid generic ones. In summary, our proposed AdapBridge model generates higher-quality and more diverse responses than the baselines. We also conducted a significance test, and the improvements of our model are significant on both datasets ( $p < 0.01$ ).
222
+
223
+ # 4.2.2 Human Evaluation
224
+
225
+ The human evaluation results are shown in Table 4, which gives the percentages of relevant, common and irrelevant responses generated by the different models. From the results we can see that our AdapBridge obtains the highest human-evaluation score. Taking STC as an example, compared with Transformer, RS-Word and RS-Sentence, AdapBridge achieves relative gains of $22.38\%$ , $5.73\%$ and $7.65\%$ on the relevant score. For the Mean score, we observe that AdapBridge generates the most relevant responses and the fewest irrelevant ones, which indicates that the responses generated by our model are attractive to annotators. We also conducted a significance test, and the improvements of our model are significant on both datasets ( $p < 0.01$ ).
226
+
227
+ # 4.3 Case study
228
+
229
+ In this section, we present case studies showing that our model generates more relevant and diverse responses than the baseline models.
230
+
231
+ We give two examples in Table 5. In Example 1, the response of Transformer is "Is this a fish or a fish?", an unreasonable sentence, since the two occurrences of "fish" should be different words according to common sense. The response of RS-Word repeats "want to eat fish" twice, copying a part of the context. Although the response of RS-Sentence, "How lovely! I want to eat, too.", is relevant, it conforms
232
+
233
+ <table><tr><td colspan="3">Example 1</td></tr><tr><td>context</td><td>你不是天天嚷着要吃鱼嘛?鱼来了</td><td>Don&#x27;t you want to eat fish every day? The fish is coming.</td></tr><tr><td>Transformer</td><td>这是鱼还是鱼啊?</td><td>Is this a fish or a fish?</td></tr><tr><td>RS-Word</td><td>要吃鱼,要吃鱼。</td><td>Want to eat fish, want to eat fish</td></tr><tr><td>RS-Sentence</td><td>好可爱!我也想吃。</td><td>How lovely! I want to eat, too.</td></tr><tr><td>AdapBridge</td><td>这是什么鱼?</td><td>What kind of fish is this?</td></tr><tr><td colspan="3">Example 2</td></tr><tr><td>context</td><td>水城威尼斯,梦里的城市。</td><td>Venice, the city of dreams.</td></tr><tr><td>Transformer</td><td>梦里的城市,梦里的城市。</td><td>City in dreams, city in dreams.</td></tr><tr><td>RS-Word</td><td>��美的城市,好想去。</td><td>Beautiful city. I really want to go.</td></tr><tr><td>RS-Sentence</td><td>这是哪呀!我也想去。</td><td>Where is this? I want to go too.</td></tr><tr><td>AdapBridge</td><td>想和我的爸妈一起去。</td><td>I want to go here with my parents.</td></tr></table>
234
+
235
+ to a common response paradigm, such as "how . . . I want it, too" or "what's this, I want to . . .". Whenever the context mentions food, an animal, a location, etc., such responses seem appropriate, which makes them unattractive to humans. The response generated by our AdapBridge, "What kind of fish is this?", is more specific and relevant. We see a similar phenomenon in Example 2 of Table 5: given the context "Venice, the city of water, the city of dreams.", Transformer repeats content from the context, and the responses of RS-Word and RS-Sentence are both common responses as mentioned above. Compared with the baselines, the AdapBridge response "I want to go here with my parents." is more relevant and attractive. These results indicate that our proposed model can generate high-quality and attractive responses with the adaptive switch mechanism.
236
+
237
+ # 4.4 AdapBridge on NMT
238
+
239
+ Our proposed method can also be adapted to neural machine translation (NMT) in a straightforward way. With this task we investigate whether AdapBridge can help improve the performance of NMT, a classic natural language generation task. We perform experiments on the WMT'14 English $\rightarrow$ German (En $\rightarrow$ De) dataset, which contains 3,900,502, 39,414 and 3,003 sentences in the training, validation and testing sets, respectively. We train the Transformer-based model with the same settings described in Section 4.1.2, and measure translation quality with BLEU. The evaluation results are listed in Table 6.
240
+
241
+ From the results, we can see that our method also achieves significant performance gains, and im
242
+
243
+ Table 5: Two examples of generated responses on STC.
244
+
245
+ <table><tr><td>Model</td><td>BLEU-2(%)</td><td>BLEU-4(%)</td></tr><tr><td>Transformer</td><td>43.40</td><td>26.43</td></tr><tr><td>RS-Word</td><td>43.66</td><td>26.84</td></tr><tr><td>RS-Sentence</td><td>44.08</td><td>27.21</td></tr><tr><td>AdapBridge</td><td>43.99</td><td>27.38</td></tr></table>
246
+
247
+ Table 6: BLEU scores on the En $\rightarrow$ De NMT task.
248
+
249
+ prove the Transformer-based model by 0.95 BLEU-4 points on average. For BLEU-2, our model is slightly below the RS-Sentence model, consistent with the results in Table 3; this can be attributed to the sentence-level information used by RS-Sentence. To analyze the gap between ground-truth and generated sentences, we calculate the cosine similarity between the hidden representations of ground-truth and generated sentences using a trained BERT model (Wolf et al., 2020), obtaining similarity scores of 0.96 and 0.81 on the WMT'14 and Reddit datasets, respectively. We can also note that the BLEU scores for NMT are much higher than those for dialogue generation. These overlap measures indicate the severity of the exposure bias problem in dialogue generation, as analyzed in Section 1.
250
+
251
+ # 5 Conclusion
252
+
253
+ In this paper, we propose AdapBridge, a novel adaptive switch mechanism with word-level matching scores to alleviate the exposure bias problem in dialogue generation. Our core idea is to use word-level matching scores to determine, at each training step, whether the input comes from the ground truth or from the prediction. Experimental results show that our model significantly outperforms pre
254
+
255
+ vious baseline models. Further analysis on NMT also indicates that our model achieves significant improvements across different generation tasks. In future work, we plan to design different scoring methods, e.g. BERTScore or BLEU, to guide the model to select better words. It would also be interesting to extend our AdapBridge model to other generation tasks, such as abstractive summarization.
256
+
257
+ # Acknowledgements
258
+
259
+ This work is supported by the Beijing Academy of Artificial Intelligence (BAAI), and the National Natural Science Foundation of China (NSFC) (No.61773362).
260
+
261
+ # References
262
+
263
+ Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled sampling for sequence prediction with recurrent neural networks. In Proceedings of the 28th International Conference on Neural Information Processing Systems-Volume 1, pages 1171-1179.
264
+ Jeffrey Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, and Trevor Darrell. 2015. Long-term recurrent convolutional networks for visual recognition and description. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2625-2634.
265
+ Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In NIPS, pages 2672-2680.
266
+ Sebastian Goodman, Nan Ding, and Radu Soricut. 2020. TeaForN: Teacher-forcing with n-grams. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8704–8717.
267
+ Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degeneration. In International Conference on Learning Representations.
268
+ Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
269
+ Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies,
270
+
271
+ pages 110-119, San Diego, California. Association for Computational Linguistics.
272
+ Jiwei Li, Will Monroe, Tianlin Shi, Sébastien Jean, Alan Ritter, and Dan Jurafsky. 2017. Adversarial learning for neural dialogue generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2157-2169.
273
+ Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740-755. Springer.
274
+ Weili Nie, Nina Narodytska, and Ankit Patel. 2019. RelGAN: Relational generative adversarial networks for text generation. In International Conference on Learning Representations.
275
+ Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pages 311-318.
276
+ Weizhen Qi, Yu Yan, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang, and Ming Zhou. 2020. Prophetnet: Predicting future n-gram for sequence-to-sequence pre-training. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, pages 2401-2410.
277
+ Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level training with recurrent neural networks. In 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings.
278
+ Thomas Scialom, Paul-Alexis Dray, Sylvain Lamprier, Benjamin Piwowarski, and Jacopo Staiano. 2020. Coldgans: Taming language gans with cautious sampling strategies. In Advances in Neural Information Processing Systems, volume 33, pages 18978-18989. Curran Associates, Inc.
279
+ Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Linguistics.
280
+ Chenze Shao, Xilin Chen, and Yang Feng. 2018. Greedy search with probabilistic n-gram matching for neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4778-4784.
281
+ Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Minimum
282
+
283
+ risk training for neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1683-1692.
284
+ Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, volume 27. Curran Associates, Inc.
285
+ Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS.
286
+ Arun Venkatraman, Martial Hebert, and J Bagnell. 2015. Improving multi-step prediction of learned time series models. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 29.
287
+ Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural image caption generator. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3156-3164.
288
+ Sean Welleck, Ilia Kulikov, Stephen Roller, Emily Dinan, Kyunghyun Cho, and Jason Weston. 2019. Neural text generation with unlikelihood training. In International Conference on Learning Representations.
289
+ Ronald J Williams. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229-256.
290
+ Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
291
+ Lijun Wu, Yingce Xia, Fei Tian, Li Zhao, Tao Qin, Jianhuang Lai, and Tie-Yan Liu. 2018. Adversarial neural machine translation. In *Asian Conference on Machine Learning*, pages 534-549. PMLR.
292
+ Qingyang Wu, Lei Li, and Zhou Yu. 2021. Textgail: Generative adversarial imitation learning for text generation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 14067-14075.
293
+ Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith
294
+
295
+ Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144.
296
+ Wen Zhang, Yang Feng, Fandong Meng, Di You, and Qun Liu. 2019. Bridging the gap between training and inference for neural machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4334-4343, Florence, Italy. Association for Computational Linguistics.
297
+ Wangchunshu Zhou, Tao Ge, Ke Xu, Furu Wei, and Ming Zhou. 2019. Self-adversarial learning with comparative discrimination for text generation. In International Conference on Learning Representations.
adaptivebridgebetweentrainingandinferencefordialoguegeneration/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:290b72b1c7029820efd5a07eb1b5bf2fd78aa6fb2c96bdee3e46a90866771a76
3
+ size 436064
adaptivebridgebetweentrainingandinferencefordialoguegeneration/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f58504eb091e9adea56c09adb2ebcf76511eafb9ef7ad91cd4086e19e6d9ec95
3
+ size 368025
adaptiveinformationseekingforopendomainquestionanswering/f4da35df-c204-42f1-83d2-a4b164644b84_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:011685e7f0dc8320065bc23a284bc69ac3b6e9a4e273be5d0ba1d3568f7605d6
3
+ size 96375
adaptiveinformationseekingforopendomainquestionanswering/f4da35df-c204-42f1-83d2-a4b164644b84_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:519338a6abb09f2de906bf5015c592d0aa50faafb80481fec007169c51e16ff1
3
+ size 116243
adaptiveinformationseekingforopendomainquestionanswering/f4da35df-c204-42f1-83d2-a4b164644b84_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:eeb2b6171162345432b1040404803cf54bd76ec56ff2d9e84e3f88ea19e04550
3
+ size 1358209
adaptiveinformationseekingforopendomainquestionanswering/full.md ADDED
@@ -0,0 +1,412 @@
 
 
 
 
 
 
 
 
 
1
+ # Adaptive Information Seeking for Open-Domain Question Answering
2
+
3
+ Yunchang Zhu†§, Liang Pang†*, Yanyan Lan◇*, Huawei Shen†§, Xueqi Cheng†§
4
+
5
+ †Data Intelligence System Research Center
6
+
7
+ and ${}^{ \ddagger }$ CAS Key Lab of Network Data Science and Technology,
8
+
9
+ Institute of Computing Technology, Chinese Academy of Sciences
10
+
11
+ $^{\S}$ University of Chinese Academy of Sciences
12
+
13
+ $\diamond$ Institute for AI Industry Research, Tsinghua University
14
+
15
+ {zhuyunchang17s, pangliang, shenhuawei, cxq}@ict.ac.cn
16
+
17
+ lanyanyan@tsinghua.edu.cn
18
+
19
+ # Abstract
20
+
21
+ Information seeking is an essential step for open-domain question answering to efficiently gather evidence from a large corpus. Recently, iterative approaches have been proven to be effective for complex questions, by recursively retrieving new evidence at each step. However, almost all existing iterative approaches use predefined strategies, either applying the same retrieval function multiple times or fixing the order of different retrieval functions, which cannot fulfill the diverse requirements of various questions. In this paper, we propose a novel adaptive information-seeking strategy for open-domain question answering, namely AISO. Specifically, the whole retrieval and answer process is modeled as a partially observed Markov decision process, where three types of retrieval operations (e.g., BM25, DPR, and hyperlink) and one answer operation are defined as actions. According to the learned policy, AISO could adaptively select a proper retrieval action to seek the missing evidence at each step, based on the collected evidence and the reformulated query, or directly output the answer when the evidence set is sufficient for the question. Experiments on SQuAD Open and HotpotQA fullwiki, which serve as single-hop and multi-hop open-domain QA benchmarks, show that AISO outperforms all baseline methods with predefined strategies in terms of both retrieval and answer evaluations.
22
+
23
+ # 1 Introduction
24
+
25
+ Open-domain question answering (QA) (Voorhees et al., 1999) is a task of answering questions using a large collection of texts (e.g., Wikipedia). It relies on a powerful information-seeking method to efficiently retrieve evidence from the given large corpus.
26
+
27
+ Traditional open-domain QA approaches mainly follow the two-stage retriever-reader pipeline (Chen et al., 2017; Yang et al., 2018; Karpukhin
28
+
29
+ [Figure 1 body. Question: What movie directed by Pitof in 2004 has a tie-in electronic game? Passages: P1: Pitof; P2: Catwoman (film); P3: Catwoman (video game). The figure compares several information-seeking strategies for this question (BM25 only, dense retrieval only, BM25 followed by link retrieval, and the optimal adaptive strategy); only the adaptive strategy reaches the answer.]
140
+
141
+ Figure 1: An example derived from the HotpotQA development set. P1, P2 and P3 are the most relevant passages, of which P2 and P3 are supporting passages, which are essential to answer the question. Except for the adaptive strategy in the last row, the fixed-strategy methods, such as using BM25 or dense retrieval multiple times, or first using BM25 and then entity linking, all fail, because the rank of the remaining supporting passages is larger than 1k. The number between two arrows indicates the highest rank of the remaining supporting passages in the retrieval list, unless ranked first.
142
+
143
et al., 2020), in which the retriever uses a determinate sparse or dense retrieval function to retrieve evidence, independently of the reading stage. However, these approaches are limited when answering complex questions that require multi-hop or logical reasoning (Xiong et al., 2021).

To tackle this issue, iterative approaches have been proposed that recurrently retrieve passages and reformulate the query based on the original question and the previously collected passages. Nevertheless, all of these approaches adopt fixed information-seeking strategies in the iterative process. For example, some works apply a single retrieval function multiple times (Das et al., 2019a; Qi et al., 2019; Xiong et al., 2021), while others use a pre-defined sequence of retrieval functions (Asai et al., 2020; Dhingra et al., 2020).

However, fixed information-seeking strategies cannot meet the diversified requirements of various problems. Taking Figure 1 as an example, the answer to the question is 'Catwoman' in P3. Due to the lack of essential supporting passages, simply applying BM25 or dense retrieval (DR) multiple times (strategy 1 (Qi et al., 2019) or strategy 2 (Xiong et al., 2021)), or using the mixed but fixed strategy (strategy 3 (Asai et al., 2020)), cannot answer the question. Specifically, it is hard for the query reformulator of Qi et al. (2019) to generate the ideal query 'Catwoman game' from P1 or P2, so BM25 (Robertson and Zaragoza, 2009) suffers from the term-mismatch problem and fails to find the next supporting passage P3. Representation learning for salient but rare phrases (e.g., 'Pitof') remains challenging (Karpukhin et al., 2020), which can hurt dense retrieval: the supporting passage P3 is ranked 65, while P1 and P2 do not appear in the top-1000 list at the first step. Furthermore, link retrieval functions fail when the current passage, e.g., P2, has no valid entity links.

Motivated by the above observations, we propose an Adaptive Information-Seeking approach for Open-domain QA, namely AISO. First, the task of open-domain QA is formulated as a partially observed Markov decision process (POMDP) to reflect the interaction between the QA model (the agent) and the intractably large corpus (the environment). The agent performs actions according to its state (belief module) and the policy it has learned (policy module). Specifically, the belief module maintains a set of evidence that forms the agent's state, and the policy module chooses between two groups of actions: 1) retrieval actions, each consisting of a retrieval function and a reformulated query for requesting evidence, and 2) an answer action that returns a piece of text to answer the question and completes the process. Thus, at each step, the agent emits an action to the environment, which returns a passage as an observation. The agent updates its evidence set and generates the next action, step by step, until the evidence set is sufficient to trigger the answer action. To learn such a strategy, we train the policy via imitation learning by cloning the behavior of an oracle online, which avoids the hassle of designing reward functions and solves the POMDP in the fashion of supervised learning.

Our experimental results show that our approach achieves better retrieval and answering performance than state-of-the-art approaches on SQuAD Open and HotpotQA fullwiki, which are representative single-hop and multi-hop open-domain QA datasets. Furthermore, AISO significantly reduces the number of reading steps at inference time.

In summary, our contributions include:

- To the best of our knowledge, we are the first to introduce an adaptive information-seeking strategy to the open-domain QA task;
- Modeling adaptive information-seeking as a POMDP, we propose AISO, which learns its policy via imitation learning and has great potential for extension;
- The proposed AISO achieves state-of-the-art performance on two public datasets and won first place on the HotpotQA fullwiki leaderboard. Our code is available at https://github.com/zycdev/AISO.

# 2 Related Work

Traditional approaches to open-domain QA mainly follow the two-stage retriever-reader pipeline (Chen et al., 2017): a retriever first gathers relevant passages as evidence candidates, then a reader reads the retrieved candidates to form an answer. In the retrieval stage, most approaches employ a determinate retrieval function and treat each passage independently (Wang et al., 2018; Lin et al., 2018; Lee et al., 2018; Yang et al., 2018; Pang et al., 2019; Lee et al., 2019; Guu et al., 2020; Karpukhin et al., 2020; Izacard and Grave, 2021). As an extension, some approaches further consider the relations between passages through hyperlinks or entity links and extend the evidence with linked neighbor passages (Nie et al., 2019; Das et al., 2019b; Zhao et al., 2020). However, pipeline approaches retrieve evidence independently of the reader, which 1) introduces evidence that is less relevant to the question, and 2) makes it hard to handle complex questions with high-order relationships between the question and the evidence.

Instead, recent iterative approaches sequentially retrieve new passages by updating the query input to a specific retrieval function at each step, conditioned on the information already gathered. At each step, Das et al. (2019a); Feldman and El-Yaniv (2019); Xiong et al. (2021) reformulate the dense query vector in a latent space, while Ding et al. (2019); Qi et al. (2019); Zhang et al. (2020);

![](images/c16af0bb0f1a1c6559326284342bd0cbebd9650e10f441b19bb7a297d645d886.jpg)
Figure 2: The overview of AISO.

Qi et al. (2020) update the natural-language query. After a first retrieval step using TF-IDF, Asai et al. (2020) and Li et al. (2021) recursively select subsequent supporting passages on top of a hyperlinked passage graph. Nevertheless, all of these approaches adopt fixed information-seeking strategies, applying the same retrieval function multiple times (Das et al., 2019a; Feldman and El-Yaniv, 2019; Xiong et al., 2021; Ding et al., 2019; Qi et al., 2019; Zhang et al., 2020; Qi et al., 2020) or a pre-designated sequence of retrieval functions (Asai et al., 2020; Li et al., 2021). Due to the diversity of questions, fixed strategies established in advance may not be optimal for all questions, and may even fail to collect the necessary evidence.

# 3 Method

In this section, we first formulate the open-domain QA task as a partially observed Markov decision process (POMDP) and introduce the dynamics of the environment. Then, we elaborate on how the agent interacts with the environment to seek evidence and answer a question. Finally, to solve the POMDP, we describe how to train the agent via imitation learning.

# 3.1 Open-Domain QA as a POMDP

Given a question $q$ and a large corpus $\mathcal{P}$ composed of passages, the task of open-domain QA is to collect a set of evidence $E \subset \mathcal{P}$ and answer the question based on the gathered evidence.

The fashion of iterative evidence gathering, proven effective by previous works (Das et al., 2019a; Asai et al., 2020; Xiong et al., 2021), is essentially a sequential decision-making process. Besides, since the corpus is large, ranging from millions of passages (e.g., Wikipedia) to billions (e.g., the Web), and the input length of a QA model is limited, the QA model can only observe a part of the corpus at a time. For these two reasons, we model open-domain QA as a partially observed Markov decision process.

In the POMDP we designed, as shown in Figure 2, the agent is the QA model, which issues actions to seek evidence from the large-scale corpus hidden in the environment and finally responds to the question. By executing the received action, the environment returns a retrieved passage to the agent as an observation of the corpus. Formally, the POMDP is defined by $(S, \mathcal{A}, \mathcal{O}, \Omega, Z, R)$, where $R$ is the reward function.

Actions: At timestep $t = 0,1,\dots,T$, the action $a_{t}$ in the action space $\mathcal{A} = \mathcal{F} \times \mathcal{U}$ is a request for an executable function $f \in \mathcal{F}$, expressed as $\langle f,u\rangle$, where $u \in \mathcal{U}$ is the text argument passed to $f$. The space of executable functions $\mathcal{F}$ includes two groups: 1) retrieval functions, which take the query $u$ and corpus $\mathcal{P}$ as input and rank a retrieval list of passages $\mathcal{P}_{f(u)}$, and 2) the answer function, which replies to the question $q$ with the answer $u$ and ends the process. The action $a_{t}$ is performed following the policy $\Pi$ described in Subsection 3.2.2.

States: The environment state $s_t$ in the state space $S$ records the revealing states of the retrieval lists of all past retrieval actions. When the agent issues an action $a_t = \langle f, u \rangle$, $s_t$ transitions to $s_{t+1}$ according to the deterministic transition dynamics $\Omega(s_t, a_t)$. Specifically, $\Omega$ marks the topmost unrevealed passage in the retrieval list $\mathcal{P}_{f(u)}$ as revealed. If the environment has never executed $a_t$ before, it first searches and caches $\mathcal{P}_{f(u)}$ for possible repeated retrieval actions in the future.

Observations: On reaching the new environment state $s_{t+1}$, the environment returns an observation $o_{t+1}$ from the observation space $\mathcal{O} = \{q\} \cup \mathcal{P}$, governed by the deterministic observation dynamics $Z$. At the initial timestep, the question $q$ is returned as $o_0$. In other cases, $Z$ is designed to return only the last passage marked as revealed in $\mathcal{P}_{f(u)}$ at a time. For example, if the action $\langle f, u \rangle$ is received for the $k$-th time, the $k$-th passage in $\mathcal{P}_{f(u)}$ is returned.

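The environment dynamics above (cache a retrieval list on first request, then reveal one further passage per repeated action) can be sketched as follows. This is an illustrative sketch, not the paper's code; `retrieve` is a placeholder for the real BM25/dense/link backends and must be supplied by the caller.

```python
class Environment:
    """Deterministic environment for the open-domain QA POMDP (sketch)."""

    def __init__(self, retrieve):
        self.retrieve = retrieve  # (f, u) -> ranked list of passages P_{f(u)}
        self.cache = {}           # cached retrieval lists, keyed by action
        self.revealed = {}        # number of revealed entries per action

    def step(self, f, u):
        key = (f, u)
        if key not in self.cache:        # search and cache on first request
            self.cache[key] = self.retrieve(f, u)
        k = self.revealed.get(key, 0)    # index of topmost unrevealed passage
        self.revealed[key] = k + 1       # transition dynamics Omega
        ranked = self.cache[key]
        # observation dynamics Z: the k-th repeat of an action returns
        # the k-th passage of its retrieval list (None if exhausted)
        return ranked[k] if k < len(ranked) else None
```

Issuing the same action twice therefore walks down the same cached list, while a different action starts from the top of its own list.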
# 3.2 Agent

The agent interacts with the environment to collect evidence for answering the question. Without access to the environment state $s_t$, the agent can only perform sub-optimal actions based on its observations. It needs to build a belief $b_t$ about the state the environment may be in, based on its experience $h_t = (o_0, a_0, o_1, \dots, a_{t-1}, o_t)$. Therefore, the agent consists of two modules: a belief module $\Phi$ that generates the belief state $b_t = \Phi(h_t)$ from the experience $h_t$, and a policy module $\Pi$ that prescribes the action $a_t = \Pi(b_t)$ to take in the current belief state $b_t$.

Both the belief and policy modules are built on pretrained Transformer encoders (Clark et al., 2020), denoted $\Psi^{belief}$ and $\Psi^{policy}$ respectively, which encode each input token into a $d$-dimensional contextual representation. The input of both encoders is a belief state, formatted as "[CLS] [YES] [NO] [NONE] question [SEP] title$_o$ [SOP] content$_o$ [SEP] title$_1$ [SOP] content$_1$ [SEP] ... title$_{|E|}$ [SOP] content$_{|E|}$ [SEP]", where the subscript $o$ denotes the observation passage and the other passages come from the collected evidence set $E$. [SOP] is a special token that separates the title and content of a passage, [YES] and [NO] are used to indicate yes/no answers, and [NONE] is generally used to indicate that there is no desired answer/query/evidence. In this way, the self-attention mechanism across the concatenated sequence allows each passage in the input to interact with the others, which has been shown to be crucial for multi-hop reasoning (Wang et al., 2019a).

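The input layout above can be assembled with a simple helper. This is a minimal sketch that treats special tokens as plain strings; real tokenization and truncation to the 512-token budget are omitted.

```python
def format_belief_state(question, observation, evidence):
    """Build the encoder input for a belief state <q, C_t>.

    `observation` is a (title, content) pair for the observed passage o_t;
    `evidence` is a list of (title, content) pairs from the evidence set E.
    """
    parts = ["[CLS]", "[YES]", "[NO]", "[NONE]", question, "[SEP]"]
    # the observation passage comes first, followed by collected evidence
    for title, content in [observation] + list(evidence):
        parts += [title, "[SOP]", content, "[SEP]"]
    return " ".join(parts)
```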
# 3.2.1 Belief Module

The belief module $\Phi$ transforms the agent's experience $h_t$ into a belief state $b_t$ by maintaining a set of evidence $E_{t-1}$. At the end of the process, the evidence set $E$ is expected to contain the evidence necessary to answer the question and no irrelevant passages. During the iterative process, the agent believes that all passages in $E$ may help answer the question; conversely, passages that were observed but excluded from the evidence set, i.e., $o_{1:t-1} \setminus E_{t-1}$, are believed to be irrelevant to the question.

For simplicity, assuming that the negative passages $o_{1:t-1} \setminus E_{t-1}$ and the action history $a_{<t}$ are not helpful for subsequent decision-making, the experience $h_t$ is equivalent to $\{q, o_t\} \cup E_{t-1}$. Thus, letting $C_t = E_{t-1} \cup \{o_t\}$ be the current candidate evidence set, the original question and the current evidence candidates form the belief state $b_t$ as

$$
b_{t} = \Phi\left(h_{t}\right) = \langle q, C_{t} \rangle = \langle q, E_{t-1} \cup \left\{o_{t}\right\} \rangle. \tag{1}
$$

At the beginning, the belief state $b_{0}$ is initialized to $\langle q, \varnothing \rangle$, and the evidence set $E_{0}$ is initialized to $\varnothing$.

To maintain the essential evidence set $E_{t}$, we use a trainable scoring function $\phi(p \mid b_{t})$ to judge each evidence candidate $p \in C_{t}$. Specifically, each passage is represented by the contextual representation of its special token [SOP], encoded by $\Psi^{belief}$. The representation of each candidate is then projected into a score through a linear layer. Besides, we use a pseudo passage $p_{0}$, represented by [NONE], to indicate the dynamic threshold of the evidence set. In this way, after step $t$, the evidence set is updated as

$$
E_{t} = \left\{p_{i} \mid \phi\left(p_{i} \mid b_{t}\right) > \phi\left(p_{0} \mid b_{t}\right), p_{i} \in C_{t}\right\}. \tag{2}
$$

It is worth noting that these evidence candidates are scored jointly, since they are encoded together in the same input, unlike conventional rerankers that score each passage separately.

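Concretely, the update in Eq. (2) keeps exactly the candidates that score above the pseudo passage $p_0$. A minimal sketch, where the scores are placeholders for the outputs of $\phi$:

```python
def update_evidence(candidates, scores, threshold_score):
    """Keep candidates scoring above the dynamic threshold phi(p_0 | b_t).

    `candidates` and `scores` are aligned lists; `threshold_score` is the
    score of the pseudo passage p_0. All values here are illustrative.
    """
    return [p for p, s in zip(candidates, scores) if s > threshold_score]
```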
# 3.2.2 Policy Module

The policy module $\Pi$ decides the next action $a_{t}$ based on the current belief state $b_{t}$. In this paper, we equip the agent with three retrieval functions and one answer function, so the action space $\mathcal{A}$ consists of three types of retrieval actions and one type of answer action. However, unlike the finite space of executable functions $\mathcal{F}$, the space of function arguments $\mathcal{U}$ includes all possible natural-language queries and answers. To narrow the search space, for each executable function we employ a suggester to propose a plausible query or answer as the argument passed to that function. Finally, we apply an action scoring function over the narrowed action space and select the action with the highest score.

Equipped Functions Formally, the space of executable functions is defined as $\mathcal{F} = \{f_s, f_d, f_l, f_o\}$.

Among them, $f_{o}$ is the answer function used to reply to the question; the rest are three distinct off-the-shelf retrieval functions (RFs) used to explore the corpus. $f_{s}$ is a sparse RF, implemented as BM25 (Robertson and Zaragoza, 2009). It performs well when the query is concise and contains highly selective keywords, but it often fails to capture the semantics of the query. $f_{d}$ is a dense RF, implemented as MDR (Xiong et al., 2021) for multi-hop questions and DPR (Karpukhin et al., 2020) for single-hop questions. Dense RFs can capture lexical variations and semantic relationships, but they struggle with out-of-vocabulary words. $f_{l}$ is a link RF, implemented via hyperlinks. When hyperlink markup is available in a source passage, it readily maps a query (i.e., an anchor text) to the target passage.

Argument Generation The space of function arguments $\mathcal{U}$, composed of textual queries and answers, is too large for an exhaustive search due to the complexity of natural language. To reduce the search complexity, inspired by Yao et al. (2020), we employ four argument generators to generate the most plausible query/answer for the equipped functions.

$g_{o}$ is a trainable reading comprehension model for $f_{o}$. It is a span extractor built upon the contextual representations output by the encoder $\Psi^{policy}$. Like conventional extractive reading comprehension models (Yang et al., 2018; Clark et al., 2020), $g_{o}$ uses the contextual representations to calculate the start and end positions of the most plausible answer $u_{o}$. If the current context $C_t$ is insufficient to answer the question, the special token [NONE] is extracted.

$g_{s}$ is a query reformulation model for $f_{s}$. In this work, we directly employ the well-trained query reformulator from Qi et al. (2019) for multi-hop questions, which takes the belief state $b_{t}$ as input and outputs a span of the input sequence as the sparse query $u_{s}$. For single-hop questions, since there is no off-the-shelf multi-step query reformulator, we leave $g_{s}$ as an identity function that returns the original question directly. In this case, requesting the same RF multiple times is equivalent to traversing the retrieval list of the original question.

$g_{d}$ is a query reformulator for $f_{d}$. For multi-hop questions, $g_{d}$ concatenates the question $q$ and the highest-scoring passage in the evidence set $E_{t}$ as the dense query $u_{d}$, matching the input of MDR (Xiong et al., 2021). If $E_{t}$ is empty, $u_{d}$ is simply the question $q$. Like $g_{s}$, $g_{d}$ leaves single-hop questions unchanged.

$g_{l}$ is a trainable multi-class classifier for $f_{l}$. It selects the most promising anchor text from the belief state $b_{t}$. To allow rejecting all anchors, [NONE] is also treated as a candidate anchor. $g_{l}$ shares the encoder $\Psi^{policy}$, where each anchor is represented by the average of the contextual representations of its tokens. On top of $\Psi^{policy}$, a linear layer projects the hidden representations of candidate anchors to real values, and the anchor with the highest value is selected as the link query $u_{l}$.

In this way, the action space is narrowed down to $\check{A} = \{\langle f_s,u_s\rangle, \langle f_d,u_d\rangle, \langle f_l,u_l\rangle, \langle f_o,u_o\rangle\}$.

Action Selection The action scoring function $\pi$ is also built upon the output of $\Psi^{policy}$. To score an action $\langle f, u \rangle$ for the current belief state $b_{t}$, an additional two-layer $(3d \times 4d \times 1)$ MLP, with a ReLU activation in between, projects the concatenated representations of $b_{t}$, the executable function $f$, and the function argument $u$, i.e., $\mathbf{v}_{[\mathrm{CLS}]}$, $\mathbf{w}_{f}$, and $\mathbf{v}_{u}$, into a real value. $\mathbf{w}_{f} \in \mathbb{R}^{d}$ is a trainable embedding for each executable function, with the same dimension as the token embeddings. $\mathbf{v}_{u}$ is specific to each function: since $u_{s}$, $u_{l}$, and $u_{o}$ have explicit text spans in $b_{t}$, their $\mathbf{v}_{u}$ are the averages of their token representations. As for $u_{d}$, if $g_{d}$ does not expand the original question, $\mathbf{v}_{u_{d}}$ is the contextual representation of [NONE]; otherwise, $\mathbf{v}_{u_{d}}$ is the [SOP] representation of the passage concatenated to the question.

In short, the next action is selected from the narrowed action space $\check{A}$ by the scoring function $\pi$,

$$
a_{t} = \Pi(b_{t}) = \underset{a \in \check{A}}{\arg\max}\, \pi(a \mid b_{t}). \tag{3}
$$

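The scoring head can be sketched as follows. This is a simplified stand-in, not the paper's implementation: `v_cls`, `w_f`, and `v_u` are placeholder vectors, the MLP weights are handcrafted rather than trained, and the hidden size is arbitrary.

```python
def mlp_score(x, W1, b1, W2, b2):
    """Two-layer MLP (3d -> 4d -> 1) with a ReLU in between."""
    h = [max(0.0, sum(wi * xi for wi, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    return sum(w * hi for w, hi in zip(W2, h)) + b2

def select_action(v_cls, actions, params):
    """Score each candidate action <f, u> by MLP([v_cls; w_f; v_u])
    and return the argmax over the narrowed action space (Eq. 3)."""
    W1, b1, W2, b2 = params
    best, best_score = None, float("-inf")
    for f, u, w_f, v_u in actions:
        x = v_cls + w_f + v_u  # concatenation, total length 3d
        s = mlp_score(x, W1, b1, W2, b2)
        if s > best_score:
            best, best_score = (f, u), s
    return best
```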
# 3.3 Training

In the agent, in addition to the encoders $\Psi^{belief}$ and $\Psi^{policy}$, we need to train the evidence scoring function $\phi$, the link classifier $g_{l}$, the answer extractor $g_{o}$, and the action scoring function $\pi$, whose losses are $L_{\phi}$, $L_{l}$, $L_{o}$, and $L_{\pi}$, respectively. Since the policy module depends on the belief module, we train the agent jointly with the following loss,

$$
L = L_{\phi} + L_{l} + L_{o} + L_{\pi}. \tag{4}
$$

Unlike $\phi$, $g_{l}$, and $g_{o}$, which can be trained with supervised learning from the human annotations in QA datasets, the supervision signal for $\pi$ is hard to derive directly from QA datasets. Although policies are usually trained via reinforcement learning, reinforcement learning algorithms (Sutton et al., 2000; Mnih et al., 2015) are often sensitive to the quality of the reward function. For a complex task, the reward function $R$ is hard to specify and exhausting to tune. Inspired by Choudhury et al. (2017), we explore the use of imitation learning (IL): we query a model-based oracle online and imitate the action $a^{\star}$ chosen by the oracle, which avoids the hassle of designing $R$ and solves the POMDP in the fashion of supervised learning. Thus, the loss of $\pi$ is defined as the cross entropy,

$$
L_{\pi} = -\log \frac{e^{\pi(a^{\star} \mid b)}}{\sum_{a \in \check{A}} e^{\pi(a \mid b)}}, \tag{5}
$$

where $b$ is the belief state of the agent.

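Eq. (5) is an ordinary softmax cross-entropy over the four candidate actions. A small sketch, where the scores stand in for $\pi(a \mid b)$:

```python
import math

def imitation_loss(action_scores, oracle_action):
    """Cross-entropy of the oracle's action under the softmax over the
    narrowed action space (Eq. 5). `action_scores` maps action -> pi(a|b)."""
    log_z = math.log(sum(math.exp(s) for s in action_scores.values()))
    return log_z - action_scores[oracle_action]
```

With two actions scored equally, the loss reduces to log 2, as expected for a uniform softmax.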
The link classifier $g_{l}$ and the answer extractor $g_{o}$ are also optimized with multi-class cross-entropy losses. For $g_{l}$, whose loss we denote $L_{l}$, the classification label is the anchor text that links to a gold supporting passage; if there is no such anchor, the pseudo hyperlink [NONE] is labeled. $g_{o}$ is trained as a classifier over start and end positions following previous work (Clark et al., 2020), with loss $L_{o}$. Considering the belief state $b = \langle q, \{p_{1}, p_{2}, \dots, p_{|C|}\} \rangle$, the ListMLE (Xia et al., 2008) ranking loss of the evidence scoring function $\phi$ is defined as the negative log-likelihood of the ground-truth permutation,

$$
L_{\phi}(\boldsymbol{y}, b) = -\log P\left(\tau_{\boldsymbol{y}} \mid \left\{\phi\left(p_{i} \mid b\right)\right\}_{i=0}^{|C|}\right), \tag{6}
$$

where $\boldsymbol{y}$ is the relevance label of $\{p_0, p_1, \dots, p_{|C|}\}$ and $\tau_{\boldsymbol{y}}$ is their ground-truth permutation. To learn the dynamic threshold $\phi(p_0 \mid b)$, we set the relevance label of the pseudo passage $p_0$ to $\boldsymbol{y}_0 = 0.5$, and passages in $C$ are labeled 1/0 according to whether they are gold supporting passages.

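Under these labels, the ListMLE term is the Plackett-Luce log-likelihood of the permutation that sorts passages by relevance (gold passages, then the pseudo passage $p_0$, then negatives). A sketch, assuming ties in $\boldsymbol{y}$ are broken by the given order:

```python
import math

def listmle_loss(scores, labels):
    """Negative log-likelihood of the ground-truth permutation (Eq. 6).

    `scores[i]` is phi(p_i | b); `labels[i]` is its relevance label
    (1 for gold evidence, 0.5 for the pseudo passage p_0, 0 for negatives).
    """
    order = sorted(range(len(scores)), key=lambda i: -labels[i])
    loss = 0.0
    for k, i in enumerate(order):
        # probability that p_i is picked first among the remaining items
        denom = sum(math.exp(scores[j]) for j in order[k:])
        loss -= scores[i] - math.log(denom)
    return loss
```

The loss is minimized when the scores decrease in the same order as the relevance labels.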
Model-based Oracle The model-based oracle has full access to the environment and can foresee the gold evidence and answer of every question, which means it can infer the rank of a supporting passage in the retrieval list of any retrieval action. Given a state, the oracle can therefore select a near-optimal action from the candidates according to a greedy policy $\pi^{\star}$. Specifically, if all gold evidence has been collected and the argument of the answer action is a correct answer, the oracle selects the answer action. Otherwise, it greedily selects the retrieval action that gathers a missing piece of supporting evidence in the fewest steps.

Belief States Sampling We train the agent on sampled belief states instead of long trajectories. In every epoch, one belief state is sampled for each question. To sample a belief state $\langle q, C \rangle$, we first uniformly sample a subset of $q$'s gold evidence as $C$, which may be empty. However, at test time, the candidate evidence set $C$ will not contain only gold evidence. To alleviate this mismatch between the training and testing state distributions, we inject a few negative passages into $C$ and shuffle it. We treat the first passage in the candidate set as the observation, and the others as evidence collected before.

The distribution of injected negative passages can affect test performance. In this work, for simplicity, we sample zero to two passages from the top-ranked negative passages in the retrieval lists of $f_{s}$, $f_{d}$, and $f_{l}$.

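The sampling procedure can be sketched as follows. This is a simplified illustration; the negative-passage pool and the uniform subset sampling (inclusion probability 0.5 per gold passage) are our assumptions about the concrete realization.

```python
import random

def sample_belief_state(question, gold_evidence, negative_pool, rng):
    """Sample a training belief state <q, C>: a uniform subset of gold
    evidence plus 0-2 injected negatives, shuffled. The first passage is
    treated as the observation o_t, the rest as the evidence set E_{t-1}."""
    candidates = [p for p in gold_evidence if rng.random() < 0.5]
    candidates += rng.sample(negative_pool, rng.randint(0, 2))
    rng.shuffle(candidates)
    observation = candidates[0] if candidates else None
    evidence = candidates[1:]
    return question, observation, evidence
```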
# 4 Experiments

We evaluate AISO and baselines on two Wikipedia-sourced benchmarks. We first introduce the experimental setup, then report results on evidence gathering and question answering, followed by detailed analyses.

# 4.1 Experimental Setup

Data HotpotQA (Yang et al., 2018) is a multi-hop QA benchmark; we focus on its fullwiki (open-domain) setting<sup>1</sup>. It requires gathering two supporting passages (paragraphs) to answer a question, given the introductory (first) paragraphs of 5M Wikipedia articles dumped on October 1, 2017.

SQuAD Open (Chen et al., 2017) is a single-hop QA benchmark whose questions come from the SQuAD dataset (Rajpurkar et al., 2016) and can be answered based on a single passage. We preprocess the Wikipedia dump of December 21, 2016 and extract hyperlinks using WikiExtractor<sup>2</sup>. Following Karpukhin et al. (2020), we split articles into disjoint passages, resulting in 20M passages in total. We add two extra hyperlinks to each passage, one linking to the previous passage in the article and the other to the next.

Metrics To test whether the top-2 passages in the evidence set exactly cover both gold supporting passages, we use Supporting Passage Exact Match (P EM) as the evaluation metric, following Asai et al. (2020). To evaluate answer extraction, we use EM and F1 as our metrics, following Yang et al. (2018).

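A minimal sketch of the P EM metric as described above (our reading of the definition; the official evaluation script may differ in details):

```python
def supporting_passage_em(evidence_ranked, gold_supporting):
    """Return 1.0 if the top-2 passages of the ranked evidence set exactly
    cover both gold supporting passages, else 0.0."""
    return float(set(evidence_ranked[:2]) == set(gold_supporting))
```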
Implementation Details For sparse retrieval, we index all passages in the corpus with Elasticsearch and implement BM25 following Qi et al. (2019)<sup>3</sup>. For dense retrieval, we use the trained passage and query encoders from Karpukhin et al. (2020)<sup>4</sup> and Xiong et al. (2021)<sup>5</sup> and index all passage vectors offline using FAISS (Johnson et al., 2019). During training, we use an HNSW-based index for efficient low-latency retrieval; at test time, we use an exact inner-product search index for better retrieval results. For link retrieval, we use the filtered hyperlinks whose targets are other articles in the dump.

Based on Huggingface Transformers (Wolf et al., 2020), we use ELECTRA (Clark et al., 2020) ($d = 768/1024$ for base/large)<sup>6</sup> to initialize our encoders $\Psi^{belief}$ and $\Psi^{policy}$. The maximum number of passages input to the encoders is set to 3, and the length of the input tokens is limited to

<table><tr><td>Strategy</td><td>Method</td><td>P EM</td><td># read</td></tr><tr><td rowspan="2">$f_s$</td><td>BM25</td><td>11.11</td><td>2</td></tr><tr><td>BM25 + Reranker</td><td>29.60</td><td>20</td></tr><tr><td>$f_d$</td><td>DPR (Karpukhin et al., 2020)</td><td>14.18</td><td>2</td></tr><tr><td rowspan="2">$f_s \circ f_l$</td><td>Semantic Retrieval* (Nie et al., 2019)</td><td>69.35</td><td>39.4</td></tr><tr><td>Entity Centric IR*</td><td>34.90</td><td>-</td></tr><tr><td>$f_s \circ f_s$</td><td>GoldEn Retriever (Qi et al., 2019)</td><td>47.77</td><td>10</td></tr><tr><td rowspan="3">$f_d \circ f_d$</td><td>MDR (Xiong et al., 2021)</td><td>64.52</td><td>2</td></tr><tr><td>MDR + Reranker†*</td><td>81.20</td><td>≥200</td></tr><tr><td>Baleen†* (Khattab et al., 2021)</td><td>86.70</td><td>-</td></tr><tr><td rowspan="3">$f_s^n$</td><td>CogQA* (Ding et al., 2019)</td><td>57.80</td><td>-</td></tr><tr><td>DDRQA†* (Zhang et al., 2020)</td><td>79.80</td><td>-</td></tr><tr><td>IRRR†* (Qi et al., 2020)</td><td>84.10</td><td>≥150</td></tr><tr><td rowspan="4">$f_s \circ f_l^{n-1}$</td><td>GRR†* (Asai et al., 2020)</td><td>75.70</td><td>≥500</td></tr><tr><td>HopRetriever†* (Li et al., 2021)</td><td>82.54</td><td>≥500</td></tr><tr><td>HopRetriever-plus†*</td><td>86.94</td><td>&gt;500</td></tr><tr><td>TPRR†* (Xinyu et al., 2021)</td><td>86.19</td><td>≥500</td></tr><tr><td>$(f_s \| f_d)^n$</td><td>DrKit* (Dhingra et al., 2020)</td><td>38.30</td><td>-</td></tr><tr><td rowspan="2">$(f_s|f_d|f_l)_{\Pi}$</td><td>AISO$_{base}$</td><td>85.69</td><td>36.7</td></tr><tr><td>AISO$_{large}$</td><td>88.17</td><td>35.7</td></tr></table>

Table 1: Evidence-gathering performance and reading cost on the HotpotQA fullwiki development set. The symbol $\dagger$ denotes baselines that use the large version of pretrained language models, comparable to our $\mathrm{AISO}_{\mathrm{large}}$. Results with $*$ are from published papers; the others are our implementations. The symbol $\circ$ denotes applying RFs sequentially, $f^n$ denotes applying the RF $f$ multiple times, $\|$ denotes combining the results of different RFs, and $(\cdot|\cdot)_{\Pi}$ means choosing one of the RFs at each step according to the policy $\Pi$.

512. To avoid truncating high-confidence passages, we input the evidence passages in descending order of their belief scores from the previous step.

To accelerate training, $\Psi^{belief}$ and $\Psi^{policy}$ share parameters for the first 24 epochs and are trained separately for the next 6 epochs. The batch size is 32, and we use Adam with a learning rate of $2 \times 10^{-5}$. To select the best agent (QA model), we first save several checkpoints that perform well on heuristic single-step metrics, such as action accuracy, and then choose the one that performs best over the whole process on the development set. At test time, the number of interaction steps is limited to $T$; we set $T = 1000$ unless specified otherwise. Once the agent has exhausted its step budget, it is forced to answer the question.

# 4.2 Results

Evidence Gathering We first evaluate performance and reading cost on evidence gathering, illustrating the effectiveness and efficiency of AISO. In Table 1, we split evidence-gathering methods into different groups according to their

+ <table><tr><td rowspan="3">Method</td><td colspan="6">Dev</td><td colspan="6">Test</td></tr><tr><td colspan="2">Ans</td><td colspan="2">Sup</td><td colspan="2">Joint</td><td colspan="2">Ans</td><td colspan="2">Sup</td><td colspan="2">Joint</td></tr><tr><td>EM</td><td>F1</td><td>EM</td><td>F1</td><td>EM</td><td>F1</td><td>EM</td><td>F1</td><td>EM</td><td>F1</td><td>EM</td><td>F1</td></tr><tr><td>Semantic Retrieval (Nie et al., 2019)</td><td>46.5</td><td>58.8</td><td>39.9</td><td>71.5</td><td>26.6</td><td>49.2</td><td>45.3</td><td>57.3</td><td>38.7</td><td>70.8</td><td>25.1</td><td>47.6</td></tr><tr><td>GoldEn Retriever (Qi et al., 2019)</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>37.9</td><td>49.8</td><td>30.7</td><td>64.6</td><td>18.0</td><td>39.1</td></tr><tr><td>CogQA (Ding et al., 2019)</td><td>37.6</td><td>49.4</td><td>23.1</td><td>58.5</td><td>12.2</td><td>35.3</td><td>37.1</td><td>48.9</td><td>22.8</td><td>57.7</td><td>12.4</td><td>34.9</td></tr><tr><td>\( DDRQA^† \) (Zhang et al., 2020)</td><td>62.9</td><td>76.9</td><td>51.3</td><td>79.1</td><td>-</td><td>-</td><td>62.5</td><td>75.9</td><td>51.0</td><td>78.9</td><td>36.0</td><td>63.9</td></tr><tr><td>\( IRRR+^†* \) (Qi et al., 2020)</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>66.3</td><td>79.9</td><td>57.2</td><td>82.6</td><td>43.1</td><td>69.8</td></tr><tr><td>MUPPET (Feldman and El-Yaniv, 2019)</td><td>31.1</td><td>40.4</td><td>17.0</td><td>47.7</td><td>11.8</td><td>27.6</td><td>30.6</td><td>40.3</td><td>16.7</td><td>47.3</td><td>10.9</td><td>27.0</td></tr><tr><td>\( MDR^† \) (Xiong et al., 2021)</td><td>62.3</td><td>75.1</td><td>56.5</td><td>79.4</td><td>42.1</td><td>66.3</td><td>62.3</td><td>75.3</td><td>57.5</td><td>80.9</td><td>41.8</td><td>66.6</td></tr><tr><td>\( GRR^† \) (Asai et al., 
2020)</td><td>60.5</td><td>73.3</td><td>49.2</td><td>76.1</td><td>35.8</td><td>61.4</td><td>60.0</td><td>73.0</td><td>49.1</td><td>76.4</td><td>35.4</td><td>61.2</td></tr><tr><td>\( HopRetriever^† \) (Li et al., 2021)</td><td>62.2</td><td>75.2</td><td>52.5</td><td>78.9</td><td>37.8</td><td>64.5</td><td>60.8</td><td>73.9</td><td>53.1</td><td>79.3</td><td>38.0</td><td>63.9</td></tr><tr><td>\( HopRetriever-plus^† \) (Li et al., 2021)</td><td>66.6</td><td>79.2</td><td>56.0</td><td>81.8</td><td>42.0</td><td>69.0</td><td>64.8</td><td>77.8</td><td>56.1</td><td>81.8</td><td>41.0</td><td>67.8</td></tr><tr><td>EBS-Large*</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>66.2</td><td>79.3</td><td>57.3</td><td>84.0</td><td>42.0</td><td>70.0</td></tr><tr><td>\( TPRR^†* \) (Xinyu et al., 2021)</td><td>67.3</td><td>80.1</td><td>60.2</td><td>84.5</td><td>45.3</td><td>71.4</td><td>67.0</td><td>79.5</td><td>59.4</td><td>84.3</td><td>44.4</td><td>70.8</td></tr><tr><td>\( AISO_{base} \)</td><td>63.5</td><td>76.5</td><td>55.1</td><td>81.9</td><td>40.2</td><td>66.9</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>\( AISO_{large} \)</td><td>68.1</td><td>80.9</td><td>61.5</td><td>86.5</td><td>45.9</td><td>72.5</td><td>67.5</td><td>80.5</td><td>61.2</td><td>86.0</td><td>44.9</td><td>72.0</td></tr></table>
315
+
316
+ Table 2: Answer extraction and supporting sentence identification performance on HotpotQA fullwiki. The methods with $\dagger$ use the large version of pretrained language models, comparable to $\mathrm{AISO}_{\mathrm{large}}$ . The results marked with $*$ are from the official leaderboard; the others originate from the published papers.
317
+
318
+ <table><tr><td>Method</td><td>EM</td><td>F1</td><td># read</td></tr><tr><td>DrQA (Chen et al., 2017)</td><td>27.1</td><td>-</td><td>5</td></tr><tr><td>Multi-passage BERT (Wang et al., 2019b)</td><td>53.0</td><td>60.9</td><td>100</td></tr><tr><td>DPR (Karpukhin et al., 2020)</td><td>29.8</td><td>-</td><td>100</td></tr><tr><td>BM25+DPR (Karpukhin et al., 2020)</td><td>36.7</td><td>-</td><td>100</td></tr><tr><td>Multi-step Reasoner (Das et al., 2019a)</td><td>31.9</td><td>39.2</td><td>5</td></tr><tr><td>MUPPET (Feldman and El-Yaniv, 2019)</td><td>39.3</td><td>46.2</td><td>45</td></tr><tr><td>GRR† (Asai et al., 2020)</td><td>56.5</td><td>63.8</td><td>≥ 500</td></tr><tr><td>SPARTA† (Zhao et al., 2021)</td><td>59.3</td><td>66.5</td><td>-</td></tr><tr><td>IRRR† (Qi et al., 2020)</td><td>56.8</td><td>63.2</td><td>≥ 150</td></tr><tr><td>\( AISO_{large} \)</td><td>59.5</td><td>67.6</td><td>24.8</td></tr></table>
319
+
320
+ Table 3: Question answering performance on the SQuAD Open benchmark. † denotes methods that use large pretrained language models comparable to $\mathrm{AISO}_{\mathrm{large}}$ .
321
+
322
+ strategies. Moreover, the first three groups are traditional pipeline approaches, while the others are iterative approaches.
323
+
324
+ For effectiveness, we can conclude that 1) almost all the iterative approaches perform better than the pipeline methods, and 2) the proposed adaptive information-seeking approach $\mathrm{AISO}_{\mathrm{large}}$ outperforms all previous methods and achieves state-of-the-art performance. Moreover, our $\mathrm{AISO}_{\mathrm{base}}$ model outperforms several baselines that use the large version of pretrained language models, such as HopRetriever, GRR, IRRR, DDRQA, and MDR.
325
+
326
+ For efficiency, the cost of answering an open-domain question includes the retrieval cost and the reading cost. Since reading a passage together with the question online is much more expensive than issuing a search, the total cost is linear in # read, reported in the last column of Table 1. # read means
327
+
328
+ the total number of passages read along with the question throughout the process, which equals the adaptive number of steps. We find that the AISO model reads about 35 passages, which is far fewer than the competitive baselines (P EM $>80$ ) that need to read at least 150 passages. That is to say, our AISO model is efficient in practice.
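The cost argument above can be made concrete with a toy cost model (our own sketch; the `read_cost` and `search_cost` constants are illustrative, not measured values from the paper):

```python
# Toy cost model: total online cost is dominated by "# read", because
# reading a passage together with the question is far more expensive than
# issuing a search. The cost constants below are illustrative only.
def total_cost(n_read, n_search, read_cost=10.0, search_cost=1.0):
    """Total online cost of answering one question."""
    return n_read * read_cost + n_search * search_cost

# AISO reads about 35 passages per question; competitive baselines with
# P EM > 80 read at least 150, so AISO stays cheaper even if it issues
# one search per step.
aiso_cost = total_cost(n_read=35, n_search=35)
baseline_cost = total_cost(n_read=150, n_search=3)
```

Under any constants with `read_cost` much larger than `search_cost`, the ranking between systems is decided by # read alone.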
329
+
330
+ Question Answering Benefiting from high-performance evidence gathering, as shown in Tables 2 and 3, AISO outperforms all existing methods across the evaluation metrics on the HotpotQA fullwiki and SQuAD Open benchmarks. This demonstrates that AISO is applicable to both multi-hop and single-hop questions. Notably, on the HotpotQA fullwiki blind test set<sup>7</sup>, $\mathrm{AISO}_{\mathrm{large}}$ significantly outperforms the second-place TPRR (Xinyu et al., 2021) by $2.02\%$ in Sup F1 (supporting sentence identification) and $1.69\%$ in Joint F1.
331
+
332
+ # 4.3 Analysis
333
+
334
+ We conduct a detailed analysis of $\mathrm{AISO}_{\mathrm{base}}$ on the HotpotQA fullwiki development set.
335
+
336
+ The effect of the belief and policy module As shown in the second part of Table 4, we examine the variations of AISO with the oracle evidence scoring function $\phi^{\star}$ or oracle action scoring function $\pi^{\star}$ , which are key components of the belief
337
+
338
+ <table><tr><td>Model</td><td>P EM</td><td>Ans F1</td><td># read</td></tr><tr><td>\( AISO_{base} \)</td><td>85.69</td><td>76.45</td><td>36.64</td></tr><tr><td>w/ \( \phi^* \)</td><td>97.52</td><td>79.99</td><td>40.01</td></tr><tr><td>w/ \( \phi^* + \pi^* \)</td><td>98.88</td><td>80.34</td><td>8.92</td></tr><tr><td>\( f_s^t \)</td><td>68.51</td><td>67.33</td><td>58.74</td></tr><tr><td>\( f_d^t \)</td><td>79.80</td><td>72.91</td><td>68.63</td></tr><tr><td>\( (f_d|f_l)_{\Pi}^n \)</td><td>83.97</td><td>74.93</td><td>61.41</td></tr><tr><td>\( (f_s|f_l)_{\Pi}^n \)</td><td>82.44</td><td>74.44</td><td>37.76</td></tr><tr><td>\( (f_s|f_d)_{\Pi}^n \)</td><td>79.66</td><td>73.36</td><td>42.01</td></tr></table>
339
+
340
+ Table 4: Analysis experiments on HotpotQA fullwiki.
341
+
342
+ and policy module. When we replace our learned evidence scoring function with $\phi^{\star}$ , which identifies supporting passages perfectly, the performance increases substantially while the reading cost changes little. This means that the belief module has more impact on performance than on cost. If we further replace the learned $\pi$ with $\pi^{\star}$ , the cost decreases sharply, which shows that a good policy can greatly improve efficiency.
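The oracle substitution can be sketched as follows (a minimal illustration with invented function names, not the paper's code): the learned evidence scorer is swapped for an oracle that always ranks gold supporting passages first, isolating the belief module's headroom from the policy's.

```python
# Stand-in for the trained evidence scorer phi: crude word-overlap score.
def learned_phi(passage, question):
    return len(set(question.split()) & set(passage.split()))

# Oracle phi*: scores gold supporting passages highest, everything else zero.
def make_oracle_phi(gold_passages):
    def phi_star(passage, question):
        return 1.0 if passage in gold_passages else 0.0
    return phi_star

# The ablation keeps the rest of the pipeline fixed and only swaps the scorer.
def rank_evidence(passages, question, phi):
    return sorted(passages, key=lambda p: phi(p, question), reverse=True)
```

Comparing `rank_evidence(..., learned_phi)` against `rank_evidence(..., phi_star)` on the same questions mirrors the "w/ $\phi^*$" row of Table 4.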
343
+
344
+ The impact of retrieval functions As shown in the last part of Table 4, using a single RF, such as $f_{s}^{t}$ or $f_{d}^{t}$ , leads to poor performance and low efficiency. Moreover, removing any RF degrades performance, which illustrates that all RFs contribute. Specifically, although the link RF $f_{l}$ cannot be used alone, it contributes the most to both performance and efficiency. Besides, the sparse RF $f_{s}$ may be better at shortening the information-seeking process than the dense RF $f_{d}$ , since removing $f_{s}$ from the action space increases the number of read passages from 36.64 to 61.41. We conjecture this is because $f_{s}$ can rank evidence that matches a salient query very high.
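Ablating a retrieval function amounts to removing it from the agent's action space. A minimal sketch (the names `f_s`, `f_d`, `f_l` follow the paper; the code itself is our illustration):

```python
# Build the action space as (retrieval_function, target) pairs: the sparse
# and dense RFs take candidate queries, while the link RF follows anchors.
def build_action_space(queries, links, rfs=("f_s", "f_d", "f_l")):
    actions = []
    for rf in rfs:
        targets = links if rf == "f_l" else queries
        actions.extend((rf, t) for t in targets)
    return actions

full = build_action_space(["q1", "q2"], ["anchor1"])
# Ablation as in Table 4: drop the sparse RF from the action space.
no_sparse = build_action_space(["q1", "q2"], ["anchor1"], rfs=("f_d", "f_l"))
```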
345
+
346
+ The impact of the maximum number of steps As shown in Figure 3, as the step limit $T$ is relaxed, $\mathrm{AISO}_{\mathrm{base}}$ can filter out negative passages and eventually observe low-ranked evidence through more steps, so its performance improves and tends to converge. The cost, however, is that more passages must be read. Besides, once $T$ exceeds 1000, only a few questions (about $1\%$ ) benefit from the subsequent steps.
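The role of the step limit $T$ can be illustrated with a minimal seeking loop (a hypothetical sketch; `toy_policy` is invented): the number of passages read equals the number of steps actually taken, capped at $T$.

```python
# Step-limited information seeking: the agent acts until it answers or the
# limit is reached, so "# read" is the adaptive number of steps taken.
def seek(policy, max_steps):
    evidence = []
    for step in range(1, max_steps + 1):
        action, payload = policy(evidence)
        if action == "ANSWER":
            return payload, step
        evidence.append(payload)  # read one more retrieved passage
    return None, max_steps  # forced to stop at the limit T

# Toy policy: answer once two passages have been read.
def toy_policy(evidence):
    if len(evidence) >= 2:
        return "ANSWER", "42"
    return "RETRIEVE", "passage-%d" % len(evidence)
```

With `max_steps=10` the toy agent answers at step 3; with `max_steps=2` it is cut off before it can answer, mirroring how a tight $T$ caps performance.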
347
+
348
+ The ability to recover from mistakes We count three types of mistakes in gathering evidence on the HotpotQA development set. In the process of collecting evidence for 7405 questions, false evidence was added into the evidence set for 1061 questions, true evidence was missed for 449 questions, and
349
+
350
+ ![](images/350f7921757e7a2541fc68291b08d7674b7424ab7b4dfa2bcb4107e917359f13.jpg)
351
+ Figure 3: Performance and cost of $\mathrm{AISO_{base}}$ on the HotpotQA development set with different step limits.
352
+
353
+ true evidence was deleted from the evidence set for 131 questions. We find that AISO recovered from $17.7\%$ , $43.9\%$ , and $35.9\%$ of these three types of errors respectively, which implies that even without beam search, $\mathrm{AISO}_{\mathrm{base}}$ can make up for previous mistakes to some extent. Besides, false evidence is the most harmful to evidence gathering and the most difficult to remedy.
354
+
355
+ # 5 Conclusion and Future Work
356
+
357
+ This work presents an adaptive information-seeking approach for open-domain question answering, called AISO. It models the open-domain QA task as a POMDP, where the environment contains a large corpus and the agent sequentially selects a retrieval function and reformulates the query to collect evidence. AISO achieves state-of-the-art results on two public datasets, which demonstrates the necessity of different retrieval functions for different questions. In the future, we will explore other adaptive retrieval strategies, such as directly optimizing various information-seeking metrics with reinforcement learning techniques.
358
+
359
+ # Ethical Considerations
360
+
361
+ We honor and support the ACL Code of Ethics. This paper focuses on information seeking and question answering, which aim to answer questions in the open-domain setting. These techniques can be widely used in search engines and QA systems, and can help people find information more accurately and efficiently. The datasets we used in this paper are all from previously published works and do not involve privacy or ethical issues.
362
+
363
+ # Acknowledgements
364
+
365
+ This work was supported by the National Natural Science Foundation of China (NSFC) under Grants No. 61906180, No. 61773362 and No. 91746301, and the National Key R&D Program of China under Grant No. 2020AAA0105200. The authors would like to thank Changying Hao for valuable suggestions on this work.
366
+
367
+ # References
368
+
369
+ Akari Asai, Kazuma Hashimoto, Hannaneh Hajishirzi, Richard Socher, and Caiming Xiong. 2020. Learning to retrieve reasoning paths over wikipedia graph for question answering. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
370
+ Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open-domain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870-1879, Vancouver, Canada. Association for Computational Linguistics.
371
+ Sanjiban Choudhury, Ashish Kapoor, Gireeja Ranade, Sebastian A. Scherer, and Debadeepta Dey. 2017. Adaptive information gathering via imitation learning. In Robotics: Science and Systems 2017, volume 13.
372
+ Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: pretraining text encoders as discriminators rather than generators. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
373
+ Rajarshi Das, Shehzaad Dhuliawala, Manzil Zaheer, and Andrew McCallum. 2019a. Multi-step retriever-reader interaction for scalable open-domain question answering. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.
374
+ Rajarshi Das, Ameya Godbole, Dilip Kavarthapu, Zhiyu Gong, Abhishek Singhal, Mo Yu, Xiaoxiao Guo, Tian Gao, Hamed Zamani, Manzil Zaheer, and Andrew McCallum. 2019b. Multi-step entity-centric information retrieval for multi-hop question answering. In Proceedings of the 2nd Workshop on Machine Reading for Question Answering, pages 113-118, Hong Kong, China. Association for Computational Linguistics.
375
+ Bhuwan Dhingra, Manzil Zaheer, Vidhisha Balachandran, Graham Neubig, Ruslan Salakhutdinov, and William W. Cohen. 2020. Differentiable reasoning over a virtual knowledge base. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
376
+
377
+ Ming Ding, Chang Zhou, Qibin Chen, Hongxia Yang, and Jie Tang. 2019. Cognitive graph for multi-hop reading comprehension at scale. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2694-2703, Florence, Italy. Association for Computational Linguistics.
378
+ Yair Feldman and Ran El-Yaniv. 2019. Multi-hop paragraph retrieval for open-domain question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2296-2309, Florence, Italy. Association for Computational Linguistics.
379
+ Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. Realm: Retrieval augmented language model pre-training. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pages 3929-3938. PMLR.
380
+ Gautier Izacard and Edouard Grave. 2021. Leveraging passage retrieval with generative models for open domain question answering. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 874-880, Online. Association for Computational Linguistics.
381
+ Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2019. Billion-scale similarity search with GPUs. IEEE Transactions on Big Data.
382
+ Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769-6781, Online. Association for Computational Linguistics.
383
+ Omar Khattab, Christopher Potts, and Matei Zaharia. 2021. Baleen: Robust multi-hop reasoning at scale via condensed retrieval. arXiv preprint arXiv:2101.00436.
384
+ Jinhyuk Lee, Seongjun Yun, Hyunjae Kim, Miyoung Ko, and Jaewoo Kang. 2018. Ranking paragraphs for improving answer recall in open-domain question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 565-569, Brussels, Belgium. Association for Computational Linguistics.
385
+ Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6086–6096, Florence, Italy. Association for Computational Linguistics.
386
+ Shaobo Li, Xiaoguang Li, Lifeng Shang, Xin Jiang, Qun Liu, Chengjie Sun, Zhenzhou Ji, and Bingquan Liu. 2021. Hopretriever: Retrieve hops over wikipedia
387
+
388
+ to answer complex questions. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 13279-13287.
389
+ Yankai Lin, Haozhe Ji, Zhiyuan Liu, and Maosong Sun. 2018. Denoising distantly supervised open-domain question answering. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1736-1745, Melbourne, Australia. Association for Computational Linguistics.
390
+ Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. 2015. Human-level control through deep reinforcement learning. nature, 518(7540):529-533.
391
+ Yixin Nie, Songhe Wang, and Mohit Bansal. 2019. Revealing the importance of semantic retrieval for machine reading at scale. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2553-2566, Hong Kong, China. Association for Computational Linguistics.
392
+ Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Lixin Su, and Xueqi Cheng. 2019. Has-qa: Hierarchical answer spans model for open-domain question answering. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6875-6882.
393
+ Peng Qi, Haejun Lee, Oghenetegiri Sido, Christopher D Manning, et al. 2020. Retrieve, rerank, read, then iterate: Answering open-domain questions of arbitrary complexity from text. arXiv preprint arXiv:2010.12527.
394
+ Peng Qi, Xiaowen Lin, Leo Mehr, Zijian Wang, and Christopher D. Manning. 2019. Answering complex open-domain questions through iterative query generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2590-2602, Hong Kong, China. Association for Computational Linguistics.
395
+ Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383-2392, Austin, Texas. Association for Computational Linguistics.
396
+ Stephen Robertson and Hugo Zaragoza. 2009. The probabilistic relevance framework: Bm25 and beyond. Found. Trends Inf. Retr., 3(4):333-389.
397
+ Richard S Sutton, David A McAllester, Satinder P Singh, and Yishay Mansour. 2000. Policy gradient methods for reinforcement learning with function approximation. In Advances in neural information processing systems, pages 1057-1063.
398
+
399
+ Ellen M Voorhees et al. 1999. The trec-8 question answering track report. In Trec, volume 99, pages 77-82. Citeseer.
400
+ Haoyu Wang, Mo Yu, Xiaoxiao Guo, Rajarshi Das, Wenhan Xiong, and Tian Gao. 2019a. Do multi-hop readers dream of reasoning chains? In Proceedings of the 2nd Workshop on Machine Reading for Question Answering, pages 91-97, Hong Kong, China. Association for Computational Linguistics.
401
+ Shuohang Wang, Mo Yu, Xiaoxiao Guo, Zhiguo Wang, Tim Klinger, Wei Zhang, Shiyu Chang, Gerry Tesauro, Bowen Zhou, and Jing Jiang. 2018. R$^3$: Reinforced ranker-reader for open-domain question answering. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32.
402
+ Zhiguo Wang, Patrick Ng, Xiaofei Ma, Ramesh Nallapati, and Bing Xiang. 2019b. Multi-passage BERT: A globally normalized BERT model for open-domain question answering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5878-5882, Hong Kong, China. Association for Computational Linguistics.
403
+ Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
404
+ Fen Xia, Tie-Yan Liu, Jue Wang, Wensheng Zhang, and Hang Li. 2008. Listwise approach to learning to rank: theory and algorithm. In Machine Learning, Proceedings of the Twenty-Fifth International Conference (ICML 2008), Helsinki, Finland, June 5-9, 2008, volume 307 of ACM International Conference Proceeding Series, pages 1192-1199. ACM.
405
+ Zhang Xinyu, Zhan Ke, Hu Enrui, Fu Chengzhen, Luo Lan, Jiang Hao, Jia Yantao, Yu Fan, Dou Zhicheng, Cao Zhao, and Chen Lei. 2021. Answer complex questions: Path ranker is all you need. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '21, New York, NY, USA. Association for Computing Machinery.
406
+ Wenhan Xiong, Xiang Li, Srini Iyer, Jingfei Du, Patrick Lewis, William Yang Wang, Yashar Mehdad, Scott Yih, Sebastian Riedel, Douwe Kiela, and Barlas Oguz. 2021. Answering complex open-domain questions with multi-hop dense retrieval. In International Conference on Learning Representations.
407
+
408
+ Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2369-2380, Brussels, Belgium. Association for Computational Linguistics.
409
+ Shunyu Yao, Rohan Rao, Matthew Hausknecht, and Karthik Narasimhan. 2020. Keep CALM and explore: Language models for action generation in text-based games. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8736-8754, Online. Association for Computational Linguistics.
410
+ Yuyu Zhang, Ping Nie, Arun Ramamurthy, and Le Song. 2020. Ddrqa: Dynamic document reranking for open-domain multi-hop question answering. arXiv preprint arXiv:2009.07465.
411
+ Chen Zhao, Chenyan Xiong, Corby Rosset, Xia Song, Paul N. Bennett, and Saurabh Tiwary. 2020. Transformer-xh: Multi-evidence reasoning with extra hop attention. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
412
+ Tiancheng Zhao, Xiaopeng Lu, and Kyusong Lee. 2021. SPARTA: Efficient open-domain question answering via sparse transformer matching retrieval. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 565-575, Online. Association for Computational Linguistics.
adaptiveinformationseekingforopendomainquestionanswering/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f3f5b006bcfbe5f61d47347dc49b0681e3703551a7fc3b67ddfaabc53cb75a57
3
+ size 378677
adaptiveinformationseekingforopendomainquestionanswering/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3925653a3d5ae35ece96d6b8fefb6ac8bed48b141bf6628f70f0c0f9c56b511f
3
+ size 574977
adaptiveproposalgenerationnetworkfortemporalsentencelocalizationinvideos/51a8b223-472d-4ecb-986e-b1002054a829_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7f1b5f33b841996434ac16c492b0cae68191354144900cd00bf9e51c7cedcbc2
3
+ size 77429
adaptiveproposalgenerationnetworkfortemporalsentencelocalizationinvideos/51a8b223-472d-4ecb-986e-b1002054a829_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ae4b8c8be2723cfcfc97fa339d7124d41fd7d1eec591160c2a765f3cb283ca6d
3
+ size 92029
adaptiveproposalgenerationnetworkfortemporalsentencelocalizationinvideos/51a8b223-472d-4ecb-986e-b1002054a829_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:aa48eb1c354585d5abd194a2333ccdaa7276d4665fc4f79b59b4148403e8ac5f
3
+ size 1545440
adaptiveproposalgenerationnetworkfortemporalsentencelocalizationinvideos/full.md ADDED
@@ -0,0 +1,338 @@
 
 
 
 
1
+ # Adaptive Proposal Generation Network for Temporal Sentence Localization in Videos
2
+
3
+ Daizong Liu $^{1,2*}$ , Xiaoye Qu $^{3*}$ , Jianfeng Dong $^{4}$ , Pan Zhou $^{1\dagger}$
4
+
5
+ <sup>1</sup>The Hubei Engineering Research Center on Big Data Security, School of Cyber Science and Engineering, Huazhong University of Science and Technology
6
+ <sup>2</sup>School of Electronic Information and Communication, Huazhong University of Science and Technology
7
+ <sup>3</sup>Huawei Cloud <sup>4</sup>Zhejiang Gongshang University
8
+
9
+ {dzliu,panzhou}@hust.edu.cn, quxiaoye@huawei.com, dongjf24@gmail.com
10
+
11
+ # Abstract
12
+
13
+ We address the problem of temporal sentence localization in videos (TSLV). Traditional methods follow a top-down framework which localizes the target segment with predefined segment proposals. Although they have achieved decent performance, the proposals are handcrafted and redundant. Recently, the bottom-up framework has attracted increasing attention due to its superior efficiency: it directly predicts the probability of each frame being a boundary. However, the performance of bottom-up models is inferior to their top-down counterparts, as they fail to exploit segment-level interactions. In this paper, we propose an Adaptive Proposal Generation Network (APGN) to maintain segment-level interaction while improving efficiency. Specifically, we first perform a foreground-background classification over the video and regress on the foreground frames to adaptively generate proposals. In this way, the handcrafted proposal design is discarded and redundant proposals are reduced. Then, a proposal consolidation module is further developed to enhance the semantics of the generated proposals. Finally, we locate the target moments with these generated proposals following the top-down framework. Extensive experiments on three challenging benchmarks show that our proposed APGN significantly outperforms previous state-of-the-art methods.
14
+
15
+ # 1 Introduction
16
+
17
+ Temporal sentence localization in videos is an important yet challenging task in natural language processing, which has drawn increasing attention over the last few years due to its vast potential applications in information retrieval (Dong et al., 2019; Yang et al., 2020) and human-computer interaction (Singha et al., 2018). It aims to ground the most relevant video segment according to a given
18
+
19
+ ![](images/06b2fadc2a176776cbd51b58d95132b614c8a682524feda4e82a02e1886c7a01.jpg)
20
+ (a) An example of temporal sentence localization in video
21
+
22
+ ![](images/910145ee7cf6d08baf0b59154a48a3075f87aae8ba7e8928a20c658e404b413d.jpg)
23
+ Figure 1: (a) An example of temporal sentence localization in videos. (b) The Top-Down framework predicts the confidence scores of a large number of pre-defined proposals for ranking. (c) The Bottom-Up framework regresses the probabilities of all frames as start or end boundaries.
24
+
25
+ ![](images/ed7dc48a2d260a91b82ca69712feb30a2df5eb2a0d0793eed63f669d43313274.jpg)
26
+
27
+ sentence query. As shown in Figure 1 (a), most of the video content is irrelevant to the query (background), while only a short segment matches it (foreground). Therefore, video and query information need to be deeply incorporated to distinguish the fine-grained details of different video segments.
28
+
29
+ Most previous works (Gao et al., 2017; Chen et al., 2018; Zhang et al., 2019; Yuan et al., 2019a; Zhang et al., 2020b; Liu et al., 2021, 2020a,b) follow the top-down framework, which pre-defines a large set of segment candidates (a.k.a. proposals) in the video with sliding windows and measures the similarity between the query and each candidate. The best segment is then selected according to the similarity. Although these methods achieve strong performance, they are sensitive to proposal quality and localize slowly due to redundant proposals. Recently, several works (Rodriguez et al., 2020; Zhang et al., 2020a; Yuan et al., 2019b) exploit the bottom-up framework, which directly predicts the probability of each frame being the start or end boundary of the segment. These methods are proposal-free and much more efficient. However, they neglect the rich information between the start and end boundaries and fail to capture segment-level interactions. Thus, the performance of bottom-up models has so far lagged behind that of their top-down counterparts.
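The bottom-up readout described above can be sketched as follows (our illustration): given per-frame start and end probabilities, the predicted segment is the span that maximizes the joint boundary probability.

```python
# Bottom-up span readout: pick the span (s, e) with s <= e that maximizes
# p_start[s] * p_end[e]. O(T^2) exhaustive search for clarity.
def best_span(p_start, p_end):
    best, best_score = (0, 0), -1.0
    for s in range(len(p_start)):
        for e in range(s, len(p_end)):
            score = p_start[s] * p_end[e]
            if score > best_score:
                best, best_score = (s, e), score
    return best
```

Note that this readout treats the two boundaries independently given the frame features, which is exactly the missing segment-level interaction the paragraph points out.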
30
+
31
+ To avoid the inherent drawbacks of proposal design in the top-down framework while maintaining localization performance, in this paper we propose an adaptive proposal generation network (APGN) for an efficient and effective localization approach. Firstly, we perform boundary regression on the foreground frames to generate proposals, where foreground frames are obtained by a foreground-background classification over the entire video. In this way, the noisy responses on the background frames are attenuated, and the generated proposals are more adaptive and discriminative than the pre-defined ones. Secondly, we perform proposal ranking to select the target segment in a top-down manner upon these generated proposals. As the number of proposals is much smaller than in the pre-defined methods, the ranking stage is more efficient. Furthermore, we additionally consider proposal-wise relations to distinguish their fine-grained semantic details before the proposal ranking stage.
32
+
33
+ To achieve the above framework, APGN first generates query-guided video representations after encoding video and query features, and then predicts the foreground frames using a binary classification module. Subsequently, a regression module is utilized to generate a proposal on each foreground frame by regressing the distances from itself to the start and end segment boundaries. After that, each generated proposal contains independent, coarse semantics. To capture higher-level interactions among proposals, we encode proposal-wise features by incorporating both positional and semantic information, and represent these proposals as nodes of a proposal graph for reasoning about correlations among them. Consequently, each updated proposal obtains more fine-grained details for the following boundary refinement process.
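The proposal consolidation step can be illustrated with a toy message-passing round (the paper uses a learned graph convolution; this mean-aggregation variant is only a sketch): proposals are nodes of a fully connected graph, and each node's feature is mixed with the message aggregated from all nodes.

```python
# One round of mean-aggregation message passing over proposal features
# (each feature is a list of floats). Every node receives the mean of all
# node features and averages it with its own feature.
def consolidate(features):
    n, d = len(features), len(features[0])
    mean = [sum(f[k] for f in features) / n for k in range(d)]
    return [[(x + m) / 2.0 for x, m in zip(f, mean)] for f in features]
```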
34
+
35
+ Our contributions are summarized as follows:
36
+
37
+ - We propose an adaptive proposal generation network (APGN) for TSLV task, which adaptively generates discriminative proposals without handcrafted design, thus making localization both effective and efficient.
38
+ - To further refine the semantics of the generated proposals, we introduce a proposal graph to consolidate proposal-wise features by reasoning their higher-order relations.
39
+ - We conduct experiments on three challenging datasets (ActivityNet Captions, TACoS, and Charades-STA), and results show that our proposed APGN significantly outperforms the existing state-of-the-art methods.
40
+
41
+ # 2 Related Work
42
+
43
+ Temporal sentence localization in videos is a new task introduced recently (Gao et al., 2017; Anne Hendricks et al., 2017), which aims to localize the most relevant video segment from a video with sentence descriptions. Various algorithms (Anne Hendricks et al., 2017; Gao et al., 2017; Chen et al., 2018; Zhang et al., 2019; Yuan et al., 2019a; Zhang et al., 2020b; Qu et al., 2020; Yang et al., 2021) have been proposed within the top-down framework, which samples candidate segments from a video first, then integrates the sentence representation with those video segments individually and evaluates their matching relationships. Some of them (Anne Hendricks et al., 2017; Gao et al., 2017) propose to use sliding windows as proposals and then perform a comparison between each proposal and the input query in a joint multi-modal embedding space. To improve the quality of the proposals, (Zhang et al., 2019; Yuan et al., 2019a) pre-cut the video on each frame by multiple pre-defined temporal scales, and directly integrate sentence information with fine-grained video clips for scoring. (Zhang et al., 2020b) further build a 2D temporal map to construct all possible segment candidates by treating each frame as the start or end boundary, and match their semantics with the query information. Although these methods achieve great performance, they are severely limited by the heavy computation on proposal matching/ranking, and are sensitive to the quality of pre-defined proposals.
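The 2D temporal map of Zhang et al. (2020b) enumerates every ordered frame pair as a candidate, so the candidate set grows quadratically with video length, which is exactly the redundancy APGN avoids:

```python
# Enumerate the 2D temporal map: every (start, end) frame pair with
# start <= end is a candidate segment, giving T * (T + 1) / 2 candidates
# for a T-frame video.
def temporal_map_candidates(num_frames):
    return [(s, e) for s in range(num_frames) for e in range(s, num_frames)]
```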
44
+
45
+ Recently, many methods (Rodriguez et al., 2020; Chen et al., 2020; Yuan et al., 2019b; Mun et al., 2020; Zeng et al., 2020; Zhang et al., 2020a; Nan et al., 2021) propose to utilize the bottom-up framework to overcome above drawbacks. They do not rely on the segment proposals and directly select the starting and ending frames by leveraging cross-modal interactions between video and query. Specifically, they predict two probabilities at each frame, which indicate whether this frame is a start or end frame of the ground truth video segment. Although these methods perform segment localization more efficiently, they lose the segment-level interaction, and the redundant regression on background frames may provide disturbing noise for boundary decision, leading to worse localization performance than top-down methods.
46
+
47
+ In this paper, we propose to preserve the segment-level interaction while improving the localization efficiency. Specifically, we design a binary classification module over the entire video to filter out the background responses, which helps the model focus on the discriminative frames. At the same time, we replace the pre-defined proposals with generated ones and utilize a proposal graph for refinement.
+
+ ![](images/c2c7ebde98bd62cd5f28f9be70912de2221b3520fc32b28dfa77a7f335f5fc45.jpg)
+ Figure 2: Overall architecture of APGN. (a) Given a video and a query, we first encode and interact them to obtain query-guided video features. (b) Then, along with regressing boundaries on each frame, we perform foreground-background classification to identify the foreground frames, whose predicted boundaries are taken as the generated segment proposals. (c) We further encode each proposal and refine them using a graph convolutional network. (d) At last, we predict the confidence score and boundary offset for each proposal.
+
+ # 3 The Proposed Method
+
+ # 3.1 Overview
+
+ Given an untrimmed video $V$ and a sentence query $Q$ , the TSLV task aims to localize the start and end timestamps $(\tau_s, \tau_e)$ of the specific video segment referred to by the query. We address this task by adaptively generating proposals. To this end, we first propose a binary classification module to filter out the redundant responses on background frames. Each foreground frame, together with its regressed start-end boundaries, is then taken as a generated segment proposal. In this way, the number of generated proposals is much smaller than the number of pre-defined ones, making the model more efficient. A proposal graph is further developed to refine the proposal features by learning their higher-level interactions. Finally, a confidence score and a boundary offset are predicted for each proposal. Figure 2 illustrates the overall architecture of our APGN.
+
+ # 3.2 Feature Encoders
+
+ Video encoder. Given a video $V$ , we represent it as $V = \{v_{t}\}_{t=1}^{T}$ , where $v_{t}$ is the $t$ -th frame and $T$ is the length of the entire video. We first extract features with a pre-trained network, then employ a self-attention (Vaswani et al., 2017) module to capture the long-range dependencies among video frames, and also utilize a Bi-GRU (Chung et al., 2014) to learn the sequential characteristics. The final video features are denoted as $\mathbf{V} = \{\mathbf{v}_t\}_{t=1}^T \in \mathbb{R}^{T \times D}$ , where $D$ is the feature dimension.
+
+ Query encoder. Given a query $Q = \{q_{n}\}_{n=1}^{N}$ , where $q_{n}$ is the $n$ -th word and $N$ is the length of the query, we first generate the word-level embeddings using GloVe (Pennington et al., 2014), following previous works (Zhang et al., 2019; Zeng et al., 2020), and then employ a self-attention module and a Bi-GRU layer to further encode the query features as $\mathbf{Q} = \{\mathbf{q}_{n}\}_{n=1}^{N} \in \mathbb{R}^{N \times D}$ .
+
+ Video-Query interaction. After obtaining the encoded features $V, Q$ , we utilize a co-attention mechanism (Lu et al., 2019) to capture the cross-modal interactions between video and query features. Specifically, we first calculate the similarity scores between $V$ and $Q$ as:
+
+ $$
+ \boldsymbol{S} = \boldsymbol{V}\left(\boldsymbol{Q}\boldsymbol{W}_{S}\right)^{\mathrm{T}} \in \mathbb{R}^{T \times N}, \tag{1}
+ $$
+
+ where $\mathbf{W}_S\in \mathbb{R}^{D\times D}$ projects the query features into the same latent space as the video. Then, we compute two attention weights as:
+
+ $$
+ \boldsymbol{A} = \boldsymbol{S}_{r}(\boldsymbol{Q}\boldsymbol{W}_{S}) \in \mathbb{R}^{T \times D}, \quad \boldsymbol{B} = \boldsymbol{S}_{r}\boldsymbol{S}_{c}^{\mathrm{T}}\boldsymbol{V} \in \mathbb{R}^{T \times D}, \tag{2}
+ $$
+
+ where $S_{r}$ and $S_{c}$ are the row- and column-wise softmax results of $S$ , respectively. We compose the final query-guided video representation by learning its sequential features as follows:
+
+ $$
+ \widetilde{\boldsymbol{V}} = \operatorname{BiGRU}([\boldsymbol{V}; \boldsymbol{A}; \boldsymbol{V} \odot \boldsymbol{A}; \boldsymbol{V} \odot \boldsymbol{B}]) \in \mathbb{R}^{T \times D}, \tag{3}
+ $$
+
+ where $\widetilde{\boldsymbol{V}} = \{\widetilde{\boldsymbol{v}}_t\}_{t=1}^T$ , $\operatorname{BiGRU}(\cdot)$ denotes the Bi-GRU layers, $[;]$ is the concatenation operation, and $\odot$ is the element-wise multiplication.
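+ As an illustration, the co-attention of Eqs. (1)-(2) can be sketched in a few lines of NumPy; this is a minimal sketch with random placeholder weights, and the Bi-GRU of Eq. (3) is omitted (we only form its concatenated input):

```python
import numpy as np

def co_attention(V, Q, W_S):
    """Query-guided attention of Eqs. (1)-(2): V is (T, D) video features,
    Q is (N, D) query features, W_S is a (D, D) projection matrix."""
    def softmax(x, axis):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    QW = Q @ W_S                # project the query into the video space
    S = V @ QW.T                # (T, N) similarity scores, Eq. (1)
    S_r = softmax(S, axis=1)    # row-wise softmax over words
    S_c = softmax(S, axis=0)    # column-wise softmax over frames
    A = S_r @ QW                # (T, D) query-attended features
    B = S_r @ S_c.T @ V         # (T, D) video-attended features, Eq. (2)
    # Eq. (3) feeds [V; A; V*A; V*B] into a Bi-GRU; here we just return
    # the concatenated input to that recurrent layer.
    return np.concatenate([V, A, V * A, V * B], axis=-1)

T, N, D = 6, 4, 8
rng = np.random.default_rng(0)
fused = co_attention(rng.normal(size=(T, D)), rng.normal(size=(N, D)),
                     rng.normal(size=(D, D)))
print(fused.shape)  # (6, 32)
```

+ The concatenated output has dimension $4D$ per frame, which the Bi-GRU then compresses back to $D$ in the full model.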
+
+ # 3.3 Proposal Generation
+
+ Given the query-guided video features $\widetilde{\boldsymbol{V}}$ , we aim to generate a proposal tuple $(t,l_s^t,l_e^t)$ for each foreground frame $v_{t}$ , where $l_s^t$ and $l_e^t$ denote the distances from frame $v_{t}$ to the starting and ending segment boundaries, respectively. To this end, we first perform binary classification over all frames to distinguish foreground from background, and then treat the foreground frames as positive samples and regress the segment boundaries on them as generated proposals.
+
+ Foreground-Background classification. In the TSLV task, most videos are more than two minutes long, while the lengths of annotated target segments only range from several seconds to one minute (e.g., on the ActivityNet Captions dataset). Therefore, there is much noise from the background frames, which may disturb accurate segment localization. To alleviate this, we first classify the background frames and filter out their responses in the subsequent regression. Using the foreground/background annotations, we design a binary classification module with three fully-connected (FC) layers to predict the class $y_{t}$ of each video frame. Considering the unbalanced foreground/background distribution, we formulate a balanced binary cross-entropy loss as:
+
+ $$
+ \mathcal{L}_{\text{class}} = -\sum_{t=1}^{T_{\text{back}}} \frac{T_{\text{back}}}{T} \log\left(y_{t}\right) - \sum_{t=1}^{T_{\text{fore}}} \frac{T_{\text{fore}}}{T} \log\left(1 - y_{t}\right), \tag{4}
+ $$
+
+ where $T_{\text{fore}}$ and $T_{\text{back}}$ are the numbers of foreground and background frames, and $T$ is the total number of video frames. In this way, we can differentiate between foreground and background frames during both training and testing.
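+ As an illustration of the idea, a class-balanced binary cross-entropy can be sketched as follows. Note this is a sketch, not a transcription of Eq. (4): we use the common inverse-frequency convention (each class term weighted by the other class's frequency), and the names `probs`/`labels` are our own:

```python
import numpy as np

def balanced_bce(probs, labels, eps=1e-8):
    """Class-balanced binary cross-entropy over T frames.
    probs: (T,) predicted foreground probabilities; labels: (T,) 0/1."""
    T = len(labels)
    n_fore = labels.sum()
    n_back = T - n_fore
    # Weight each class by the other class's frequency so that the sparse
    # foreground frames are not drowned out by the background majority.
    w_fore = n_back / T
    w_back = n_fore / T
    loss = -(w_fore * labels * np.log(probs + eps)
             + w_back * (1 - labels) * np.log(1 - probs + eps))
    return loss.mean()

labels = np.array([0, 0, 0, 0, 1, 1], dtype=float)
good = np.array([0.1, 0.1, 0.1, 0.1, 0.9, 0.9])   # confident, correct
bad = np.array([0.9, 0.9, 0.9, 0.9, 0.1, 0.1])    # confident, wrong
print(balanced_bce(good, labels) < balanced_bce(bad, labels))  # True
```

+ With unweighted BCE, a model that always predicts "background" already scores well on such unbalanced data; the class weights remove that shortcut.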
+
+ Boundary regression. With the query-guided video representation $\widetilde{\boldsymbol{V}}$ and the predicted 0-1 binary sequence, we then design a boundary regression module to predict the distance from each foreground frame to the start (or end) frame of the video segment corresponding to the query. We implement this module with three 1D convolution layers with two output channels. Given the predicted distance pair $(l_s^t,l_e^t)$ and the ground-truth distances $(g_s^t,g_e^t)$ , we define the regression loss as:
+
+ $$
+ \mathcal{L}_{\text{reg}} = \frac{1}{T_{\text{fore}}} \sum_{t=1}^{T_{\text{fore}}} \left(1 - \operatorname{IoU}\left(\left(t, l_{s}^{t}, l_{e}^{t}\right), \left(t, g_{s}^{t}, g_{e}^{t}\right)\right)\right), \tag{5}
+ $$
+
+ where $\mathrm{IoU}(\cdot)$ computes the Intersection over Union (IoU) score between the predicted segment and its ground truth. After that, we represent the generated proposals as tuples $\{(t,l_s^t,l_e^t)\}_{t = 1}^{T_{\text{fore}}}$ based on the regression results of the foreground frames.
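+ The 1D IoU underlying Eq. (5) can be sketched as follows; a minimal sketch in which a proposal anchored at frame $t$ with distances $(l_s, l_e)$ spans the interval $(t - l_s, t + l_e)$:

```python
def temporal_iou(t, l_s, l_e, g_s, g_e):
    """1D IoU between the predicted segment (t - l_s, t + l_e) and the
    ground-truth segment (t - g_s, t + g_e), both anchored at frame t."""
    p0, p1 = t - l_s, t + l_e
    q0, q1 = t - g_s, t + g_e
    inter = max(0.0, min(p1, q1) - max(p0, q0))
    union = max(p1, q1) - min(p0, q0)
    return inter / union if union > 0 else 0.0

# The per-frame regression loss of Eq. (5) is 1 - IoU.
print(temporal_iou(10, 2.0, 3.0, 2.0, 3.0))  # 1.0 for a perfect match
print(1 - temporal_iou(10, 2.0, 3.0, 4.0, 3.0))  # positive for a mismatch
```

+ Because both intervals share the anchor $t$, the union never degenerates for valid distances, and the loss smoothly penalizes boundary error on both sides at once.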
+
+ ![](images/a8c660b6dba8f909e8e23e0aecebb93d7174be4a8a3bb972eede6c27a4ed7281.jpg)
+ Figure 3: To distinguish the above three proposals, both positional and semantic relations among proposals need to be considered.
+
+ # 3.4 Proposal Consolidation
+
+ So far, we have generated a number of proposals that is significantly smaller than the number of pre-defined ones in the existing top-down framework, making the final scoring and ranking process much more efficient. To further refine the proposal features for more accurate segment localization, we explicitly model higher-order interactions between the generated proposals to learn their relations. As shown in Figure 3, proposal 1 and proposal 2 contain the same semantics of "blue" and "hops", so we need to model their positional distance to distinguish them and refine their features for better understanding of the phrase "second time". Also, for proposals that are local neighbors (proposals 2 and 3), we have to learn their semantic distance to refine their representations. Therefore, in our APGN, we first encode each proposal feature with both a positional embedding and frame-wise semantic features, and then define a graph convolutional network (GCN) over the proposals for proposal refinement.
+
+ Proposal encoder. For each proposal tuple $(t, l_s^t, l_e^t)$ , we represent its segment boundary as $(t - l_s^t, t + l_e^t)$ . Before aggregating the features of its contained frames within this segment boundary, we first concatenate a position embedding $\boldsymbol{emb}_t^{pos}$ to each frame-wise feature $\widetilde{\boldsymbol{v}}_t$ , in order to inject position information on frame $t$ as follows:
+
+ $$
+ \widetilde{\boldsymbol{v}}_{t}^{\prime} = \left[\widetilde{\boldsymbol{v}}_{t}; \boldsymbol{emb}_{t}^{\text{pos}}\right] \in \mathbb{R}^{1 \times (D + d)}, \tag{6}
+ $$
+
+ where $emb_t^{pos}$ denotes the position embedding of the $t$ -th position, and $d$ is the dimension of $emb_t^{pos}$ . We follow (Vaswani et al., 2017) and use the sine and cosine functions of different frequencies to compose position embeddings:
+
+ $$
+ \boldsymbol{emb}_{t}^{\text{pos}}[2j] = \sin\left(\frac{t}{10000^{2j/d}}\right), \tag{7}
+ $$
+
+ $$
+ \boldsymbol{emb}_{t}^{\text{pos}}[2j+1] = \cos\left(\frac{t}{10000^{2j/d}}\right), \tag{8}
+ $$
+
+ where $2j$ and $2j + 1$ are the even and odd indices of the position embedding. In this way, each dimension of the positional encoding corresponds to a sinusoid, allowing the model to easily learn to attend to absolute positions. Given the frame features $\{\widetilde{\boldsymbol{v}}_t^{\prime}\}_{t = 1}^{T_{\text{fore}}}$ and a proposal segment $(t - l_s^t,t + l_e^t)$ , we encode the feature vector $\boldsymbol{p}_t$ of the $t$ -th proposal by aggregating the features of the frames contained in the segment as:
+
+ $$
+ \boldsymbol{p}_{t} = \operatorname{MLP}_{2}\left(\operatorname{Pool}\left(\operatorname{MLP}_{1}\left(\left[\widetilde{\boldsymbol{v}}_{\lceil t - l_{s}^{t} \rceil}, \dots, \widetilde{\boldsymbol{v}}_{\lceil t + l_{e}^{t} \rceil}\right]\right)\right)\right), \tag{9}
+ $$
+
+ where each MLP has two FC layers and $\operatorname{Pool}(\cdot)$ denotes max-pooling. The frames of each proposal are independently processed by $\operatorname{MLP}_1$ before being pooled (channel-wise) into a single feature vector and passed to $\operatorname{MLP}_2$ , where information from different frames is further combined. Thus, the encoded proposal feature is $\boldsymbol{p}_t \in \mathbb{R}^{1 \times (D + d)}$ .
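+ The MLP-pool-MLP pattern of Eq. (9) can be sketched as follows; for brevity this sketch uses single-layer MLPs (the paper uses two FC layers each), and all weights are random placeholders:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def encode_proposal(frames, W1, b1, W2, b2):
    """MLP -> channel-wise max-pool -> MLP aggregation in the spirit of
    Eq. (9). frames: (L, D) features of the L frames inside one proposal."""
    h = relu(frames @ W1 + b1)      # MLP_1 applied to each frame independently
    pooled = h.max(axis=0)          # channel-wise max over the L frames
    return relu(pooled @ W2 + b2)   # MLP_2 combines the pooled evidence

rng = np.random.default_rng(2)
D, H = 6, 6
frames = rng.normal(size=(4, D))    # 4 frames inside the proposal segment
p = encode_proposal(frames,
                    rng.normal(size=(D, H)), np.zeros(H),
                    rng.normal(size=(H, H)), np.zeros(H))
print(p.shape)  # (6,)
```

+ The pooling step makes the encoding invariant to the number of frames in the segment, so proposals of different lengths map to fixed-size vectors.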
+
+ Proposal graph. We construct a graph over the proposal features $\{\boldsymbol{p}_t\}_{t=1}^{T_{\text{fore}}}$ , where each node of the graph is a proposal associated with both positional and semantic features. We fully connect all node pairs, and define the relation between each proposal pair $(\boldsymbol{p}_t, \boldsymbol{p}_{t'})$ for edge convolution (Wang et al., 2018) as:
+
+ $$
+ \boldsymbol{e}_{t, t^{\prime}} = \operatorname{ReLU}\left(\boldsymbol{p}_{t} \boldsymbol{\theta}_{1} + \left(\boldsymbol{p}_{t^{\prime}} - \boldsymbol{p}_{t}\right) \boldsymbol{\theta}_{2}\right), \tag{10}
+ $$
+
+ where $\theta_{1}$ and $\theta_{2}$ are learnable parameters. We update each proposal feature $p_t$ to $\widehat{p}_t$ as follows:
+
+ $$
+ \widehat{\boldsymbol{p}}_{t} = \operatorname{MaxPool}\left(\boldsymbol{e}_{t}\right), \quad \boldsymbol{e}_{t} = \left\{\boldsymbol{e}_{t, t^{\prime}}\right\}_{t^{\prime} = 1}^{T_{\text{fore}}}. \tag{11}
+ $$
+
+ This GCN module consists of $k$ stacked graph convolutional layers. After the above graph-based proposal consolidation, we obtain the refined proposal features.
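+ One edge-convolution layer (Eqs. 10-11) over a fully connected proposal graph can be sketched as follows; a minimal NumPy version with random placeholder parameters:

```python
import numpy as np

def edge_conv_layer(P, theta1, theta2):
    """One edge-convolution layer over fully connected proposal nodes,
    following Eqs. (10)-(11). P: (M, D) proposal features; theta1, theta2:
    (D, D) learnable parameter matrices."""
    # e[t, t'] = ReLU(p_t @ theta1 + (p_t' - p_t) @ theta2)
    self_term = P @ theta1                   # (M, D), one row per node t
    diff = P[None, :, :] - P[:, None, :]     # (M, M, D): diff[t, t'] = p_t' - p_t
    e = np.maximum(self_term[:, None, :] + diff @ theta2, 0.0)
    # Each node keeps the element-wise max over all of its edges, Eq. (11).
    return e.max(axis=1)                     # (M, D) refined features

rng = np.random.default_rng(1)
P = rng.normal(size=(5, 4))                  # 5 proposals, 4-dim features
P_hat = edge_conv_layer(P, rng.normal(size=(4, 4)), rng.normal(size=(4, 4)))
print(P_hat.shape)  # (5, 4)
```

+ The $(p_{t'} - p_t)$ term means each edge encodes the difference between proposals, so nodes are updated by what distinguishes their neighbors from themselves rather than by raw neighbor features.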
+
+ # 3.5 Localization Head
+
+ After proposal consolidation, we feed the refined features $\widehat{P} = \{\widehat{p}_t\}_{t=1}^{T_{fore}}$ into two separate heads to predict their confidence scores and boundary offsets for proposal ranking and refinement. Specifically, we employ two MLPs on each feature $\widehat{p}_t$ as:
+
+ $$
+ r_{t} = \operatorname{Sigmoid}\left(\operatorname{MLP}_{3}\left(\widehat{\boldsymbol{p}}_{t}\right)\right), \tag{12}
+ $$
+
+ $$
+ \left(\delta_{s}^{t}, \delta_{e}^{t}\right) = \operatorname{MLP}_{4}\left(\widehat{\boldsymbol{p}}_{t}\right), \tag{13}
+ $$
+
+ where $r_t \in (0,1)$ is the confidence score and $(\delta_s^t, \delta_e^t)$ are the offsets. The final predicted segment of proposal $t$ can therefore be represented as $(t - l_s^t + \delta_s^t, t + l_e^t + \delta_e^t)$ . To learn the confidence scoring rule, we first compute the IoU score $o_t$ between each proposal segment and the ground truth $(\tau_s, \tau_e)$ , then adopt the alignment loss function below:
+
+ $$
+ \mathcal{L}_{\text{align}} = -\frac{1}{T_{\text{fore}}} \sum_{t=1}^{T_{\text{fore}}} \left[ o_{t} \log\left(r_{t}\right) + \left(1 - o_{t}\right) \log\left(1 - r_{t}\right) \right]. \tag{14}
+ $$
+
+ Given the ground-truth boundary offsets $(\hat{\delta}_s^t,\hat{\delta}_e^t)$ of proposal $t$ , we also fine-tune its offsets by a boundary loss as:
+
+ $$
+ \mathcal{L}_{b} = \frac{1}{T_{\text{fore}}} \sum_{t=1}^{T_{\text{fore}}} \left[ \mathrm{SL}_{1}\left(\hat{\delta}_{s}^{t} - \delta_{s}^{t}\right) + \mathrm{SL}_{1}\left(\hat{\delta}_{e}^{t} - \delta_{e}^{t}\right) \right], \tag{15}
+ $$
+
+ where $\mathrm{SL}_1(\cdot)$ denotes the smooth L1 loss function.
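+ The two head losses of Eqs. (14)-(15) can be sketched together; a minimal NumPy version over a toy batch of proposals, with all input values invented for illustration:

```python
import numpy as np

def smooth_l1(x):
    """Smooth L1 (Huber) loss, element-wise."""
    x = np.abs(x)
    return np.where(x < 1.0, 0.5 * x ** 2, x - 0.5)

def head_losses(r, o, offsets, gt_offsets, eps=1e-8):
    """Alignment loss (Eq. 14) on confidence scores r against IoU targets o,
    plus boundary loss (Eq. 15) on predicted vs. ground-truth offsets."""
    align = -np.mean(o * np.log(r + eps) + (1 - o) * np.log(1 - r + eps))
    boundary = np.mean(np.sum(smooth_l1(gt_offsets - offsets), axis=1))
    return align, boundary

r = np.array([0.9, 0.2])                       # predicted confidence scores
o = np.array([0.8, 0.1])                       # IoU of each proposal with GT
offsets = np.array([[0.1, -0.2], [1.5, 0.0]])  # predicted (start, end) offsets
gt_offsets = np.zeros((2, 2))                  # ground-truth offsets
align, boundary = head_losses(r, o, offsets, gt_offsets)
print(align > 0 and boundary > 0)  # True
```

+ Using the IoU $o_t$ as a soft target (rather than a hard 0/1 label) lets the confidence head learn a graded ranking of proposals, which is what the final top-n selection relies on.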
+
+ At last, our APGN model is trained end-to-end from scratch using the multi-task loss:
+
+ $$
+ \mathcal{L} = \lambda_{1} \cdot \mathcal{L}_{\text{class}} + \lambda_{2} \cdot \mathcal{L}_{\text{reg}} + \lambda_{3} \cdot \mathcal{L}_{\text{align}} + \lambda_{4} \cdot \mathcal{L}_{b}. \tag{16}
+ $$
+
+ # 4 Experiments
+
+ # 4.1 Datasets and Evaluation
+
+ ActivityNet Captions. This is a large dataset (Krishna et al., 2017) containing 20k videos with 100k language descriptions, focusing on complex human activities in daily life. Following the public split, we use 37,417, 17,505, and 17,031 sentence-video pairs for training, validation, and testing, respectively.
+
+ TACoS. This dataset (Regneri et al., 2013) collects 127 long videos, mainly about cooking scenarios, and thus lacks diversity. We use the same split as (Gao et al., 2017), which has 10,146, 4,589, and 4,083 sentence-video pairs for training, validation, and testing, respectively.
+
+ Charades-STA. (Gao et al., 2017) consists of 9,848 videos of daily life indoors activities. There are 12,408 sentence-video pairs for training and 3,720 pairs for testing.
+
+ Evaluation Metric. Following (Zhang et al., 2019; Zeng et al., 2020), we adopt "R@n, IoU=m" as our evaluation metric, defined as the percentage of queries for which at least one of the top-n selected moments has an IoU larger than m with the ground truth.
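+ The metric can be sketched directly from its definition; a minimal sketch with invented toy predictions:

```python
def recall_at_n(preds, gts, n=1, m=0.5):
    """R@n, IoU=m: the fraction of queries whose top-n predicted segments
    contain at least one with IoU > m against the ground truth.
    preds: per-query lists of ranked (start, end) segments; gts: (start, end)."""
    def iou(a, b):
        inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
        union = max(a[1], b[1]) - min(a[0], b[0])
        return inter / union if union > 0 else 0.0

    hits = sum(
        any(iou(p, gt) > m for p in ranked[:n])
        for ranked, gt in zip(preds, gts)
    )
    return hits / len(gts)

preds = [[(0, 10), (40, 50)], [(5, 9), (0, 4)]]  # ranked segments per query
gts = [(2, 10), (0, 4)]
print(recall_at_n(preds, gts, n=1, m=0.5))  # 0.5
print(recall_at_n(preds, gts, n=2, m=0.5))  # 1.0
```

+ Here the second query's correct segment is ranked second, so it counts as a hit under R@5 (or any n ≥ 2) but not under R@1, which is why the two metrics diverge in the result tables.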
+
+ # 4.2 Implementation Details
+
+ Following (Zhang et al., 2020b; Zeng et al., 2020), for the video input, we apply a pre-trained C3D network on all three datasets to obtain embedded features. We also extract I3D (Carreira and Zisserman, 2017) and VGG (Simonyan and Zisserman, 2014) features on Charades-STA. After that, we apply PCA to reduce the feature dimension to 500 to decrease the number of model parameters. We set the video length to 200 for ActivityNet Captions and TACoS, and to 64 for Charades-STA. For the sentence input, we utilize the GloVe model to embed each word into 300-dimensional features. The dimension $D$ is set to 512, $d$ is set to 256, and the number of graph layers is $k = 2$ . We set the batch size to 64 and train our model with an Adam optimizer for 100 epochs. The initial learning rate is set to 0.0001 and divided by 10 when the loss plateaus. $\lambda_1, \lambda_2, \lambda_3, \lambda_4$ in the loss function are 0.1, 1, 1, 1, decided by the weight magnitudes.
+
+ <table><tr><td>Method</td><td>Feature</td><td>R@1, IoU=0.5</td><td>R@1, IoU=0.7</td><td>R@5, IoU=0.5</td><td>R@5, IoU=0.7</td></tr><tr><td>TGN</td><td>C3D</td><td>28.47</td><td>-</td><td>43.33</td><td>-</td></tr><tr><td>CTRL</td><td>C3D</td><td>29.01</td><td>10.34</td><td>59.17</td><td>37.54</td></tr><tr><td>QSPN</td><td>C3D</td><td>33.26</td><td>13.43</td><td>62.39</td><td>40.78</td></tr><tr><td>CBP</td><td>C3D</td><td>35.76</td><td>17.80</td><td>65.89</td><td>46.20</td></tr><tr><td>SCDM</td><td>C3D</td><td>36.75</td><td>19.86</td><td>64.99</td><td>41.53</td></tr><tr><td>GDP</td><td>C3D</td><td>39.27</td><td>-</td><td>-</td><td>-</td></tr><tr><td>LGI</td><td>C3D</td><td>41.51</td><td>23.07</td><td>-</td><td>-</td></tr><tr><td>VSLNet</td><td>C3D</td><td>43.22</td><td>26.16</td><td>-</td><td>-</td></tr><tr><td>CMIN</td><td>C3D</td><td>43.40</td><td>23.88</td><td>67.95</td><td>50.73</td></tr><tr><td>DRN</td><td>C3D</td><td>45.45</td><td>24.36</td><td>77.97</td><td>50.30</td></tr><tr><td>2DTAN</td><td>C3D</td><td>44.51</td><td>26.54</td><td>77.13</td><td>61.96</td></tr><tr><td>APGN</td><td>C3D</td><td>48.92</td><td>28.64</td><td>78.87</td><td>63.19</td></tr></table>
+
+ # 4.3 Performance Comparison
+
+ Compared methods. We compare our proposed APGN with state-of-the-art methods. We group them into: (1) top-down methods: TGN (Chen et al., 2018), CTRL (Gao et al., 2017), QSPN (Xu et al., 2019), CBP (Wang et al., 2020), SCDM (Yuan et al., 2019a), CMIN (Zhang et al., 2019), and 2DTAN (Zhang et al., 2020b). (2) bottom-up methods: GDP (Chen et al., 2020), LGI (Mun et al., 2020), VSLNet (Zhang et al., 2020a), DRN (Zeng et al., 2020).
+
+ Quantitative comparison. As shown in Tables 1, 2 and 3, our APGN outperforms all the existing methods by a large margin. Specifically, on the ActivityNet Captions dataset, compared to the previous best top-down method 2DTAN, we do not rely on a large number of pre-defined proposals and outperform it by $4.41\%$ , $2.10\%$ , $1.74\%$ , and $1.23\%$ on all metrics, respectively. Compared to the previous best bottom-up method DRN, our APGN brings significant improvements of $4.28\%$ and $12.89\%$ on the strict "R@1, IoU=0.7" and "R@5, IoU=0.7" metrics, respectively. Although TACoS suffers from similar kitchen backgrounds and cooking objects across its videos, it is worth noting that our APGN still achieves significant improvements. On the Charades-STA dataset, for fair comparison with other methods, we perform experiments with the same features (i.e., VGG, C3D, and I3D) reported in their papers. APGN reaches the highest results over all evaluation metrics.
+
+ Table 1: Performance compared with the state-of-the-art TSLV models on the ActivityNet Captions dataset.
+
+ <table><tr><td>Method</td><td>Feature</td><td>R@1, IoU=0.3</td><td>R@1, IoU=0.5</td><td>R@5, IoU=0.3</td><td>R@5, IoU=0.5</td></tr><tr><td>TGN</td><td>C3D</td><td>21.77</td><td>18.90</td><td>39.06</td><td>31.02</td></tr><tr><td>CTRL</td><td>C3D</td><td>18.32</td><td>13.30</td><td>36.69</td><td>25.42</td></tr><tr><td>QSPN</td><td>C3D</td><td>20.15</td><td>15.23</td><td>36.72</td><td>25.30</td></tr><tr><td>CBP</td><td>C3D</td><td>27.31</td><td>24.79</td><td>43.64</td><td>37.40</td></tr><tr><td>SCDM</td><td>C3D</td><td>26.11</td><td>21.17</td><td>40.16</td><td>32.18</td></tr><tr><td>GDP</td><td>C3D</td><td>24.14</td><td>-</td><td>-</td><td>-</td></tr><tr><td>VSLNet</td><td>C3D</td><td>29.61</td><td>24.27</td><td>-</td><td>-</td></tr><tr><td>CMIN</td><td>C3D</td><td>24.64</td><td>18.05</td><td>38.46</td><td>27.02</td></tr><tr><td>DRN</td><td>C3D</td><td>-</td><td>23.17</td><td>-</td><td>33.36</td></tr><tr><td>2DTAN</td><td>C3D</td><td>37.29</td><td>25.32</td><td>57.81</td><td>45.04</td></tr><tr><td>APGN</td><td>C3D</td><td>40.47</td><td>27.86</td><td>59.98</td><td>47.12</td></tr></table>
+
+ Table 2: Performance compared with the state-of-the-art TSLV models on the TACoS dataset.
+
+ <table><tr><td>Method</td><td>Feature</td><td>R@1, IoU=0.5</td><td>R@1, IoU=0.7</td><td>R@5, IoU=0.5</td><td>R@5, IoU=0.7</td></tr><tr><td>2DTAN</td><td>VGG</td><td>39.81</td><td>23.25</td><td>79.33</td><td>51.15</td></tr><tr><td>APGN</td><td>VGG</td><td>44.23</td><td>25.64</td><td>89.51</td><td>57.87</td></tr><tr><td>CTRL</td><td>C3D</td><td>23.63</td><td>8.89</td><td>58.92</td><td>29.57</td></tr><tr><td>QSPN</td><td>C3D</td><td>35.60</td><td>15.80</td><td>79.40</td><td>45.40</td></tr><tr><td>CBP</td><td>C3D</td><td>36.80</td><td>18.87</td><td>70.94</td><td>50.19</td></tr><tr><td>GDP</td><td>C3D</td><td>39.47</td><td>18.49</td><td>-</td><td>-</td></tr><tr><td>APGN</td><td>C3D</td><td>48.20</td><td>29.37</td><td>89.05</td><td>58.49</td></tr><tr><td>DRN</td><td>I3D</td><td>53.09</td><td>31.75</td><td>89.06</td><td>60.05</td></tr><tr><td>SCDM</td><td>I3D</td><td>54.44</td><td>33.43</td><td>74.43</td><td>58.08</td></tr><tr><td>LGI</td><td>I3D</td><td>59.46</td><td>35.48</td><td>-</td><td>-</td></tr><tr><td>APGN</td><td>I3D</td><td>62.58</td><td>38.86</td><td>91.24</td><td>62.11</td></tr></table>
+
+ Table 3: Performance compared with the state-of-the-art TSLV models on the Charades-STA dataset.
+
+ Comparison on efficiency. We compare the efficiency of our APGN with previous methods on a single Nvidia Titan XP GPU on the TACoS dataset. As shown in Table 4, we achieve much faster processing speeds with relatively fewer learnable parameters. The reason is mainly two-fold: first, APGN generates proposals without processing overlapped sliding windows as CTRL does, and generates fewer proposals than pre-defined methods such as 2DTAN and CMIN, thus being more efficient; second, APGN does not apply many convolution layers like 2DTAN or multi-level feature fusion modules like DRN for cross-modal interaction, and thus has fewer parameters.
+
+ <table><tr><td></td><td>ACRN</td><td>CTRL</td><td>TGN</td><td>2DTAN</td><td>CMIN</td><td>DRN</td><td>APGN</td></tr><tr><td>VPS ↑</td><td>0.23</td><td>0.45</td><td>1.09</td><td>1.75</td><td>81.29</td><td>133.38</td><td>146.67</td></tr><tr><td>Para. ↓</td><td>128</td><td>22</td><td>166</td><td>363</td><td>78</td><td>214</td><td>91</td></tr></table>
+
+ Table 4: Efficiency comparison in terms of videos per second (VPS) and parameters (Para.), where our method APGN is much more efficient.
+
+ <table><tr><td>Model</td><td>class.</td><td>reg.</td><td>p.e.</td><td>graph</td><td>R@1, IoU=0.5</td><td>R@1, IoU=0.7</td></tr><tr><td>①</td><td>×</td><td>×</td><td>×</td><td>×</td><td>39.16</td><td>19.68</td></tr><tr><td>②</td><td>✓</td><td>×</td><td>×</td><td>×</td><td>40.84</td><td>21.30</td></tr><tr><td>③</td><td>✓</td><td>✓</td><td>×</td><td>×</td><td>42.77</td><td>23.52</td></tr><tr><td>④</td><td>✓</td><td>✓</td><td>✓</td><td>×</td><td>43.95</td><td>24.66</td></tr><tr><td>⑤</td><td>✓</td><td>✓</td><td>×</td><td>✓</td><td>45.81</td><td>26.34</td></tr><tr><td>⑥</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>48.92</td><td>28.64</td></tr></table>
+
+ # 4.4 Ablation Study
+
+ Main ablation. As shown in Table 5, we verify the contribution of each part of our model. Starting from the backbone model (Figure 2 (a)), we first implement the baseline model ① by directly adding the top-down localization head (Figure 2 (d)). In this model, we adopt pre-defined proposals as in (Zhang et al., 2019). After adding the binary classification module in ②, we find that the classification module effectively filters out redundant pre-defined proposals on the large number of background frames. When further applying adaptive proposal generation in ③, the generated proposals perform better than the pre-defined ones in ②. Note that in ③ we directly encode proposal-wise features by max-pooling, and the classification module also contributes by filtering out negative generated proposals. To capture more fine-grained semantics for proposal refinement, we introduce a proposal encoder (model ④) for discriminative feature aggregation and a proposal graph (model ⑤) for proposal-wise feature interaction. Although each of them alone brings only about a $1 - 3\%$ improvement, the performance increases significantly when both are used (model ⑥).
+
+ Investigation on the video/query encoder. To investigate whether a Transformer (Vaswani et al., 2017) can boost our APGN, we replace the GRU in the video/query encoder with a simple Transformer and observe some improvement. However, it brings more model parameters and a lower speed.
+
+ Table 5: Main ablation studies on the ActivityNet Captions dataset, where 'class.' and 'reg.' denote the classification and regression modules (Sec. 3.3), 'p.e.' denotes the proposal encoder (Sec. 3.4), and 'graph' denotes the proposal graph (Sec. 3.4).
+
+ <table><tr><td>Components</td><td>VPS ↑</td><td>Para. ↓</td><td>R@1, IoU=0.5</td><td>R@1, IoU=0.7</td></tr><tr><td>w/. GRU</td><td>146.67</td><td>91</td><td>48.92</td><td>28.64</td></tr><tr><td>w/. Transformer</td><td>129.38</td><td>138</td><td>50.11</td><td>29.43</td></tr></table>
+
+ Table 6: Investigation on video and query encoders on the ActivityNet Captions dataset.
+
+ <table><tr><td>Components</td><td>Module</td><td>R@1, IoU=0.5</td><td>R@1, IoU=0.7</td></tr><tr><td rowspan="2">binary classification</td><td>w/o balanced loss</td><td>46.88</td><td>27.13</td></tr><tr><td>w/ balanced loss</td><td>48.92</td><td>28.64</td></tr></table>
+
+ Table 7: Investigation on binary classification on the ActivityNet Captions dataset.
+
+ <table><tr><td>Components</td><td>Module</td><td>R@1, IoU=0.5</td><td>R@1, IoU=0.7</td></tr><tr><td rowspan="4">proposal encoder</td><td>w/o position</td><td>46.46</td><td>26.69</td></tr><tr><td>w/ position</td><td>48.92</td><td>28.64</td></tr><tr><td>w/ mean pooling</td><td>47.41</td><td>27.86</td></tr><tr><td>w/ max pooling</td><td>48.92</td><td>28.64</td></tr></table>
+
+ Table 8: Investigation on the proposal encoder on the ActivityNet Captions dataset.
+
+ Effect of the balanced loss. In the binary classification module, we reformulate the typical loss function into a balanced one. As shown in Table 7, the model w/ balanced loss achieves a clear improvement ( $2.04\%$ , $1.51\%$ ) over the w/o variant, which demonstrates the importance of accounting for the unbalanced foreground/background distribution in the classification process.
+
+ Investigation on the proposal encoder. In the proposal encoder, we discard the positional embedding (w/o position), and also replace the max-pooling with mean-pooling (w/ mean pooling). From Table 8, we observe that the positional embedding helps to learn the temporal distance (boosting $2.46\%$ , $1.95\%$ ), and max-pooling aggregates more discriminative features (boosting $1.49\%$ , $0.78\%$ ) than mean-pooling.
+
+ Investigation on the proposal graph. Table 9 analyzes the proposal graph. Compared to the w/ edge convolution model (Wang et al., 2018), the w/ edge attention variant directly utilizes co-attention (Lu et al., 2016) to compute the similarity of each node pair and updates them with a weighted summation strategy, which performs worse than the former.
+
+ Number of graph layers. As shown in Table 9, the model achieves the best result with 2 graph layers, and the performance drops as the number of layers grows. Our analysis is that more graph layers result in the over-smoothing problem (Li et al., 2018), since the propagation between the nodes accumulates.
+
+ <table><tr><td>Components</td><td>Module</td><td>R@1, IoU=0.5</td><td>R@1, IoU=0.7</td></tr><tr><td rowspan="2">proposal graph</td><td>w/ edge attention</td><td>46.63</td><td>26.90</td></tr><tr><td>w/ edge convolution</td><td>48.92</td><td>28.64</td></tr><tr><td rowspan="3">graph layer</td><td>1 layer</td><td>47.60</td><td>27.57</td></tr><tr><td>2 layers</td><td>48.92</td><td>28.64</td></tr><tr><td>3 layers</td><td>48.83</td><td>28.39</td></tr></table>
+
+ Table 9: Investigation on the proposal graph on the ActivityNet Captions dataset.
+
+ <table><tr><td>Methods</td><td>Localization Type</td><td>R@1, IoU=0.5</td><td>R@1, IoU=0.7</td></tr><tr><td rowspan="2">SCDM</td><td>top-down</td><td>36.75</td><td>19.86</td></tr><tr><td>ours</td><td>43.86</td><td>26.42</td></tr><tr><td rowspan="2">CMIN</td><td>top-down</td><td>43.40</td><td>23.88</td></tr><tr><td>ours</td><td>50.33</td><td>29.75</td></tr><tr><td rowspan="2">LGI</td><td>bottom-up</td><td>41.51</td><td>23.07</td></tr><tr><td>ours</td><td>49.20</td><td>30.64</td></tr><tr><td rowspan="2">DRN</td><td>bottom-up</td><td>45.45</td><td>24.36</td></tr><tr><td>ours</td><td>53.72</td><td>31.01</td></tr></table>
+
+ Table 10: Our proposed adaptive proposal generation can serve as a "plug-and-play" module for existing methods. The experiments are conducted on the ActivityNet Captions dataset.
+
+ Plug-and-play. Our proposed adaptive proposal generation can serve as a plug-and-play module for existing methods. As shown in Table 10, for top-down methods, we maintain their feature encoders and video-query interaction, and add our proposal generation and proposal consolidation before the localization heads. For bottom-up methods, we first replace their regression heads with our proposal generation process and then add the proposal consolidation process. The results show that our proposal generation and proposal consolidation bring large improvements to both types of methods.
+
+ # 4.5 Qualitative Results
+
+ To qualitatively validate the effectiveness of our APGN, we display two typical examples in Figure 4. It is challenging to accurately localize the semantic "for a second time" in the first video, because there are two separate segments in which the same object, the "girl in the blue dress", performs the same activity, "hops". The previous method DRN fails to understand the meaning of the phrase "second time" and grounds both segment parts. By contrast, our method distinguishes these two segments in the temporal dimension thanks to the positional embedding in the developed proposal graph, and thus achieves more accurate localization results. Furthermore, we also display the foreground/background class of each frame in this video. With the help of the proposal consolidation module, the segment proposals of the "first time" are filtered out, and all of the final top-10 ranked positive frames fall within the target segment.
+
+ ![](images/19b60b2a25b2a7d7d3a75fcc506da9367b8bb5abe555ed6d8cdf44aa51702e66.jpg)
+
+ ![](images/1d0f3b7aaf3ff00580d655c4ec0a0aba2de1e4e8d617067b55701963a52ae4e2.jpg)
+ Figure 4: Typical examples of the localization results on the ActivityNet Captions dataset.
+
+ # 5 Conclusion
+
+ In this paper, we introduce APGN, a new method for temporal sentence localization in videos. Our core idea is to adaptively generate discriminative proposals to achieve both effective and efficient localization. Specifically, we first introduce binary classification before the boundary regression to distinguish the background frames, which helps to filter out the corresponding noisy responses. Then, the regressed boundaries on the predicted foreground frames are taken as segment proposals, which yields far fewer poor-quality proposals than the pre-defined ones in the top-down framework. We further learn higher-level feature interactions between the generated proposals for refinement via a graph convolutional network. Our framework achieves state-of-the-art performance on three challenging benchmarks, demonstrating the effectiveness of the proposed APGN.
293
+
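The bottom-up proposal generation summarized above can be sketched as follows. This is a hypothetical illustration under our own simplifications (scores, boundaries, and thresholds are invented, and a greedy NMS-style pass stands in for the paper's proposal consolidation module): foreground classification filters the frames, the surviving frames' regressed boundaries become proposals, and overlapping proposals are consolidated.

```python
def generate_proposals(fg_scores, boundaries, fg_threshold=0.5):
    """fg_scores[i]  : predicted foreground probability of frame i
       boundaries[i] : (start, end) boundary regressed from frame i
       Only frames classified as foreground contribute proposals."""
    return [boundaries[i] for i, s in enumerate(fg_scores) if s >= fg_threshold]

def consolidate(proposals, iou_threshold=0.5):
    """Greedy de-duplication of overlapping proposals (an NMS-style
    stand-in for the proposal consolidation step)."""
    def iou(a, b):
        inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
        union = (a[1] - a[0]) + (b[1] - b[0]) - inter
        return inter / union if union > 0 else 0.0
    kept = []
    for p in sorted(proposals, key=lambda p: p[1] - p[0], reverse=True):
        if all(iou(p, q) < iou_threshold for q in kept):
            kept.append(p)
    return kept

fg = [0.1, 0.9, 0.8, 0.2, 0.7]                 # frame 0 and 3 are background
bd = [(0, 2), (1, 4), (1, 5), (3, 6), (2, 5)]  # per-frame regressed boundaries
props = consolidate(generate_proposals(fg, bd))
print(props)
```

In APGN the surviving proposals are then refined jointly by the graph convolutional network rather than scored independently.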
294
+ # 6 Acknowledgments
295
+
296
+ This work was supported in part by the National Key Research and Development Program of China under No. 2018YFB1404102, and the National Natural Science Foundation of China under No. 61972448.
297
+
298
+ # References
299
+
300
+ Lisa Anne Hendricks, Oliver Wang, Eli Shechtman, Josef Sivic, Trevor Darrell, and Bryan Russell. 2017. Localizing moments in video with natural language. In Proceedings of the IEEE International Conference on Computer Vision (ICCV).
301
+ Joao Carreira and Andrew Zisserman. 2017. Quo vadis, action recognition? A new model and the kinetics dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
302
+ Jingyuan Chen, Xinpeng Chen, Lin Ma, Zequn Jie, and Tat-Seng Chua. 2018. Temporally grounding natural sentence in video. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP).
303
+ Long Chen, Chujie Lu, Siliang Tang, Jun Xiao, Dong Zhang, Chilie Tan, and Xiaolin Li. 2020. Rethinking the bottom-up framework for query-based video localization. In Proceedings of the AAAI Conference on Artificial Intelligence.
304
+ Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. In Advances in Neural Information Processing Systems (NIPS).
305
+ Jianfeng Dong, Xirong Li, Chaoxi Xu, Shouling Ji, and Xun Wang. 2019. Dual encoding for zero-example video retrieval. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
306
+ Jiyang Gao, Chen Sun, Zhenheng Yang, and Ram Nevatia. 2017. Tall: Temporal activity localization via language query. In Proceedings of the IEEE International Conference on Computer Vision (ICCV).
307
+ Ranjay Krishna, Kenji Hata, Frederic Ren, Li Fei-Fei, and Juan Carlos Niebles. 2017. Dense-captioning events in videos. In Proceedings of the IEEE International Conference on Computer Vision (ICCV).
308
+ Qimai Li, Zhichao Han, and Xiao-Ming Wu. 2018. Deeper insights into graph convolutional networks for semi-supervised learning. In Proceedings of the AAAI Conference on Artificial Intelligence.
309
+ Daizong Liu, Xiaoye Qu, Jianfeng Dong, and Pan Zhou. 2020a. Reasoning step-by-step: Temporal sentence localization in videos via deep rectification-modulation network. In Proceedings of the 28th International Conference on Computational Linguistics, pages 1841-1851.
310
+
311
+
312
+ Daizong Liu, Xiaoye Qu, Jianfeng Dong, Pan Zhou, Yu Cheng, Wei Wei, Zichuan Xu, and Yulai Xie. 2021. Context-aware biaffine localizing network for temporal sentence grounding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 11235-11244.
313
+ Daizong Liu, Xiaoye Qu, Xiao-Yang Liu, Jianfeng Dong, Pan Zhou, and Zichuan Xu. 2020b. Jointly cross-and self-modal graph attention network for query-based moment localization. In Proceedings of the ACM International Conference on Multimedia (ACM MM).
314
+ Chujie Lu, Long Chen, Chilie Tan, Xiaolin Li, and Jun Xiao. 2019. DEBUG: A dense bottom-up grounding approach for natural language video localization. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP).
315
+ Jiasen Lu, Jianwei Yang, Dhruv Batra, and Devi Parikh. 2016. Hierarchical question-image co-attention for visual question answering. In Advances in Neural Information Processing Systems (NIPS).
316
+ Jonghwan Mun, Minsu Cho, and Bohyung Han. 2020. Local-global video-text interactions for temporal grounding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
317
+ Guoshun Nan, Rui Qiao, Yao Xiao, Jun Liu, Sicong Leng, Hao Zhang, and Wei Lu. 2021. Interventional video grounding with dual contrastive learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
318
+ Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP).
319
+ Xiaoye Qu, Pengwei Tang, Zhikang Zou, Yu Cheng, Jianfeng Dong, Pan Zhou, and Zichuan Xu. 2020. Fine-grained iterative attention network for temporal language localization in videos. In Proceedings of the 28th ACM International Conference on Multimedia, pages 4280-4288.
320
+ Michaela Regneri, Marcus Rohrbach, Dominikus Wetzel, Stefan Thater, Bernt Schiele, and Manfred Pinkal. 2013. Grounding action descriptions in videos. Transactions of the Association for Computational Linguistics (TACL).
321
+ Cristian Rodriguez, Edison Marrese-Taylor, Fatemeh Sadat Saleh, Hongdong Li, and Stephen Gould. 2020. Proposal-free temporal moment localization of a natural-language query in video using guided attention. In The IEEE Winter Conference on Applications of Computer Vision (WACV).
322
+
323
+
324
+ Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
325
+ Joyeeta Singha, Amarjit Roy, and Rahul Hussain Laskar. 2018. Dynamic hand gesture recognition using vision-based approach for human-computer interaction. Neural Computing and Applications.
326
+ Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems (NIPS).
327
+ Jingwen Wang, Lin Ma, and Wenhao Jiang. 2020. Temporally grounding language queries in videos by contextual boundary-aware prediction. In Proceedings of the AAAI Conference on Artificial Intelligence.
328
+ Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E Sarma, Michael M Bronstein, and Justin M Solomon. 2018. Dynamic graph CNN for learning on point clouds. ACM Transactions on Graphics.
329
+ Huijuan Xu, Kun He, Bryan A Plummer, Leonid Sigal, Stan Sclaroff, and Kate Saenko. 2019. Multi-level language and vision integration for text-to-clip retrieval. In Proceedings of the AAAI Conference on Artificial Intelligence.
330
+ Xun Yang, Jianfeng Dong, Yixin Cao, Xun Wang, Meng Wang, and Tat-Seng Chua. 2020. Tree-augmented cross-modal encoding for complex-query video retrieval. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR), pages 1339-1348.
331
+ Xun Yang, Fuli Feng, Wei Ji, Meng Wang, and Tat-Seng Chua. 2021. Deconfounded video moment retrieval with causal intervention. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR).
332
+ Yitian Yuan, Lin Ma, Jingwen Wang, Wei Liu, and Wenwu Zhu. 2019a. Semantic conditioned dynamic modulation for temporal sentence grounding in videos. In Advances in Neural Information Processing Systems (NIPS).
333
+ Yitian Yuan, Tao Mei, and Wenwu Zhu. 2019b. To find where you talk: Temporal sentence localization in video with attention based location regression. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33.
334
+ Runhao Zeng, Haoming Xu, Wenbing Huang, Peihao Chen, Mingkui Tan, and Chuang Gan. 2020. Dense regression network for video grounding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
335
+
336
+ Hao Zhang, Aixin Sun, Wei Jing, and Joey Tianyi Zhou. 2020a. Span-based localizing network for natural language video localization. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL).
337
+ Songyang Zhang, Houwen Peng, Jianlong Fu, and Jiebo Luo. 2020b. Learning 2d temporal adjacent networks for moment localization with natural language. In Proceedings of the AAAI Conference on Artificial Intelligence.
338
+ Zhu Zhang, Zhijie Lin, Zhou Zhao, and Zhenxin Xiao. 2019. Cross-modal interaction networks for query-based moment retrieval in videos. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR).
adaptiveproposalgenerationnetworkfortemporalsentencelocalizationinvideos/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4eeabf8185bca38cbd502fc259ea357e51d624d14711b88f33a30c6f3ae841f7
3
+ size 593549
adaptiveproposalgenerationnetworkfortemporalsentencelocalizationinvideos/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d177910cc2d6cfb16031ce3166d4c3418961590a715b75596a6e13e707d983b1
3
+ size 370906
adversarialattackagainstcrosslingualknowledgegraphalignment/e36b4ac8-9a38-4219-858d-047e5ed2caa4_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d048afd58b71b82eaefbe03e8fe0584791a382cc8a38cce95548a1ee245f17c5
3
+ size 132225
adversarialattackagainstcrosslingualknowledgegraphalignment/e36b4ac8-9a38-4219-858d-047e5ed2caa4_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c4b5bc9b93b4f5dcf8084136e8046743f986da4cf6477e6e0398bfda12bc5982
3
+ size 176004