diff --git a/abstractrationalestanceajointmodelforscientificclaimverification/e1bac4cb-6fd1-408c-9ad9-35a1e88700eb_content_list.json b/abstractrationalestanceajointmodelforscientificclaimverification/e1bac4cb-6fd1-408c-9ad9-35a1e88700eb_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..41361b57126ed48eb8daef32a7d71425f2d0dddd --- /dev/null +++ b/abstractrationalestanceajointmodelforscientificclaimverification/e1bac4cb-6fd1-408c-9ad9-35a1e88700eb_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7320209a3db6c5c40b15cc5a1201833fbb5f9f5615a6fc7371eb762b5548593a +size 47901 diff --git a/abstractrationalestanceajointmodelforscientificclaimverification/e1bac4cb-6fd1-408c-9ad9-35a1e88700eb_model.json b/abstractrationalestanceajointmodelforscientificclaimverification/e1bac4cb-6fd1-408c-9ad9-35a1e88700eb_model.json new file mode 100644 index 0000000000000000000000000000000000000000..6efd13dde3213e3052e6b869a4236012259071df --- /dev/null +++ b/abstractrationalestanceajointmodelforscientificclaimverification/e1bac4cb-6fd1-408c-9ad9-35a1e88700eb_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e8329d1931c4bc65e82c037375125a463443f2f783b123fda121bcfe0f327594 +size 56602 diff --git a/abstractrationalestanceajointmodelforscientificclaimverification/e1bac4cb-6fd1-408c-9ad9-35a1e88700eb_origin.pdf b/abstractrationalestanceajointmodelforscientificclaimverification/e1bac4cb-6fd1-408c-9ad9-35a1e88700eb_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..fe5607a624d345399a5c3fe77a428b5fa88ec0d2 --- /dev/null +++ b/abstractrationalestanceajointmodelforscientificclaimverification/e1bac4cb-6fd1-408c-9ad9-35a1e88700eb_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:269dc5f6cedcd7af70b19a8c319620dbdf06c6f212ab7b05d1715b66ac271114 +size 874521 diff --git 
a/abstractrationalestanceajointmodelforscientificclaimverification/full.md b/abstractrationalestanceajointmodelforscientificclaimverification/full.md new file mode 100644 index 0000000000000000000000000000000000000000..393496f42050d019412de18a7b6964b132e53f8e --- /dev/null +++ b/abstractrationalestanceajointmodelforscientificclaimverification/full.md @@ -0,0 +1,169 @@ +# Abstract, Rationale, Stance: A Joint Model for Scientific Claim Verification + +Zhiwei Zhang $^{1,2}$ , Jiyi Li $^{2*}$ , Fumiyo Fukumoto $^{2}$ and Yanming Ye $^{1}$ + +$^{1}$ Hangzhou Dianzi University, Hangzhou, China + +$^{2}$ University of Yamanashi, Kofu, Japan + +hdluxiaozhi97@gmail.com, {jyli, fukumoto}@yamanashi.ac.jp, yeym@hdu.edu.cn + +# Abstract + +Scientific claim verification helps researchers easily find, from a large corpus, the target scientific papers together with sentence-level evidence for a given claim. Some existing works propose pipeline models for the three tasks of abstract retrieval, rationale selection and stance prediction. Such works suffer from error propagation among the modules in the pipeline and from a lack of sharing valuable information among the modules. We thus propose an approach, named ARSJOINT, that jointly learns the modules for the three tasks with a machine reading comprehension framework by including claim information. In addition, we enhance the information exchange and constraints among tasks by proposing a regularization term between the sentence attention scores of abstract retrieval and the estimated outputs of rationale selection. The experimental results on the benchmark dataset SCIFACT show that our approach outperforms the existing works. + +# 1 Introduction + +A system for scientific claim verification can help researchers easily find, from a large corpus, the target scientific papers together with sentence-level evidence for a given claim. To address this issue, Wadden et al. (2020) introduced scientific claim verification, which consists of three tasks.
As illustrated in Figure 1, for a given claim, the system finds the abstracts that are related to the claim in a scholarly document corpus (abstract retrieval task); it selects the sentences in the abstract that serve as evidence for the claim (rationale selection task); and it classifies whether the abstract/sentences support or refute the claim (stance prediction task). Wadden et al. (2020) also provided a dataset called SCIFACT. + +Most existing works on general claim verification are based on pipeline models (Soleimani et al., 2020; Alonso-Reina et al., 2019; Liu et al., + +![](images/582f6611664416a1d4893e5d3dff788ff02194fb574ffce9eda15fb901958fb5.jpg) +Figure 1: An example of scientific claim verification. + +2020; Zhou et al., 2019; Nie et al., 2019; Lee et al., 2020b); some works utilize joint optimization strategies (Lu and Li, 2020; Yin and Roth, 2018; Hidey et al., 2020). These models attempted to jointly optimize rationale selection and stance prediction, but did not directly link the two modules (Li et al., 2020). In the case of scientific claim verification, Wadden et al. (2020) proposed a baseline model VERISCI based on a pipeline of three components for the three tasks. Pradeep et al. (2021) proposed a pipeline model called VERT5ERINI, which adapted the pre-trained sequence-to-sequence language model T5 (Raffel et al., 2020). Li et al. (2020) jointly trained the two tasks of rationale selection and stance prediction, and pipelined the abstract retrieval module with the joint module. + +The above existing works on scientific claim verification are fully or partially pipelined solutions. One problem of these works is the error propagation among the modules in the pipeline. Another problem is that pipeline modules trained independently cannot share and leverage valuable information with each other.
Therefore, we propose an approach, named ARSJOINT, which jointly learns the three modules for the three tasks. It has a Machine Reading Comprehension (MRC) framework which uses the claim content as the query to learn additional information. In addition, we assume that the abstract retrieval module should + +have good interpretability and tend to assign high sentence-level attention scores to the evidence sentences that influence the retrieval results; this is consistent with the goal of the rationale selection module. We thus enhance the information exchange and constraints among tasks by proposing a regularization term based on a symmetric divergence to bridge these two modules. + +The experimental results on the benchmark dataset SCIFACT show that the proposed approach has better performance than the existing works. The main contributions of this paper can be summarized as follows. (1) We propose a scientific claim verification approach which jointly trains on the three tasks in an MRC framework. (2) We propose a regularization based on the divergence between the sentence attention of abstract retrieval and the outputs of rationale selection. + +# 2 Our Approach + +# 2.1 Notation and Definitions + +We denote the query claim as $q$ and an abstract of a scientific paper as $a \in \mathcal{A}$ . We denote the set of sentences in abstract $a$ as $S = \{s_i\}_{i=1}^l$ , where the word sequence of $s_i$ is $[s_{i1}, \ldots, s_{in_i}]$ . The title of the paper $t \in \mathcal{T}$ is used as auxiliary information; the word sequence of $t$ is $[t_1, \ldots, t_{n_t}]$ . Here, $S$ , $s_i$ and $t$ are defined for $a$ by default and we omit the subscript $a$ in the notation. The purpose of the abstract retrieval task is to detect the set of abstracts related to $q$ ; it assigns a relevance label $y^b \in \{0,1\}$ to a candidate abstract $a$ .
The rationale selection task is to detect the decisive rationale sentences $S^r \subseteq S$ of $a$ relevant to the claim $q$ ; it assigns evidence labels $y_i^r \in \{0,1\}$ to each sentence $s_i \in S$ . The stance prediction task classifies $a$ into stance labels $y^e$ which are in {SUPPORTS=0, REFUTES=1, NOINFO=2}. The sentences in $a$ have the same stance label value. + +# 2.2 Pre-processing + +As there is a huge number of papers in the corpus, applying all of them to the proposed model is time-consuming. Therefore, similar to the existing works on this topic (Wadden et al., 2020; Pradeep et al., 2021; Li et al., 2020), we also utilize a lightweight method to first roughly select a set of candidate papers. We use BioSentVec (Chen et al., 2019; Pagliardini et al., 2018) to obtain the embeddings of the claim and of a scientific paper based on its title and abstract, and compute the cosine similarity between the claim and the paper. The papers with top- $k$ similarities are used as the candidates. + +# 2.3 Joint Abstract, Rationale, Stance Model + +The input sequence of our model is defined as $seq = [[\mathrm{CLS}]q[\mathrm{SEP}]t \cdots [\mathrm{SEP}]s_i[\mathrm{SEP}] \cdots]$ , which is obtained by concatenating the claim $q$ , title $t$ and abstract $a$ . We compute the list of word representations $\mathbf{H}_{seq}$ of the input sequence by a pre-trained language model (e.g., BioBERT (Lee et al., 2020a)). We obtain the word representations of the claim $\mathbf{H}_q = [\mathbf{h}_{q_1}, \dots, \mathbf{h}_{q_{n_q}}]$ , the title $\mathbf{H}_t = [\mathbf{h}_{t_1}, \dots, \mathbf{h}_{t_{n_t}}]$ , each sentence $\mathbf{H}_{s_i} = [\mathbf{h}_{s_{i1}}, \dots, \mathbf{h}_{s_{in_i}}]$ , and the abstract $\mathbf{H}_S = \mathbf{H}_a = [\dots, \mathbf{H}_{s_i}, \dots]$ from $\mathbf{H}_{seq}$ and use them in our ARSJOINT model. Figure 2 shows the framework of our model with three modules for the three tasks.
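As a concrete illustration, the input sequence above can be assembled as follows (a minimal sketch with a hypothetical helper `build_input`; in practice, tokenization and the special tokens are handled by the pre-trained model's tokenizer):

```python
def build_input(claim, title, sentences):
    """Build seq = [CLS] q [SEP] t [SEP] s_1 [SEP] s_2 ... by concatenating
    the claim, the title, and the abstract sentences with separator tokens."""
    parts = ["[CLS]", claim, "[SEP]", title]
    for s in sentences:
        parts.extend(["[SEP]", s])
    return " ".join(parts)
```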
+ +In all three modules, we use an attention layer (denoted $g(\cdot)$ ) on word (sentence) representations to compute a sentence (document) representation. A document can be a claim, title, abstract, or their combinations. The computation is as follows (refer to (Li et al., 2020)), where the $*$ in $\mathbf{H}_{*}$ represents any type of sentence (claim $q$ , title $t$ or a sentence $s$ in an abstract), the $\star$ in $\mathbf{H}_{\star}$ represents any type of document, and $\mathbf{W}$ and $\mathbf{b}$ are trainable parameters. + +$$
g(\mathbf{H}_{*}) = \sum_{i} \mathbf{u}_{*i} \boldsymbol{\alpha}_{*i}, \quad \boldsymbol{\alpha}_{*i} = \frac{\exp(\mathbf{W}_{w_2} \mathbf{u}_{*i} + \mathbf{b}_{w_2})}{\sum_{j} \exp(\mathbf{W}_{w_2} \mathbf{u}_{*j} + \mathbf{b}_{w_2})}, \quad \mathbf{u}_{*i} = \tanh(\mathbf{W}_{w_1} \mathbf{h}_{*i} + \mathbf{b}_{w_1}) \ \text{for word-level attention},
$$ + +$$
g(\mathbf{H}_{\star}) = \sum_{i} \mathbf{U}_{\star i} \boldsymbol{\alpha}_{\star i}, \quad \boldsymbol{\alpha}_{\star i} = \frac{\exp(\mathbf{W}_{c_2} \mathbf{U}_{\star i} + \mathbf{b}_{c_2})}{\sum_{j} \exp(\mathbf{W}_{c_2} \mathbf{U}_{\star j} + \mathbf{b}_{c_2})}, \quad \mathbf{U}_{\star i} = \tanh(\mathbf{W}_{c_1} \mathbf{H}_{\star i} + \mathbf{b}_{c_1}) \ \text{for sentence-level attention}. \tag{1}
$$ + +Abstract Retrieval: In this task, a title can be regarded as an auxiliary sentence that may contain information related to the claim; we thus use the title together with the sentences in the abstract. We build a document $ta = [t, a]$ and concatenate the word representations of $t$ and $a$ into $\mathbf{H}_{ta} = [\mathbf{H}_t, \mathbf{H}_a]$ as the input to this module.
We use a hierarchical attention network (HAN) (Yang et al., 2016) to compute the document representation $\mathbf{h}_{ta} \in \mathbb{R}^d$ , $\mathbf{h}_{ta} = \mathrm{HAN}(\mathbf{H}_{ta})$ . HAN is well suited to document classification because it considers the hierarchical document structure (a document has sentences, a sentence has words). We also compute the sentence representation of the claim $\mathbf{h}_q \in \mathbb{R}^d$ with a word-level attention layer (denoted $g(\cdot)$ ), $\mathbf{h}_q = g(\mathbf{H}_q)$ . To compute the relevance between $\mathbf{h}_{ta}$ and $\mathbf{h}_q$ , we use a Hadamard product on them and a Multi-Layer Perceptron (MLP, denoted $f(\cdot)$ ) with Softmax (denoted $\sigma(\cdot)$ ); the outputs + +![](images/87e32f8b5d7a37c014df2b8b8fbdf00ea9f3e3e66fbe989c93afa385a329c6d0.jpg) +Figure 2: Framework of our ARSJOINT model which jointly learns three modules and has rationale regularization. + +are the probabilities of whether the abstract is relevant to the claim, $[p_0^b,p_1^b ] = \sigma (f(\mathbf{h}_q\circ \mathbf{h}_{ta}))$ . A cross-entropy loss $\mathcal{L}_{ret}$ is used for training. + +Rationale Selection: This task focuses on judging whether a sentence in the abstract is a rationale sentence or not. The multiple sentences in an abstract share the same title information but have different rationale labels. Therefore, when judging each sentence in the abstract, using the title may not positively influence the performance. We thus use the word representations $\mathbf{H}_a$ of the abstract as input. We compute the sentence representation $\mathbf{h}_{s_i}$ by a word-level attention layer, and use an MLP with Softmax to estimate the probabilities $p_{i1}^r$ and $p_{i0}^r$ of whether $s_i$ is an evidence sentence of the abstract or not. The cross-entropy loss is $\mathcal{L}_{rat}$ . + +Stance Prediction: The module first computes the sentence representation $\mathbf{h}_{s_i}$ in the same way as in rationale selection.
After that, it only selects the sentences $S^r$ with the true evidence label $\hat{y}_i^r = 1$ or the estimated evidence probability $p_{i1}^r > p_{i0}^r$ ; whether to use the true label or the estimated label is decided by scheduled sampling, which is introduced below. We then compute the estimated stance labels based on a sentence-level attention layer and an MLP with Softmax, $\mathbf{h}_{S^r} = g(\mathbf{H}_{S^r})$ and $[p_0^e, p_1^e, p_2^e] = \sigma(f(\mathbf{h}_q \circ \mathbf{h}_{S^r}))$ , where $S^r = \{s_i \in S | \hat{y}_i^r = 1 \text{ or } p_{i1}^r > p_{i0}^r\}$ . The cross-entropy loss is $\mathcal{L}_{sta}$ . + +Scheduled Sampling: Since the rationale sentences $S^r$ are used in stance prediction, errors of the rationale selection module will be propagated to the stance prediction module. To alleviate this problem, following (Li et al., 2020), we also use a scheduled sampling method (Bengio et al., 2015), which is to
| | SUPPORT | NOINFO | REFUTES | ALL |
| --- | --- | --- | --- | --- |
| Train | 332 / 370 | 304 / 220 | 173 / 194 | 809 |
| Dev. | 124 / 138 | 112 / 114 | 64 / 71 | 300 |
| ALL | 456 / 508 | 416 / 444 | 237 / 265 | 1109 |
+ +Table 1: Statistics of the SCIFACT dataset. The numbers are "number of claims / number of relevant abstracts". + +feed the sentences with the true evidence label $\hat{y}_i^r = 1$ to the stance prediction module at the beginning, and then gradually increase the proportion of the sentences with the estimated evidence probability $p_{i1}^{r} > p_{i0}^{r}$ , until eventually all sentences in $S^r$ are based on the estimated evidence. We set the sampling probability of using the estimated evidence as $p_{\text{sample}} = \sin \left( \frac{\pi}{2} \times \frac{\text{current\_epoch} - 1}{\text{total\_epoch} - 1} \right)$ . + +Rationale Regularization (RR): Attention scores have been used for interpretability in NLP tasks (Serrano and Smith, 2019; Wiegreffe and Pinter, 2019; Sun and Lu, 2020). We assume that the abstract retrieval module should have good interpretability and tend to assign high sentence-level attention scores to the evidence sentences that influence the retrieval results; this is consistent with the goal of the rationale selection module. We thus enhance the information exchange and constraints among tasks by proposing a regularization term based on a symmetric divergence between the sentence attention scores $\boldsymbol{\alpha}$ of abstract retrieval and the estimated outputs $\mathbf{y}^r$ of rationale selection, which bridges these two modules. The detailed formula is as follows, where $\mathbf{p}$ and $\mathbf{q}$ are $\boldsymbol{\alpha}$ or $\mathbf{y}^r$ . + +$$
\mathcal{D}(\mathbf{p} \| \mathbf{q}) = - \sum_{i = 1}^{l} \left( \mathbf{p}_i \log(\mathbf{q}_i) + (1 - \mathbf{p}_i) \log(1 - \mathbf{q}_i) \right),
$$ + +$$
\mathcal{L}_{RR} = \mathcal{D}(\boldsymbol{\alpha} \| \mathbf{y}^r) + \mathcal{D}(\mathbf{y}^r \| \boldsymbol{\alpha}). \tag{2}
$$ + +Joint Training: We jointly train our model on abstract retrieval, rationale selection and stance prediction.
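A minimal NumPy sketch of the scheduled-sampling probability and the RR term in Eq. (2) (our own illustration, not the released code; the `eps` clipping is a numerical safeguard we add, not part of the paper's formulation):

```python
import math
import numpy as np

def sample_prob(current_epoch, total_epoch):
    """Probability of feeding estimated (rather than gold) rationales to the
    stance module: sin(pi/2 * (current_epoch - 1) / (total_epoch - 1))."""
    return math.sin(math.pi / 2 * (current_epoch - 1) / (total_epoch - 1))

def rationale_regularization(alpha, y_r, eps=1e-8):
    """Symmetric divergence D(alpha||y_r) + D(y_r||alpha) of Eq. (2), where
    alpha are sentence attention scores and y_r estimated rationale outputs."""
    def d(p, q):
        q = np.clip(q, eps, 1.0 - eps)  # avoid log(0)
        return -np.sum(p * np.log(q) + (1.0 - p) * np.log(1.0 - q))
    return d(alpha, y_r) + d(y_r, alpha)
```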
The joint loss with our RR is as follows, $\mathcal{L} = \lambda_1\mathcal{L}_{ret} + \lambda_2\mathcal{L}_{rat} + \lambda_3\mathcal{L}_{sta} + \gamma \mathcal{L}_{RR}$ , where $\lambda_{1},\lambda_{2},\lambda_{3}$ and $\gamma$ are hyperparameters. + +# 3 Experiments + +# 3.1 Experimental Settings + +Dataset: We utilize the benchmark dataset SCIFACT. It consists of 5,183 scientific papers with titles and abstracts and 1,109 claims in the training and development sets. Table 1 presents the statistics of the dataset. + +Experimental Settings: For our ARSJOINT model, we use Optuna (Akiba et al., 2019) to tune the hyperparameters $\lambda_{1},\lambda_{2},\lambda_{3}$ and $\gamma$ of the loss $\mathcal{L}$ by training on $20\%$ of the training set and evaluating on another $20\%$ of the training set. We choose the optimal hyperparameters by the average F1-score of the abstract-level and sentence-level evaluations. The search ranges of these four hyperparameters are set to [0.1, 12], and the number of search trials is set to 100. Table 2 lists the selected weight hyperparameters of our model. The other hyperparameters, such as the learning rate, follow the ones used in existing work (Li et al., 2020) to make a fair comparison. These hyperparameters are listed in Table 3. + +We implement our ARSJOINT model in PyTorch. Since the length of the input sequence $seq$ is often greater than the maximum input length of a BERT-based model, we perform a tail-truncation operation on the sentences of $seq$ when the sequence exceeds the maximum input length. For the pre-trained language model, we verify our approach by respectively using RoBERTa-large (Liu et al., 2019) and BioBERT-large (Lee et al., 2020a), which is trained on a biomedical corpus. We fine-tune RoBERTa-large and BioBERT-large on the SCIFACT dataset. In addition, the MLP in our model has two layers.
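The tail-truncation step admits, for example, the following sketch (one plausible reading under our own assumption that tokens are dropped from the tails of the longest sentences until the sequence fits; the paper does not specify the exact procedure):

```python
def tail_truncate(token_sents, max_len):
    """Hypothetical sketch: drop tokens from the tails of the longest
    sentences until the concatenated sequence fits within max_len tokens."""
    sents = [list(s) for s in token_sents]
    total = sum(len(s) for s in sents)
    while total > max_len:
        longest = max(range(len(sents)), key=lambda i: len(sents[i]))
        sents[longest].pop()  # remove the tail token of the longest sentence
        total -= 1
    return sents
```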
+ +We compare our ARSJOINT approach with Paragraph-Joint (Li et al., 2020), VERISCI (Wadden et al., 2020) and VERT5ERINI (Pradeep et al., 2021), using their publicly available code. The "Paragraph-Joint Pre-training" model is pre-trained on the FEVER dataset (Thorne et al., 2018) and then fine-tuned on the SCIFACT dataset. The "Paragraph-Joint SCIFACT-only" model is not pre-trained
| Model | $\lambda_1$ | $\lambda_2$ | $\lambda_3$ | $\gamma$ |
| --- | --- | --- | --- | --- |
| ARSJOINT w/o RR (RoBERTa) | 2.7 | 11.7 | 2.2 | - |
| ARSJOINT (RoBERTa) | 0.9 | 11.1 | 2.6 | 2.2 |
| ARSJOINT w/o RR (BioBERT) | 0.1 | 10.8 | 4.7 | - |
| ARSJOINT (BioBERT) | 0.2 | 12.0 | 1.1 | 1.9 |
+ +Table 2: Hyperparameters selected by Optuna for different variants of our model. The "w/o RR" means the model does not utilize rationale regularization. + +
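For concreteness, the joint objective $\mathcal{L} = \lambda_1\mathcal{L}_{ret} + \lambda_2\mathcal{L}_{rat} + \lambda_3\mathcal{L}_{sta} + \gamma \mathcal{L}_{RR}$ reduces to a simple weighted sum; the sketch below (our own illustration, not the released code) uses the ARSJOINT (BioBERT) weights from Table 2 as defaults:

```python
def joint_loss(l_ret, l_rat, l_sta, l_rr, lambdas=(0.2, 12.0, 1.1), gamma=1.9):
    """Weighted joint objective L = l1*Lret + l2*Lrat + l3*Lsta + gamma*Lrr.
    Default weights are the ARSJOINT (BioBERT) values selected by Optuna (Table 2)."""
    l1, l2, l3 = lambdas
    return l1 * l_ret + l2 * l_rat + l3 * l_sta + gamma * l_rr
```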
| Name | Value | Name | Value | Name | Value |
| --- | --- | --- | --- | --- | --- |
| $k_{tra}$ | 12 | $lr_1$ | $1 \times 10^{-5}$ | Batch size | 1 |
| $k_{ret}$ | 30 | $lr_2$ | $5 \times 10^{-6}$ | Dropout | 0 |
+ +Table 3: Hyperparameter settings following the existing work. $k_{tra}$ and $k_{ret}$ are the numbers of candidate abstracts for each claim in the training and testing stages. $lr_1$ and $lr_2$ are the learning rates of the BERT-based model and of the other modules of the proposed model. + +on other datasets. + +Evaluation: We evaluate the methods by using the abstract-level and sentence-level evaluation criteria given in SCIFACT. Abstract-level evaluation: It evaluates the performance of a model on detecting the abstracts which support or refute the claims. For the "Label-Only" evaluation, given a claim $q$ , the classification result of an abstract $a$ is correct if the estimated relevance label $\hat{y}^b$ is correct and the estimated stance label $\hat{y}^e$ is correct. For the "Label+Rationale" evaluation, the abstract must, in addition, be correctly rationalized: the estimated rationale sentences must contain a gold rationale. Sentence-level evaluation: It evaluates the performance of a model on detecting rationale sentences. For the "Selection-Only" evaluation, an estimated rationale sentence $s_i$ of an abstract $a$ is correctly selected if the estimated rationale label $\hat{y}_i^r$ is correct and the estimated stance label $\hat{y}^e$ is not "NOINFO". In particular, if multiple consecutive sentences are gold rationales, then all of these sentences should be estimated as rationales. For the "Selection+Label" evaluation, the estimated rationale sentences must, in addition, be correctly labeled: the estimated stance label $\hat{y}^e$ of the abstract must be correct. The evaluation metrics F1-score (F1), Precision (P), and Recall (R) are used. We train the model using all training data, and since Wadden et al. (2020) do not publish the labels of the test set, we evaluate the approaches on the development set following (Li et al., 2020). + +# 3.2 Experimental Results + +Table 4 shows the main experimental results. First, the proposed method ARSJOINT (BioBERT) out
| Models | Sentence-level: Selection-Only (P / R / F1) | Sentence-level: Selection+Label (P / R / F1) | Abstract-level: Label-Only (P / R / F1) | Abstract-level: Label+Rationale (P / R / F1) |
| --- | --- | --- | --- | --- |
| VERISCI | 54.3 / 43.4 / 48.3 | 48.5 / 38.8 / 43.1 | 56.4 / 48.3 / 52.1 | 54.2 / 46.4 / 50.0 |
| Paragraph-Joint SCIFACT-only | 69.3 / 50.0 / 58.1 | 59.8 / 43.2 / 50.2 | 69.9 / 52.1 / 59.7 | 64.7 / 48.3 / 55.3 |
| Paragraph-Joint Pre-training | 74.2 / 57.4 / 64.7 | 63.3 / 48.9 / 55.2 | 71.4 / 59.8 / 65.1 | 65.7 / 55.0 / 59.9 |
| VERT5ERINI (BM25) | 67.7 / 53.8 / 60.0 | 63.9 / 50.8 / 56.6 | 70.9 / 61.7 / 66.0 | 67.0 / 58.4 / 62.4 |
| VERT5ERINI (T5) | 64.8 / 57.4 / 60.9 | 60.8 / 53.8 / 57.1 | 65.1 / 65.1 / 65.1 | 61.7 / 61.7 / 61.7 |
| ARSJOINT w/o RR (RoBERTa) | 70.9 / 56.6 / 62.9 | 56.8 / 45.4 / 50.5 | 66.1 / 56.0 / 60.6 | 61.0 / 51.7 / 56.0 |
| ARSJOINT (RoBERTa) | 67.9 / 57.1 / 62.0 | 55.5 / 46.7 / 50.7 | 64.5 / 57.4 / 60.8 | 59.1 / 52.6 / 55.7 |
| ARSJOINT w/o RR (BioBERT) | 75.4 / 57.7 / 65.3 | 63.6 / 48.6 / 55.1 | 72.7 / 57.4 / 64.2 | 67.9 / 53.6 / 59.9 |
| ARSJOINT (BioBERT) | 76.2 / 58.5 / 66.2 | 66.5 / 51.1 / 57.8 | 75.3 / 59.8 / 66.7 | 70.5 / 56.0 / 62.4 |
+ +Table 4: Main experimental results. + +
| Claim: Ly6C hi monocytes have a lower inflammatory capacity than Ly6C lo monocytes. | $\alpha_i$ | $\hat{y}_i^r$ | $y_i^r$ |
| --- | --- | --- | --- |
| Blood monocytes are well-characterized precursors for macrophages and dendritic cells. | 0.0745 | 0 | 0 |
| ... | ... | ... | ... |
| Under inflammatory conditions elicited either by acute infection with Listeria monocytogenes or chronic infection with Leishmania major, there was a significant increase in immature Ly-6C(high) monocytes, resembling the inflammatory left shift of granulocytes. | 0.0936 | 1 | 1 |
| In addition, acute peritoneal inflammation recruited preferentially Ly-6C(med-high) monocytes. | 0.1613 | 1 | 1 |
| Taken together, these data identify distinct subpopulations of mouse blood monocytes that differ in maturation stage and capacity to become recruited to inflammatory sites. | 0.0745 | 0 | 0 |
+ +Table 5: Result example of Rationale Regularization. Given a claim, it lists the sentences from an abstract. $\alpha_{i}$ is the sentence attention score in the abstract retrieval task; $\hat{y}_i^r$ is the estimated rationale label; $y_i^r$ is the true rationale label. + +performs the existing fully or partially pipelined works. VERISCI and VERT5ERINI are pipeline models and Paragraph-Joint is a partially pipelined model with a joint module for two tasks. This shows that the proposed model, which jointly learns the three tasks, is effective in improving performance. + +Second, when using the same pre-trained model RoBERTa-large, comparing our method with the Paragraph-Joint model, ARSJOINT (RoBERTa) and ARSJOINT w/o RR (RoBERTa) have better performance than "Paragraph-Joint SCIFACT-only", especially on Recall. This shows that jointly learning with the abstract retrieval task can improve performance. For the Paragraph-Joint method, "Paragraph-Joint Pre-training", which is pre-trained on the FEVER dataset, has much better performance than "Paragraph-Joint SCIFACT-only" without pre-training on other datasets. Similarly, we replace RoBERTa-large with BioBERT-large, which contains biomedical knowledge; ARSJOINT (BioBERT) achieves better performance than "Paragraph-Joint Pre-training". + +Third, as an ablation study of the proposed RR, in the case of using BioBERT-large, there is a significant difference between the models with and without RR. Although the difference is small in the case of RoBERTa-large, there is still an improvement in Recall. This indicates that rationale regularization can effectively improve the performance of the model. Table 5 shows an example of the results with RR. In this example, it lists a claim and the sentences from an abstract. The attention scores of the sentences in the abstract retrieval task are consistent with the true rationale labels (as well as the estimated rationale labels).
The abstract retrieval module thus has good interpretability. + +# 4 Conclusion + +In this paper, we propose a joint model named ARSJOINT for the three tasks of abstract retrieval, rationale selection and stance prediction in scientific claim verification, using an MRC framework that includes the claim information. We also propose a regularization based on the divergence between the sentence attention of the abstract retrieval task and the outputs of the rationale selection task. The experimental results illustrate that our method achieves better results on the benchmark dataset SCIFACT. In future work, we will try to pre-train the model on other general claim verification datasets such as FEVER (Thorne et al., 2018) to improve the performance. + +# Acknowledgments + +This work was partially supported by the KDDI Foundation Research Grant Program. + +# References + +Takuya Akiba, Shotaro Sano, Toshihiko Yanase, Takeru Ohta, and Masanori Koyama. 2019. Optuna: A next-generation hyperparameter optimization framework. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2623-2631.
Aimée Alonso-Reina, Robert Sepulveda-Torres, Estela Saquete, and Manuel Palomar. 2019. Team GPLSI: Approach for automated fact checking. In Proceedings of the Second Workshop on Fact Extraction and VERification (FEVER), pages 110-114.
Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled sampling for sequence prediction with recurrent neural networks. In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 1, NIPS'15, page 1171-1179, Cambridge, MA, USA. MIT Press.
Qingyu Chen, Yifan Peng, and Zhiyong Lu. 2019. BioSentVec: Creating sentence embeddings for biomedical texts. In 2019 IEEE International Conference on Healthcare Informatics (ICHI), pages 1-5. IEEE.
Christopher Hidey, Tuhin Chakrabarty, Tariq Alhindi, Siddharth Varia, Kriste Krstovski, Mona Diab, and Smaranda Muresan. 2020.
DeSePtion: Dual sequence prediction and adversarial examples for improved fact-checking. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8593-8606, Online. Association for Computational Linguistics.
Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020a. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240.
Nayeon Lee, Yejin Bang, Andrea Madotto, and Pascale Fung. 2020b. Misinformation has high perplexity. arXiv preprint arXiv:2006.04666.
Xiangci Li, Gully Burns, and Nanyun Peng. 2020. A paragraph-level multi-task learning model for scientific fact-verification. arXiv preprint arXiv:2012.14500.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
Zhenghao Liu, Chenyan Xiong, Maosong Sun, and Zhiyuan Liu. 2020. Fine-grained fact verification with kernel graph attention network. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7342-7351.
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 528-540, New Orleans, Louisiana. Association for Computational Linguistics. +Ronak Pradeep, Xueguang Ma, Rodrigo Nogueira, and Jimmy Lin. 2021. Scientific claim verification with VerT5erini. In Proceedings of the 12th International Workshop on Health Text Mining and Information Analysis, pages 94-103, online. Association for Computational Linguistics. +Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67. +Sofia Serrano and Noah A. Smith. 2019. Is attention interpretable? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2931-2951, Florence, Italy. Association for Computational Linguistics. +Amir Soleimani, Christof Monz, and Marcel Worring. 2020. Bert for evidence retrieval and claim verification. In European Conference on Information Retrieval, pages 359-366. Springer. +Xiaobing Sun and Wei Lu. 2020. Understanding attention for text classification. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3418-3428, Online. Association for Computational Linguistics. +James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. FEVER: a large-scale dataset for fact extraction and VERIFICATION. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 809-819, New Orleans, Louisiana. Association for Computational Linguistics. + +David Wadden, Shanchuan Lin, Kyle Lo, Lucy Lu Wang, Madeleine van Zuylen, Arman Cohan, and Hannaneh Hajishirzi. 2020. 
Fact or fiction: Verifying scientific claims. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7534-7550, Online. Association for Computational Linguistics.
Sarah Wiegreffe and Yuval Pinter. 2019. Attention is not not explanation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 11-20, Hong Kong, China. Association for Computational Linguistics.
Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1480-1489.
Wenpeng Yin and Dan Roth. 2018. TwoWingOS: A two-wing optimization strategy for evidential claim verification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 105-114, Brussels, Belgium. Association for Computational Linguistics.
Jie Zhou, Xu Han, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, and Maosong Sun. 2019. GEAR: Graph-based evidence aggregating and reasoning for fact verification. In Proceedings of ACL 2019.
\ No newline at end of file diff --git a/abstractrationalestanceajointmodelforscientificclaimverification/images.zip b/abstractrationalestanceajointmodelforscientificclaimverification/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..ea8c2c4b2aeac9393b13a65a17625018d9ded9c8 --- /dev/null +++ b/abstractrationalestanceajointmodelforscientificclaimverification/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:257ef4207f60c4a2cf0e0082c6012f811d690295fcc51b88c70711de8f7eb253 +size 385685 diff --git a/abstractrationalestanceajointmodelforscientificclaimverification/layout.json b/abstractrationalestanceajointmodelforscientificclaimverification/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..ecc73947a8e1bacf7f8f228228ea84af6e67aa2b --- /dev/null +++ b/abstractrationalestanceajointmodelforscientificclaimverification/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8ae082651477493561a5be5e73b0bd01013f6e05039b13a6b36866cc57e19692 +size 261604 diff --git a/achievingmodelrobustnessthroughdiscreteadversarialtraining/fe2e693d-68c9-4f6e-979a-12bdc97cf1ca_content_list.json b/achievingmodelrobustnessthroughdiscreteadversarialtraining/fe2e693d-68c9-4f6e-979a-12bdc97cf1ca_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..5dbf7b90106891a3537a1ce0be9d499137cdbf9b --- /dev/null +++ b/achievingmodelrobustnessthroughdiscreteadversarialtraining/fe2e693d-68c9-4f6e-979a-12bdc97cf1ca_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5615793074297e89d47da8480d03ca6b2d1e28510f832a8492666036cb8935e8 +size 105954 diff --git a/achievingmodelrobustnessthroughdiscreteadversarialtraining/fe2e693d-68c9-4f6e-979a-12bdc97cf1ca_model.json b/achievingmodelrobustnessthroughdiscreteadversarialtraining/fe2e693d-68c9-4f6e-979a-12bdc97cf1ca_model.json new file mode 100644 index 
0000000000000000000000000000000000000000..4254c7a40a5b1afffa9a0b13697a5034d0189142 --- /dev/null +++ b/achievingmodelrobustnessthroughdiscreteadversarialtraining/fe2e693d-68c9-4f6e-979a-12bdc97cf1ca_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:907ff0eee1a28f48d2efaa38a570a7b5cbcab88ef2f6bfd1a5b61c68a0f39d4e +size 127606 diff --git a/achievingmodelrobustnessthroughdiscreteadversarialtraining/fe2e693d-68c9-4f6e-979a-12bdc97cf1ca_origin.pdf b/achievingmodelrobustnessthroughdiscreteadversarialtraining/fe2e693d-68c9-4f6e-979a-12bdc97cf1ca_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..54093a0de3487039f2aeebb81d66f0fb05700e9f --- /dev/null +++ b/achievingmodelrobustnessthroughdiscreteadversarialtraining/fe2e693d-68c9-4f6e-979a-12bdc97cf1ca_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4bb71154ac32dc9180e6c3acdf16e648231096c72af7b6e7928cf7edb152618a +size 1532404 diff --git a/achievingmodelrobustnessthroughdiscreteadversarialtraining/full.md b/achievingmodelrobustnessthroughdiscreteadversarialtraining/full.md new file mode 100644 index 0000000000000000000000000000000000000000..12e1fd76a5aaae40f00f4ce80765ec45aecfb653 --- /dev/null +++ b/achievingmodelrobustnessthroughdiscreteadversarialtraining/full.md @@ -0,0 +1,382 @@ +# Achieving Model Robustness through Discrete Adversarial Training + +Maor Ivgi + +Tel-Aviv University + +maorivgi@mail.tau.ac.il + +Jonathan Berant + +Tel-Aviv University + +The Allen Institute for AI + +joberant@cs.tau.ac.il + +# Abstract + +Discrete adversarial attacks are symbolic perturbations to a language input that preserve the output label but lead to a prediction error. While such attacks have been extensively explored for the purpose of evaluating model robustness, their utility for improving robustness has been limited to offline augmentation only. 
Concretely, given a trained model, attacks are used to generate perturbed (adversarial) examples, and the model is re-trained exactly once. In this work, we address this gap and leverage discrete attacks for online augmentation, where adversarial examples are generated at every training step, adapting to the changing nature of the model. We propose (i) a new discrete attack, based on best-first search, and (ii) random sampling attacks that unlike prior work are not based on expensive search-based procedures. Surprisingly, we find that random sampling leads to impressive gains in robustness, outperforming the commonly-used offline augmentation, while leading to a speedup at training time of $\sim 10\mathrm{x}$ . Furthermore, online augmentation with search-based attacks justifies the higher training cost, significantly improving robustness on three datasets. Last, we show that our new attack substantially improves robustness compared to prior methods. + +# 1 Introduction + +Adversarial examples are inputs that are slightly, but intentionally, perturbed to create a new example that is misclassified by a model (Szegedy et al., 2014). Adversarial examples have attracted immense attention in machine learning (Goodfellow et al., 2015; Carlini and Wagner, 2017; Papernot et al., 2017) for two important, but separate, reasons. First, they are useful for evaluating model robustness, and have revealed that current models are over-sensitive to minor perturbations. Second, adversarial examples can improve robustness: training on adversarial examples reduces the brittleness and over-sensitivity of deep learning models to + +![](images/8810663b71d1f0f9bc9c289a722a9c85db7c1b2479f4e2170802de80651e87b0.jpg) + +![](images/937baa9587da996922ebde48f2106651a7ba83d76a75b08429fd69d3a9a80178.jpg) + +![](images/d64334038a3a97c7837e7cca41b01ee2edb919f86aa45a2a7fd873f669e8aa4e.jpg) +Figure 1: Robust accuracy vs. 
slowdown in training time, comparing different methods to Baseline (purple pentagon); x-axis in logarithmic scale. The popular ADVOFF (blue squares, offline augmentation with adversarial examples) is $10\mathrm{x}$ slower than our simple augmentation with 4 (8) random samples (triangles, RANDOFF-4, RANDOFF-8) and achieves similar or worse robust accuracy. Our online augmentation with adversarial examples (ADVON, yellow circles) significantly improves robust accuracy, but is expensive to train. +
such perturbations (Alzantot et al., 2018; Jin et al., 2020; Li et al., 2020; Lei et al., 2019; Wallace et al., 2019; Zhang et al., 2020; Garg and Ramakrishnan, 2020; Si et al., 2020a; Goel et al., 2021). +
Training and evaluating models with adversarial examples has had considerable success in computer vision, with gradient-based techniques like FGSM (Goodfellow et al., 2015) and PGD (Madry et al., 2018). In computer vision, adversarial examples can be constructed by considering a continuous space of imperceptible perturbations around image pixels. Conversely, language is discrete, and any perturbation is perceptible. Thus, robust models must be invariant to input modifications that preserve semantics, such as synonym substitutions (Alzantot et al., 2018; Jin et al., 2020), paraphrasing (Tan et al., 2020), or typos (Huang et al., 2019).
This body of work has mostly focused on evaluating robustness, rather than improving it, which naturally led to the development of complex combinatorial search algorithms, whose goal is to find adversarial examples in the exponential space of perturbations. +
In this work, we address a major research gap in the current literature on improving robustness with discrete attacks. Specifically, past work (Alzantot et al., 2018; Ren et al., 2019; Jin et al., 2020) only considered offline augmentation, where a discrete attack is used to generate adversarial examples and the model is re-trained exactly once with those examples. This ignores online augmentation, which has had success in computer vision (Kurakin et al., 2017; Perez and Wang, 2017; Madry et al., 2018), where adversarial examples are generated in each training step, adapting to the changing model. Moreover, simple data augmentation techniques, such as randomly sampling from the space of synonym substitutions and adding the generated samples to the training data, have not been investigated and compared to offline adversarial augmentation. We address this lacuna and systematically compare online augmentation to offline augmentation, as well as to simple random sampling techniques. To our knowledge, we are the first to evaluate online augmentation with discrete attacks on a wide range of NLP tasks. Our results show that online augmentation leads to significant improvement in robustness compared to prior work, and that simple random augmentation achieves comparable results to the common offline augmentation at a fraction of the complexity and training time. +
Moreover, we present a new search algorithm for finding adversarial examples, Best-First search over a Factorized graph (BFF), which alleviates the greedy nature of previously proposed algorithms. BFF improves search by incorporating backtracking, allowing the search to revisit previously discarded paths once the current one is revealed to be sub-optimal.
+ +![](images/0a77b426876b36d98ceeb67a39b10c4b726b1a70d8fcfbb14c897a334655f20d.jpg) +Expected: Positive +Figure 2: Given a movie review $x$ , the model $A$ is robust to a set of perturbations, while $A'$ is not. + +We evaluate model robustness on three datasets: BoolQ (Clark et al., 2019), IMDB (Maas et al., 2011), and SST-2 (Socher et al., 2013), which vary in terms of the target task (question answering and sentiment analysis) and input length. Surprisingly, we find across different tasks (Fig. 1) that augmenting each training example with 4-8 random samples from the synonym substitution space performs as well as (or better than) the commonly used offline augmentation, while being simpler and $10\mathrm{x}$ faster to train. Conversely, online augmentation makes better use of the extra computational cost, and substantially improves robust accuracy compared to offline augmentation. Additionally, our proposed discrete attack algorithm, BFF, outperforms prior work by a wide margin. Our data and code are available at https://github.com/Mivg/robust_transformers. + +# 2 Problem Setup and Background + +Problem setup We focus in this work on the supervised classification setup, where given a training set $\{x_{j},y_{j}\}_{j = 1}^{N}$ sampled from $\mathcal{X}\times \mathcal{Y}$ , our goal is to learn a mapping $A:\mathcal{X}\to \mathcal{Y}$ that achieves high accuracy on held-out data sampled from the same distribution. Moreover, we want the model $A$ to be robust, i.e., invariant to a set of pre-defined label-preserving perturbations to $x$ , such as synonym substitutions. Formally, for any natural language input $x$ , a discrete attack space of label-preserving perturbations $S(x)\subset \mathcal{X}$ is defined. Given a labeled example $(x,y)$ , a model $A$ is robust w.r.t $x$ , if $A(x) = y$ and for any $\bar{x}\in S(x)$ , the output $A(\bar{x}) = A(x)$ . An example $\bar{x}\in S(x)$ such that $A(\bar{x})\neq A(x)$ is called an adversarial example. 
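The definitions above (the attack space $S(x)$, robustness w.r.t. $x$, and adversarial examples) can be sketched concretely. The keyword classifier and hand-written synonym dictionary below are toy stand-ins for the fine-tuned transformers and curated dictionaries used in the paper, and exhaustive enumeration of $S(x)$ is only feasible for tiny inputs:

```python
from itertools import product

# Toy stand-in for the classifier A (the paper fine-tunes BERT/RoBERTa):
# label 1 (positive) iff positive cue words are at least as frequent as negative ones.
POS, NEG = {"amazing", "extraordinary", "great"}, {"boring", "dull", "bad"}

def predict(x):
    words = x.lower().split()
    return 1 if sum(w in POS for w in words) >= sum(w in NEG for w in words) else 0

def attack_space(x, synonyms):
    """Enumerate S(x): every combination of single-word synonym substitutions.
    Only feasible for tiny inputs -- |S(x)| grows exponentially with |x|."""
    options = [[w] + synonyms.get(w, []) for w in x.split()]
    return {" ".join(combo) for combo in product(*options)} - {x}

def is_robust(x, y, synonyms):
    """A is robust w.r.t. x iff A(x) = y and A(x_bar) = y for all x_bar in S(x)."""
    return predict(x) == y and all(predict(xb) == y for xb in attack_space(x, synonyms))

def robust_accuracy(examples, synonyms):
    """Fraction of held-out examples the model is robust to."""
    return sum(is_robust(x, y, synonyms) for x, y in examples) / len(examples)

SYN = {"amazing": ["extraordinary"], "movie": ["film"]}  # toy synonym dictionary
```

Note that a non-label-preserving "synonym" (e.g., mapping "amazing" to "dull") immediately breaks robustness of even a sensible model, which is why the validity of the attack space itself matters.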
We assume $A$ provides not only a prediction but a distribution $p_A(x)\in \Delta^{|\mathcal{Y}|}$ over the possible classes, where $\Delta$ is the simplex, and denote the probability $A$ assigns to the gold label by $[p_A(x)]_y$ . Fig. 2 shows an example from sentiment analysis, +
where a model $A$ is robust, while $A^{\prime}$ is not, w.r.t $x$ . +
Robustness is evaluated with robust accuracy (Tsipras et al., 2019), i.e., the fraction of examples a model is robust to over some held-out data. Typically, the size of the attack space $S(x)$ is exponential in the size of $x$ and it is not feasible to enumerate all perturbations. Instead, an upper bound is estimated by searching for a set of adversarial attacks, i.e., "hard" examples in $S(x)$ for every $x$ , and estimating robust accuracy w.r.t. that set. +
Improving robustness with discrete attacks Since language is discrete, a typical approach for evaluating robustness is to use combinatorial optimization methods to search for adversarial examples in the attack space $S(x)$ . This has been repeatedly shown to be an effective attack method on pre-trained models (Alzantot et al., 2018; Lei et al., 2019; Ren et al., 2019; Li et al., 2020; Jin et al., 2020; Zang et al., 2020). However, in terms of improving robustness, discrete attacks have thus far been mostly used with offline augmentation (defined below) and have led to limited robustness gains. In this work, we examine the more costly but potentially more beneficial online augmentation. +
Offline vs. online augmentation Data augmentation is a common approach for improving generalization and robustness, where variants of training examples are automatically generated and added to the training data (Simard et al., 1998). Here, discrete attacks can be used to generate these examples. We consider both offline and online data augmentation and focus on improving robustness with adversarial examples.
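The two augmentation regimes can be sketched minimally. Here `attack` is a hypothetical placeholder for any discrete attack (offline it would be run against a fixed trained model, online it would close over the live model); the function names and toy data are illustrative assumptions, not the paper's code:

```python
import random

def offline_augment(train_set, attack):
    """Offline: attack a *fixed* trained model once, then re-train on the union
    of the original examples and their perturbed, label-preserving copies."""
    return train_set + [(attack(x, y), y) for x, y in train_set]

def online_batches(train_set, attack, batch_size=8, steps=3, seed=0):
    """Online: at every training step, half the batch is clean and half is
    adversarial, generated w.r.t. the model's *current* state."""
    rng = random.Random(seed)
    for _ in range(steps):
        clean = rng.sample(train_set, batch_size // 2)
        yield clean + [(attack(x, y), y) for x, y in clean]  # labels preserved

# Demo with a trivial placeholder "attack" that merely uppercases the input.
demo_train = [("good movie", 1), ("bad movie", 0), ("great film", 1), ("dull film", 0)]
demo_batches = list(online_batches(demo_train, lambda x, y: x.upper(),
                                   batch_size=4, steps=2))
```

The structural difference is only *when* `attack` is invoked: once as pre-processing (offline) versus inside every step of the training loop (online), which is what lets online examples adapt to the changing model.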
+
Given a training set $\{(x_j, y_j)\}_{j=1}^N$ , offline data augmentation involves (a) training a model $A$ over the training data, (b) for each training example $(x_j, y_j)$ , generating a perturbation w.r.t. $A$ (using some discrete attack) and labeling it with $y_j$ , and (c) training a new model over the union of the original training set and the generated examples. This is termed offline augmentation because examples are generated with respect to a fixed model $A$ . +
Online data augmentation In this setup, examples are generated at training time w.r.t. the current model $A$ . This is more computationally expensive, as examples must be generated during training and not as pre-processing, but examples can adapt to the model over time. In each step, half the batch contains examples from the training set, and half are adversarial examples generated by some discrete attack w.r.t. the model's current state. +
Online augmentation has been used to improve robustness in NLP with gradient-based approaches (Jia et al., 2019; Shi et al., 2020; Zhou et al., 2020), but to the best of our knowledge has been overlooked in the context of discrete attacks. In this work, we are the first to propose model-agnostic online augmentation training, which uses automatically generated discrete adversarial attacks to boost overall robustness in NLP models. +
# 3 The Attack Space +
An attack space for an input with respect to a classification task can be intuitively defined as the set of label-preserving perturbations over the input. A popular attack space $S(x)$ , which we adopt, is the space of synonym substitutions (Alzantot et al., 2018; Ren et al., 2019). Given a synonym dictionary that provides a set of synonyms $\operatorname{Syn}(w)$ for any word $w$ , the attack space $S_{\operatorname{syn}}(x)$ for an utterance $x = (w_1, \dots, w_n)$ contains all utterances that can be obtained by replacing one or more words $w_i$ with one of their synonyms.
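Drawing a random utterance from such a synonym-substitution space can be sketched as follows. The cap on the fraction of substituted words ($d$) and the pluggable validity filter `phi` mirror the constraints the paper places on the space; the filter here is a trivial placeholder (the paper's real filter ensembles POS tags, GPT-2 perplexity, and USE similarity, see §5.2), and all names are illustrative:

```python
import math
import random

def sample_from_space(x, synonyms, phi=lambda words, i, cand: True, d=0.1, seed=0):
    """Draw one utterance from the synonym-substitution space of x.
    At most ceil(d * |x|) words are substituted, and each candidate must pass
    the validity filter phi(words, i, cand) -> bool (placeholder here)."""
    rng = random.Random(seed)
    words = x.split()
    cap = math.ceil(d * len(words))
    replaceable = [i for i, w in enumerate(words) if w in synonyms]
    for i in rng.sample(replaceable, min(cap, len(replaceable))):
        candidates = [c for c in synonyms[words[i]] if phi(words, i, c)]
        if candidates:
            words[i] = rng.choice(candidates)
    return " ".join(words)

# Demo: with d = 0.5 on a 4-word utterance, both replaceable words get substituted.
demo = sample_from_space("such an amazing movie",
                         {"amazing": ["extraordinary"], "movie": ["film"]}, d=0.5)
```

Sampling like this requires no forward passes through the attacked model, which is what makes random augmentation from the space so much cheaper than search-based attacks.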
Typically, the number of words from $x$ allowed to be substituted is limited to at most $D = \lceil d \cdot |x| \rceil$ , where $d \in \{0.1, 0.2\}$ is a common choice. +
Synonym substitutions are context-sensitive, i.e., substitutions might only be appropriate in certain contexts. For example, in Fig. 3, replacing the word "like" with its synonym "similar" (red box) is invalid, since "like" is a verb in this context. Consequently, past work (Ren et al., 2019; Jin et al., 2020) filtered $S_{\text{syn}}(x)$ using a context-sensitive filtering function $\Phi_x(w_i, \bar{w}_i) \in \{0, 1\}$ , which determines whether substituting a word $w_i$ from the original utterance $x$ with its synonym $\bar{w}_i$ is valid in a particular context. For instance, an external model can check whether the substitution maintains the part-of-speech tag, and whether the overall semantics is maintained. We define the filtered synonym substitution space $S_{\Phi}(x)$ as the set of all utterances $\bar{x}$ that can be generated through a sequence of no more than $D$ single-word substitutions from the original utterance that are valid according to $\Phi(\cdot, \cdot)$ . In §5.2, we describe the details of the synonym dictionary and the function $\Phi$ . +
![](images/148154a816a7fa9accf6c69941f4129f8c63c99ed3ee9093802795152583398d.jpg) +Figure 3: Example of an attack space, and the paths taken by a greedy algorithm and best-first search. An adversarial example has a probability $p < 0.5$ for the gold positive label. +
# 4 Best-first Search Over a Factorized Graph +
Searching over the attack space $S_{\Phi}(x)$ can be naturally viewed as a search problem over a directed acyclic graph (DAG), $G = (\mathcal{U}, \mathcal{E})$ , where each node $u_{\bar{x}} \in \mathcal{U}$ is labeled by an utterance $\bar{x}$ , and edges $\mathcal{E}$ correspond to single-word substitutions, valid according to $\Phi(\cdot)$ .
The graph is directed and acyclic, since only substitutions of words from the original utterance $x$ are allowed (see Fig. 3). Because there is a one-to-one mapping from the node $u_{\bar{x}}$ to the utterance $\bar{x}$ , we will use the latter to denote both the node and the utterance. + +Discrete attacks use search algorithms to find an adversarial example in $S(x)$ . The search is guided by a heuristic scoring function $s_A(x) \coloneqq [p_A(x)]_y$ , where the underlying assumption is that utterances that give lower probability to the gold label are closer to an adversarial example. A popular choice for a search algorithm in NLP is greedy search, illustrated in Fig. 3. Specifically, one holds in step $t$ the current node $x_t$ , where $t$ words have been substituted in the source node $x_0 = x$ . Then, the model $A(\cdot)$ is run on the frontier, that is, all out-neighbor nodes $\mathcal{N}(x_t) = \{\hat{x}_{t+1} \mid (x_t, \hat{x}_{t+1}) \in \mathcal{E}\}$ , and the one that minimizes the heuristic scoring function is selected: $x_{t+1} \coloneqq \operatorname*{argmin}_{\hat{x} \in \mathcal{N}(x_t)} s_A(\hat{x})$ . + +While greedy search has been used for character-flipping (Ebrahimi et al., 2018), it is ill-suited in the space of synonym substitutions. The degree of nodes is high - assuming $n_{\mathrm{rep}}$ words can be replaced in the text, each with $K$ possible synonyms, then the out degree is $O(n_{\mathrm{rep}} \cdot K)$ . This results in an infeasible number of forward passes through the attacked model even for a small number of search + +iterations. + +To enable effective search through the search space, we (a) factorize the graph such that the out-degree of nodes is lower, and (b) use a best-first search algorithm. We describe those next. + +Graph factorization To reduce the out-degree of a node in the search space and thus improve its efficiency, we can split each step into two. 
First, choose a position to substitute in the utterance; Second, choose a substitution for that position. This reduces the number of evaluations of $A$ per step from $O(n_{\mathrm{rep}} \cdot K)$ to $O(n_{\mathrm{rep}} + K)$ . To estimate the score of a position $i$ , one can mask the word $w_i$ with a mask token $\tau$ and measure $s_A(x_{w_i \rightarrow \tau})$ where $x_{w_i \rightarrow \tau}$ is the utterance $x$ where the word in position $i$ is replaced by the mask $\tau$ . + +We can describe this approach as search over a bi-partite DAG $\hat{G} = (\mathcal{U}\cup \mathcal{W},\hat{\mathcal{E}})$ . The nodes $\mathcal{U}$ are utterances like in $G$ , and the new nodes are utterances with a single mask token $\mathcal{W} = \{\bar{x}_{w_i\rightarrow \tau}\mid \bar{x}\in S(x)\wedge w_i$ is a word in $x\}$ . The edges comprise two types: $\hat{\mathcal{E}} = \mathcal{E}_1\cup \mathcal{E}_2$ . The edges $\mathcal{E}_1$ are from utterances to masked utterances: $\mathcal{E}_1 = \{(\bar{x},\bar{x}_{w_i\rightarrow \tau})\} \subset \mathcal{U}\times \mathcal{W}$ , and $\mathcal{E}_2 = \{(\bar{x}_{w_i\rightarrow \tau},\bar{x}_{w_i\rightarrow w_{syn}})\} \subset \mathcal{W}\times \mathcal{U}$ where $w_{syn}\in \operatorname {Syn}(w_i)$ . In Figure 3, the two rightmost nodes in each row would be factorized together as they substitute the same word, and the algorithm will evaluate only one of them to estimate the potential benefit of substituting "movie". + +Best-first search A factorized graph makes search possible by reducing the out-degree of nodes. However, greedy search is still sub-optimal. This is since it relies on the heuristic search function to be a good estimate of the distance to an adversarial example, which can often be false. Consider the example in Fig. 3. The two adversarial examples (with $p = 0.4$ or $p = 0.45$ ) are not reachable from the best node after the first step ( $p = 0.6$ ), only from the second-best ( $p = 0.65$ ). 
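A best-first strategy, which keeps a score-keyed min-heap over the whole frontier and can therefore backtrack to an earlier node, avoids this trap. A minimal runnable sketch follows; the toy scorer, predictor, and neighbor generator are illustrative assumptions, and unlike the paper's full BFF this sketch searches the unfactorized substitution graph:

```python
import heapq

def best_first_attack(x, y, predict, score, neighbors, budget=100):
    """Best-first search for an adversarial example: repeatedly expand the
    frontier utterance with the lowest score(x) ~ s_A(x) (gold-label
    probability). Returns the best utterance found, an adversarial one if
    predict(result) != y."""
    heap = [(score(x), x)]
    best_score, best = score(x), x
    seen, used = {x}, 0
    while heap and used < budget:
        s, cur = heapq.heappop(heap)
        if s < best_score:
            best_score, best = s, cur
        if predict(best) != y:  # adversarial example found
            break
        for nxt in neighbors(cur):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(heap, (score(nxt), nxt))
                used += 1  # each scored neighbor costs one forward pass
    return best

# Toy stand-ins for the model and the substitution graph:
SYNONYMS = {"amazing": ["fine"], "movie": ["film"]}

def demo_neighbors(x):
    words = x.split()
    for i, w in enumerate(words):
        for s in SYNONYMS.get(w, []):
            yield " ".join(words[:i] + [s] + words[i + 1:])

def demo_score(x):  # pretend gold-label ("positive") probability
    return 0.9 if "amazing" in x.split() else 0.3

def demo_predict(x):
    return 1 if demo_score(x) > 0.5 else 0

demo_adv = best_first_attack("such an amazing movie", 1,
                             demo_predict, demo_score, demo_neighbors)
```

Because all frontier nodes stay in the heap, a branch that looked unpromising earlier is automatically revisited once the currently best branch stops improving, at the negligible extra cost of heap operations.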
+
Best-first search (Pearl, 1984) overcomes this at a negligible cost, by holding a min-heap over the nodes of the frontier of the search space (Alg. 1). In each step, we pop the next utterance, which assigns the lowest probability to the gold label, and push all its neighbors into the heap. When a promising branch turns out to be sub-optimal, search can resume from an earlier node to find a better solution, as shown in the blue path in Figure 3. To bound the cost of finding a single adversarial example, we bound the number of forward passes through the model $A$ with a budget parameter $B$ . To further reduce "greediness", search can use a beam by popping more than one node in each step, expanding all their neighbors and pushing the results back into the heap. Our final approach uses Best-First search over a Factorized graph, and is termed BFF. +
Algorithm 1: BFF
```txt
input : model A, factorized graph G, utterance x
heap <- {(x, s_A(x))}
x* <- x
while |heap| > 0 and budget B not exhausted do
    x_bar <- heap.pop()
    x* <- argmin_{x_hat in {x_bar, x*}} s_A(x_hat)
    if A(x*) != y then break
    for x_hat in N(x_bar) do
        heap.push((x_hat, s_A(x_hat)))
return x*
```
+
# 5 Experiments +
We conduct a thorough empirical evaluation of model robustness across a wide range of attacks and training procedures. +
# 5.1 Experimental Setup +
To evaluate our approach over diverse settings, we consider three different tasks: text classification, sentiment analysis and question answering, two of which contain long passages that result in a large attack space (see Table 1). +
1. SST-2: Based on the Stanford sentiment treebank (Socher et al., 2013), SST-2 is a binary (positive/negative) classification task containing 11,855 sentences describing movie reviews.
SST-2 has been frequently used for evaluating robustness. +2. IMDB (Maas et al., 2011): A binary (positive/negative) text classification task, containing 50K reviews from IMDB. Here, passages are long and thus the attack space is large (Table 1). +3. BoolQ (Clark et al., 2019): contains 16,000 yes/no questions over Wikipedia paragraphs. This task is perhaps the most interesting, because the attack space is large and answering requires global passage understanding. We allow word substitutions in the paragraph only and do not substitute nouns, verbs, or adjectives that appear in the question to avoid non-label-preserving perturbations. Further details can be found in App. A.2. +
Models We consider a wide array of models and evaluate both their downstream accuracy and robustness. In all models, we define a budget of $B = 1000$ , which specifies the maximal number of allowed forward passes through the model for finding an adversarial example. All results are an average of 3 runs. +
To demonstrate the effectiveness of BFF for both robustness evaluation as well as adversarial training, we compare it to a recent state-of-the-art discrete attack, TEXTFOOLER (Jin et al., 2020), which we denote in model names below by the prefix TxF. The models compared are: +
- BASELINE: we fine-tune a pretrained language model on the training set. We use BERT-BASE (Devlin et al., 2019) for IMDB/SST-2 and ROBERTA-LARGE (Liu et al., 2019) for BoolQ. These baselines are on par with the current state of the art, allowing us to demonstrate the efficacy of our method. +- BFFOFF/TxFOFF: Offline augmentation with the BFF or TEXTFOOLER attacks. +- BFFON/TxFON: Online augmentation with the BFF or TEXTFOOLER attacks. +- RANDOFF- $L$ : We compare search-based algorithms to a simple and efficient approach that does not require any forward passes through the model $A$ . Specifically, we randomly sample $L$ utterances from the attack space for each example (without executing $A$ ) and add them to the training data.
+- RANDOM: A random sampling approach that does use the model $A$ . Here, we sample $B$ random utterances, pass them through $A$ , and return the attack that resulted in the lowest model probability. +- FREELB: For completeness, we also consider FREELB (Zhu et al., 2020), a popular gradient-based approach for improving robustness, which employs virtual adversarial training (see §6). This approach uses online augmentation, where examples are created by taking gradient steps w.r.t. the input embeddings to maximize the model's loss. Other gradient-based approaches (e.g., certified robustness) are not suitable when using pre-trained transformers, which we further discuss in §6. +
In a parallel line of work, Garg and Ramakrishnan (2020) and Li et al. (2020) used pre-trained language models to both define an attack space and to generate high-fidelity attacks in that space. While successful, these approaches are not suitable for our setting, due to the strong coupling between the attack strategy and the attack space itself. We further discuss this in §6. +
Evaluation We evaluate models on their downstream accuracy, as well as on robust accuracy, i.e., the fraction of examples against which the model is robust. Since exact robust accuracy is intractable to compute due to the exponential size of the attack space, we compute an upper bound by attacking each example with both BFF and TEXTFOOLER (TxF) with a budget of $B = 2000$ . An example is robust if we cannot find an utterance where the prediction is different from the gold label. We evaluate robust accuracy on 1000/1000/872 samples from the development sets of BoolQ/IMDB/SST-2. +
# 5.2 Attack Space +
Despite the myriad of works on discrete attacks, an attack space for synonym substitutions has not been standardized. While all past work employed a synonym dictionary combined with a $\Phi(\cdot, \cdot)$ filtering function (see §3), the particular filtering functions vary.
When examining the attack space proposed in TxF, we observed that attacks result in examples that are difficult to understand or are not label-preserving. Table 6 in App. A.4 shows several examples. For instance, in sentiment classification, the attack replaced "compelling" with "unconvincing" in the sentence "it proves quite unconvincing as an intense, brooding character study" which alters the meaning and the sentiment of the sentence. Therefore, we use a more strict definition of the filtering function and conduct a user study to verify it is label-preserving. + +Concretely, we use the synonym dictionary from Alzantot et al. (2018). We determine if a word substitution is context-appropriate by computing all single-word substitutions $(n_{\mathrm{rep}} \cdot K)$ and disallowing those that change the POS tag according to spaCy (Honnibal et al.) or increase perplexity according to GPT-2 (Radford et al., 2019) by more than $25\%$ . Similar to Jin et al. (2020), we also filter out synonyms that are not semantics-preserving according to the USE (Cer et al., 2018) model. The attack space includes any combination of allowed single-word substitutions, where the fraction of allowed substitutions is $d = 0.1$ . Implementation details are in App. A.2. We find that this ensemble of models reduces the number of substitutions that do not preserve semantics and are allowed by the filtering function. + +We check the validity of our more restrictive attack space with a user study, where we verify that our attack space is indeed label-preserving. The + +
| | $\vert x\vert$ | $n_{\mathrm{rep}}$ | $\vert \mathrm{Syn}(w)\vert$ | $\vert S_{\Phi}(x)\vert$ |
| --- | --- | --- | --- | --- |
| SST-2 | 8.9 | 2.7 | 2.4 | 27.7 |
| IMDB | 242.4 | 97.3 | 3.6 | $2.27 \times 10^{64}$ |
| BoolQ† | 97.7 | 38.7 | 3.6 | $3.64 \times 10^{25}$ |
+
Table 1: Statistics on datasets and the size of the attack space. We show the average number of words per utterance $|x|$ , the average number of words with substitutions $n_{\mathrm{rep}}$ , the average number of synonyms per replaceable word, and an estimation of the attack space size. +
details of the user study are in $\S 5.6$ . +
# 5.3 Robustness Results +
Table 2 shows accuracy on the development set, robust accuracy, and slowdown compared to BASELINE for all models and datasets. For downstream accuracy, training for robustness either maintains or slightly increases downstream accuracy. This is not the focus of this work, but is a nice side-effect. For robust accuracy, discrete attacks substantially improve robustness: $80.5 \rightarrow 85.3$ on SST-2, $41.2 \rightarrow 78.9$ on IMDB, and $50.0 \rightarrow 68.7$ on BoolQ, closing roughly half the gap from downstream accuracy. +
Comparing different attacks, online augmentation (BFFON), which has been overlooked in the context of discrete attacks, leads to dramatic robustness gains compared to other methods, but is slow to train - 20-270x slower than BASELINE. This shows the importance of continuous adaptation to the current vulnerabilities of the model. +
Interestingly, adding offline random samples (RANDOFF- $L$ ) consistently improves robust accuracy, and using $L = 12$ leads to impressive robustness gains without executing $A$ at all, outperforming BFFOFF in robust accuracy while being $\sim 5\mathrm{x}$ faster on IMDB and BoolQ. Moreover, random sampling is trivial to implement and independent of the attack strategy. Hence, the common practice of using offline augmentation with search-based attacks, such as BFFOFF, seems misguided, and a better solution is to use random sampling. Online random augmentation obtains impressive results, not far from BFFON, without applying any search procedure, but is very slow, since it uses the entire budget $B$ in every example.
+
Comparing BFF to TxF, we observe that BFF, which uses best-first search, outperforms TxF in both the online and offline settings. Last, FREELB, which is based on virtual adversarial training, improves robust accuracy at a low computational cost, but is dramatically outperformed by discrete search-based attacks, including BFF.

| Model | Acc. SST-2 | Acc. IMDB | Acc. BoolQ | Rob. Acc. SST-2 | Rob. Acc. IMDB | Rob. Acc. BoolQ | Slowdown SST-2 | Slowdown IMDB | Slowdown BoolQ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Baseline | 91.9 | 93.4 | 84.5 | 80.5 | 41.2 | 50.0 | ×1 | ×1 | ×1 |
| FREELB | 92.5 | 93.9 | 85.5 | 82.1 | 62.5 | 55.8 | ×1.8 | ×1.8 | ×3.9 |
| RANDOFF-1 | 91.9 | 93.5 | 85.6 | 83.5 | 50.3 | 52.2 | ×1.9 | ×1.5 | ×2.1 |
| RANDOFF-4 | 91.6 | 93.7 | 85.5 | 83.6 | 57.0 | 58.4 | ×3.8 | ×4.5 | ×5.1 |
| RANDOFF-8 | 91.1 | 93.8 | 86.1 | 83.3 | 60.9 | 61.3 | ×5.4 | ×8.0 | ×9.3 |
| RANDOFF-12 | 91.5 | 93.7 | 85.8 | 84.2 | 60.1 | 63.0 | ×6.3 | ×11.5 | ×13.2 |
| TxFOFF | 91.2 | 93.4 | 86.5 | 83.5 | 49.0 | 61.5 | ×3.0 | ×56.1 | ×8.6 |
| BFFOFF | 91.8 | 93.7 | 85.8 | 84.6 | 54.3 | 62.3 | ×5.4 | ×60.0 | ×63.2 |
| RANDOM | 91.7 | 94.1 | 85.6 | 84.9 | 68.5 | 66.0 | ×14.8 | ×249.3 | ×280.4 |
| TxFON | 91.3 | 93.8 | 86.0 | 84.0 | 67.4 | 65.3 | ×3.9 | ×58.0 | ×28.1 |
| BFFON | 91.7 | 94.2 | 86.5 | 85.3 | 78.9 | 68.7 | ×21.1 | ×270.7 | ×215.9 |

Table 2: Accuracy on the evaluation set, robust accuracy, and slowdown in model training for all datasets.

| Model | IMDB: Rand | IMDB: TxF | IMDB: BFF | IMDB: Gen | BoolQ: Rand | BoolQ: TxF | BoolQ: BFF | BoolQ: Gen |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Baseline | 73.1 | 70.2 | 49.9 | 54.1 | 62.1 | 67.7 | 50.2 | 52.0 |
| RND-OA | 74.8 | 74.7 | 52.9 | 59.1 | 70.9 | 72.0 | 59.4 | 62.0 |
| TxFOFF | 67.7 | 77.5 | 52.5 | 56.7 | 71.0 | 75.0 | 61.5 | 63.4 |
| BFFOFF | 75.4 | 76.9 | 58.6 | 64.1 | 70.9 | 74.8 | 64.7 | 65.2 |
| RANDOM | 87.0 | 76.4 | 68.5 | 79.6 | 71.5 | 72.6 | 60.1 | 67.5 |
| TxFON | 81.1 | 84.2 | 69.7 | 73.7 | 73.4 | 74.8 | 65.3 | 67.4 |
| BFFON | 87.0 | 84.9 | 79.0 | 81.9 | 75.1 | 76.1 | 69.0 | 70.3 |

Table 3: Robust accuracy of different robust models w.r.t. particular discrete attacks. RND-OA is offline augmentation with a random attack and $B = 1000$ . Gen is our implementation of the Genetic Attack by Alzantot et al. (2018).

To summarize, random sampling leads to significant robustness gains at a small cost, outperforming the commonly used offline augmentation. Online augmentation leads to the best robustness, but is more expensive to train. +
# 5.4 Robustness across Attack Strategies +
A natural question is whether a model trained for robustness with one attack (e.g., BFF) is robust to examples generated by other attacks, which are potentially uncorrelated with it. To answer this, we evaluate the robustness of our models to attacks generated by BFF, TxF, and random sampling. Moreover, we evaluate robustness to a genetic attack, which should not be correlated with BFF and TxF: we re-implement the genetic attack algorithm from Alzantot et al. (2018) (details in A.3), and examine the robustness of our model to this attack. All attacks are with a budget of $B = 2000$ . +
Table 3 shows the results of this evaluation. We observe that BFFON obtains the highest robust accuracy w.r.t. all attacks: BFF, TxF, random sampling, and a genetic attack. In offline augmentation, we observe again that BFFOFF obtains good robust accuracy, higher than or comparable to all other offline models for any attack strategy. This result highlights the generality of BFF for improving model robustness. +
# 5.5 Success Rate Results +
To compare the different attacks proposed in §4, we analyze the success rate against BASELINE, i.e., the proportion of examples for which an attack finds an adversarial example as a function of the budget $B$ . +
Fig. 4 compares the success rate of different attacks. We observe that BFF-based attacks have the highest success rate after a few hundred executions.
TEXTFOOLER performs well at first, finding adversarial examples for many inputs, but its success then plateaus. Similarly, a random approach, which ignores the graph structure, starts with a relatively high success rate, as it explores far regions of the graph, but fails to properly utilize its budget and then falls behind.

BFF combines backtracking with graph factorization. Removing backtracking, i.e., using greedy search over the factorized graph, decreases the success rate, especially on BoolQ. Greedy search without graph factorization leads to a low success rate due to the large number of neighbors of each node, which quickly exhausts the budget. Moreover, BFF with beam size 2 (popping 2 items from the heap in each step) has lower performance when the budget is $B \leq 2000$, as executions are expended on less promising utterances, but could improve the success rate given a larger budget.

Lastly, due to our stricter definition of the attack space, described in §5.2, the success rates of BFF and TxF are lower compared to Jin et al. (2020). To verify the correctness of our attacks, we run BFF and TxF in their attack space, which uses a larger synonym dictionary, a more permissive function $\Phi$, and does not limit the number of substitutions $D$ or the budget $B$. We obtain a similar success rate, close to $100\%$. Nevertheless, we argue that our attack space, validated by users to be label-preserving, is preferable, and leave standardization of attack spaces through a broad user study to future work.

![](images/930b717c723a8f4dd3ad31c739564b2b64569ea166c140fe977e93b0f7dfce1b.jpg)

![](images/d2273dd718e5b0a8ec3951e84c8a35cadbcbe593bf6b009c255097cb6928b486.jpg)
Figure 4: Success rate of different attacks against BoolQ/IMDB BASELINE as a function of the budget.

![](images/dd5d8068ca9d062b2b9861ae21ed26f9e0d86eae5b684634fff34381417302c1.jpg)

|  | Original | Random | BFF |
| --- | --- | --- | --- |
| IMDB | 98.0 | 98.0 | 96.0 |
| BoolQ | 89.0 | 91.5 | 83.5 |
| SST-2 | 97.0 | 96.0 | 94.4 |

Table 4: Evaluating attack space validity. We show human performance on original examples, random examples, and examples generated with BFF.

# 5.6 User Study

Since a model is considered not robust even if a single adversarial sample flips its output label, the validity of adversarial examples in the attack space is crucial. When we examined attacks generated based on prior work, we found many label-flipping attacks. This was especially noticeable when using BFF attacks on tasks not evaluated in prior work (see examples in Appendix A.4). In this work, our focus was on evaluating different methods for increasing model robustness, and thus over-constraining the attack space to guarantee its validity was acceptable. We stress that our attack search space is more conservative than prior work, and is a strict subset of prior attack spaces (see Appendix A.2), leading to higher validity of adversarial examples.

We evaluate the validity of our attack space and the generated adversarial samples with a user study. We sample 100/100/50 examples from SST-2/BoolQ/IMDB, respectively, and for each example create two adversarial examples: (a) by random sampling, and (b) using a BFF attack. We ask 25 NLP graduate students to annotate both the original example and the two adversarial ones. Each example is annotated by two annotators, and each annotator sees only one version of an example. If human performance on random and adversarial examples is similar to the original task, this indicates the attack space is label-preserving.
Table 4 shows the results. Human performance on random examples is similar to performance on the original utterances. Human performance on examples generated with BFF is only mildly lower than on the original utterances, overall confirming that the attack space is label-preserving.

Ideally, the validity of adversarial examples should be as high as that of the original examples. However, a small degradation in random vs. original is expected, since the search space is not perfect, and similarly for BFF, since it is targeted at finding adversarial examples. Nevertheless, the observed drops were small, showing the advantage in validity compared to prior work. The minor irregularity in BoolQ between random and original is indicative of noise in the dataset.

# 6 Related Work

Adversarial attacks and robustness have attracted tremendous attention. We discuss work beyond improving robustness through adversarial attacks.

Certified Robustness is a class of methods that provide a mathematical certificate for robustness (Dvijotham et al., 2018; Gowal et al., 2018; Jia et al., 2019; Huang et al., 2019; Shi et al., 2020). The model is trained to minimize an upper bound on the loss of the worst-case attack. When this upper bound is low, we get a certificate of robustness against all attacks. While this approach has had success, it struggles when applied to transformers, since the upper bounds are propagated through many layers and become too loose to be practical.

Gradient-based methods In a white-box setting, adversarial examples can be generated by performing gradient ascent with respect to the input representation.
Gradient-based methods (Goodfellow et al., 2015; Madry et al., 2018) have been empirically successful (Gowal et al., 2018; Ebrahimi et al., 2018), but suffer from a few shortcomings: (a) they assume access to gradients; (b) they lose their effectiveness when combined with sub-word tokenization, since one cannot substitute words that have a different number of sub-words; and (c) they can generate noisy examples that do not preserve the output label. In parallel to our work, Guo et al. (2021) proposed a gradient-based approach that finds a distribution over the attack space at the token level, resulting in an efficient attack.

Virtual adversarial training In this approach, one does not generate explicit adversarial examples (Zhu et al., 2020; Jiang et al., 2020; Li and Qiu, 2020; Pereira et al., 2021). Instead, embeddings in an $\epsilon$-sphere around the input (that do not correspond to words) are sampled, and continuous optimization approaches are used to train for robustness. These works were shown to improve downstream accuracy, but did not result in better robust accuracy. Recently, Zhou et al. (2020) proposed a method that does improve robustness, but like other gradient-based methods, it is white-box, does not work well with transformers over sub-words, and leads to noisy samples. A similar approach was taken by Si et al. (2020b), who generate virtual attacks during training by interpolating offline-generated attacks.

Defense layers This approach adds normalization layers to the input before propagating it to the model, so that different input variations are mapped to the same representation (Wang et al., 2019; Mozes et al., 2020; Jones et al., 2020). While successful, this approach requires manual engineering and reduces model expressivity, as the input space is significantly restricted. A similar approach (Zhou et al., 2019) identifies adversarial inputs and predicts the original unperturbed input.
Pretrained language models as attacks In this work, we decouple the definition of the attack space from the attack strategy itself, which is cast as a search algorithm. This allows us to systematically compare different attack strategies and methods for improving robustness in the same setting. An orthogonal approach to ours was proposed by Garg and Ramakrishnan (2020) and Li et al. (2020), who used the fact that BERT was trained with the masked language modeling objective to predict possible semantics-preserving adversarial perturbations over the input tokens, thereby coupling the definition of the attack space with the attack strategy. While this approach showed great promise in efficiently generating valid adversarial examples, it does not permit any external constraint on the attack space and is thus not comparable to the attacks in this work. Future work can test whether robustness transfers across attack spaces and attack strategies by either (a) evaluating the robustness of models trained in this work against the aforementioned attacks (in their attack space), or (b) combining such attacks with online augmentation to train robust models and comparing to the attacks proposed in our work.

# 7 Conclusions

We examine achieving robustness through discrete adversarial attacks. We find that the popular approach of offline augmentation is sub-optimal in both speed and accuracy compared to random sampling, and that online augmentation leads to impressive gains. Furthermore, we propose BFF, a new discrete attack based on best-first search, and show that it outperforms past work both in terms of robustness improvement and attack success rate.

Together, our contributions highlight the key factors for success in achieving robustness through adversarial attacks, and open the door to future work on better and more efficient methods for achieving robustness in natural language understanding.
# Acknowledgements

We thank Mor Geva, Tomer Wolfson, Jonathan Herzig, Inbar Oren, Yuval Kirstain, Uri Shaham and Omer Levy for their useful comments. This research was partially supported by The Yandex Initiative for Machine Learning, and the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant ERC DELPHI 802800).

# References

Moustafa Alzantot, Yash Sharma, Supriyo Chakraborty, Huan Zhang, Cho-Jui Hsieh, and Mani B. Srivastava. 2019. GenAttack: Practical black-box attacks with gradient-free optimization. In Proceedings of the Genetic and Evolutionary Computation Conference, pages 1111-1119.
Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang. 2018. Generating natural language adversarial examples. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2890-2896, Brussels, Belgium. Association for Computational Linguistics.
Nicholas Carlini and David Wagner. 2017. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pages 39-57.
Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, et al. 2018. Universal sentence encoder. arXiv preprint arXiv:1803.11175.
Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. 2019. BoolQ: Exploring the surprising difficulty of natural yes/no questions. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2924-2936, Minneapolis, Minnesota. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding.
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics. +Krishnamurthy Dvijotham, Sven Gowal, Robert Stanforth, Relja Arandjelovic, Brendan O'Donoghue, Jonathan Uesato, and Pushmeet Kohli. 2018. Training verified learners with learned verifiers. arXiv preprint arXiv:1805.10265. +Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2018. HotFlip: White-box adversarial examples for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 31-36, Melbourne, Australia. Association for Computational Linguistics. +Siddhant Garg and Goutham Ramakrishnan. 2020. BAE: BERT-based adversarial examples for text classification. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6174-6181, Online. Association for Computational Linguistics. + +Karan Goel, Nazneen Rajani, Jesse Vig, Samson Tan, Jason Wu, Stephan Zheng, Caiming Xiong, Mohit Bansal, and Christopher Re. 2021. Robustness gym: Unifying the nlp evaluation landscape. arXiv preprint arXiv:2101.04840. +Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversarial examples. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. +Sven Gowal, Krishnamurthy Dvijotham, Robert Stanforth, Rudy Bunel, Chongli Qin, Jonathan Uesato, Relja Arandjelovic, Timothy Mann, and Pushmeet Kohli. 2018. On the effectiveness of interval bound propagation for training verifiably robust models. arXiv preprint arXiv:1810.12715. +Chuan Guo, Alexandre Sablayrolles, Hervé Jégou, and Douwe Kiela. 2021. Gradient-based adversarial attacks against text transformers. 
arXiv preprint arXiv:2104.13733.
Matthew Honnibal, Ines Montani, Sofie Van Landeghem, and Adriane Boyd. spaCy: Industrial-strength Natural Language Processing in Python.
Po-Sen Huang, Robert Stanforth, Johannes Welbl, Chris Dyer, Dani Yogatama, Sven Gowal, Krishnamurthy Dvijotham, and Pushmeet Kohli. 2019. Achieving verified robustness to symbol substitutions via interval bound propagation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4083-4093, Hong Kong, China. Association for Computational Linguistics.
Robin Jia, Aditi Raghunathan, Kerem Goksel, and Percy Liang. 2019. Certified robustness to adversarial word substitutions. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4129-4142, Hong Kong, China. Association for Computational Linguistics.
Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Tuo Zhao. 2020. SMART: Robust and efficient fine-tuning for pre-trained natural language models through principled regularized optimization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2177-2190, Online. Association for Computational Linguistics.
Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2020. Is BERT really robust? A strong baseline for natural language attack on text classification and entailment. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 8018-8025. AAAI Press.
Erik Jones, Robin Jia, Aditi Raghunathan, and Percy Liang. 2020.
Robust encodings: A framework for combating adversarial typos. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2752-2765, Online. Association for Computational Linguistics. +Alexey Kurakin, Ian J. Goodfellow, and Samy Bengio. 2017. Adversarial machine learning at scale. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net. +Qi Lei, Lingfei Wu, Pin-Yu Chen, Alex Dimakis, Inderjit S. Dhillon, and Michael J. Witbrock. 2019. Discrete adversarial attacks and submodular optimization with applications to text classification. In Proceedings of Machine Learning and Systems 2019, MLSys 2019, Stanford, CA, USA, March 31 - April 2, 2019. mlsys.org. +Linyang Li, Ruotian Ma, Qipeng Guo, Xiangyang Xue, and Xipeng Qiu. 2020. BERT-ATTACK: Adversarial attack against BERT using BERT. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6193–6202, Online. Association for Computational Linguistics. +Linyang Li and Xipeng Qiu. 2020. Tavat: Token-aware virtual adversarial training for language understanding. arXiv: Computation and Language. +Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. +Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142-150, Portland, Oregon, USA. Association for Computational Linguistics. +Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2018. Towards deep learning models resistant to adversarial attacks. 
In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net. +Maximilian Mozes, Pontus Stenetorp, Bennett Kleinberg, and Lewis D Griffin. 2020. Frequency-guided word substitutions for detecting textual adversarial examples. arXiv preprint arXiv:2004.05887. + +Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z Berkay Celik, and Ananthram Swami. 2017. Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia conference on computer and communications security, pages 506-519. +Judea Pearl. 1984. Heuristics: Intelligent Search Strategies for Computer Problem Solving, page 48. Addison-Wesley Longman Publishing Co., Inc., USA. +Lis Pereira, Xiaodong Liu, Hao Cheng, Hoifung Poon, Jianfeng Gao, and Ichiro Kobayashi. 2021. Targeted adversarial training for natural language understanding. arXiv preprint arXiv:2104.05847. +Luis Perez and Jason Wang. 2017. The effectiveness of data augmentation in image classification using deep learning. arXiv preprint arXiv:1712.04621. +Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9. +Shuhuai Ren, Yihe Deng, Kun He, and Wanxiang Che. 2019. Generating natural language adversarial examples through probability weighted word saliency. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1085-1097, Florence, Italy. Association for Computational Linguistics. +Zhouxing Shi, Huan Zhang, Kai-Wei Chang, Minlie Huang, and Cho-Jui Hsieh. 2020. Robustness verification for transformers. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. +Chenglei Si, Ziqing Yang, Yiming Cui, Wentao Ma, Ting Liu, and Shijin Wang. 2020a. 
Benchmarking robustness of machine reading comprehension models. arXiv preprint arXiv:2004.14004. +Chenglei Si, Zhengyan Zhang, Fanchao Qi, Zhiyuan Liu, Yasheng Wang, Qun Liu, and Maosong Sun. 2020b. Better robustness by more coverage: Adversarial training with mixup augmentation for robust fine-tuning. arXiv preprint arXiv:2012.15699. +Patrice Y Simard, Yann A LeCun, John S Denker, and Bernard Victorri. 1998. Transformation invariance in pattern recognition—tangent distance and tangent propagation. In Neural networks: tricks of the trade, pages 239–274. Springer. +Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642, Seattle, Washington, USA. Association for Computational Linguistics. + +Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus. 2014. Intriguing properties of neural networks. In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings. +Samson Tan, Shafiq Joty, Min-Yen Kan, and Richard Socher. 2020. It's morphin' time! Combating linguistic discrimination with inflectional perturbations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2920-2935, Online. Association for Computational Linguistics. +Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, and Aleksander Madry. 2019. Robustness may be at odds with accuracy. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net. +Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. 2019. Universal adversarial triggers for attacking and analyzing NLP. 
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2153-2162, Hong Kong, China. Association for Computational Linguistics.
Xiaosen Wang, Hao Jin, and Kun He. 2019. Natural language adversarial attacks and defenses in word level. arXiv preprint arXiv:1909.06723.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
Yuan Zang, Fanchao Qi, Chenghao Yang, Zhiyuan Liu, Meng Zhang, Qun Liu, and Maosong Sun. 2020. Word-level textual adversarial attacking as combinatorial optimization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6066-6080, Online. Association for Computational Linguistics.
Jingfeng Zhang, Xilie Xu, Bo Han, Gang Niu, Lizhen Cui, Masashi Sugiyama, and Mohan Kankanhalli. 2020. Attacks which do not kill training make adversarial learning stronger. arXiv preprint arXiv:2002.11242.
Yi Zhou, Xiaoqing Zheng, Cho-Jui Hsieh, Kai-Wei Chang, and Xuanjing Huang. 2020. Defense against adversarial attacks in NLP via Dirichlet neighborhood ensemble. arXiv preprint arXiv:2006.11627.
Yichao Zhou, Jyun-Yu Jiang, Kai-Wei Chang, and Wei Wang. 2019. Learning to discriminate perturbations for blocking adversarial attacks in text classification.
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4904-4913, Hong Kong, China. Association for Computational Linguistics.
Chen Zhu, Yu Cheng, Zhe Gan, Siqi Sun, Tom Goldstein, and Jingjing Liu. 2020. FreeLB: Enhanced adversarial training for natural language understanding. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.

# A Appendix

# A.1 Experimental Details

All of the code was written in Python and is available at https://github.com/Mivg/robust_transformers. The models are trained with the transformers library (Wolf et al., 2020). Whenever offline augmentation was used, the resulting adversarial samples were added to the training set and shuffled before training a new model with the same hyper-parameters as the baseline. Thus, the model is trained on $N \times L$ samples, where $N$ is the original number of samples and $L$ is the number of augmentations added per sample. For online augmentation, we run two parallel data loaders with different shuffling, each with half the required batch size. We then attack the samples in one batch and concatenate the most successful attack to the other batch. The model is fed the newly constructed batch, with identical weighting of the two halves. Here, we count a full epoch when every sample has passed through the model both as a perturbed and as an unperturbed sample; as such, the model is trained on $2N$ samples. For each dataset, we use the default train-dev split as described in the paper, and report accuracy on the development set. We train with the hyper-parameters described below:

SST-2: We fine-tuned a pre-trained cased BERT-BASE (Devlin et al., 2019) with max seq length $= 128$ over an Nvidia Titan XP GPU for three epochs with a batch size of 32 and a learning rate of $2e-5$.
IMDB: We fine-tuned a pre-trained cased BERT-BASE (Devlin et al., 2019) with max seq length $= 480$ over an Nvidia Titan XP GPU for three epochs with a batch size of 48 and a learning rate of $2e-5$.

BoolQ: We fine-tuned a pre-trained ROBERTA-LARGE (Liu et al., 2019) with max seq length $= 480$ over an Nvidia GTX 3090 GPU for three epochs with a batch size of 48 and a learning rate of $1e-5$.

For each parameter choice reported in Table 2, we ran three experiments with different random initializations and report the mean results. The respective standard deviations are given in Table 5. To fine-tune the models using the FreeLB (Zhu et al., 2020) method, we adapted the implementation from https://github.com/zhuchen03/FreeLB and used the following parameters:

SST-2: init-magnitude $= 0.6$, adversarial-steps $= 12$, adversarial-learning-rate $= 0.1$ and $l_{2}$ norm with no limit on the norm.

IMDB: init-magnitude $= 0.2$, adversarial-steps $= 4$, adversarial-learning-rate $= 0.2$ and $l_{2}$ norm with no limit on the norm.

BoolQ: init-magnitude $= 0.2$, adversarial-steps $= 4$, adversarial-learning-rate $= 0.2$ and $l_{2}$ norm with no limit on the norm.

| Model | Accuracy SST-2 | Accuracy IMDB | Accuracy BoolQ | Robust Acc. SST-2 | Robust Acc. IMDB | Robust Acc. BoolQ |
| --- | --- | --- | --- | --- | --- | --- |
| Baseline | ±0.1 | ±0.1 | ±1.3 | ±0.4 | ±0.6 | ±0.9 |
| FREELB | ±0.2 | ±0.1 | ±0.4 | ±0.5 | ±1.0 | ±1.1 |
| RANDOFF-1 | ±0.3 | ±0.1 | ±1.8 | ±0.5 | ±1.4 | ±1.8 |
| RANDOFF-4 | ±0.7 | ±0.1 | ±0.5 | ±0.6 | ±1.9 | ±0.5 |
| RANDOFF-8 | ±0.2 | ±0.1 | ±0.8 | ±0.7 | ±2.1 | ±0.8 |
| RANDOFF-12 | ±0.6 | ±0.1 | ±1.0 | ±0.5 | ±1.4 | ±1.0 |
| TxFOFF | ±0.6 | - | - | ±0.3 | - | - |
| BFFOFF | ±0.3 | - | ±0.3 | ±0.3 | - | ±1.8 |
| RANDOM | ±0.1 | - | - | ±0.3 | - | - |
| TxFON | ±0.0 | - | - | ±0.3 | - | - |
| BFFON | ±0.5 | - | - | ±0.6 | - | - |

Table 5: Standard deviation on the experiments reported in Table 2. Missing cells indicate a single run was used due to the long training time.

![](images/c0d5ed21beb70f4719e9b697381a74769e704474a2c230ed9b519301db260520.jpg)
Figure 5: Success rate of different attacks against BoolQ/IMDB BASELINE as a function of the budget.

BFF implementation For the factorization phase of BFF, we use $\tau \sim \operatorname{Syn}(w)$ with uniform sampling. We find that while using an out-of-vocabulary masking token is useful for computing word salience, it is less suitable here, as we are interested in the model's over-sensitivity to perturbations in the exact phrasing of the word. Also, in contrast to TxF, which is optimistic and factorizes the attack space only once, BFF factorizes the space after every step. Namely, optimistic greedy search plans the entire search path by evaluating all permissible single-word substitutions. Let $x_{w_i \to w}$ denote the utterance $x$ where the word $w_i$ is replaced with a synonym $w \in \operatorname{Syn}(w_i)$. The optimistic greedy algorithm scores each word $w_i$ in the utterance with $s(w_i) := \min_{w \in \operatorname{Syn}(w_i)} s_A(x_{w_i \to w})$, that is, the score of a word is the score of its best substitution, and it also stores this substitution. Then, it sorts utterance positions by $s(w_i)$ in ascending order, which defines the entire search path: in each step, the algorithm moves to the next position in the sorted list and applies the best substitution stored for that position. Fig. 5 shows the benefit of each of these modifications.
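The optimistic greedy procedure described above can be sketched as follows; `syn`, `score`, and the 0.5 decision threshold are illustrative stand-ins, not the authors' implementation.

```python
def optimistic_greedy(x, syn, score, threshold=0.5):
    """One-shot 'optimistic' greedy search (a sketch).

    Rank positions once by their best single substitution,
    then apply substitutions along that fixed path.
    x: tuple of tokens; syn(w): synonyms of w;
    score(tokens): model probability of the gold label.
    """
    best = {}                                   # position -> (score, synonym)
    for i, w in enumerate(x):
        cands = [(score(x[:i] + (v,) + x[i + 1:]), v) for v in syn(w)]
        if cands:
            best[i] = min(cands)                # most damaging substitution
    path = sorted(best, key=lambda i: best[i][0])   # ascending s(w_i)
    cur = x
    for i in path:                              # walk the pre-planned path
        cur = cur[:i] + (best[i][1],) + cur[i + 1:]
        if score(cur) < threshold:              # prediction flipped
            return cur
    return None
```

Because the path is planned once up front, this variant never re-factorizes after a substitution, which is exactly the difference from full BFF highlighted above.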
Budget Effect Intuitively, higher budgets better approximate an exhaustive search, and thus the robustness evaluation, as an upper bound, should approach the true value. However, due to the lack of backtracking in some attack strategies, they may plateau early on. In this work, we used $B = 1000$ for all training phases and $B = 2000$ for the robustness evaluation. Empirically, this gives a good estimate of the upper bound on the model's robust accuracy, while constraining the computational power needed for the experiments. A natural question is how much tighter the bounds would be given a larger budget. Fig. 6 depicts an evaluation of the strategies' success rates over the same models as in Fig. 4 with a larger budget. As can be seen, while the RANDOM attack and TxF plateau, the BFF variants as well as GENATTACK are able to exploit the larger budget to fool the model in more cases. This is especially true in IMDB, where the search space is considerably larger. We expect this trend of tighter bounds to continue with ever larger budgets, though we note that the rate of improvement decreases with the budget and that the ranking between strategies remains unchanged. Therefore, we conclude that evaluating with a budget of 2,000 suffices for comparing strategies and measuring robustness improvements.

![](images/afce736e7a74e6da5c686525f66c7bf7efc65181707a6127e17e677063133e6a.jpg)
Figure 6: Success rate of different attacks against BoolQ/IMDB BASELINE as a function of the budget.

# A.2 Attack Space Implementation Details

As described in §5.2, we use the synonym dictionary defined by Alzantot et al. (2018). In particular, we use the pre-computed set of those synonyms given by Jia et al. (2019) as our basis for $\operatorname{Syn}(w)$. We pre-process the entire development and training data and store, for each utterance, the set $\operatorname{Syn}_{\Phi}(w)$, avoiding the need to employ large language models during training and robustness evaluation.
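This pre-computation might look roughly like the following; `syn` and `phi` are hypothetical stand-ins for the synonym dictionary and the filter $\Phi$, not the released code.

```python
def precompute_syn_phi(utterances, syn, phi):
    """Sketch of the A.2 pre-processing step: for every utterance, cache
    the synonyms of each word that pass the filter Phi, so that no large
    language model is needed at training or evaluation time.

    utterances: iterable of token tuples; syn(w): raw synonym candidates;
    phi(x, i, v): True if substituting v at position i of x passes all checks.
    """
    cache = {}
    for x in utterances:
        cache[x] = {
            i: [v for v in syn(w) if phi(x, i, v)]   # Syn_Phi(w_i) for this x
            for i, w in enumerate(x)
        }
    return cache
```

At attack time, a lookup into the cache replaces the POS, perplexity, and similarity checks, so the expensive models are run exactly once per dataset.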
For every word $w_i \in x$ in an utterance, and for every $\bar{w}_i \in \operatorname{Syn}(w_i)$, we evaluate $\Phi(w_i, \bar{w}_i)$ as follows:

1. With the same sequences as above, we validate that $\mathrm{POS}(w_{i}) \equiv \mathrm{POS}(\bar{w}_{i})$ according to spaCy's (Honnibal et al.) POS tagger.
2. With a window of size 101, we validate that $\mathrm{PPL}(x) / \mathrm{PPL}(\bar{x}) \geq 0.8$, where $\mathrm{PPL}(\cdot)$ is the perplexity of the sequence as given by a pre-trained GPT-2 model (Radford et al., 2019).
3. For BoolQ only, we also use spaCy's POS tagger to tag all content words (namely NOUN, PROPN, ADV, and ADJ) in the question. We then restrict all those words from being perturbed in the passage.
4. Following Jin et al. (2020), we take a window of size 15 around the word and validate with USE (Cer et al., 2018) that the semantic similarity between the unperturbed sequence $(w_{i-7},\ldots,w_i,\ldots,w_{i+7})$ and the perturbed sequence $(w_{i-7},\ldots,\bar{w}_i,\ldots,w_{i+7})$ is at least 0.7.

# A.3 Genetic Attack Implementation Details

Our implementation of Gen-Attack, presented by Alzantot et al. (2018), was based on https://github.com/nesl/nlp_adversarial_examples/blob/master/attacks.py and used our attack space rather than the original attack space presented there. For evaluation, we used the hyperparameters defined in the paper, namely population size $p := 20$, maximum generations $g := 100$, and softmax temperature $= 0.3$. Note that we did not need to limit the number of candidate synonyms considered, as this was already done in the attack space construction. However, we made the following modifications to the original algorithm to adapt it to our setting.

Maximal modification constraints While the original algorithm presented by Alzantot et al. (2019) contained a clipping phase, where mutated samples were clipped to match a maximal norm constraint, the adapted version for discrete attacks presented in Alzantot et al. (2018) did not. As we wish to limit the allowed number of perturbations for any single input utterance, and the crossover phase followed by the perturb sub-routine can easily overstep this limit, we added a post-perturb phase. Namely, when creating each generation, after the crossover and mutation (i.e., perturb) sub-routines create a candidate child, if the total number of perturbed words exceeds the limit, we revert perturbed words uniformly at random until the limit is met. This step introduces another level of randomness into the process. We experimented with reverting based on the replacement probability used in the perturb sub-routine, but this led to sub-par results.

Improved Efficiency In addition to estimating the fitness function of each child in a generation, which requires a forward pass through the attacked model, Alzantot et al. (2018) also used a greedy step in the perturb sub-routine to estimate the fitness of each synonym mutation for a chosen position. This results in an extremely high number of forward passes through the model, specifically $\mathcal{O}(g \cdot p \cdot (k + 1))$, which is orders of magnitude larger than our allowed budget of 2000. However, many of these passes are redundant, so by caching previous results, the attack strategy can better utilize its allocated budget, resulting in a significantly better success rate with better efficiency.

# A.4 Attack Space in Prior Work

Examining the attack space proposed in Jin et al. (2020), which includes a larger synonym dictionary and a different filtering function $\Phi(\cdot)$, we observe that many adversarial examples are difficult to understand or are not label-preserving. Table 6 shows examples from an implementation of the attack space of the recent TEXTFOOLER (Jin et al., 2020). We observe that while in IMDB the labels remain mostly unchanged, many passages are difficult to understand.
Moreover, we observe frequent label flips, as in the SST-2 examples, as well as perturbations in BoolQ that leave the question unanswerable.
Passage: Table of prime factors – The number 1 is called a unit. It has no **incipient** [prime] factors and is neither **first** [prime] nor composite.
Question: is 1 a prime factor of every number
Answer: False

Passage: Panama Canal – The **nouvelle** [new] locks **commences** [opened] for commercial **vehicular** [traffic] on 26 June 2016, and the first **yacht** [ship] to **intersecting** [cross] the canal using the third set of locks was a modern New Panamax vessel, the Chinese-owned container **warships** [ship] Cosco Shipping Panama. The original locks, now over 100 **centuries** [years] old, **give** [allow] **engineer** [engineers] **best** [greater] access for maintenance, and are **hoped** [projected] to continue **workplace** [operating] indefinitely.
Question: is the old panama canal still in use
Answer: True

Passage: Chevrolet Avalanche – The Chevrolet Avalanche is a four-door, five or **eight** [six] **commuter** [passenger] **harvest** [pickup] **trucking** [truck] **stocks** [sharing] GM's long-wheelbase **frame** [chassis] used on the Chevrolet Suburban and Cadillac Escalade ESV. Breaking with a long-standing tradition, the Avalanche was not **affordable** [available] as a GMC, but only as a Chevrolet.
Question: is there a gmc version of the avalanche
Answer: False

Sentence: I've been waiting for this movie for SO many years! The best part is that it **decedent** [lives] up to my visions! This is a MUST SEE for any Tenacious D or true Jack Black fan. It's just **once** [so] great to see JB, KG and Lee on the big screen! It's not a **authentic** [true] story, but who cares. The D is the greatest band on earth! I had the soundtrack to the movie last week and **heeded** [listed] to it non-stop. To see the movie was **unadulterated** [pure] bliss for me and my hubby. We've both met Jack and Kyle after 2 different Tenacious D concerts and also saw them when they toured with Weezer. We left that concert after the D was done playing. Nobody can top their show! Long live the D!!! :D
Answer: True

Sentence: Sweet, **kidding** [entertaining] tale of a young 17 1/2 year old boy, controlled by an overbearing religious mother and withdrawn father, and how he finds himself through his work with a retired, eccentric and tragic actress. Very **better** [well] acted, especially by Julie Walters. Rupert Grint plays the role of the teenage boy well, showing his talent will last longer than the Harry Potter series of films. Laura Linney plays his ruthlessly strict mother without a hint of redemption, so there's no room to like her at all. But the film is a **awfully** [very] **antics** [entertaining] film, made well by the British in the style of the likes of Keeping Mum and Calendar Girls.
Answer: True

Sentence: Enormous **adjourned** [suspension] of disbelief is required where Will's "genius" is concerned. Not just in math-he is also very well **reads** [read] in economic history, able to out-shrink several shrinks, etc etc. No, no, no. I don't buy it. While they're at it, they might as well have him wearing a big "S" on his chest, flying faster than a jet plane and stopping bullets.<br / > <br / >Among other problems...real genius (shelving for the moment the problem of what it really is, and whether it deserves such mindless homage) doesn't simply appear /ex nihil/o/. It isn't ever so multi-faceted. And it is very **virtually** [rarely] **appreciates** [appreciated] by contemporaries.<br / >Better to have made Will a basketball prodigy. Except that Damon's too short.
Answer: False

Sentence: it proves quite **unconvincing** [compelling] as an intense , brooding character study .
Answer: True

Sentence: an **sensible** [unwise] amalgam of broadcast news and vibes .
Answer: False

Sentence: if you dig on david mamet 's mind tricks ... rent this movie and **iike** [enjoy] !
Answer: True
Table 6: Examples of adversarial examples, which are difficult to understand or not label-preserving, found for BoolQ/IMDB/SST-2 with the attack space from Jin et al. (2020). In **bold** are the substituting words and in brackets the original words.
# ActiveEA: Active Learning for Neural Entity Alignment

Bing Liu $^{1,2}$ , Harrison Scells $^{1}$ , Guido Zuccon $^{1}$ , Wen Hua $^{1}$ , Genghong Zhao $^{2}$

$^{1}$ The University of Queensland, Australia

$^{2}$ Neusoft Research of Intelligent Healthcare Technology, Co. Ltd., China

{bing.liu, h.scells, g.zuccon, w.hua}@uq.edu.au

zhaogenghong@neusoft.com

# Abstract

Entity Alignment (EA) aims to match equivalent entities across different Knowledge Graphs (KGs) and is an essential step of KG fusion.
Current mainstream methods – neural EA models – rely on training with a seed alignment, i.e., a set of pre-aligned entity pairs that is very costly to annotate. In this paper, we devise a novel Active Learning (AL) framework for neural EA, aiming to create a highly informative seed alignment so as to obtain more effective EA models at a lower annotation cost. Our framework tackles two main challenges encountered when applying AL to EA:

(1) How to exploit dependencies between entities within the AL strategy. Most AL strategies assume that the data instances to sample are independent and identically distributed. However, entities in KGs are related. To address this challenge, we propose a structure-aware uncertainty sampling strategy that can measure the uncertainty of each entity as well as its impact on its neighbour entities in the KG.
(2) How to recognise entities that appear in one KG but not in the other (i.e., bachelors). Identifying bachelors would likely save annotation budget. To address this challenge, we devise a bachelor recognizer designed to alleviate the effect of sampling bias.

Empirical results show that our proposed AL strategy can significantly improve sampling quality, with good generality across different datasets, EA models and numbers of bachelors.

# 1 Introduction

Knowledge Graphs (KGs) store entities and their relationships with a graph structure and are used as knowledge drivers in many applications (Ji et al., 2020). Existing KGs are often incomplete but complementary to each other. A popular approach used to tackle this problem is KG fusion, which attempts to combine several KGs into a single, comprehensive one. Entity Alignment (EA) is an essential step for KG fusion: it identifies equivalent entities across different KGs, supporting the unification of their complementary knowledge.

![](images/893f6bd1c1033f7f2447eb80ffc34b5377173614f88c09d605a0429fff520a78.jpg)
Figure 1: An example of Entity Alignment.
For example, in Fig. 1, Donald Trump and US in the first KG correspond to D.J. Trump and America respectively in the second KG. By aligning them, the political and business knowledge about Donald Trump can be integrated within one KG.

Neural models (Chen et al., 2017, 2018; Wang et al., 2018; Cao et al., 2019) are the current state of the art in EA and are capable of matching entities in an end-to-end manner. Typically, these neural EA models rely on a seed alignment as training data, which is very labour-intensive to annotate. However, previous EA research has assumed the availability of such a seed alignment and ignored the cost involved in its annotation. In this paper, we seek to reduce the cost of annotating seed alignment data by investigating methods capable of selecting the most informative entities for labelling, so as to obtain the best EA model at the least annotation cost. We do so using Active Learning. Active Learning (AL) (Aggarwal et al., 2014) is a Machine Learning (ML) paradigm where the annotation of data and the training of a model are performed iteratively, so that the sampled data is highly informative for training the model. Though many general AL strategies have been proposed (Settles, 2012; Ren et al., 2020), there are some unique challenges in applying AL to EA.

The first challenge is how to exploit the dependencies between entities. In the EA task, neighbouring entities (context) in the KGs naturally affect each other. For example, in the two KGs of Fig. 1, we can infer that US corresponds to America if we already know that Donald Trump and D.J. Trump refer to the same person: this is because a single person can only be the president of one country. Therefore, when we estimate the value of annotating an entity, we should consider its impact on its context in the KG. Most AL strategies assume data instances are independent and identically distributed, and cannot capture dependencies between entities (Aggarwal et al., 2014).
In addition, neural EA models exploit the structure of KGs in different and implicit ways (Sun et al., 2020b). It is not easy to find a general way of measuring the effect of entities on others.

The second challenge is how to recognize the entities in a KG that do not have a counterpart in the other KG (i.e., bachelors). In the first KG of Fig. 1, Donald Trump and US are matchable entities while New York City and Republican Party are bachelors. Selecting bachelors to annotate will not lead to any aligned entity pair. The impacts of recognizing bachelors are twofold:

1. From the perspective of data annotation, recognizing bachelors would automatically save annotation budget (because annotators would otherwise search for a corresponding entity for some time before giving up) and allow annotators to put their effort into labelling matchable entities. This is particularly important for the existing neural EA models, which only consider matchable entities for training: selecting bachelors in these cases is thus a waste of annotation budget.
2. From the perspective of EA, bachelor recognition remedies the limitation of existing EA models that assume all entities to align are matchable, and would enable them to be better used in practice (i.e., on real-life KGs, where bachelors are common).

To address these challenges, we propose a novel AL framework for EA. Our framework follows the typical AL process: entities are sampled iteratively, and in each iteration a batch of entities with the highest acquisition scores is selected. Our novel acquisition function consists of two components: a structure-aware uncertainty measurement module and a bachelor recognizer. The structure-aware uncertainty can reflect the uncertainty of a single entity as well as the influence of that entity in the context of the KG, i.e., how much uncertainty it can help its neighbours eliminate. In addition, we design a bachelor recognizer based on Graph Convolutional Networks (GCNs).
Because the bachelor recognizer is trained with the sampled data and used to predict the remaining data, it may suffer from the bias (w.r.t. the preferences of the sampling strategy) between these two groups of data. We apply model ensembling to alleviate this problem.

Our major contributions in this paper are:

1. A novel AL framework for neural EA, which can produce more informative data for training EA models while reducing the labour cost involved in annotation. To our knowledge, this is the first AL framework for neural EA.
2. A structure-aware uncertainty sampling strategy, which models uncertainty sampling and the relations between entities in a single AL strategy.
3. An investigation of bachelor recognition, which can reduce the cost of data annotation and remedy a defect of existing EA models.
4. Extensive experimental results that show our proposed AL strategy can significantly improve the quality of data sampling and has good generality across different datasets, EA models, and bachelor quantities.

# 2 Background

# 2.1 Entity Alignment

Entity alignment is typically performed between two KGs $\mathcal{G}^1$ and $\mathcal{G}^2$ , whose entity sets are denoted as $\mathcal{E}^1$ and $\mathcal{E}^2$ respectively. The goal of EA is to find the equivalent entity pairs $\mathcal{A} = \{(e^1, e^2) \in \mathcal{E}^1 \times \mathcal{E}^2 \mid e^1 \sim e^2\}$ , where $\sim$ denotes an equivalence relationship and is usually assumed to be a one-to-one mapping. In supervised and semi-supervised models, a subset of the alignment $\mathcal{A}^{seed} \subset \mathcal{A}$ , called the seed alignment, is annotated manually beforehand and used as training data. The remaining alignments form the test set $\mathcal{A}^{test} = \mathcal{A} \setminus \mathcal{A}^{seed}$ . The core of an EA model $F$ is a scoring function $F(e^1, e^2)$ , which takes two entities as input and returns a score for how likely they are to match.
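To make this interface concrete, here is a small illustrative sketch (our own, not code from the paper): a score matrix stands in for $F(e^1, e^2)$, each entity in the first KG is matched to its highest-scoring candidate in the second KG, and the predictions are compared against a gold alignment.

```python
# Illustrative sketch only: a 3x3 score matrix standing in for the
# scoring function F(e^1, e^2); all numbers are made up.
S = [[0.9, 0.2, 0.1],
     [0.3, 0.8, 0.4],
     [0.1, 0.8, 0.7]]
gold = {0: 0, 1: 1, 2: 2}  # assumed ground-truth one-to-one alignment

# Match each entity in the first KG to its highest-scoring candidate.
pred = {i: max(range(len(row)), key=row.__getitem__) for i, row in enumerate(S)}
accuracy = sum(pred[i] == gold[i] for i in gold) / len(gold)
print(round(accuracy, 3))  # 0.667: entity 2 is matched to the wrong candidate
```

Note that the argmax prediction for entity 2 is wrong even though the correct pair has a fairly high score, which is exactly the kind of near-tie that the margin-based uncertainty in Sec. 3.3 is designed to flag.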
The effectiveness of an EA model is essentially determined by $\mathcal{A}^{seed}$ and we thus denote it as $m(\mathcal{A}^{seed})$ .

# 2.2 Active Learning

An AL framework consists of two components: (1) an oracle (annotation expert), which provides labels for the queries (data instances to label), and (2) a query system, which selects the most informative data instances as queries. In the pool-based scenario, there is a pool of unlabelled data $\mathcal{U}$ . Given a budget $B$ , some instances $\mathcal{U}_{\pi,B}$ are selected from the pool following a strategy $\pi$ and sent to the experts to annotate, who produce a training set $\mathcal{L}_{\pi,B}$ . We train the model on $\mathcal{L}_{\pi,B}$ , and the effectiveness $m(\mathcal{L}_{\pi,B})$ of the obtained model reflects how good the strategy $\pi$ is. The goal is to design an optimal strategy $\pi_*$ such that $\pi_* = \operatorname{argmax}_{\pi} m(\mathcal{L}_{\pi,B})$ .

![](images/4732199d79631663c8673e11223dfee9c11f318cfe08719c484e4b85821f926d.jpg)
Figure 2: Overview of ActiveEA.

# 3 ActiveEA: Active Entity Alignment

# 3.1 Problem Definition

Given two KGs $\mathcal{G}^1, \mathcal{G}^2$ with entity sets $\mathcal{E}^1, \mathcal{E}^2$ , an EA model $F$ , and a budget $B$ , the AL strategy $\pi$ is applied to select a set of entities $\mathcal{U}_{\pi,B}$ so that the annotators label the counterpart entities to obtain the labelled data $\mathcal{L}_{\pi,B}$ . $\mathcal{L}_{\pi,B}$ consists of annotations of matchable entities $\mathcal{L}_{\pi,B}^{+}$ , which form the seed alignment $\mathcal{A}_{\pi,B}^{seed}$ , and bachelors $\mathcal{L}_{\pi,B}^{-}$ . We measure the effectiveness $m(\mathcal{A}_{\pi,B}^{seed})$ of the AL strategy $\pi$ by training the EA model on $\mathcal{A}_{\pi,B}^{seed}$ and then evaluating it with $\mathcal{A}_{\pi,B}^{test} = \mathcal{A} \setminus \mathcal{A}_{\pi,B}^{seed}$ .
Our goal is to design an optimal entity sampling strategy $\pi_*$ so that $\pi_* = \operatorname{argmax}_{\pi} m(\mathcal{A}_{\pi,B}^{seed})$ .

In our annotation setting, we select entities from one KG and then let the annotators identify their counterparts in the other KG. Under this setting, we assume the pool of unlabelled entities is initialized as $\mathcal{U} = \mathcal{E}^1$ . The labelled data take the form $\mathcal{L}_{\pi,B}^{+} = \{(e^{1} \in \mathcal{E}^{1}, e^{2} \in \mathcal{E}^{2})\}$ and $\mathcal{L}_{\pi,B}^{-} = \{(e^{1} \in \mathcal{E}^{1}, null)\}$ .

# 3.2 Framework Overview

The whole annotation process, as shown in Fig. 2, is carried out iteratively. In each iteration, the query system selects $N$ entities from $\mathcal{U}$ and sends them to the annotators. The query system includes (1) a structure-aware uncertainty measurement module $f^{su}$ , which combines uncertainty sampling with the structure information of the KGs, and (2) a bachelor recognizer $f^b$ , which helps avoid selecting bachelor entities. The final acquisition score $f^{\pi}$ used to select which entities to annotate is obtained by combining the outputs of these two modules. After the annotators assign the ground-truth counterparts to the selected entities, the new annotations are added to the labelled data $\mathcal{L}$ . With the updated $\mathcal{L}$ , the query system updates the EA model and the bachelor recognizer. This process repeats until no budget remains. To simplify the presentation, we omit the sampling iteration when explaining the details.

# 3.3 Structure-aware Uncertainty Sampling

We define the influence of an entity on its context as the amount of uncertainty it can help its neighbours remove.
As such, we formulate the structure-aware uncertainty $f^{su}$ as

$$
f^{su}(e_i^1) = \alpha \sum_{e_i^1 \to e_j^1,\, e_j^1 \in \mathcal{N}_i^{out}} w_{ij}\, f^{su}(e_j^1) + (1 - \alpha) \frac{f^u(e_i^1)}{\sum_{e^1 \in \mathcal{E}^1} f^u(e^1)}, \tag{1}
$$

where $\mathcal{N}_i^{out}$ is the set of outbound neighbours of entity $e_i^1$ (i.e. the entities referred to by $e_i^1$ ) and $w_{ij}$ measures the extent to which $e_i^1$ can help $e_j^1$ eliminate uncertainty. The parameter $\alpha$ controls the trade-off between the impact of entity $e_i^1$ on its context (first term in the equation) and its normalized uncertainty (second term). Function $f^u(e^1)$ refers to the margin-based uncertainty of an entity. For each entity $e^1$ , the EA model can return the matching scores $F(e^1, e^2)$ with all unaligned entities $e^2$ in $\mathcal{G}^2$ . Since these scores in existing works are not probabilities, we exploit the margin-based uncertainty measure for convenience, outlined in Eq. 2:

$$
f^u(e^1) = -\left(F(e^1, e_*^2) - F(e^1, e_{**}^2)\right), \tag{2}
$$

where $F(e^{1}, e_{*}^{2})$ and $F(e^{1}, e_{**}^{2})$ are the highest and second-highest matching scores respectively. A large margin represents a small uncertainty.

For each entity $e_j^1$ , we assume its inbound neighbours can help it clear all uncertainty. Then, we have $\sum_{e_i^1 \to e_j^1,\, e_i^1 \in \mathcal{N}_j^{in}} w_{ij} = 1$ , where $\mathcal{N}_j^{in}$ is the inbound neighbour set of $e_j^1$ . In this work, we assume all inbound neighbours have the same impact on $e_j^1$ . In this case, $w_{ij} = \frac{1}{\mathrm{degree}(e_j^1)}$ , where $\mathrm{degree}(\cdot)$ returns the in-degree of an entity.

Using matrix notation, Eq.
1 can be rewritten as

$$
\mathbf{f}^{su} = \alpha \mathbf{W} \mathbf{f}^{su} + (1 - \alpha) \frac{\mathbf{f}^u}{|\mathbf{f}^u|},
$$

where $\mathbf{f}^{su}$ is the vector of structure-aware uncertainties, $\mathbf{f}^u$ is the vector of uncertainties, and $\mathbf{W}$ is a matrix encoding the influence between entities, i.e., $w_{ij} > 0$ if $e_i^1$ is linked to $e_j^1$ , and $w_{ij} = 0$ otherwise.

As $\mathbf{W}$ is a stochastic matrix (Gagniuc, 2017), we solve Eq. 1 iteratively, which can be viewed as the power iteration method (Franceschet, 2011), similar to PageRank (Brin and Page, 1998). Specifically, we initialize the structure-aware uncertainty vector as $\mathbf{f}_0^{su} = \mathbf{f}^u$ . Then we update $\mathbf{f}_t^{su}$ iteratively:

$$
\mathbf{f}_t^{su} = \alpha \mathbf{W} \mathbf{f}_{t-1}^{su} + (1 - \alpha) \frac{\mathbf{f}^u}{|\mathbf{f}^u|}, \quad t = 1, 2, 3, \ldots
$$

The computation ends when $|\mathbf{f}_t^{su} - \mathbf{f}_{t-1}^{su}| < \epsilon$ .

# 3.4 Bachelor Recognizer

The bachelor recognizer is formulated as a binary classifier, which is trained with the labelled data and used to predict the unlabelled data. One challenge faced here is the bias between the labelled data and the unlabelled data caused by the sampling strategy (since it is not random sampling). We alleviate this issue with a model ensemble.

# 3.4.1 Model Structure

We apply two GCNs (Kipf and Welling, 2017; Hamilton et al., 2017) as the encoders to get the entity embeddings $\mathbf{H}^{1} = \mathbf{GCN}^{1}(\mathcal{G}^{1}), \mathbf{H}^{2} = \mathbf{GCN}^{2}(\mathcal{G}^{2})$ , where each row in $\mathbf{H}^1$ or $\mathbf{H}^2$ corresponds to the vector representation of a particular entity. The two GCN encoders share the same structure but have separate parameters. With each GCN encoder, each entity $e_i$ is first assigned a vector representation $\mathbf{h}_i^{(0)}$ .
Then contextual features of each entity are extracted:

$$
\mathbf{h}_i^{(l)} = \operatorname{norm}\left(\sigma\left(\sum_{j \in \mathcal{N}_i \cup \{i\}} \mathbf{V}^{(l)} \mathbf{h}_j^{(l-1)} + \mathbf{b}^{(l)}\right)\right),
$$

where $l$ is the layer index, $\mathcal{N}_i$ is the set of neighbouring entities of entity $e_i$ , $\sigma$ is the activation function, $\mathrm{norm}(\cdot)$ is a normalization function, and $\mathbf{V}^{(l)}, \mathbf{b}^{(l)}$ are the parameters of the $l$ -th layer. The representations of each entity $e_i$ obtained in all GCN layers are concatenated into a single representation: $\mathbf{h}_i = \mathrm{concat}(\mathbf{h}_i^{(0)}, \mathbf{h}_i^{(1)}, \dots, \mathbf{h}_i^{(L)})$ , where $L$ is the number of GCN layers.

After getting the representations of the entities, we compute the similarities of each entity in $\mathcal{E}^1$ with all entities in $\mathcal{E}^2$ ( $\mathbf{S} = \mathbf{H}^1 \cdot \mathbf{H}^{2\top}$ ) and obtain its maximum matching score $f^{s}(e_{i}^{1}) = \max(\mathbf{S}_{i,:})$ . An entity $e_i^1$ whose maximum matching score is greater than a threshold $\gamma$ is considered to be a matchable entity, as in $f^{b}(e_{i}^{1}) = \mathbb{1}_{f^{s}(e_{i}^{1}) > \gamma}$ ; otherwise it is a bachelor.

# 3.4.2 Learning

In each sampling iteration, we train the bachelor recognizer with the existing annotated data $\mathcal{L}$ , containing matchable entities $\mathcal{L}^{+}$ and bachelors $\mathcal{L}^{-}$ . Furthermore, $\mathcal{L}$ is divided into a training set $\mathcal{L}^t$ and a validation set $\mathcal{L}^v$ .

We optimize the parameters, including $\{\mathbf{V}^{(l)}, \mathbf{b}^{(l)}\}_{1 \leq l \leq L}$ of each GCN encoder and the threshold $\gamma$ , in two phases, sharing a similar idea with supervised contrastive learning (Khosla et al., 2020). In the first phase, we optimize the scoring function $f^s$ by minimizing the contrastive loss shown in Eq. 3:
$$
\text{loss} = \sum_{(e_i^1, e_j^2) \in \mathcal{L}^{t,+}} \left\| \mathbf{h}_i^1 - \mathbf{h}_j^2 \right\| + \beta \sum_{(e_{i'}^1, e_{j'}^2) \in \mathcal{L}^{t,neg}} \left[ \lambda - \left\| \mathbf{h}_{i'}^1 - \mathbf{h}_{j'}^2 \right\| \right]_+ \tag{3}
$$

Here, $\beta$ is a balance factor, $[\cdot]_{+}$ is $\max(0, \cdot)$ , and $\mathcal{L}^{t,neg}$ is the set of negative samples generated by negative sampling (Sun et al., 2018). For a given pre-aligned entity pair in $\mathcal{L}^{+}$ , each of its entities is substituted $N^{neg}$ times. The distance of negative samples is expected to be larger than the margin $\lambda$ . In the second phase, we freeze the trained $f^{s}$ and optimize $\gamma$ for $f^{b}$ . It is easy to optimize $\gamma$ , e.g. by simple grid search, so that $f^{b}$ achieves the highest performance on $\mathcal{L}^{v}$ (denoted as $q(f^{s}, \gamma, \mathcal{L}^{v})$ ), using:

$$
\gamma^* = \operatorname{argmax}_{\gamma} q(f^s, \gamma, \mathcal{L}^v).
$$

# 3.4.3 Model Ensemble for Sampling Bias

The sampled data may be biased, since they have been preferred by the sampling strategy rather than selected randomly. As a result, even if the bachelor recognizer is well trained with the sampled data, it may perform poorly on data yet to be sampled. We apply a model ensemble to alleviate this problem. Specifically, we divide $\mathcal{L}$ into $K$ subsets evenly. Then we apply $K$ -fold cross-validation to train $K$ scoring functions $\{f_1^s, \dots, f_K^s\}$ , each time using $K - 1$ subsets as the training set and the left-out portion as the validation set.
Afterwards, we search for an effective $\gamma$ threshold:

$$
\gamma^* = \operatorname{argmax}_{\gamma} \frac{1}{K} \sum_{1 \leq k \leq K} q(f_k^s, \gamma, \mathcal{L}_k^v)
$$

At inference time, we average the $K$ scoring functions $f_{k}^{s}$ to form the final scoring function $f^{s}$ , as in Eq. 4, and base $f^{b}$ on it:

$$
f^s(e_i^1) = \frac{1}{K} \sum_{1 \leq k \leq K} f_k^s(e_i^1) \tag{4}
$$

# 3.5 Final Acquisition Function

We combine our structure-aware uncertainty sampling with the bachelor recognizer to form the final acquisition function:

$$
f^{\pi}(e_i^1) = f^{su}(e_i^1)\, f^b(e_i^1)
$$

# 4 Experimental Setup

# 4.1 Sampling Strategies

We construct several baselines for comparison:

rand is the random sampling used by existing EA work.

degree selects entities with high degrees.
+ +# 4.2 EA Models + +We apply our ActiveEA framework to three different EA models, which are a representative spread of neural EA models and varied in KG encoding, considered information and training method (Liu et al., 2020; Sun et al., 2018): + +BootEA (Sun et al., 2018) encodes the KGs with the translation model (Bordes et al., 2013), exploits the structure of KGs, and uses self-training. + +Alinet (Sun et al., 2020a) also exploits the structure of KGs but with a GCN-based KG encoder, and is trained in a supervised manner. + +RDGCN (Wu et al., 2019) trains a GCN in a supervised manner, as Alinet, but it can incorporate entities' attributes. + +Our implementations and parameter settings of the models rely on OpenEA $^1$ (Sun et al., 2020b). + +# 4.3 Datasets + +We use three different datasets: D-W-15K V1 (DW), EN-DE-15K V1 (ENDE), and EN-FR-100K V1 (ENFR), obtained from OpenEA (Sun et al., 2020b). Each dataset contains two KGs and equivalent entity pairs. The KGs used in these datasets were sampled from real KGs, i.e. DBpedia (Lehmann et al., 2015), Wikidata (Vrandecic and Krötzsch, 2014), and YAGO (Rebele et al., 2016), which are widely used in EA community. These datasets differ in terms of KG sources, languages, sizes, etc. We refer the reader to Sun et al. (2020b) for more details. + +Existing work on EA assumes all entities in the KGs are matchable, thus only sampling entities with counterparts when producing the datasets. For investigating the influence of bachelors on AL strategies, we synthetically modify the datasets by excluding a portion of entities from the second KG. + +# 4.4 Evaluation Metrics + +We use Hit@1 as the primary evaluation measure of the EA models. To get an overall evaluation of one AL strategy across different sized budgets, we plot the curve of a EA model's effectiveness with respect to the proportion of annotated entities, and calculate the Area Under the Curve (AUC). 
# 4.5 Parameter Settings

We set $\alpha = 0.1$ and $\epsilon = 10^{-6}$ for the structure-aware uncertainty. We use $L = 1$ GCN layer for our bachelor recognizer, with 500 input and 400 output dimensions. We set $K = 5$ for its model ensemble and $\lambda = 1.5$ , $\beta = 0.1$ , $N^{neg} = 10$ for its training. The sampling batch size is set to $N = 100$ for the 15K data and $N = 1000$ for the 100K data.

# 4.6 Reproducibility Details

Our experiments are run on a GPU cluster. We allocate 50G of memory and one 32GB nVidia Tesla V100 GPU for each job on the 15K data, and 100G of memory for each job on the 100K data. The training and evaluation of ActiveEA take approximately 3h with Alinet on 15K data, 10h with BootEA on 15K data, 10h with RDGCN on 15K data, and 48h with Alinet on 100K data. Most baseline strategies take less time than ActiveEA on the same dataset, except betweenness on 100K data, which takes more than 48h. We apply grid search for setting $\alpha$ and $N$ (shown in Sec. 5.4). The hyper-parameters of the bachelor recognizer are chosen by referring to the settings of OpenEA and our own manual trials. Code and datasets are available at https://github.com/UQ-Neusoft-Health-Data-Science/ActiveEA.

![](images/0e7b85123ba3222c91749a6774968a6894d6d99f9ef692b56df5b82ef5241184.jpg)
![](images/bda3231bc48686b818588655bbac0af2ffe2529ef1321e249b137baece035b17.jpg)
![](images/4cba5c7c3a390a14420afc22a2b0d32883b969322b4a28eb2bf83578732c4b0e.jpg)
![](images/5f8b7e8b1185c95dc3a76e2183d3db4119a0971a7936afb10216edf738e25d3f.jpg)
![](images/eb82474157c1146c818f53fb636c20e48f147570d158224bc8e00dc2c914fdda.jpg)
![](images/63395ed3d8db6727ed940e5f916f965349cd8f49b0a455919c0bc022ddad1ebb.jpg)
![](images/2bd8dd0338d5659a228e24a21c15c625e3086152554ee0370a7a40fc4d42c051.jpg)
![](images/d72d1bc0ec5d158ed7ee6f7bb7369fb011ed72de8274bf06ef2c12024889d6d5.jpg)
![](images/119a128161229ef8303dc9603898baa832b3ed4ea2b98f7a48afb8a5d9543303.jpg)
![](images/a11a2ef80b4a2d1a019311ab0a0fe2778e02206ca1abca1068bda8d55a9a9508.jpg)
![](images/ddf54fb954690ae239b2902a8db12a18e9ed2167066d76cf18b23000e71bdc83.jpg)
![](images/efd2acd2e93799c93aa121be77ede3faa6917be363c4ec1aa3a0a056c7798925.jpg)
![](images/ce06a2faaf344939251edcae8bf1567644f575aa255bcc269cfe4aee6f2c1152.jpg)
Figure 3: Hit@1 of sampling strategies for all EA models on DW and ENDE, as the annotation portion increases. Top row shows experiments that do not include bachelors; bottom row shows experiments that include $30\%$ bachelors. ActiveEA is equivalent to struct_uncert in the absence of bachelors, and is thus shown only for the second row.

![](images/eefb87dab4d0e2eb89e0f3b0a04de6500fdc917f8835d63675b257089f8b5203.jpg)
![](images/c92c1be31c865089f47ff6b2bdfe5802a8cb3b9c7d0b564e40708d2ddb356a40.jpg)
Figure 4: Hit@1 for all sampling strategies with the Alinet EA model on ENFR. Left shows experiments without bachelors, right shows experiments with $30\%$ bachelors.

# 5 Experimental Results

# 5.1 Comparison with Baselines

Fig. 3 presents the overall performance of each strategy with three EA models on two datasets, each of which we also synthetically modify to include $30\%$ bachelors. We also report the AUC@0.5 values of these curves in Tab. 1. ActiveEA degenerates into struct_uncert when there are no bachelors.

Random Sampling. Random sampling usually performs poorly when the annotation proportion is small, while it becomes more competitive as the amount of annotations increases. But for most annotation proportions, random sampling exhibits a large performance gap compared to the best method. This observation highlights the need to investigate data selection for EA.

Topology-based Strategies. The topology-based strategies are effective when few annotations are provided, e.g., $< 20\%$ .
However, once annotations increase, topology-based strategies often perform worse than random sampling. This may be because these strategies suffer more from the bias between the training set and the test set. Therefore, considering only the structural information of the KGs has considerable drawbacks for EA.

| Strategy | BootEA DW 0% | BootEA DW 30% | BootEA ENDE 0% | BootEA ENDE 30% | AliNet DW 0% | AliNet DW 30% | AliNet ENDE 0% | AliNet ENDE 30% | RDGCN DW 0% | RDGCN DW 30% | RDGCN ENDE 0% | RDGCN ENDE 30% |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| rand | 23.5$^n$ | 17.0 | 28.1 | 21.3 | 19.4 | 16.7 | 26.0 | 23.7 | 25.8 | 25.0 | 41.3$^n$ | 41.0 |
| degree | 19.5 | 16.0 | 24.0 | 20.0 | 17.1 | 15.2 | 22.2 | 20.5 | 23.3 | 22.9 | 39.1 | 39.4 |
| pagerank | 22.3 | 18.3 | 27.6 | 23.0 | 19.9 | 17.3 | 25.8 | 24.1 | 24.5 | 23.9 | 40.5 | 40.6 |
| betweenness | 20.5 | 16.3 | 26.1 | 21.1 | 17.8 | 15.6 | 23.7 | 22.3 | 23.2 | 22.7 | 40.2 | 40.3 |
| uncertainty | 23.9 | 16.1 | 29.8 | 21.2 | 21.6 | 15.4 | 28.2 | 22.2 | 24.7 | 23.9 | 40.9$^n$ | 40.5 |
| struct_uncert | **26.3** | 20.8 | **33.6** | 27.4 | **23.1** | 19.1 | **30.6** | 26.8 | **26.5** | 25.6 | **41.9** | 41.0 |
| ActiveEA | – | **26.7** | – | **31.5** | – | **25.7** | – | **32.8** | – | **28.1** | – | **42.3** |

Table 1: Overall performance (AUC@0.5, in $\%$) for each sampling strategy; column groups give the EA model (BootEA, AliNet, RDGCN), dataset (DW, ENDE), and bachelor proportion. The highest performing strategy in each column is indicated in bold; "–" marks the no-bachelor columns where ActiveEA is equivalent to struct_uncert. We run each strategy 5 times; most results for ActiveEA show statistically significant differences over the other methods (paired t-test with Bonferroni correction, $p < 0.05$), except the few cells marked with $^n$.

Uncertainty Sampling. On the contrary, the uncertainty sampling strategy performs poorly when the proportion of annotations is small but improves after several annotations have been accumulated. One reason for this is that neural EA models cannot learn useful patterns from a small number of annotations. On datasets with bachelors, uncertainty sampling always performs worse than random sampling. Thus, it is clear that uncertainty sampling cannot be applied directly to EA.

Structure-aware Uncertainty Sampling. Structure-aware uncertainty is effective across all annotation proportions. One reason for this is that it combines the advantages of both topology-based strategies and uncertainty sampling. This is essential for AL, as it is impossible to predict the amount of annotation required for new datasets.

ActiveEA. ActiveEA, which enhances structure-aware sampling with a bachelor recognizer, greatly improves EA when the KGs contain bachelors.

# 5.1.1 Generality

Structure-aware uncertainty sampling mostly outperforms the baselines, while ActiveEA performs even better in almost all cases. ActiveEA also demonstrates generality across datasets, EA models, and bachelor proportions.

When the dataset has no bachelors, our structure-aware sampling is exceeded by uncertainty sampling in a few large-budget cases. However, real-world datasets almost always contain bachelors, and in this case our structure-aware uncertainty shows clearer advantages.

In addition, the strategies are less distinguishable when applied to RDGCN.
The reason is that RDGCN exploits the names of entities for pre-alignment, and thus all strategies achieve good performance from the start.

![](images/ee12a998a10876ac19aa3c7c1a49930f5bc6cc5a67c749108b2556ac6d482f2c.jpg)
Figure 5: Comparison demonstrating the effect of bachelors $(0\%-40\%)$ on the BootEA and AliNet models.

![](images/78137d991cac7e7efc12b9593da6d175dfd7a19664f7d77dd7ae35b8d767ac31.jpg)

![](images/fd59491837423d98762fab5598daeb1493ad2d7b8da8c33715b08045342b4326.jpg)

![](images/4bb588c85ac62023bd713b637a9da7bbedb60e242e2a37d25075b8335b189464.jpg)
Figure 6: Comparison demonstrating the effectiveness of the bachelor recognizer and the effect of the model ensemble (ME) on BootEA and AliNet.

To assess generality across datasets of different sizes, we evaluate the sampling strategies with AliNet on ENFR (100K entities), which is larger than DW and ENDE (15K entities). We choose AliNet because it is more scalable than BootEA and RDGCN (Zhao et al., 2020). Fig. 4 presents results comparable to those on the 15K datasets.

# 5.2 Effect of Bachelors

To investigate the effect of bachelors, we randomly removed different amounts of entities (each larger sample containing the subset from the earlier samples) from $\mathcal{G}^2$, so that $\mathcal{G}^1$ had different percentages of bachelors. Fig. 5 shows the results of applying all strategies to these datasets.

![](images/3aff202d4c18040458c2efc9c673090a3d0e7ca9f27c772d16a2830a13de3826.jpg)
Figure 7: Comparison demonstrating the effects different parameters have on our sampling strategies.

We further make the following four observations:

1. The performance of all strategies except ActiveEA decreases as bachelors increase. How to avoid selecting bachelors is thus an important issue in designing AL strategies for EA.
2. Among all strategies, uncertainty sampling is affected the most, while topology-based methods are only marginally affected.
3. 
Our structure-aware uncertainty outperforms the baselines under all tested bachelor proportions.
4. ActiveEA's performance increases as the proportion of bachelors increases. The reason is that, if $\mathcal{G}^1$ is fixed and the bachelors can be recognized successfully, a given budget leads to a larger ratio of annotated matchable entities in datasets with more bachelors than in those with fewer bachelors.

# 5.3 Effectiveness of Bachelor Recognizer

Fig. 6 shows the effectiveness of our bachelor recognizer in the sampling process and the effect of the model ensemble. The green curve shows the Micro-F1 score of our bachelor recognizer using the model ensemble. Our bachelor recognizer achieves high effectiveness from the start of sampling, where there are few annotations. Each red dot represents the performance of the bachelor recognizer trained on a certain data partition without the model ensemble. Performance varies because of the sampling bias problem; the model ensemble therefore gives the trained recognizer high and stable performance.

# 5.4 Sensitivity of Parameters

To investigate the sensitivity of parameters, we ran our strategy with AliNet and BootEA on two DW variants with bachelor proportions of $0\%$ and $30\%$.

![](images/a19c5eaea573c006a2fbc0682e66dc9f3a97679192dfb1fc608a38aa34698ec8.jpg)
Figure 8: Effect of the Bayesian Transformation on uncertainty sampling and ActiveEA across the DW and ENDE datasets and different bachelor percentages.

The sensitivity w.r.t. $\alpha$ is shown in the top row of Fig. 7. We observe that our method is not very sensitive to $\alpha$: the effectiveness fluctuates when $\alpha < 0.5$ and decreases when $\alpha > 0.5$. This indicates that uncertainty is more informative than structural information. When $\alpha = 0$, our struct_uncert degenerates to uncertainty sampling (Eq. 2); in the upper left plot, we show the corresponding performance with dotted lines.
Under most settings of $\alpha$, struct_uncert performs much better than uncertainty sampling, which shows that introducing structure information is beneficial.

The bottom row of Fig. 7 shows the effect of the sampling batch size $N$. The overall trend is that larger batch sizes decrease performance. This confirms the intuition that more frequent updates to the EA model lead to more precise uncertainty estimates. The choice of sampling batch size is therefore a trade-off between computation cost and sampling quality.

# 5.5 Examination of Bayesian Transformation

We enhanced uncertainty sampling and ActiveEA with the Bayesian Transformation, implemented with Monte Carlo (MC) dropout, and applied them to AliNet and RDGCN on DW and ENDE as in Sec. 5.1. Fig. 8 shows the improvements under different settings of the MC dropout rate. We find that (1) the variation in effect on uncertainty sampling is greater than that on ActiveEA; and (2) the Bayesian Transformation with a small dropout rate (e.g., 0.05) results in slight improvements to ActiveEA in most cases.

# 6 Related Works

Entity Alignment. Entity Alignment refers to the matching of entities across different KGs that refer to the same real-world object. Compared with Entity Resolution (Mudgal et al., 2018), which matches duplicate entities in relational data, EA deals with graph data and emphasizes exploiting the structure of KGs. Neural models (Chen et al., 2017, 2018; Wang et al., 2018; Cao et al., 2019) have replaced conventional approaches (Jiménez-Ruiz and Grau, 2011; Suchanek et al., 2011) as the core methods in recent years. Typically they rely on seed alignment as training data, which is expensive to annotate. Iterative training (i.e., self-training) has been applied to improve EA models by generating more training data automatically (Sun et al., 2018; Mao et al., 2020). These works concern better training methods given annotated data.
However, the problem of reducing the cost of annotation has been largely neglected. Berrendorf et al. (2021) were the first to explore AL strategies for the EA task. They compared several types of AL heuristics, including node centrality, uncertainty, graph coverage, and unmatched entities, and empirically showed the impact of sampling strategies on the creation of seed alignment. In our work, we highlight the limitations of single heuristics and propose an AL framework that considers structure information, uncertainty sampling, and unmatched entities at the same time. In addition, existing neural models assume all KG entities have counterparts, which is a very strong assumption in reality (Zhao et al., 2020). We provide a solution for recognizing bachelor entities, which is complementary to the existing models.

Active Learning. Active Learning is a general framework for selecting the most informative data to annotate when training machine learning models (Aggarwal et al., 2014). The pool-based sampling scenario is a popular AL setting in which a base pool of unlabelled instances is available to query from (Settles, 2012; Aggarwal et al., 2014). Our proposed AL framework follows this scenario. Numerous AL strategies have been proposed in the general domain (Aggarwal et al., 2014). Uncertainty sampling is the most widely used because of its ease of implementation and its robust effectiveness (Lewis, 1995; Cohn et al., 1996). However, there are key challenges that general AL strategies cannot solve when applying AL to EA. Most AL strategies are designed under the assumption that the data are independent and identically distributed, whereas entities in KGs are correlated, as in other graph-based tasks, e.g., node classification (Bilgic et al., 2010) and link prediction (Ostapuk et al., 2019). In addition, bachelor entities pose a special issue in EA: they may have low informativeness but high uncertainty. We
We + +design an AL strategy to solve these special challenges. Few existing works (Qian et al., 2017; Malmi et al., 2017) have applied AL to conventional EA but do not consider neural EA models, which have now become of widespread use. Only Berrendorf et al. (2021) empirically explored general AL strategies for neural EA but did not solve the aforementioned challenges. + +# 7 Conclusion + +Entity Alignment is an essential step for KG fusion. Current mainstream methods for EA are neural models, which rely on seed alignment. The cost of labelling seed alignment is often high, but how to reduce this cost has been neglected. In this work, we proposed an Active Learning framework (named ActiveEA), aiming to produce the best EA model with the least annotation cost. Specifically, we attempted to solve two key challenges affecting EA that general AL strategies cannot deal with. Firstly, we proposed a structure-aware uncertainty sampling, which can combine uncertainty sampling with the structure information of KGs. Secondly, we designed a bachelor recognizer, which reduces annotation budget by avoiding the selection of bachelors. Specially, it can tolerate sampling biases. Extensive experimental showed ActiveEA is more effective than the considered baselines and has great generality across different datasets, EA models and bachelor percentages. + +In future, we plan to explore combining active learning and self-training which we believe are complementary approaches. Self-training can generate extra training data automatically but suffers from incorrectly labelled data. This can be addressed by amending incorrectly labelled data using AL strategies. + +# Acknowledgements + +This research is supported by the Shenyang Science and Technology Plan Fund (No. 20-201-4-10), the Member Program of Neusoft Research of Intelligent Healthcare Technology, Co. Ltd.(No. NRMP001901)). Dr Wen Hua is the recipient of an Australian Research Council DECRA Research Fellowship (DE210100160). 
Dr Guido Zuccon is the recipient of an Australian Research Council DECRA Research Fellowship (DE180101579). + +# References + +Charu C. Aggarwal, Xiangnan Kong, Quanquan Gu, Jiawei Han, and Philip S. Yu. 2014. Active learning: A survey. In Charu C. Aggarwal, editor, Data Classification: Algorithms and Applications, pages 571-606. CRC Press. +Max Berrendorf, Evgeniy Faerman, and Volker Tresp. 2021. Active learning for entity alignment. In Advances in Information Retrieval - 43rd European Conference on IR Research, ECIR 2021, Virtual Event, March 28 - April 1, 2021, Proceedings, Part I, volume 12656 of Lecture Notes in Computer Science, pages 48-62. Springer. +Mustafa Bilgic, Lilyana Mihalkova, and Lise Getoor. 2010. Active learning for networked data. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), June 21-24, 2010, Haifa, Israel, pages 79-86. Omnipress. +Antoine Bordes, Nicolas Usunier, Alberto García-Durán, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States, pages 2787-2795. +Sergey Brin and Lawrence Page. 1998. The anatomy of a large-scale hypertextual web search engine. Comput. Networks, 30(1-7):107-117. +Yixin Cao, Zhiyuan Liu, Chengjiang Li, Zhiyuan Liu, Juanzi Li, and Tat-Seng Chua. 2019. Multi-channel graph neural network for entity alignment. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 1452-1461. Association for Computational Linguistics. +Muhao Chen, Yingtao Tian, Kai-Wei Chang, Steven Skiena, and Carlo Zaniolo. 2018. Co-training embeddings of knowledge graphs and entity descriptions for cross-lingual entity alignment. 
In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden, pages 3998-4004. ijcai.org.
Muhao Chen, Yingtao Tian, Mohan Yang, and Carlo Zaniolo. 2017. Multilingual knowledge graph embeddings for cross-lingual knowledge alignment. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017, Melbourne, Australia, August 19-25, 2017, pages 1511-1517. ijcai.org.
David A. Cohn, Zoubin Ghahramani, and Michael I. Jordan. 1996. Active learning with statistical models. J. Artif. Intell. Res., 4:129-145.
Massimo Franceschet. 2011. Pagerank: standing on the shoulders of giants. Commun. ACM, 54(6):92-101.
Linton C. Freeman. 1977. A set of measures of centrality based on betweenness. Sociometry, pages 35-41.
Paul A. Gagniuc. 2017. Markov chains: from theory to implementation and experimentation. John Wiley & Sons.
Yarin Gal, Riashat Islam, and Zoubin Ghahramani. 2017. Deep Bayesian active learning with image data. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, pages 1183-1192. PMLR.
William L. Hamilton, Zhitao Ying, and Jure Leskovec. 2017. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 1024-1034.
Shaoxiong Ji, Shirui Pan, Erik Cambria, Pekka Marttinen, and Philip S. Yu. 2020. A survey on knowledge graphs: Representation, acquisition and applications. CoRR, abs/2002.00388.
Ernesto Jiménez-Ruiz and Bernardo Cuenca Grau. 2011. LogMap: Logic-based and scalable ontology matching.
In The Semantic Web - ISWC 2011 - 10th International Semantic Web Conference, Bonn, Germany, October 23-27, 2011, Proceedings, Part I, volume 7031 of Lecture Notes in Computer Science, pages 273-288. Springer.
Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. 2020. Supervised contrastive learning. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
Thomas N. Kipf and Max Welling. 2017. Semi-supervised classification with graph convolutional networks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.
Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N. Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick van Kleef, Sören Auer, and Christian Bizer. 2015. DBpedia - A large-scale, multilingual knowledge base extracted from Wikipedia. Semantic Web, 6(2):167-195.
David D. Lewis. 1995. A sequential algorithm for training text classifiers: Corrigendum and additional data. SIGIR Forum, 29(2):13-19.
MRAEA: an efficient and robust entity alignment approach for cross-lingual knowledge graph. In WSDM '20: The Thirteenth ACM International Conference on Web Search and Data Mining, Houston, TX, USA, February 3-7, 2020, pages 420-428. ACM. +Sidharth Mudgal, Han Li, Theodoros Rekatsinas, AnHai Doan, Youngchoon Park, Ganesh Krishnan, Rohit Deep, Esteban Arcaute, and Vijay Raghavendra. 2018. Deep learning for entity matching: A design space exploration. In Proceedings of the 2018 International Conference on Management of Data, SIGMOD Conference 2018, Houston, TX, USA, June 10-15, 2018, pages 19-34. ACM. +Natalia Ostapuk, Jie Yang, and Philippe Cudré-Mauroux. 2019. Activelink: Deep active learning for link prediction in knowledge graphs. In *The World Wide Web Conference*, WWW 2019, San Francisco, CA, USA, May 13-17, 2019, pages 1398-1408. ACM. +Kun Qian, Lucian Popa, and Prithviraj Sen. 2017. Active learning for large-scale entity resolution. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, CIKM 2017, Singapore, November 06 - 10, 2017, pages 1379-1388. ACM. +Thomas Rebele, Fabian M. Suchanek, Johannes Hoffart, Joanna Biega, Erdal Kuzey, and Gerhard Weikum. 2016. YAGO: A multilingual knowledge base from wikipedia, wordnet, and geonames. In The Semantic Web - ISWC 2016 - 15th International Semantic Web Conference, Kobe, Japan, October 17-21, 2016, Proceedings, Part II, volume 9982 of Lecture Notes in Computer Science, pages 177-185. +Pengzhen Ren, Yun Xiao, Xiaojun Chang, Po-Yao Huang, Zhihui Li, Xiaojiang Chen, and Xin Wang. 2020. A survey of deep active learning. CoRR, abs/2009.00236. +Burr Settles. 2012. Active Learning. Synthesis Lectures on Artificial Intelligence and Machine Learning. Morgan & Claypool Publishers. + +Fabian M. Suchanek, Serge Abiteboul, and Pierre Senellart. 2011. PARIS: probabilistic alignment of relations, instances, and schema. Proc. VLDB Endow., 5(3):157-168. 
Zequn Sun, Wei Hu, Qingheng Zhang, and Yuzhong Qu. 2018. Bootstrapping entity alignment with knowledge graph embedding. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden, pages 4396-4402. ijcai.org.
Zequn Sun, Chengming Wang, Wei Hu, Muhao Chen, Jian Dai, Wei Zhang, and Yuzhong Qu. 2020a. Knowledge graph alignment network with gated multi-hop neighborhood aggregation. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 222-229. AAAI Press.
Zequn Sun, Qingheng Zhang, Wei Hu, Chengming Wang, Muhao Chen, Farahnaz Akrami, and Chengkai Li. 2020b. A benchmarking study of embedding-based entity alignment for knowledge graphs. Proc. VLDB Endow., 13(11):2326-2340.
Denny Vrandecic and Markus Krötzsch. 2014. Wikidata: a free collaborative knowledgebase. Commun. ACM, 57(10):78-85.
Zhichun Wang, Qingsong Lv, Xiaohan Lan, and Yu Zhang. 2018. Cross-lingual knowledge graph alignment via graph convolutional networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 349-357. Association for Computational Linguistics.
Yuting Wu, Xiao Liu, Yansong Feng, Zheng Wang, Rui Yan, and Dongyan Zhao. 2019. Relation-aware entity alignment for heterogeneous knowledge graphs. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 10-16, 2019, pages 5278-5284. ijcai.org.
Xiang Zhao, Weixin Zeng, Jiuyang Tang, Wei Wang, and Fabian Suchanek. 2020. An experimental study of state-of-the-art entity alignment approaches. IEEE Transactions on Knowledge and Data Engineering.
\ No newline at end of file diff --git a/activeeaactivelearningforneuralentityalignment/images.zip b/activeeaactivelearningforneuralentityalignment/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..87bcb2560d15795a93f4333156598ea40f55af4b --- /dev/null +++ b/activeeaactivelearningforneuralentityalignment/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f23d50d8d3ba8ce63c0ba1951bba075dad81acd606997004bde5ae83c0e70a05 +size 501999 diff --git a/activeeaactivelearningforneuralentityalignment/layout.json b/activeeaactivelearningforneuralentityalignment/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..31498f12803be1d4eafbdc6b3ba89b0e5b3d174c --- /dev/null +++ b/activeeaactivelearningforneuralentityalignment/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7473a1f42a18f3cdb608f987cd14e2cd265ba10ae63762eedd88475a5d31bc20 +size 488918 diff --git a/activelearningbyacquiringcontrastiveexamples/37c9239e-15e2-421b-87ff-13f279cd75ca_content_list.json b/activelearningbyacquiringcontrastiveexamples/37c9239e-15e2-421b-87ff-13f279cd75ca_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..bed825cef184467d3fd763086fb5145b91b92b85 --- /dev/null +++ b/activelearningbyacquiringcontrastiveexamples/37c9239e-15e2-421b-87ff-13f279cd75ca_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:61f5e8cebfa9d96d304abebb79a5ec4ebda6d38f6177bacffbd01a82a01b5243 +size 96958 diff --git a/activelearningbyacquiringcontrastiveexamples/37c9239e-15e2-421b-87ff-13f279cd75ca_model.json b/activelearningbyacquiringcontrastiveexamples/37c9239e-15e2-421b-87ff-13f279cd75ca_model.json new file mode 100644 index 0000000000000000000000000000000000000000..ba6ca7e7b03fa9dd2d55858157fc655166b8e680 --- /dev/null +++ b/activelearningbyacquiringcontrastiveexamples/37c9239e-15e2-421b-87ff-13f279cd75ca_model.json @@ -0,0 +1,3 
@@ +version https://git-lfs.github.com/spec/v1 +oid sha256:688e89c21402ac35230dd3865d26f2dae39d552e85c0da7f55629d8ed06a14e8 +size 117787 diff --git a/activelearningbyacquiringcontrastiveexamples/37c9239e-15e2-421b-87ff-13f279cd75ca_origin.pdf b/activelearningbyacquiringcontrastiveexamples/37c9239e-15e2-421b-87ff-13f279cd75ca_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..7d7667f915fd7370205ab0a1920de3423fa0471f --- /dev/null +++ b/activelearningbyacquiringcontrastiveexamples/37c9239e-15e2-421b-87ff-13f279cd75ca_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:105db2dffcce78184264b4626a2d2fd2111689f25539626d342a8572d54f806a +size 2146918 diff --git a/activelearningbyacquiringcontrastiveexamples/full.md b/activelearningbyacquiringcontrastiveexamples/full.md new file mode 100644 index 0000000000000000000000000000000000000000..41b7ba3563683234e7bee8c1ac188698a6ae1bd7 --- /dev/null +++ b/activelearningbyacquiringcontrastiveexamples/full.md @@ -0,0 +1,335 @@ +# Active Learning by Acquiring Contrastive Examples + +Katerina Margatina† Giorgos Vernikos‡* Loic Barrault† Nikolaos Aletras† †University of Sheffield ‡EPFL *HEIG-VD + +{k.margatina, l.barrault, n.aletras}@sheffield.ac.uk georgios.vernikos@epfl.ch + +# Abstract + +Common acquisition functions for active learning use either uncertainty or diversity sampling, aiming to select difficult and diverse data points from the pool of unlabeled data, respectively. In this work, leveraging the best of both worlds, we propose an acquisition function that opts for selecting contrastive examples, i.e. data points that are similar in the model feature space and yet the model outputs maximally different predictive likelihoods. We compare our approach, CAL (Contrastive Active Learning), with a diverse set of acquisition functions in four natural language understanding tasks and seven datasets. 
Our experiments show that CAL performs consistently better than or on par with the best performing baseline across all tasks, on both in-domain and out-of-domain data. We also conduct an extensive ablation study of our method, and we further analyze all actively acquired datasets, showing that CAL achieves a better trade-off between uncertainty and diversity compared to other strategies.

# 1 Introduction

Active learning (AL) is a machine learning paradigm for efficiently acquiring data for annotation from a (typically large) pool of unlabeled data (Lewis and Catlett, 1994; Cohn et al., 1996; Settles, 2009). Its goal is to concentrate the human labeling effort on the most informative data points that will benefit model performance the most, and thus to reduce data annotation cost.

The most widely used approaches to acquiring data for AL are based on uncertainty and diversity, often described as the "two faces of AL" (Dasgupta, 2011). While uncertainty-based methods leverage the model's predictive confidence to select difficult examples for annotation (Lewis and Gale, 1994; Cohn et al., 1996), diversity sampling exploits heterogeneity in the feature space, typically by performing clustering (Brinker, 2003; Bodó
The two approaches are orthogonal to each other, since uncertainty sampling is usually based on the model's output, while diversity exploits information from the input (i.e. feature) space. Hybrid data acquisition functions that combine uncertainty and diversity sampling have also been proposed (Shen et al., 2004; Zhu et al., 2008; Ducoffe and Precioso, 2018; Ash et al., 2020; Yuan et al., 2020; Ru et al., 2020). + +In this work, we aim to leverage characteristics from hybrid data acquisition. We hypothesize that data points that are close in the model feature space (i.e. share similar or related vocabulary, or similar model encodings) but the model produces different predictive likelihoods, should be good candidates for data acquisition. We define such examples as contrastive (see example in Figure 1). For that purpose, we propose a new acquisition function that searches for contrastive examples in the pool of unlabeled data. Specifically, our method, Contrastive Active Learning (CAL) selects unlabeled + +data points from the pool, whose predictive likelihoods diverge the most from their neighbors in the training set. This way, CAL shares similarities with diversity sampling, but instead of performing clustering it uses the feature space to create neighborhoods. CAL also leverages uncertainty, by using predictive likelihoods to rank the unlabeled data. + +We evaluate our approach in seven datasets from four tasks including sentiment analysis, topic classification, natural language inference and paraphrase detection. We compare CAL against a full suite of baseline acquisition functions that are based on uncertainty, diversity or both. We also examine robustness by evaluating on out-of-domain data, apart from in-domain held-out sets. Our contributions are the following: + +1. We propose CAL, a new acquisition function for active learning that acquires contrastive examples from the pool of unlabeled data (§2); +2. 
We show that CAL performs consistently better than or comparably to all baselines in all tasks, when evaluated on in-domain and out-of-domain settings (§4);
3. We conduct a thorough analysis of our method, showing that CAL achieves a better trade-off between diversity and uncertainty compared to the baselines (§6).

We release our code online${}^{1}$.

# 2 Contrastive Active Learning

In this section we present our proposed method, CAL: Contrastive Active Learning, in detail. First, we provide a definition of contrastive examples and explain how they relate to finding data points that are close to the decision boundary of the model (§2.1). We then describe an active learning loop using our proposed acquisition function (§2.2).

# 2.1 Contrastive Examples

In the context of active learning, we aim to formulate an acquisition function that selects contrastive examples from a pool of unlabeled data for annotation. We draw inspiration from the contrastive learning framework, which leverages the similarity between data points to push those from the same class closer together and examples from different classes further apart during training (Mikolov et al., 2013; Sohn, 2016; van den Oord et al., 2019; Chen et al., 2020; Gunel et al., 2021).

In this work, we define two data points as contrastive examples if their model encodings are similar but their model predictions are very different (maximally disagreeing predictive likelihoods).

Formally, data points $x_{i}$ and $x_{j}$ should first satisfy a similarity criterion:

$$
d\left(\Phi(x_{i}), \Phi(x_{j})\right) < \epsilon \tag{1}
$$

where $\Phi(.) \in \mathbb{R}^{d'}$ is an encoder that maps $x_{i}$ and $x_{j}$ into a shared feature space, $d(.)$ is a distance metric, and $\epsilon$ is a small distance value.
A second criterion, based on model uncertainty, is that the predictive probability distributions of the model, $p(y|x_i)$ and $p(y|x_j)$, for the inputs $x_i$ and $x_j$ should maximally diverge:

$$
\operatorname {K L} \left(p \left(y | x _ {i}\right) | | p \left(y | x _ {j}\right)\right)\rightarrow \infty \tag {2}
$$

where KL is the Kullback-Leibler divergence between the two probability distributions${}^{2}$.

For example, in a binary classification problem, given a reference example $x_{1}$ with output probability distribution $(0.8, 0.2)$${}^{3}$ and similar candidate examples $x_{2}$ with $(0.7, 0.3)$ and $x_{3}$ with $(0.6, 0.4)$, we would consider the pair $(x_{1}, x_{3})$ as contrastive examples. However, if another example $x_{4}$ (similar to $x_{1}$ in the model feature space) had a probability distribution $(0.4, 0.6)$, then the most contrastive pair would be $(x_{1}, x_{4})$.

Figure 1 provides an illustration of contrastive examples for a binary classification case. All data points inside the circle (dotted line) are similar in the model feature space, satisfying Eq. 1. Intuitively, if the divergence of the output probabilities of the model for the gray and blue shaded data points is high, then Eq. 2 should also hold and we should consider them as contrastive.

From a different perspective, data points with similar model encodings (Eq. 1) and dissimilar model outputs (Eq. 2) should be close to the model's decision boundary (Figure 1).
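The worked binary example above can be checked numerically. A minimal sketch (the `kl` helper is illustrative, using the standard discrete KL definition with natural logarithms):

```python
import math

def kl(p, q):
    """Discrete KL divergence KL(p || q), as in Eq. 2 (natural log)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

x1 = (0.8, 0.2)  # reference example
x2 = (0.7, 0.3)  # mild disagreement
x3 = (0.6, 0.4)  # stronger disagreement
x4 = (0.4, 0.6)  # disagreement that flips the predicted class

# Divergence grows with prediction disagreement, so (x1, x4) is the
# most contrastive pair.
assert kl(x1, x2) < kl(x1, x3) < kl(x1, x4)
```

Note that KL is asymmetric; Algorithm 1 computes it with the labeled neighbor's distribution as the first argument, $\mathrm{KL}(p(y|x_l)\,||\,p(y|x_p))$.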
# Algorithm 1 Single iteration of CAL

Input: labeled data $\mathcal{D}_{\mathrm{lab}}$, unlabeled data $\mathcal{D}_{\mathrm{pool}}$, acquisition size $b$, model $\mathcal{M}$, number of neighbours $k$, model representation (encoding) function $\Phi(.)$

1: for $x_{p}$ in $\mathcal{D}_{\mathrm{pool}}$ do
2: $\{(x_{l}^{(i)},y_{l}^{(i)})\}, i = 1,\dots,k \gets \mathrm{KNN}\bigl(\Phi(x_{p}),\Phi(\mathcal{D}_{\mathrm{lab}}),k\bigr)$ ▷ find neighbours in $\mathcal{D}_{\mathrm{lab}}$
3: $p(y|x_l^{(i)})\gets \mathcal{M}(x_l^{(i)}), i = 1,\ldots,k$ ▷ compute probabilities
4: $p(y|x_{p})\gets \mathcal{M}(x_{p})$
5: $\mathrm{KL}\bigl(p(y|x_l^{(i)})\,||\,p(y|x_p)\bigr), i = 1,\dots,k$ ▷ compute divergence
6: $s_{x_p} = \frac{1}{k}\sum_{i = 1}^{k}\mathrm{KL}\bigl(p(y|x_l^{(i)})\,||\,p(y|x_p)\bigr)$
7: end for
8: $Q = \operatorname{argmax}_{x_p \in \mathcal{D}_{\mathrm{pool}}} s_{x_p},\ |Q| = b$ ▷ select batch

Output: $Q$

Hence, we hypothesize that our proposed approach of selecting contrastive examples is related to acquiring difficult examples near the decision boundary of the model. Under this formulation, CAL does not guarantee that the contrastive examples lie near the model's decision boundary, because our definition is not strict. To ensure that a pair of contrastive examples lies on the boundary, the second criterion would have to require that the model classifies the two examples into different classes (i.e. different predictions). However, calculating the distance between an example and the model decision boundary is intractable, and approximations that use adversarial examples are computationally expensive (Ducoffe and Precioso, 2018).

# 2.2 Active Learning Loop

Assuming a multi-class classification problem with $C$ classes, labeled training data $\mathcal{D}_{\mathrm{lab}}$ and a pool of unlabeled data $\mathcal{D}_{\mathrm{pool}}$, we perform AL for $T$ iterations.
At each iteration, we train a model on $\mathcal{D}_{\mathrm{lab}}$ and then use our proposed acquisition function, CAL (Algorithm 1), to acquire a batch $Q$ consisting of $b$ examples from $\mathcal{D}_{\mathrm{pool}}$. The acquired examples are then labeled${}^{4}$, removed from the pool $\mathcal{D}_{\mathrm{pool}}$ and added to the labeled dataset $\mathcal{D}_{\mathrm{lab}}$, which serves as the training set for the model in the next AL iteration. In our experiments, we use a pretrained BERT model $\mathcal{M}$ (Devlin et al., 2019), which we fine-tune at each AL iteration on the current $\mathcal{D}_{\mathrm{lab}}$. We begin the AL loop by training a model $\mathcal{M}$ on an initial labeled dataset $\mathcal{D}_{\mathrm{lab}}$${}^{5}$.

Find Nearest Neighbors for Unlabeled Candidates The first step of our contrastive acquisition function (cf. line 2) is to find examples that are similar in the model feature space (Eq. 1). Specifically, we use the [CLS] token embedding of BERT as our encoder $\Phi(.)$ to represent all data points in $\mathcal{D}_{\mathrm{lab}}$ and $\mathcal{D}_{\mathrm{pool}}$. We use a K-Nearest-Neighbors (KNN) implementation over the labeled data $\mathcal{D}_{\mathrm{lab}}$ to query similar examples $x_{l} \in \mathcal{D}_{\mathrm{lab}}$ for each candidate $x_{p} \in \mathcal{D}_{\mathrm{pool}}$. Our distance metric $d(.)$ is the Euclidean distance. To find the most similar data points in $\mathcal{D}_{\mathrm{lab}}$ for each $x_{p}$, we select the top $k$ rather than fixing a predefined threshold $\epsilon$ (Eq. 1)${}^{6}$. This way, we create a neighborhood $N_{x_{p}} = \{x_{p}, x_{l}^{(1)}, \ldots, x_{l}^{(k)}\}$ that consists of the unlabeled data point $x_{p}$ and its $k$ closest examples $x_{l}$ in $\mathcal{D}_{\mathrm{lab}}$ (Figure 1).
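The steps of Algorithm 1 can be sketched in pure Python (a toy illustration; `encode` and `predict` stand in for the BERT encoder $\Phi(.)$ and model $\mathcal{M}$, and all names are ours):

```python
import math

def kl(p, q):
    """Discrete KL divergence KL(p || q)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def cal_acquire(pool, labeled, predict, encode, k, b):
    """One iteration of CAL (Algorithm 1): score each candidate x_p by the mean
    KL divergence between its k nearest labeled neighbors' predictions and its
    own prediction, then return the b highest-scoring candidates."""
    scores = {}
    for x_p in pool:
        # Line 2: k nearest neighbors in the labeled set (Euclidean distance).
        neighbors = sorted(labeled,
                           key=lambda x_l: math.dist(encode(x_l), encode(x_p)))[:k]
        # Lines 3-6: average KL(p(y|x_l) || p(y|x_p)) over the neighborhood.
        scores[x_p] = sum(kl(predict(x_l), predict(x_p)) for x_l in neighbors) / k
    # Line 8: select the b most contrastive candidates.
    return sorted(pool, key=scores.get, reverse=True)[:b]

# Toy data: 2-d "encodings" and binary class probabilities.
enc = {"l1": (0.0, 0.0), "l2": (1.0, 0.0), "u1": (0.1, 0.1), "u2": (0.9, 0.1)}
prob = {"l1": (0.9, 0.1), "l2": (0.8, 0.2), "u1": (0.2, 0.8), "u2": (0.85, 0.15)}

# u1 disagrees sharply with its labeled neighbors while u2 agrees, so u1 is acquired.
batch = cal_acquire(["u1", "u2"], ["l1", "l2"], prob.get, enc.get, k=2, b=1)
assert batch == ["u1"]
```

In practice the neighbor search would use an indexed KNN implementation over [CLS] embeddings rather than this brute-force sort.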
+ +Compute Contrastive Score between Unlabeled Candidates and Neighbors In the second step, we compute the divergence in the model predictive probabilities for the members of the neighborhood (Eq. 2). Using the current trained model $\mathcal{M}$ to obtain the output probabilities for all data points in $N_{x_p}$ (cf. lines 3-4), we then compute the Kullback-Leibler divergence (KL) between the output probabilities of $x_p$ and all $x_l \in N_{x_p}$ (cf. line 5). To obtain a score $s_{x_p}$ for a candidate $x_p$ , we take the average of all divergence scores (cf. line 6). + +Rank Unlabeled Candidates and Select Batch We apply these steps to all candidate examples $x_{p}\in \mathcal{D}_{\mathrm{pool}}$ and obtain a score $s_{x_p}$ for each. With + +
| DATASET | TASK | DOMAIN | OOD DATASET | TRAIN | VAL | TEST | CLASSES |
| --- | --- | --- | --- | --- | --- | --- | --- |
| IMDB | Sentiment Analysis | Movie Reviews | SST-2 | 22.5K | 2.5K | 25K | 2 |
| SST-2 | Sentiment Analysis | Movie Reviews | IMDB | 60.6K | 6.7K | 871 | 2 |
| AGNEWS | Topic Classification | News | - | 114K | 6K | 7.6K | 4 |
| DBPEDIA | Topic Classification | News | - | 20K | 2K | 70K | 14 |
| PUBMED | Topic Classification | Medical | - | 180K | 30.2K | 30.1K | 5 |
| QNLI | Natural Language Inference | Wikipedia | - | 99.5K | 5.2K | 5.5K | 2 |
| QQP | Paraphrase Detection | Social QA Questions | TWITTERPPDB | 327K | 36.4K | 80.8K | 2 |
Table 1: Dataset statistics.

our scoring function, we define as contrastive examples the unlabeled data $x_{p}$ that have the highest score $s_{x_p}$. A high $s_{x_p}$ score indicates that the unlabeled data point $x_{p}$ has a high divergence in model predicted probabilities compared to its neighbors in the training set (Eq. 1, 2), suggesting that it may lie near the model's decision boundary. To this end, our acquisition function selects the top $b$ examples from the pool with the highest score $s_{x_p}$ (cf. line 8), which form the acquired batch $Q$.

# 3 Experimental Setup

# 3.1 Tasks & Datasets

We conduct experiments on sentiment analysis, topic classification, natural language inference and paraphrase detection tasks. We provide details of the datasets in Table 1. We follow Yuan et al. (2020) and use IMDB (Maas et al., 2011), SST-2 (Socher et al., 2013), PUBMED (Dernoncourt and Lee, 2017) and AGNEWS from Zhang et al. (2015), from which we also obtain DBPEDIA. We experiment with tasks requiring pairs of input sequences, using QQP and QNLI from GLUE (Wang et al., 2019). To evaluate robustness on out-of-distribution (OOD) data, we follow Hendrycks et al. (2020) and use SST-2 as the OOD dataset for IMDB and vice versa. We finally use TWITTERPPDB (Lan et al., 2017) as OOD data for QQP, as in Desai and Durrett (2020).

# 3.2 Baselines

We compare CAL against five baseline acquisition functions. The first, ENTROPY, is the most commonly used uncertainty-based baseline, which acquires the data points for which the model has the highest predictive entropy. As a diversity-based baseline, following Yuan et al. (2020), we use BERTKM, which applies k-means clustering to the $l_{2}$-normalized BERT output embeddings of the fine-tuned model to select $b$ data points.
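For comparison with Algorithm 1, the ENTROPY baseline just described can be sketched as follows (a minimal illustration; toy probability tuples stand in for model outputs, and all names are ours):

```python
import math

def entropy(p):
    """Predictive entropy of a class probability distribution."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def entropy_acquire(pool, predict, b):
    """ENTROPY baseline: acquire the b pool examples with the highest
    predictive entropy, i.e. those the model is least confident about."""
    return sorted(pool, key=lambda x: entropy(predict(x)), reverse=True)[:b]

# x2 (uniform prediction) is most uncertain, x1 (confident prediction) least.
probs = {"x1": (0.95, 0.05), "x2": (0.5, 0.5), "x3": (0.7, 0.3)}
assert entropy_acquire(list(probs), probs.get, b=2) == ["x2", "x3"]
```

Unlike CAL, this ranking ignores the feature space entirely, which is why it can acquire redundant batches of mutually similar uncertain examples.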
We compare against BADGE (Ash et al., 2020), an acquisition function that aims to combine diversity and uncertainty sampling by computing gradient embeddings $g_{x}$ for every candidate data point $x$ in $\mathcal{D}_{\mathrm{pool}}$ and then using clustering to select a batch. Each $g_{x}$ is computed as the gradient of the cross-entropy loss with respect to the parameters of the model's last layer, and serves as the component that incorporates uncertainty into the acquisition function${}^{7}$. We also evaluate a recently introduced cold-start acquisition function called ALPS (Yuan et al., 2020), which uses the masked language model (MLM) loss of BERT as a proxy for model uncertainty in the downstream classification task. Specifically, aiming to leverage both uncertainty and diversity, ALPS forms a surprisal embedding $s_{x}$ for each $x$ by passing the unmasked input $x$ through the BERT MLM head and computing the cross-entropy loss for a random 15% subsample of tokens against the target labels. ALPS clusters these embeddings to sample $b$ sentences at each AL iteration. Lastly, we include RANDOM, which samples data from the pool uniformly at random.

# 3.3 Implementation Details

We use BERT-BASE (Devlin et al., 2019) with a task-specific classification layer, using the implementation from the HuggingFace library (Wolf et al., 2020). Following Dodge et al. (2020), we evaluate the model 5 times per epoch on the development set and keep the checkpoint with the lowest validation loss. We use the standard splits provided for all datasets, if available; otherwise, we randomly sample a validation set from the training set. We test all models on a held-out test set.
We repeat all experiments with five different random seeds, resulting in different initializations of the parameters of the model's extra task-specific output feedforward layer and of the initial $\mathcal{D}_{\mathrm{lab}}$. For all datasets we use a budget of $15\%$ of $\mathcal{D}_{\mathrm{pool}}$, an initial training set of $1\%$ and an acquisition size of $b = 2\%$. Each experiment is run on a single Nvidia Tesla V100 GPU. More details are provided in Appendix A.1.

![](images/5b2df3228a291818aa46ff04b2661bda6d677314d7130ccca9ac35f24634b287.jpg)

![](images/eaafb10928e52016e0657c606265a8b02200a063981921f1f5249e5a0786c07b.jpg)

![](images/880321a7fb0eab7e64b7827f562f345719cb4cbd0088069209a281681d15cf21.jpg)

![](images/4eb3c4197a2fd736ff05ccf9515849f2e35e5b0179687bd8930cccc834a586c6.jpg)

![](images/c396d86fb3a97930c0a92ba995b5f52615300a2bd8d20a0b44fba3c9cf02b204.jpg)

![](images/48cfcbc0143605d3926fc8e4ca6bd1cdd5c5ae2bb13ea913aa66c6e12d4d2ea3.jpg)

![](images/77e2733be909b6d63f6ddeb2082d877c8f6b4805123d9a3902670d62d0a1e011.jpg)

![](images/a19b8122efb885800531fe47d2426b2bc9b447424a049accf7980fa1f5d9ca73.jpg)
Figure 2: In-domain (ID) test accuracy during AL iterations for different acquisition functions.

# 4 Results

# 4.1 In-domain Performance

We present results for in-domain test accuracy across all datasets and acquisition functions in Figure 2. We observe that CAL is consistently the top performing method, especially on the DBPEDIA, PUBMED and AGNEWS datasets.

CAL performs slightly better than ENTROPY on IMDB, QNLI and QQP, while on SST-2 most methods yield similar results. ENTROPY is the second best acquisition function overall, consistently performing better than the diversity-based and hybrid baselines. This corroborates recent findings from Desai and Durrett (2020) that BERT is sufficiently calibrated (i.e. produces good uncertainty estimates), making it a tough baseline to beat in AL.

BERTKM is a competitive baseline (e.g.
SST-2, QNLI) but always underperforms compared to CAL and ENTROPY, suggesting that uncertainty is the most important signal in the data selection process. An interesting future direction would be to investigate in depth which representations (i.e. from which layer) of current pretrained language models work best with similarity search algorithms and clustering.

Similarly, we can see that BADGE, despite using both uncertainty and diversity, also achieves low performance, indicating that clustering the constructed gradient embeddings does not benefit data acquisition. Finally, we observe that ALPS generally underperforms and is close to RANDOM. We can conclude that this heterogeneous approach to uncertainty, i.e. using the pretrained language model as a proxy for the downstream task, is beneficial only in the first few iterations, as shown in Yuan et al. (2020).

Surprisingly, we observe that for the SST-2 dataset ALPS performs similarly to the highest performing acquisition functions, CAL and ENTROPY. We hypothesize that due to the informal textual style of the SST-2 reviews (noisy social media data), the pretrained BERT model can be used as a signal to query linguistically hard examples that benefit the downstream sentiment analysis task. This is an interesting finding, and a future research direction would be to investigate the correlation between the difficulty of an example in a
| TRAIN (ID) | SST-2 | IMDB | QQP |
| --- | --- | --- | --- |
| TEST (OOD) | IMDB | SST-2 | TWITTERPPDB |
| RANDOM | 76.28 ± 0.72 | 82.50 ± 3.61 | 85.86 ± 0.48 |
| BERTKM | 75.99 ± 1.01 | 84.98 ± 1.22 | - |
| ENTROPY | 75.38 ± 2.04 | 85.54 ± 2.52 | 85.06 ± 1.96 |
| ALPS | 77.06 ± 0.78 | 83.65 ± 3.17 | 84.79 ± 0.49 |
| BADGE | 76.41 ± 0.92 | 85.19 ± 3.01 | - |
| CAL | 79.00 ± 1.39 | 84.96 ± 2.36 | 86.20 ± 0.22 |
Table 2: Out-of-domain (OOD) accuracy of models trained with the actively acquired datasets created with different AL acquisition strategies.

downstream task and its perplexity (loss) under the pretrained language model.

# 4.2 Out-of-domain Performance

We also evaluate the out-of-domain (OOD) robustness of the models trained with the actively acquired datasets of the last iteration (i.e. $15\%$ of $\mathcal{D}_{\mathrm{pool}}$, or $100\%$ of the AL budget) using the different acquisition strategies. We present the OOD results for SST-2, IMDB and QQP in Table 2. When we test the models trained on SST-2 on IMDB (first column), we observe that CAL achieves the highest performance compared to the other methods by a large margin, indicating that acquiring contrastive examples can improve OOD generalization. In the opposite scenario (second column), we find that the highest accuracy is obtained with ENTROPY. However, similarly to the ID results for SST-2 (Figure 2), all models trained on different subsets of the IMDB dataset achieve comparable performance when tested on the small SST-2 test set (the mean accuracies lie within the standard deviations across models). We hypothesize that this is because SST-2 is not a challenging OOD dataset for the various IMDB models. This is also evident from the high OOD accuracy, $85\%$ on average, which is close to the $91\%$ SST-2 ID accuracy of the full model (i.e. trained on $100\%$ of the ID data). Finally, we observe that CAL obtains the highest OOD accuracy for QQP compared to RANDOM, ENTROPY and ALPS. Overall, our empirical results show that the models trained on the datasets actively acquired with CAL obtain consistently similar or better performance than all other approaches when tested on OOD data.

# 5 Ablation Study

We conduct an extensive ablation study in order to provide insights into the behavior of every component of CAL. We present all AL experiments on the AGNEWS dataset in Figure 3.
![](images/cc90505eb66b26fcd58eec3b4a7f8ed59d8f867fa48511072f7509ef1fa246ed.jpg)
Figure 3: In-domain (ID) test accuracy with different variants of CAL (ablation).

Decision Boundary We first aim to evaluate our hypothesis that CAL acquires difficult examples that lie close to the model's decision boundary. Specifically, to validate that the ranking of the constructed neighborhoods is meaningful, we run an experiment where we acquire the candidate examples that have the minimum divergence from their neighbors, the opposite of CAL (i.e. we replace $\mathrm{argmax}(.)$ with $\mathrm{argmin}(.)$ in line 8 of Algorithm 1). We observe (Fig. 3 - CAL opposite) that even after acquiring $15\%$ of the unlabeled data, the performance remains unchanged, or even degrades, compared to the initial model (of the first iteration). In effect, this finding shows that CAL does select informative data points.

Neighborhood Next, we experiment with changing the way we construct the neighborhoods, aiming to improve computational efficiency. We thus modify our algorithm to create a neighborhood for each labeled example (instead of each unlabeled one). This way we compute a divergence score only for the neighbors of the training data points. However, we find that this approach slightly underperforms (Fig. 3 - CAL per labeled example), possibly because only a small fraction of the pool is considered and thus the uncertainty of all the unlabeled data points is not taken into account.

Scoring function We also experiment with several approaches to constructing our scoring function (cf. line 6 in Algorithm 1). Instead of computing the KL divergence between the predicted probabilities of each candidate example and its labeled neighbors (cf. line 5), we used the cross-entropy between the output probability distribution and the gold labels of the labeled data. The intuition is to evaluate whether information from the actual label is more useful than the model's predictive probability distribution.
We observe that this scoring function results in a slight drop in performance (Fig. 3 - Cross Entropy). We also experimented with various pooling operations to aggregate the KL divergence scores for each candidate data point. We found maximum and median (Fig. 3 - Max/Median) to perform similarly to the average (Fig. 3 - CAL), which is the pooling operation we decided to keep in our proposed algorithm.

Feature Space Since our approach is related to acquiring data near the model's decision boundary, this effectively translates into using the [CLS] output embedding of BERT. Still, we opted to cover several possible alternatives for the representations, i.e. the feature space, used to find the neighbors with KNN. We divide our exploration into two categories: intrinsic representations from the current fine-tuned model, and extrinsic representations from different methods. For the first category, we examine representing each example with the mean embedding layer of BERT (Fig. 3 - Mean embedding) or the mean output embedding (Fig. 3 - Mean output). We find both alternatives to perform worse than using the [CLS] token (Fig. 3 - CAL). The motivation for the second category is to evaluate whether acquiring contrastive examples in the input feature space, i.e. representing the raw text, is meaningful (Gardner et al., 2020)${}^{9}$. We thus examine contextual representations from a pretrained BERT language model (Fig. 3 - BERT-pr [CLS]), not fine-tuned on the task or domain, and non-contextualized TF-IDF vectors (Fig. 3 - TF-IDF). We find both approaches, along with Mean embedding, to largely underperform compared to our approach that acquires ambiguous data near the model decision boundary.

${}^{9}$This can be interpreted as comparing the effectiveness of selecting data near the model decision boundary vs. the task decision boundary, i.e. data that are similar for the task itself or for humans (in terms of having the same raw input/vocabulary), but are from different classes.
# 6 Analysis

Finally, we further investigate CAL and all the acquisition functions considered, in terms of diversity, representativeness and uncertainty. Our aim is to provide insights into what data each method tends to select and what the uncertainty-diversity trade-off of each approach is. Table 3 shows the results of our analysis averaged across datasets. We denote with $L$ the labeled set, $U$ the unlabeled pool and $Q$ an acquired batch of data points from $U$${}^{10}$.

# 6.1 Diversity & Uncertainty Metrics

Diversity in input space (DIV.-I) We first evaluate the diversity of the actively acquired data in the input feature space, i.e. raw text, by measuring the overlap between tokens in the sampled sentences $Q$ and tokens from the rest of the data pool $U$. Following Yuan et al. (2020), we compute DIV.-I as the Jaccard similarity between the set of tokens from the sampled sentences $Q$, $\mathcal{V}_{\mathcal{Q}}$, and the set of tokens from the unsampled sentences $\mathcal{U} \backslash \mathcal{Q}$, $\mathcal{V}_{\mathcal{Q}'}$: $\mathcal{J}(\mathcal{V}_{\mathcal{Q}}, \mathcal{V}_{\mathcal{Q}'}) = \frac{|\mathcal{V}_{\mathcal{Q}} \cap \mathcal{V}_{\mathcal{Q}'}|}{|\mathcal{V}_{\mathcal{Q}} \cup \mathcal{V}_{\mathcal{Q}'}|}$. A high DIV.-I value indicates high diversity because the sampled and unsampled sentences have many tokens in common.

Diversity in feature space (DIV.-F) We next evaluate diversity in the (model) feature space, using the [CLS] representations of a trained BERT model${}^{11}$. Following Zhdanov (2019) and Ein-Dor et al. (2020), we compute the DIV.-F of a set $Q$ as $\left(\frac{1}{|U|}\sum_{x_i\in U}\min_{x_j\in Q}d(\Phi (x_i),\Phi (x_j))\right)^{-1}$, where $\Phi (x_{i})$ denotes the [CLS] output token of example $x_{i}$ obtained by the model trained on $L$, and $d(\Phi (x_i),\Phi (x_j))$ denotes the Euclidean distance between $x_{i}$ and $x_{j}$ in the feature space.

Uncertainty (UNC.)
To measure uncertainty, we use the model $\mathcal{M}_f$ trained on the entire training dataset (Figure 2 - Full supervision). As in Yuan et al. (2020), we use the logits from the fully trained model to estimate the uncertainty of an example, as it is a reliable estimate due to its high performance after training on many examples, while + +
| | DIV.-I | DIV.-F | UNC. | REPR. |
| --- | --- | --- | --- | --- |
| RANDOM | 0.766 | 0.356 | 0.132 | 1.848 |
| BERTKM | 0.717 | 0.363 | 0.145 | 2.062 |
| ENTROPY | 0.754 | 0.323 | 0.240 | 2.442 |
| ALPS | 0.771 | 0.360 | 0.126 | 2.038 |
| BADGE | 0.655 | 0.339 | 0.123 | 2.013 |
| CAL | 0.768 | 0.335 | 0.231 | 2.693 |
Table 3: Uncertainty and diversity metrics across acquisition functions, averaged for all datasets.

it offers a fair comparison across all acquisition strategies. First, we compute the predictive entropy of an input $x$ when evaluated by model $\mathcal{M}_f$, and then we take the average over all sentences in a sampled batch $Q$. That is, we estimate the uncertainty of the acquired batch $Q$ for each method as the average predictive entropy: $-\frac{1}{|Q|} \sum_{x \in Q} \sum_{c=1}^{C} p(y = c|x) \log p(y = c|x)$. As the sampled batch $Q$ we use the full actively acquired dataset after completing our AL iterations (with 15% of the data).

Representativeness (REPR.) We finally analyze the representativeness of the acquired data, as in Ein-Dor et al. (2020). We aim to study whether AL strategies tend to select outlier examples that do not properly represent the overall data distribution. We rely on the KNN-density measure proposed by Zhu et al. (2008), where the density of an example is quantified as one over the average distance between the example and its K most similar examples (i.e. K nearest neighbors) within $U$, based on the [CLS] representations as in DIV.-F. An example with a high density degree is less likely to be an outlier. We define the representativeness of a batch $Q$ as one over the average KNN-density of its instances, using the Euclidean distance with $K = 10$.

# 6.2 Discussion

We first observe in Table 3 that ALPS acquires the most diverse data across all approaches. This is intuitive since ALPS is the most linguistically-informed method, as it essentially acquires data that are difficult for the language modeling task, thus favoring data with a more diverse vocabulary. All other methods acquire similarly diverse data, except BADGE, which has the lowest score. Interestingly, we observe a different pattern when evaluating diversity in the model feature space (using the [CLS] representations).
BERTKM has the highest DIV.-F score, as expected, while CAL and ENTROPY have the lowest. This supports our hypothesis that uncertainty sampling tends to acquire uncertain but similar examples, while CAL by definition constrains its search to similar examples in the feature space that lie close to the decision boundary (contrastive examples). As for uncertainty, we observe that ENTROPY and CAL acquire the most uncertain examples, with average entropy almost twice as high as all other methods. Finally, regarding the representativeness of the acquired batches, we see that CAL obtains the highest score, followed by ENTROPY, with the rest of the AL strategies acquiring less representative data.

Overall, our analysis validates our assumptions about the properties of the data expected to be selected by the various acquisition functions. Our findings show that diversity in the raw text does not necessarily correlate with diversity in the feature space. In other words, low DIV.-F does not translate to low diversity in the distribution of acquired tokens (DIV.-I), suggesting that CAL can acquire similar examples in the feature space that have sufficiently diverse inputs. Furthermore, combining the results of our AL experiments (Figure 2) and our analysis (Table 3), we conclude that the strong performance of CAL, followed by ENTROPY, is due to acquiring uncertain data. We observe that the most notable difference, in terms of selected data, between these two approaches and the rest is uncertainty (UNC.), perhaps suggesting the superiority of uncertainty over diversity sampling. We show that CAL improves over ENTROPY because our algorithm "guides" the focus of uncertainty sampling by not considering redundant uncertain data that lie away from the decision boundary, thus improving representativeness. We finally find that RANDOM is evidently the worst approach, as it selects the least diverse and uncertain data on average compared to all methods.
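The DIV.-I and UNC. metrics analyzed above can be sketched as follows (a minimal illustration over toy token lists and probability tuples; the function names are ours):

```python
import math

def div_i(sampled_tokens, unsampled_tokens):
    """DIV.-I: Jaccard similarity between the vocabulary of the acquired
    batch Q and that of the unsampled sentences (the rest of the pool)."""
    vq, vr = set(sampled_tokens), set(unsampled_tokens)
    return len(vq & vr) / len(vq | vr)

def unc(batch_probs):
    """UNC.: mean predictive entropy of the acquired batch, with probabilities
    taken from the fully trained model M_f."""
    ent = lambda dist: -sum(p * math.log(p) for p in dist if p > 0)
    return sum(ent(dist) for dist in batch_probs) / len(batch_probs)

# Two of the three distinct tokens are shared between batch and pool remainder.
assert abs(div_i(["the", "film"], ["the", "film", "plot"]) - 2 / 3) < 1e-12
# A uniform binary prediction has entropy ln 2.
assert abs(unc([(0.5, 0.5)]) - math.log(2)) < 1e-12
```

DIV.-F and REPR. follow the same pattern, replacing token sets with distances between [CLS] embeddings.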
# 7 Related Work

Uncertainty Sampling Uncertainty-based acquisition for AL focuses on selecting data points that the model predicts with low confidence. A simple uncertainty-based acquisition function is least confidence (Lewis and Gale, 1994), which sorts the data in the pool in descending order of the probability of not predicting the most confident class. Another approach is to select samples that maximize the predictive entropy. Houlsby et al. (2011) propose Bayesian Active Learning by Disagreement (BALD), a method that chooses data points that maximize the mutual information between predictions and the model's posterior probabilities. Gal et al. (2017) applied BALD to deep neural models using Monte Carlo dropout (Gal and Ghahramani, 2016) to acquire multiple uncertainty estimates for each candidate example. Least confidence, entropy and BALD acquisition functions have been applied to a variety of text classification and sequence labeling tasks, and have been shown to substantially improve data efficiency (Shen et al., 2017; Siddhant and Lipton, 2018; Lowell and Lipton, 2019; Kirsch et al., 2019; Shelmanov et al., 2021; Margatina et al., 2021).

On the other hand, diversity or representative sampling is based on selecting batches of unlabeled examples that are representative of the unlabeled pool, following the intuition that a representative set of examples, once labeled, can act as a surrogate for the full data available. In the context of deep learning, Geifman and El-Yaniv (2017) and Sener and Savarese (2018) select representative examples based on core-set construction, a fundamental problem in computational geometry. Inspired by generative adversarial learning, Gissin and Shalev-Shwartz (2019) define AL as a binary classification task, with an adversarial classifier trained so that it cannot discriminate data from the training set and the pool.
Other approaches based on adversarial active learning use out-of-the-box models to perform adversarial attacks on the training data, in order to approximate the distance from the decision boundary of the model (Ducoffe and Precioso, 2018; Ru et al., 2020).

Hybrid There are several existing approaches that combine representative and uncertainty sampling. Such approaches include active learning algorithms that use meta-learning (Baram et al., 2004; Hsu and Lin, 2015) and reinforcement learning (Fang et al., 2017; Liu et al., 2018), aiming to learn a policy for switching between a diversity-based and an uncertainty-based criterion at each iteration. Recently, Ash et al. (2020) proposed Batch Active learning by Diverse Gradient Embeddings (BADGE) and Yuan et al. (2020) proposed Active Learning by Processing Surprisal (ALPS), a cold-start acquisition function specific to pretrained language models. Both methods construct representations for the unlabeled data based on uncertainty and then use them for clustering, hence combining both uncertainty and diversity sampling. The effectiveness of AL with pretrained language models, e.g. BERT (Devlin et al., 2019), has recently been evaluated empirically in a variety of NLP tasks by Ein-Dor et al. (2020), showing substantial improvements over random sampling.

# 8 Conclusion & Future Work

We present CAL, a novel acquisition function for AL that acquires contrastive examples; data points which are similar in the model feature space, yet for which the model outputs maximally different class probabilities. Our approach uses information from the feature space to create neighborhoods for each unlabeled example, and predictive likelihoods for ranking the candidate examples. Empirical experiments on various in-domain and out-of-domain scenarios demonstrate that CAL performs better than other acquisition functions in the majority of cases.
After analyzing the actively acquired datasets obtained with all the methods considered, we conclude that entropy is the hardest baseline to beat, but our approach improves on it by guiding uncertainty sampling towards regions near the decision boundary that contain more informative data.

Still, our empirical results and analysis show that no single acquisition function consistently outperforms all others by a large margin. This demonstrates that there is still room for improvement in the AL field.

Furthermore, recent findings show that in specific tasks, such as Visual Question Answering (VQA), complex acquisition functions might not outperform random sampling because they tend to select collective outliers that hurt model performance (Karamcheti et al., 2021). We believe that taking a step back and analyzing the behavior of standard acquisition functions, e.g. with Dataset Maps (Swayamdipta et al., 2020), might be beneficial, especially if similar behavior appears in other NLP tasks too.

Another interesting future direction for CAL, related to interpretability, would be to evaluate whether acquiring contrastive examples for the task (Kaushik et al., 2020; Gardner et al., 2020) is more beneficial than contrastive examples for the model, as we do in CAL.

# Acknowledgments

KM and NA are supported by Amazon through the Alexa Fellowship scheme.

# References

Jordan T. Ash, Chicheng Zhang, Akshay Krishnamurthy, John Langford, and Alekh Agarwal. 2020. Deep batch active learning by diverse, uncertain gradient lower bounds. In Proceedings of the International Conference on Learning Representations.
Yoram Baram, Ran El-Yaniv, and Kobi Luz. 2004. Online choice of active learning algorithms. Journal of Machine Learning Research, 5:255-291.
Zalán Bodó, Zsolt Minier, and Lehel Csató. 2011. Active learning with clustering. In Proceedings of the Active Learning and Experimental Design Workshop, in conjunction with AISTATS 2010, volume 16, pages 127-139.
+Klaus Brinker. 2003. Incorporating diversity in active learning with support vector machines. In Proceedings of the International Conference on Machine Learning, pages 59-66. +Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020. A simple framework for contrastive learning of visual representations. In Proceedings of the International Conference on Machine Learning, volume 119, pages 1597-1607. +David A. Cohn, Zoubin Ghahramani, and Michael I. Jordan. 1996. Active learning with statistical models. Journal of Artificial Intelligence Research, 4(1):129-145. +Sanjoy Dasgupta. 2011. Two faces of active learning. Theoretical Computer Science, 412(19):1767-1781. Algorithmic Learning Theory (ALT 2009). +Franck Dernoncourt and Ji Young Lee. 2017. PubMed 200k RCT: a dataset for sequential sentence classification in medical abstracts. In Proceedings of the Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 308-313. +Shrey Desai and Greg Durrett. 2020. Calibration of pre-trained transformers. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 295-302. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4171-4186. +Jesse Dodge, Gabriel Ilharco, Roy Schwartz, Ali Farhadi, Hannaneh Hajishirzi, and Noah A. Smith. 2020. Fine-tuning pretrained language models: Weight initializations, data orders, and early stopping. ArXiv. +Melanie Ducoffe and Frederic Precioso. 2018. Adversarial active learning for deep networks: a margin based approach. + +Liat Ein-Dor, Alon Halfon, Ariel Gera, Eyal Shnarch, Lena Dankin, Leshem Choshen, Marina Danilevsky, Ranit Aharonov, Yoav Katz, and Noam Slonim. 2020. Active learning for BERT: An empirical study. 
In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 7949-7962. +Meng Fang, Yuan Li, and Trevor Cohn. 2017. Learning how to active learn: A deep reinforcement learning approach. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 595-605. +Yarin Gal and Zoubin Ghahramani. 2016. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In Proceedings of the International Conference on Machine Learning, volume 48, pages 1050-1059. +Yarin Gal, Riashat Islam, and Zoubin Ghahramani. 2017. Deep Bayesian active learning with image data. In Proceedings of the International Conference on Machine Learning, volume 70, pages 1183-1192. +Matt Gardner, Yoav Artzi, Victoria Basmov, Jonathan Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi, Dheeru Dua, Yanai Elazar, Ananth Gottumukkala, Nitish Gupta, Hannaneh Hajishirzi, Gabriel Ilharco, Daniel Khashabi, Kevin Lin, Jiangming Liu, Nelson F. Liu, Phoebe Mulcaire, Qiang Ning, Sameer Singh, Noah A. Smith, Sanjay Subramanian, Reut Tsarfaty, Eric Wallace, Ally Zhang, and Ben Zhou. 2020. Evaluating models' local decision boundaries via contrast sets. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1307-1323. +Yonatan Geifman and Ran El-Yaniv. 2017. Deep active learning over the long tail. CoRR, abs/1711.00941. +Daniel Gissin and Shai Shalev-Shwartz. 2019. Discriminative active learning. CoRR, abs/1907.06347. +Beliz Gunel, Jingfei Du, Alexis Conneau, and Veselin Stoyanov. 2021. Supervised contrastive learning for pre-trained language model fine-tuning. In Proceedings of the International Conference on Learning Representations. +Dan Hendrycks, Xiaoyuan Liu, Eric Wallace, Adam Dziedzic, Rishabh Krishnan, and Dawn Song. 2020. Pretrained transformers improve out-of-distribution robustness. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 2744-2751.
+Neil Houlsby, Ferenc Huszár, Zoubin Ghahramani, and Máté Lengyel. 2011. Bayesian active learning for classification and preference learning. ArXiv. +Wei-Ning Hsu and Hsuan-Tien Lin. 2015. Active learning by learning. In Proceedings of the Conference of the Association for the Advancement of Artificial Intelligence, pages 2659-2665. +Siddharth Karamcheti, Ranjay Krishna, Li Fei-Fei, and Christopher Manning. 2021. Mind your outliers! Investigating the negative impact of outliers on active learning for visual question answering. In Proceedings of the Annual Meeting of the Association for Computational Linguistics and the International Joint Conference on Natural Language Processing, pages 7265-7281. +Divyansh Kaushik, Eduard Hovy, and Zachary Lipton. 2020. Learning the difference that makes a difference with counterfactually-augmented data. In Proceedings of the International Conference on Learning Representations. +Andreas Kirsch, Joost van Amersfoort, and Yarin Gal. 2019. BatchBALD: Efficient and diverse batch acquisition for deep Bayesian active learning. In Proceedings of the Conference on Neural Information Processing Systems, pages 7026-7037. +Wuwei Lan, Siyu Qiu, Hua He, and Wei Xu. 2017. A continuously growing dataset of sentential paraphrases. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1224-1234. +David D. Lewis and Jason Catlett. 1994. Heterogeneous uncertainty sampling for supervised learning. In Machine Learning Proceedings 1994, pages 148-156. +David D. Lewis and William A. Gale. 1994. A sequential algorithm for training text classifiers. In Proceedings of the Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. +Ming Liu, Wray Buntine, and Gholamreza Haffari. 2018. Learning how to actively learn: A deep imitation learning approach.
In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1874-1883. +David Lowell and Zachary C Lipton. 2019. Practical obstacles to deploying active learning. In Proceedings of the Conference on Empirical Methods in Natural Language Processing and the International Joint Conference on Natural Language Processing, pages 21-30. +Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142-150. +Katerina Margatina, Loic Barrault, and Nikolaos Aletras. 2021. Bayesian active learning with pretrained language models. CoRR, abs/2104.08320. +Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In Proceedings of the International Conference on Neural Information Processing Systems, pages 3111-3119. +Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, pages 8024-8035. +Nicholas Roy and Andrew McCallum. 2001. Toward optimal active learning through sampling estimation of error reduction. In Proceedings of the International Conference on Machine Learning, pages 441-448. +Dongyu Ru, Jiangtao Feng, Lin Qiu, Hao Zhou, Mingxuan Wang, Weinan Zhang, Yong Yu, and Lei Li. 2020. Active sentence learning by adversarial uncertainty sampling in discrete space.
In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4908-4917, Online. Association for Computational Linguistics. +Ozan Sener and Silvio Savarese. 2018. Active learning for convolutional neural networks: A core-set approach. In Proceedings of the International Conference on Learning Representations. +Burr Settles. 2009. Active learning literature survey. Computer sciences technical report. +Artem Shelmanov, Dmitri Puzyrev, Lyubov Kupriyanova, Denis Belyakov, Daniil Larionov, Nikita Khromov, Olga Kozlova, Ekaterina Artemova, Dmitry V. Dylov, and Alexander Panchenko. 2021. Active learning for sequence tagging with deep pre-trained models and Bayesian uncertainty estimates. In Proceedings of the Conference of the European Chapter of the Association for Computational Linguistics, pages 1698-1712. +Dan Shen, Jie Zhang, Jian Su, Guodong Zhou, and Chew-Lim Tan. 2004. Multi-criteria-based active learning for named entity recognition. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 589-596. +Yanyao Shen, Hyokun Yun, Zachary Lipton, Yakov Kronrod, and Animashree Anandkumar. 2017. Deep active learning for named entity recognition. In Proceedings of the Workshop on Representation Learning for NLP, pages 252-256. +Aditya Siddhant and Zachary C Lipton. 2018. Deep Bayesian active learning for natural language processing: Results of a large-scale empirical study. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 2904-2909. +Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1631-1642. +Kihyuk Sohn. 2016. Improved deep metric learning with multi-class n-pair loss objective.
In Advances in Neural Information Processing Systems, volume 29. Curran Associates, Inc. +Swabha Swayamdipta, Roy Schwartz, Nicholas Lourie, Yizhong Wang, Hannaneh Hajishirzi, Noah A. Smith, and Yejin Choi. 2020. Dataset cartography: Mapping and diagnosing datasets with training dynamics. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 9275-9293. +Aaron van den Oord, Yazhe Li, and Oriol Vinyals. 2019. Representation learning with contrastive predictive coding. ArXiv. +Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In International Conference on Learning Representations. +Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45. +Michelle Yuan, Hsuan-Tien Lin, and Jordan Boyd-Graber. 2020. Cold-start active learning through self-supervised language modeling. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7935-7948, Online. Association for Computational Linguistics. +Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems, volume 28, pages 649-657. Curran Associates, Inc. +Fedor Zhdanov. 2019. Diverse mini-batch active learning. ArXiv. +Jingbo Zhu, Huizhen Wang, Tianshun Yao, and Benjamin K Tsou. 2008.
Active learning with sampling by uncertainty and density for word sense disambiguation and text classification. In Proceedings of the International Conference on Computational Linguistics, pages 1137-1144. + +# A Appendix + +# A.1 Data & Hyperparameters + +In this section we provide details of all the datasets used in this work and the hyperparameters used for training the model. For QNLI, IMDB and SST-2 we randomly sample $10\%$ of the training set to serve as the validation set, while for AGNEWS and QQP we sample $5\%$ . For the DBPEDIA dataset we undersample both the training and validation sets (from the standard splits) to facilitate our AL simulation (the original dataset consists of 560K training and 28K validation examples). For all datasets we use the standard test set; for the SST-2, QNLI and QQP datasets, which are taken from the GLUE benchmark (Wang et al., 2019), we instead use the development set as the held-out test set and subsample a new development set from the training set. + +For all datasets we train BERT-BASE (Devlin et al., 2019) from the HuggingFace library (Wolf et al., 2020) in PyTorch (Paszke et al., 2019). We train all models with batch size 16, learning rate $2e - 5$ , no weight decay, and the AdamW optimizer with epsilon $1e - 8$ . For all datasets we use a maximum sequence length of 128, except for IMDB, which contains longer input texts, where we use 256. To ensure reproducibility and fair comparison between the various methods under evaluation, we run all experiments with the same five seeds that we randomly selected from the range [1, 9999]. We evaluate the model 5 times per epoch on the development set following Dodge et al. (2020) and keep the checkpoint with the lowest validation loss. We use the code provided by Yuan et al. (2020) for ALPS, BADGE and BERTKM. + +# A.2 Efficiency + +In this section we compare the efficiency of the acquisition functions considered in our experiments.
We denote by $m$ the number of labeled examples in $\mathcal{D}_{\mathrm{lab}}$ , by $n$ the number of unlabeled examples in $\mathcal{D}_{\mathrm{pool}}$ , by $C$ the number of classes in the downstream classification task, by $d$ the dimension of the embeddings, by $t$ the fixed number of iterations for k-MEANS, by $l$ the maximum sequence length and by $k$ the acquisition size. In our experiments, following (Yuan et al., 2020), $k = 100$ , $d = 768$ , $t = 10$ , and $l = 128$ . $^{12}$ ALPS requires $\mathcal{O}(tknl)$ considering that the surprisal embeddings are computed. BERTKM and BADGE, the most computationally heavy approaches, require $\mathcal{O}(knd)$ and $\mathcal{O}(Cknd)$ respectively, given that gradient embeddings are computed for BADGE $^{13}$ . On the other hand, ENTROPY only requires $n$ forward passes through the model, in order to obtain the logits for all the data in $\mathcal{D}_{\mathrm{pool}}$ . Instead, our approach, CAL, first requires $m + n$ forward passes, in order to acquire the logits and the CLS representations of the data (in $\mathcal{D}_{\mathrm{pool}}$ and $\mathcal{D}_{\mathrm{lab}}$ ) and then one iteration over all data in $\mathcal{D}_{\mathrm{pool}}$ to obtain the scores. + +We present the runtimes in detail for all datasets and acquisition functions in Tables 4 and 5. First, we define the total acquisition time as the sum of two components: inference time and selection time. Inference time is the time required to pass all data through the model in order to acquire predictions, probability distributions or model encodings (representations). This is explicitly required for the uncertainty-based methods, like ENTROPY, and for our method CAL. The remaining time is considered selection time and is essentially the time for all computations necessary to rank and select the $b$ most important examples from $\mathcal{D}_{\mathrm{pool}}$ .
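The contrastive scoring loop that dominates CAL's selection time can be sketched as follows. This is an illustrative reimplementation under our own simplifying assumptions (the function name, the use of scikit-learn for the nearest-neighbour search, and the KL direction are ours), not the paper's exact code:

```python
import numpy as np
from scipy.special import rel_entr
from sklearn.neighbors import NearestNeighbors

def cal_select(cls_lab, probs_lab, cls_pool, probs_pool, b, num_neighbors=10):
    """Score every candidate in D_pool by the mean KL divergence between
    the predicted distributions of its nearest labeled neighbours (found
    in [CLS] embedding space) and its own, then return the b top-scoring
    indices. This mirrors the m + n forward passes (done beforehand to
    produce the inputs) plus one iteration over D_pool."""
    knn = NearestNeighbors(n_neighbors=num_neighbors).fit(cls_lab)
    _, neighbors = knn.kneighbors(cls_pool)
    scores = np.array([
        # KL(neighbour distribution || candidate distribution), averaged
        np.mean([rel_entr(probs_lab[j], probs_pool[i]).sum()
                 for j in neighbors[i]])
        for i in range(len(cls_pool))
    ])
    return np.argsort(-scores)[:b]  # most "contrastive" candidates first
```

The loop is linear in the pool size, which is consistent with the selection times growing roughly with $\mathcal{D}_{\mathrm{pool}}$ in Table 4.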
+ +We observe in Table 4 that the diversity-based functions do not require this explicit inference time, while for ENTROPY it is the only computation needed (taking the argmax of a list of uncertainty scores is negligible). CAL requires both inference and selection time. The inference time of CAL is slightly higher than that of ENTROPY because we perform $m + n$ forward passes instead of $n$ , i.e. over both $\mathcal{D}_{\mathrm{pool}}$ and $\mathcal{D}_{\mathrm{lab}}$ instead of only $\mathcal{D}_{\mathrm{pool}}$ . The selection time for CAL corresponds to the for-loop presented in Algorithm 1. We observe that it is often less computationally expensive than the inference step (a simple forward pass through the model). Still, there is room for improvement in reducing the time complexity of this step. + +In Table 5 we present the total time for all datasets (ordered by increasing $\mathcal{D}_{\mathrm{pool}}$ size) and the average time for each acquisition function, as a means to rank their efficiency. Because we do not apply all acquisition functions to all datasets, we compute three different average scores to ensure a fair comparison. AVG.-ALL is the average time across all 7 datasets and is used to compare RANDOM, ALPS, ENTROPY and CAL. AVG.-3 is the average time across the first 3 datasets (IMDB, SST-2 and DBPEDIA) and is used to compare all + +
| | DBPEDIA | IMDB | SST-2 | QNLI | AGNEWS | PUBMED | QQP |
| --- | --- | --- | --- | --- | --- | --- | --- |
| RANDOM | (0, 0) | (0, 0) | (0, 0) | (0, 0) | (0, 0) | (0, 0) | (0, 0) |
| ALPS | (0, 181) | (0, 222) | (0, 733) | (0, 1607) | (0, 2309) | (0, 5878) | (0, 14722) |
| BERTKM | (0, 467) | (0, 431) | (0, 4265) | (0, 8138) | (0, 9344) | (0, 25965) | (-, -) |
| BADGE | (0, 12871) | (0, 3816) | (0, 25640) | (-, -) | (-, -) | (-, -) | (-, -) |
| ENTROPY | (103, 1) | (107, 0) | (173, 0) | (331, 0) | (402, 0) | (596, 0) | (1070, 0) |
| CAL | (133, 49) | (212, 61) | (464, 244) | (528, 376) | (656, 628) | (1184, 1445) | (1541, 2857) |
+ +Table 4: Runtimes (in seconds) for all datasets and acquisition functions. In each cell of the table we present a tuple $(i,s)$ where $i$ is the inference time and $s$ the selection time. Inference time is the time for the model to perform a forward pass for all the unlabeled data in $\mathcal{D}_{\mathrm{pool}}$ and selection time is the time that each acquisition function requires to rank all candidate data points and select $b$ for annotation (for a single iteration). Since we cannot report the runtimes for every model in the AL pipeline (at each iteration the size of $\mathcal{D}_{\mathrm{pool}}$ changes), we provide the median. + +
| | DBPEDIA | IMDB | SST-2 | QNLI | AGNEWS | PUBMED | QQP | AVG.-ALL | AVG.-3 | AVG.-6 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| RANDOM | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| ALPS | 181 | 222 | 733 | 1607 | 2309 | 5878 | 14722 | 3664 | 378 | 1821 |
| BERTKM | 467 | 431 | 4265 | 8138 | 9344 | 25965 | - | - | 1721 | 8101 |
| BADGE | 12871 | 3816 | 25640 | - | - | - | - | - | 14109 | - |
| ENTROPY | 104 | 107 | 173 | 331 | 402 | 596 | 1070 | 397 | 128 | 285 |
| CAL | 182 | 273 | 708 | 904 | 1284 | 2629 | 4398 | 1482 | 387 | 996 |
+ +Table 5: Runtimes (in seconds) for all datasets and acquisition functions. In each cell of the table we present the total acquisition time (inference and selection). AVG.-ALL shows the average acquisition time for each acquisition function across all datasets, AVG.-6 across all datasets except QQP, and AVG.-3 across the first 3 datasets only (DBPEDIA, IMDB, SST-2). + +acquisition functions. Finally, AVG.-6 is the average time across all datasets apart from QQP and is used to compare RANDOM, ALPS, BERTKM, ENTROPY and CAL. + +We first observe that ENTROPY is overall the most efficient acquisition function. According to the AVG.-ALL column, CAL is the second most efficient function, followed by ALPS. According to AVG.-6 we observe the same pattern, with BERTKM being the slowest method. Finally, we compare all acquisition functions on the 3 smallest (in terms of $\mathcal{D}_{\mathrm{pool}}$ size) datasets and find that ENTROPY is the fastest method, followed by ALPS and CAL, which require almost 3 times more computation time. The other clustering methods, BERTKM and BADGE, are significantly more computationally expensive, requiring 13 and 100 (!) times more time than ENTROPY, respectively. + +Interestingly, we observe the effect of the acquisition size ( $2\%$ of $\mathcal{D}_{\mathrm{pool}}$ in our case) and of the size of $\mathcal{D}_{\mathrm{pool}}$ itself on the clustering methods. As these parameters increase, the computation time of the corresponding acquisition function increases dramatically. For example, in the 3 smallest datasets ALPS requires similar time to CAL. However, when we increase the acquisition size and the size of $\mathcal{D}_{\mathrm{pool}}$ (i.e. as we move from DBPEDIA with $20K$ examples in $\mathcal{D}_{\mathrm{pool}}$ to QNLI with $100K$ etc.; see Table 1), we observe that the acquisition time of ALPS becomes twice as much as that of CAL. For instance, in QQP with acquisition size 3270, ALPS requires 14722 seconds on average, while CAL requires 4398.
This shows that even though our approach is more computationally expensive as the size of $D_{\mathrm{pool}}$ increases, the complexity is linear, while for the other hybrid methods that use clustering, the complexity grows exponentially. + +# A.3 Reproducibility + +All code for data preprocessing, model implementations, and active learning algorithms is made available at https://github.com/mourga/contrastive-active-learning. For questions regarding the implementation, please contact the first author. \ No newline at end of file diff --git a/activelearningbyacquiringcontrastiveexamples/images.zip b/activelearningbyacquiringcontrastiveexamples/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..69a5949a1fbff794e4160550dfc0604248611588 --- /dev/null +++ b/activelearningbyacquiringcontrastiveexamples/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:99b4fe9c74f5b62e32008959ff7b167ff1f0cf0ce5bb31dd7d1689093c1f429f +size 374596 diff --git a/activelearningbyacquiringcontrastiveexamples/layout.json b/activelearningbyacquiringcontrastiveexamples/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..0353ca400bf1f5bc9ae10e48d9a3e0a932266ceb --- /dev/null +++ b/activelearningbyacquiringcontrastiveexamples/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:275480ecad34c022ee4611b14967171f50931cd10a645a6546462d326931114d +size 531096 diff --git a/adapterdropontheefficiencyofadaptersintransformers/536dfedc-2365-49a3-845c-b13dc83a8e29_content_list.json b/adapterdropontheefficiencyofadaptersintransformers/536dfedc-2365-49a3-845c-b13dc83a8e29_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..fff447d337dc40b3eeef94b65d2c4a07999e2627 --- /dev/null +++ b/adapterdropontheefficiencyofadaptersintransformers/536dfedc-2365-49a3-845c-b13dc83a8e29_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:2d3c23221e0b2532d8ca53914db0944d261f020f21ab9664262b47dc60c641eb +size 105134 diff --git a/adapterdropontheefficiencyofadaptersintransformers/536dfedc-2365-49a3-845c-b13dc83a8e29_model.json b/adapterdropontheefficiencyofadaptersintransformers/536dfedc-2365-49a3-845c-b13dc83a8e29_model.json new file mode 100644 index 0000000000000000000000000000000000000000..95a29e6c8f53aca6be4b2ceaf1781e8decf94e08 --- /dev/null +++ b/adapterdropontheefficiencyofadaptersintransformers/536dfedc-2365-49a3-845c-b13dc83a8e29_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:85368e47a9b999224613ba8c77fc0ce94110e76d381bd9efd0e631d824394e2e +size 123827 diff --git a/adapterdropontheefficiencyofadaptersintransformers/536dfedc-2365-49a3-845c-b13dc83a8e29_origin.pdf b/adapterdropontheefficiencyofadaptersintransformers/536dfedc-2365-49a3-845c-b13dc83a8e29_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..15bee8cf4666ecf1eb76ffdd1298b439dce1e617 --- /dev/null +++ b/adapterdropontheefficiencyofadaptersintransformers/536dfedc-2365-49a3-845c-b13dc83a8e29_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:07524abce34d126484f9ef8632e1a1a2a9d65c02041e0c1abbf96920f191c51a +size 2701060 diff --git a/adapterdropontheefficiencyofadaptersintransformers/full.md b/adapterdropontheefficiencyofadaptersintransformers/full.md new file mode 100644 index 0000000000000000000000000000000000000000..10cb392b3dfaf3b5808075a65f7755e15e18bdab --- /dev/null +++ b/adapterdropontheefficiencyofadaptersintransformers/full.md @@ -0,0 +1,394 @@ +# AdapterDrop: On the Efficiency of Adapters in Transformers + +Andreas Rücklé* and Gregor Geigle and Max Glockner, Tilman Beck and Jonas Pfeiffer and Nils Reimers and Iryna Gurevych Ubiquitous Knowledge Processing Lab (UKP) Department of Computer Science, Technische Universität Darmstadt www.ukp.tu-darmstadt.de + +# Abstract + +Transformer models are expensive to fine-tune, slow for 
inference, and have large storage requirements. Recent approaches tackle these shortcomings by training smaller models, dynamically reducing the model size, and by training light-weight adapters. In this paper, we propose AdapterDrop, removing adapters from lower transformer layers during training and inference, which incorporates concepts from all three directions. We show that AdapterDrop can dynamically reduce the computational overhead when performing inference over multiple tasks simultaneously, with minimal decrease in task performances. We further prune adapters from AdapterFusion, which improves the inference efficiency while maintaining the task performances entirely. + +# 1 Introduction + +While transfer learning has become the go-to method for solving NLP tasks (Pan and Yang, 2010; Torrey and Shavlik, 2010; Ruder, 2019; Howard and Ruder, 2018; Peters et al., 2018), transformer-based models are notoriously deep, requiring millions or even billions of parameters (Radford et al., 2018; Devlin et al., 2019; Radford et al., 2019; Liu et al., 2019; Brown et al., 2020). This results in slow inference and large storage requirements. + +At least three independent lines of research have recently evolved to tackle these shortcomings. (1) Smaller and faster models that are either distilled or trained from scratch (Sanh et al., 2019; Sun et al., 2020; Bai et al., 2021; Wang et al., 2020). (2) Robustly trained transformers in which the model depth can be reduced at run-time, thereby decreasing inference time dynamically (Fan et al., 2020; Elbayad et al., 2020; Xin et al., 2020; Hou et al., 2020). (3) Adapters, which, instead of fully fine-tuning the model, only train a newly introduced set of weights at every layer, thereby sharing the majority of parameters between tasks (Houlsby et al., 2019; Bapna and Firat, 2019; Pfeiffer et al., 2020a).
Adapters have been shown to work well for machine translation (Bapna and Firat, 2019), cross-lingual transfer (Pfeiffer et al., 2020b, 2021b; Üstün et al., 2020; Vidoni et al., 2020; Ansell et al., 2021), community QA (Rücklé et al., 2020), and task composition for transfer learning (Stickland and Murray, 2019; Pfeiffer et al., 2021a; Lauscher et al., 2020; Wang et al., 2021; Poth et al., 2021). Despite their recent popularity, the computational efficiency of adapters has not been explored beyond parameter efficiency. + +We close this gap and establish the computational efficiency of two adapter architectures at training and inference time. We investigate different strategies to further improve the efficiency of adapter-based models by incorporating ideas from all three directions mentioned above. Our strategies rely on dropping out adapters from transformers, at training and inference time, resulting in models that are dynamically adjustable regarding the available computational resources. Our approaches are agnostic to the pre-trained transformer model (e.g., base, large), which makes them broadly applicable. + +# Contributions: + +1. We are the first to establish the computational efficiency of adapters compared to full fine-tuning. We show that the training steps of adapters can be up to $60\%$ faster than full model fine-tuning with common hyperparameter choices, while being $4 - 6\%$ slower at inference. Hence, adapters are a suitable choice for researchers interested in achieving faster training times, or when extensive hyperparameter tuning is required. + +2. We propose AdapterDrop, the efficient and dynamic removal of adapters with minimal impact on the task performances. We show that dropping adapters from lower transformer layers considerably improves the inference speed in
Relative speed (for Seq. Len./Batch):

| Setting | Adapter | 128/16 | 128/32 | 512/16 | 512/32 |
| --- | --- | --- | --- | --- | --- |
| Training | Houlsby | 1.48 | 1.53 | 1.36 | 1.33 |
| Training | Pfeiffer | 1.57 | 1.60 | 1.41 | 1.37 |
| Inference | Houlsby | 0.94 | 0.94 | 0.96 | 0.96 |
| Inference | Pfeiffer | 0.95 | 0.95 | 0.96 | 0.96 |
+ +Table 1: Relative speed of adapters compared to fully fine-tuned models. For example, 1.6 for training with the Pfeiffer adapter means that we can perform 1.6 training steps with this adapter in the time of one training step with full model fine-tuning. + +multi-task settings. For example, with adapters dropped from the first five layers, AdapterDrop is $39\%$ faster when performing inference on 8 tasks simultaneously. This can be beneficial for researchers working on models that need to make multiple predictions on each input. + +3. We prune adapters from adapter compositions in AdapterFusion (Pfeiffer et al., 2021a) and retain only the most important adapters after transfer learning, resulting in faster inference while maintaining the task performances entirely. This is suitable for settings with little labeled training data, where AdapterFusion can achieve ample improvements over standard single-task models. + +# 2 Efficiency of Adapters + +We first establish the computational efficiency of adapters without AdapterDrop. As illustrated in Figure 1, significant differences exist in the forward and backward passes when fine-tuning adapters compared to fully fine-tuning the model. In the forward pass, adapters add complexity with the additional components; however, it is not necessary to backpropagate through the entire model during the backward pass. We compare the training and inference speed of full model fine-tuning against the adapter architectures of Houlsby et al. (2019) and Pfeiffer et al. (2021a) (depicted in Figure 1) using the AdapterHub.ml framework (Pfeiffer et al., 2020a). We conduct our measurements with the transformer configuration of BERT base and verify them with different GPUs. $^{1}$ + +We provide measurements corresponding to common experiment configurations in Table 1. + +Training. Adapters can be considerably faster compared to full model fine-tuning— $60\%$ faster in some configurations.
The two adapter architectures differ only marginally in terms of training efficiency: due to its simpler architecture, training steps of the Pfeiffer adapters are slightly faster. The magnitude of the differences depends on the input size; the available CUDA cores are the primary bottleneck. $^2$ We do not observe any particular differences between adapters and full fine-tuning regarding the training convergence. $^3$ + +The training speedup can be explained by the decreased overhead of gradient computation. Most of the parameters are frozen when using adapters and it is not necessary to backpropagate through the first components (see Figure 1). + +Inference. The two adapter architectures are $94 - 96\%$ as fast as fully fine-tuned models, which varies depending on the input size. This can have a considerable impact when deployed at scale. + +# 3 AdapterDrop + +We have established that adapters are more efficient in terms of training time; however, there is a perpetual need for sustainable and efficient models (Strubell et al., 2019). Backpropagating through as few layers as possible would further improve the efficiency of training adapters. The efficiency for inference can be improved by sharing representations at lower transformer layers when simultaneously performing inference for multiple tasks—in other words, when performing multiple independent classifications on the same input. We establish this in Table 2, finding that models are up to $8.4\%$ faster with every shared layer (16 tasks). + +Motivated by these observations, we propose AdapterDrop: Dynamically removing adapters from lower transformer layers (depicted in Figure 1). AdapterDrop is similar to dropping out entire transformer layers (Fan et al., 2020), but specialized to adapter settings—where lower layers often have a small impact on the task performances (Houlsby et al., 2019).
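In essence, AdapterDrop makes the adapter computation in each transformer layer conditional on the layer index. A minimal sketch of the idea (plain callables stand in for the transformer layers and adapter modules; this is our illustration, not the AdapterHub implementation):

```python
def forward_with_adapterdrop(layers, adapters, x, n_drop):
    """Run a stack of (layer, adapter) pairs, skipping the adapter
    (and its residual connection) in the first n_drop layers.
    Specialized AdapterDrop fixes n_drop; robust AdapterDrop samples
    it per training batch."""
    for i, (layer, adapter) in enumerate(zip(layers, adapters)):
        h = layer(x)
        # Adapters are applied with a residual connection; dropped
        # layers simply pass the transformer output through.
        x = adapter(h) + h if i >= n_drop else h
    return x
```

In a multi-task setting the activations of the first n_drop layers are then task-independent, so they can be computed once and shared across all tasks—which is where the per-layer speedups in Table 2 come from.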
+ +We study two training methods for AdapterDrop: (1) Specialized AdapterDrop: Removing adapters from the first $n$ transformer layers, where $n$ is fixed during training. This yields separate models for each possible $n$ . (2) Robust AdapterDrop: Drawing the integer $n$ randomly from [0, 11] for each + +![](images/0f472eaacbab9297062e5d23d62fafc47b7292547e4c894a18c8a77a4f7d4dab.jpg) +Figure 1: Standard adapter fine-tuning vs. AdapterDrop fine-tuning. The left model includes adapters at every layer whereas the right model has adapters dropped at the first layer. The arrows to the right of each model indicate the information flow for the forward and backward passes through the model. + +
| Simultaneous Tasks | 2 | 4 | 8 | 16 |
| --- | --- | --- | --- | --- |
| Speedup (each layer) | 4.3% | 6.6% | 7.8% | 8.4% |
+ +Table 2: Speedup for each shared transformer layer when performing inference for multiple tasks simultaneously (details are given in Appendix G.2). + +training batch. $^{4}$ This yields one robust model that is applicable to a varying number of dropped layers. We study the effectiveness of AdapterDrop on the dev sets of the GLUE benchmark (Wang et al., 2018) using RoBERTa base (Liu et al., 2019). $^{5}$ + +Figure 2 shows that specialized AdapterDrop maintains good results even with several dropped layers. With the first five layers dropped, specialized AdapterDrop maintains $97.1\%$ of the original performance (averaged over all eight GLUE tasks; see Table 8). Moreover, robust AdapterDrop achieves comparable results, and with five layers dropped it maintains $95.4\%$ of the original performance (on avg). The advantage of robust over specialized AdapterDrop is that the robust variant can be dynamically scaled. Based on the currently available computational resources, robust AdapterDrop can (de)activate layers with the same set of parameters, whereas specialized AdapterDrop needs to be trained for every setting explicitly. + +The efficiency gains can be large. When performing inference for multiple tasks simultaneously, we measure inference speedups of $21 - 42\%$ with five dropped layers—depending on the number of simultaneous tasks (Table 2). $^{6}$ Training of our robust adapters is also more efficient, which increases the speed of training steps by $26\%$ . $^{7}$ + +![](images/2e589ffd99f21743caa53dbb20e7070c4c39aaa62b6eb1daf2980c739f732ff3.jpg) +Figure 2: Task performances in relation to dropped layers during evaluation (Figure 13 shows all tasks). 'Standard adapter' is trained with no dropped layers. + +# 4 Efficiency of AdapterFusion + +AdapterFusion (Pfeiffer et al., 2021a) leverages the knowledge of several adapters from different tasks and learns an optimal combination of the adapters' output representations for a single target task (see Figure 3).
AdapterFusion (AF) is particularly useful for small training sets, where learning adequate models is difficult. Despite its effectiveness, AF is computationally expensive because all included adapters are passed through sequentially. $^{8}$

Table 3 shows that the differences can be substantial for both training and inference. For instance, compared to a fully fine-tuned model, AF with eight adapters is around $47\%$ slower at training time and $62\%$ slower at inference. $^{9}$

# 5 AdapterDrop for AdapterFusion

There exists considerable potential for improving the efficiency of AF, especially at inference time. We address this with two variants of AdapterDrop
| Adapters | AF vs. Full FT (Training) | AF vs. Full FT (Inference) | AF vs. Adapter (Training) | AF vs. Adapter (Inference) |
| --- | --- | --- | --- | --- |
| 2 | 0.92 | 0.64 | 0.57 | 0.68 |
| 8 | 0.53 | 0.38 | 0.33 | 0.40 |
| 16 | 0.33 | 0.24 | 0.21 | 0.26 |
Table 3: Relative speed of AdapterFusion (with 2/8/16 adapters) compared to a fully fine-tuned model and compared to a single-task adapter (right). Measured with a batch size of 32 and a sequence length of 128.

![](images/405cb206fe76e76a325be921cddd59c50b9ae6fe5e22c5a89606b558224d2bbf.jpg)
Figure 3: Standard AdapterFusion vs. AdapterFusion pruning, each with 3 adapters initially. The left model includes all adapters at every layer whereas the right model has one adapter pruned at every layer.

for AF by (1) removing entire AF layers; (2) pruning the least important adapters from AF models.

# 5.1 Removing AdapterFusion Layers

We fuse the adapters from all eight GLUE tasks and observe the largest gains of AF on RTE and CoLA. We additionally train robust AF models with the same procedure as in §3. We investigate from how many lower layers we can remove AF at test time while still outperforming the corresponding single-task adapter (without AdapterDrop).

![](images/2e63812e48f1761e456720b578b90800a5733a254b8c5168595c6b5eacaba391.jpg)
Figure 4: Comparison of AdapterFusion with (orange) and without (blue) AdapterDrop training during inference when omitting early AF layers.

![](images/911ff9993454205541555b6a7a700af09e76942e24352e53722bfe9ccfeeef01.jpg)
Figure 5: Task performance of AdapterFusion Pruning. AF is trained with eight adapters, and we gradually remove the least important from the model.

Figure 4 shows that AF performs better than the single-task adapter on RTE until removing AF from the first five layers. This improves the inference efficiency by $26\%$. On CoLA, we observe a different trend. Removing AF from even the first layer results in a more noticeable performance decrease, achieving lower task performance than the single-task adapter. This is in line with recent work showing that some linguistic tasks heavily rely on information from the first layers (Vulic et al., 2020).
We deliberately highlight that AdapterDrop might not be suitable for all tasks. However, Figure 13 shows that CoLA represents the most extreme case. Nevertheless, our results suggest that researchers need to be cautious when removing AdapterFusion layers, as there may exist a considerable performance/efficiency tradeoff.

# 5.2 AdapterFusion Pruning

The inference efficiency of AF largely depends on the number of fused adapters (see Table 3). We can, therefore, achieve efficiency improvements by pruning adapters from the trained AF models (depicted in Figure 3). Our hypothesis is that we can safely remove adapters if they are rarely activated by AF, which means that they do not contribute much to the output representations. In each fusion layer, we record the average adapter activations (their relative importance) using all instances of the respective AF training set. We then remove the adapters with the lowest activations.

Figure 5 demonstrates that we can remove most adapters in AF without affecting the task performance. With two remaining adapters, we achieve comparable results to the full AF models with eight adapters and improve the inference speed by $68\%$.

We therefore recommend performing AdapterFusion pruning before deploying these models in practice. This is a simple yet effective technique for achieving efficiency gains even when the goal is to maintain performance entirely.

# 6 Conclusion

Adapters have emerged as a suitable alternative to full model fine-tuning, and their most widely claimed computational advantage is the small model size. In this work, we have demonstrated that the advantages of adapters go far beyond mere parameter efficiency. Even without our extensions, the training steps of two common adapter architectures are up to $60\%$ faster. However, these improvements come at the cost of $4 - 6\%$ slower inference speed. Thus, if training is the primary concern, adapters can be advantageous over full model fine-tuning.
+ +AdapterDrop expands these advantages by dropping a variable number of adapters from lower transformer layers. We dynamically reduce the computational overhead at run-time when performing inference over multiple tasks and maintain task performances to a large extent. This benefits researchers working on models that need to make multiple independent predictions on a single input. + +Finally, we also investigated the computational efficiency of AdapterFusion models. We find that dropping entire AdapterFusion layers comes at a considerable performance/efficiency tradeoff, whereas pruning of the least activated adapters in each layer can improve the model efficiency while maintaining performance entirely. + +We believe that our work can be widely extended and that there exist many more directions to obtain efficient adapter-based models. For instance, we could explore more efficient pre-trained adapters, $^{11}$ sharing the adapter weights across layers, $^{12}$ or pruning adapters from AdapterFusion at training time. $^{13}$ In the Appendix to this paper, we present preliminary results for several related ideas, which may serve as a starting point for future work. + +# Acknowledgments + +This work has received financial support from multiple sources. (1) The German Federal Ministry of + +Education and Research and the Hessian Ministry of Higher Education, Research, Science and the Arts within their joint support of the National Research Center for Applied Cybersecurity ATHENE. (2) The European Regional Development Fund (ERDF) and the Hessian State Chancellery – Hessian Minister of Digital Strategy and Development under the promotional reference 20005482 (TexPrax). (3) The German Research Foundation (DFG) as part of the Research Training Group KRITIS No. GRK 2222. (4) The German Federal Ministry of Education and Research (BMBF) as part of the Software Campus program under the promotional reference 01|S17050. 
(5) The LOEWE initiative (Hesse, Germany) within the emergenCITY center. (6) The German Research Foundation (DFG) as part of the UKP-SQuARE project (grant GU 798/29-1). Finally, we gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan X Pascal GPU used for this research.

# References

Alan Ansell, Edoardo Maria Ponti, Jonas Pfeiffer, Sebastian Ruder, Goran Glavaš, Ivan Vulić, and Anna Korhonen. 2021. MAD-G: Multilingual Adapter Generation for Efficient Cross-Lingual Transfer. In Findings of the Association for Computational Linguistics: EMNLP 2021.
Haoli Bai, Wei Zhang, Lu Hou, Lifeng Shang, Jing Jin, Xin Jiang, Qun Liu, Michael Lyu, and Irwin King. 2021. BinaryBERT: Pushing the limit of BERT quantization. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL 2021), pages 4334-4348.
Ankur Bapna and Orhan Firat. 2019. Simple, Scalable Adaptation for Neural Machine Translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP 2019), pages 1538-1548.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language Models are Few-Shot Learners. arXiv preprint arXiv:2005.14165.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL 2019), pages 4171-4186.
Maha Elbayad, Jiatao Gu, Edouard Grave, and Michael Auli. 2020. Depth-adaptive transformer. In 8th International Conference on Learning Representations (ICLR 2020).
Angela Fan, Edouard Grave, and Armand Joulin. 2020. Reducing Transformer Depth on Demand with Structured Dropout. In 8th International Conference on Learning Representations (ICLR 2020).
Lu Hou, Zhiqi Huang, Lifeng Shang, Xin Jiang, Xiao Chen, and Qun Liu. 2020. DynaBERT: Dynamic BERT with adaptive width and depth. In 34th Conference on Neural Information Processing Systems (NeurIPS 2020).
Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-Efficient Transfer Learning for NLP. In Proceedings of the 36th International Conference on Machine Learning (ICML 2019).
Jeremy Howard and Sebastian Ruder. 2018. Universal Language Model Fine-tuning for Text Classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL 2018), pages 328-339.
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. In 8th International Conference on Learning Representations (ICLR 2020).
Anne Lauscher, Olga Majewska, Leonardo F. R. Ribeiro, Iryna Gurevych, Nikolai Rozanov, and Goran Glavaš. 2020. Common sense or world knowledge? investigating adapter-based knowledge injection into pretrained transformers. In Proceedings of Deep Learning Inside Out (DeeLIO): The First Workshop on Knowledge Extraction and Integration for Deep Learning Architectures, pages 43-49.
+Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:1907.11692. +Sinno Jialin Pan and Qiang Yang. 2010. A Survey on Transfer Learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345-1359. + +Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep Contextualized Word Representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, (NAACL 2018), pages 2227-2237. +Jonas Pfeiffer, Aishwarya Kamath, Andreas Rücklé, Kyunghyun Cho, and Iryna Gurevych. 2021a. AdapterFusion: Non-Destructive Task Composition for Transfer Learning. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2021), pages 487-503. +Jonas Pfeiffer, Andreas Rückle, Clifton Poth, Aishwarya Kamath, Ivan Vulic, Sebastian Ruder, Kyunghyun Cho, and Iryna Gurevych. 2020a. AdapterHub: A Framework for Adapting Transformers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020): Systems Demonstrations, pages 46-54. +Jonas Pfeiffer, Ivan Vulic, Iryna Gurevych, and Sebastian Ruder. 2020b. MAD-X: An Adapter-based Framework for Multi-task Cross-lingual Transfer. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020), pages 7654-7673. +Jonas Pfeiffer, Ivan Vulic, Iryna Gurevych, and Sebastian Ruder. 2021b. UNKs Everywhere: Adapting Multilingual Language Models to New Scripts. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Online, November, 2021. +Clifton Poth, Jonas Pfeiffer, Andreas Rückle, and Iryna Gurevych. 2021. What to pre-train on? 
efficient intermediate task selection. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021).
Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving Language Understanding by Generative Pre-Training. Technical report, OpenAI.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language Models are Unsupervised Multitask Learners. Technical report, OpenAI.
Andreas Rücklé, Jonas Pfeiffer, and Iryna Gurevych. 2020. MultiCQA: Exploring the Zero-Shot Transfer of Text Matching Models on a Massive Scale. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020), pages 2471-2486.
Sebastian Ruder. 2019. Neural Transfer Learning for Natural Language Processing. Ph.D. thesis, National University of Ireland, Galway.
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.
Asa Cooper Stickland and Iain Murray. 2019. BERT and PALs: Projected Attention Layers for Efficient Adaptation in Multi-Task Learning. In Proceedings of the 36th International Conference on Machine Learning (ICML 2019), pages 5986-5995.
Emma Strubell, Ananya Ganesh, and Andrew McCallum. 2019. Energy and Policy Considerations for Deep Learning in NLP. In Proceedings of the 57th Conference of the Association for Computational Linguistics (ACL 2019), pages 3645-3650.
Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou. 2020. MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL 2020), pages 2158-2170.
Lisa Torrey and Jude Shavlik. 2010. Transfer learning. In Handbook of research on machine learning applications and trends: algorithms, methods, and techniques, pages 242-264.
IGI Global.
Ahmet Üstün, Arianna Bisazza, Gosse Bouma, and Gertjan van Noord. 2020. UDapter: Language Adaptation for Truly Universal Dependency Parsing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020), pages 2302-2315.
Marko Vidoni, Ivan Vulić, and Goran Glavaš. 2020. Orthogonal language and task adapters in zero-shot cross-lingual transfer. arXiv preprint arXiv:2012.06460.
Ivan Vulić, Edoardo Maria Ponti, Robert Litschko, Goran Glavaš, and Anna Korhonen. 2020. Probing Pretrained Language Models for Lexical Semantics. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020), pages 7222-7240.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2018. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353-355.
Ruize Wang, Duyu Tang, Nan Duan, Zhongyu Wei, Xuanjing Huang, Jianshu Ji, Guihong Cao, Daxin Jiang, and Ming Zhou. 2021. K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 1405-1418.
Wenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, and Ming Zhou. 2020. MiniLM: Deep self-attention distillation for task-agnostic compression of pre-trained transformers. In 34th Conference on Neural Information Processing Systems (NeurIPS 2020).
Ji Xin, Raphael Tang, Jaejun Lee, Yaoliang Yu, and Jimmy Lin. 2020. DeeBERT: Dynamic early exiting for accelerating BERT inference. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL 2020), pages 2246-2251.

# A Measuring Computational and Task Performance

# A.1 Computational Efficiency

We use Python 3.6, PyTorch 1.5.1, and CUDA 10.1 for all measurements.
We repeat them with two different GPUs: an NVIDIA Tesla V100 PCIe (32GB) and an NVIDIA Titan X Pascal (12GB). We make use of the torch.cuda.Event class and torch.cuda.synchronize to measure only the exact period of time of a training (or inference) step. $^{14}$ For both inference and training, we repeat the respective step 300 times. We report the median to mitigate the impact of outliers caused by GPU warmup.

Relative speed. We define the relative speed of an adapter compared to full model fine-tuning as $\frac{S_a}{S_f}$, where $S_{a}$ and $S_{f}$ are the speeds (steps per unit time) of the adapter model and the fully fine-tuned model, respectively. For example, a relative speed of 1.5 means that the adapter model can perform 1.5 steps in the time the fully fine-tuned model performs one step.

Speedup. Speedup describes the positive change in relative speed of an adapter model when using AdapterDrop (or another method). A speedup of $p\%$ means that the adapter model with AdapterDrop requires only $(1 - p / 100)\times$ the runtime of the adapter model without AdapterDrop.

The speedups of AdapterDrop (and AdapterFusion) are additive. If dropping one layer results in $p\%$ speedup, dropping two layers results in $2p\%$ speedup, etc.

# A.2 Task Performances

We study the task performances of adapter models on the popular GLUE benchmark (Wang et al., 2018). Following Devlin et al. (2019), we exclude WNLI because of its problematic data construction. $^{15}$ We perform our analyses using RoBERTa base (Liu et al., 2019) as our pre-trained model and report the mean and standard deviation over three runs of the best development performance evaluated after every epoch. We train larger data sets (SST-2, MNLI, QNLI, and QQP) for 10 epochs and the rest of the data sets for 20 epochs. We use a batch size of 32 and, if not otherwise noted, the default hyperparameters for adapter fine-tuning as in (Pfeiffer et al., 2021a).
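The relative-speed and speedup bookkeeping from Appendix A.1 amounts to simple arithmetic. The sketch below encodes it; `runtime_after_drop` is a hypothetical helper that applies the additivity assumption stated above:

```python
def relative_speed(steps_per_sec_adapter: float, steps_per_sec_full: float) -> float:
    """>1 means the adapter model performs more steps than the full model per unit time."""
    return steps_per_sec_adapter / steps_per_sec_full

def speedup_pct(runtime_without: float, runtime_with: float) -> float:
    """Speedup p such that runtime_with == (1 - p/100) * runtime_without."""
    return 100.0 * (1.0 - runtime_with / runtime_without)

def runtime_after_drop(runtime_base: float, per_layer_pct: float, n_dropped: int) -> float:
    """Additivity assumption: dropping n layers yields n * per_layer_pct percent speedup."""
    return runtime_base * (1.0 - n_dropped * per_layer_pct / 100.0)

# Example: with 16 simultaneous tasks, Table 2 reports 8.4% speedup per dropped
# layer, so dropping five layers yields a 42% speedup (0.58x the base runtime).
t = runtime_after_drop(1.0, 8.4, 5)
```

For the 2-task setting (4.3% per layer), five dropped layers give roughly the 21% end of the 21-42% range quoted in §3.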
# B Adapter Initialization and Convergence

Besides measuring training and inference time, we are interested in (1) how using adapters compares to standard RoBERTa-base with regard to downstream task convergence, and (2) whether initializing adapters with pre-trained weights using masked language modeling can lead to faster convergence.

First, we compare RoBERTa-base with adapter models using the architecture proposed by Pfeiffer et al. (2021a). Second, we pretrain an adapter with masked language modeling (MLM) using documents from the English Wikipedia. $^{16}$ The results for both experiments are visualized in Figure 12. When comparing RoBERTa-base with randomly initialized adapters, we find that adapters do not come at the cost of requiring more training steps for convergence (1). For several of the eight GLUE tasks, we observe similar convergence behavior with the standard RoBERTa-base model and its counterpart using adapters.

Further, we observe across all tasks that initializing the adapter weights with MLM pre-training does not have a substantial impact on the downstream task convergence (compared to a randomly initialized adapter). Thus, we find no evidence that pre-training of adapters with our masked language modeling objective leads to better convergence performance in our experiments (2).

# C Detailed Results: AdapterDrop Task Performances

We plot the detailed task performances of AdapterDrop with the different training strategies in Figure 13. The relative differences of AdapterDrop to a standard adapter with no AdapterDrop are given in Table 8.

# D Adapter with Cross-Layer Parameter Sharing

We can further reduce the number of parameters required for each task by sharing the weights of the adapters across all transformer layers. This is similar to weight sharing in ALBERT (Lan et al., 2020), but specific to adapters and can therefore be applied to a wide range of pre-trained models.
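A minimal sketch of the sharing idea, assuming a hidden size of 768 and a bottleneck ('FF Down' / 'FF Up') adapter as in Figure 1, with biases and layer norms omitted:

```python
import numpy as np

class Adapter:
    """Bottleneck adapter: down-project, ReLU, up-project, residual (biases omitted)."""
    def __init__(self, hidden=768, compression=16, seed=0):
        rng = np.random.default_rng(seed)
        bottleneck = hidden // compression                 # 768 / 16 = 48
        self.down = rng.normal(0.0, 0.02, (hidden, bottleneck))
        self.up = rng.normal(0.0, 0.02, (bottleneck, hidden))

    def __call__(self, h):
        return h + np.maximum(h @ self.down, 0.0) @ self.up

    def n_params(self):
        return self.down.size + self.up.size

n_layers = 12

# Standard setup: a separate adapter per transformer layer.
separate = [Adapter(seed=i) for i in range(n_layers)]
params_separate = sum(a.n_params() for a in separate)      # ~884k

# Cross-layer sharing: every layer applies the same adapter object,
# so the weights are stored (and trained) only once.
shared = Adapter()
params_shared = shared.n_params()                          # ~74k

h = np.zeros(768)
for _ in range(n_layers):
    h = shared(h)                                          # same weights at every layer
```

With these dimensions the sketch gives 73,728 shared parameters versus 884,736 for twelve separate adapters, matching the 74k vs. 884k counts reported in Table 6.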
We use the Pfeiffer adapter architecture in our experiments with the same hyperparameters as in Appendix A.2. Because cross-layer parameter sharing reduces the capacity of adapter models, we study the impact of the adapter compression rate. The compression rate refers to the down-projection factor in the adapter's bottleneck layer and thus impacts its capacity (the compression rate specifies by how much 'FF Down' in Figure 1 compresses the representations). The standard compression rate is 16, and smaller values result in a larger model capacity.

Table 6 shows that cross-layer parameter sharing with the same compression rate of 16 largely maintains the performance compared to separate weights, with an average difference of $2.35\%$. With a smaller compression rate of 4, we close this gap by more than $50\%$ while still requiring $66\%$ fewer parameters. $^{17}$ The resulting models are lightweight: our shared adapter with a compression rate of 16 requires only 307 KB of storage space.

# E Training AdapterFusion with Dropout

We investigate the random dropout of adapters from AdapterFusion during training (using our eight task adapters as in §4) to improve the speed of training steps. Each layer randomly selects different adapters to drop out. This means that the model itself may still use the knowledge from all tasks, although not in each layer individually.

Table 7 shows the results for the four smallest GLUE tasks in terms of training data size. The speedup that we achieve with AdapterFusion dropout can be substantial: with a dropout rate of $75\%$ (i.e., dropping out 6 out of our 8 adapters) each training step is $74\%$ faster on average (with a sequence length of 128 and a batch size of 32). We observe no clear trend in terms of task performances. Fusion dropout leads to consistent decreases on RTE and CoLA, has only a small impact on STS-B (no difference when dropping out $25\%$ of adapters), and yields improvements on MRPC.
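The per-layer sampling described above can be sketched as follows (a hypothetical helper; the GLUE adapter names are illustrative):

```python
import random

GLUE_ADAPTERS = ["cola", "mrpc", "rte", "stsb", "sst2", "qnli", "mnli", "qqp"]

def fusion_dropout_plan(n_layers, adapters, rate, seed=0):
    """Independently sample, for each fusion layer, the adapters kept this training step.

    With rate=0.75 and 8 adapters, every layer keeps 2 adapters, but usually
    different ones, so the model as a whole can still draw on all tasks.
    """
    rng = random.Random(seed)
    keep = len(adapters) - round(len(adapters) * rate)
    return [rng.sample(adapters, keep) for _ in range(n_layers)]

plan = fusion_dropout_plan(n_layers=12, adapters=GLUE_ADAPTERS, rate=0.75)
```

Only the sampled adapters are forwarded through a given fusion layer in that step, which is where the training speedups in Table 7 come from.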
The effectiveness of fusion dropout thus depends on the individual downstream task. Nevertheless, we believe that this method could be suitable, e.g., for resource-constrained settings.

# F Detailed Results: Removing AdapterFusion Layers

The computational overhead of AF can be reduced during inference by decreasing the number of adapters. We investigate how dropping AF layers impacts the performance on the four smallest GLUE tasks (MRPC, STS-B, CoLA, RTE) and visualize the results in Figure 7.

In this experiment we compare the performance of AF with and without AdapterDrop during training. For both, we use standard adapters as well as adapters created via AdapterDrop as the basis for AF. Unsurprisingly, the performance of AF without AdapterDrop within the adapters or fusion drops fastest on all four datasets. Using AdapterDrop when creating the adapters, applying AdapterDrop on AF, or the combination of both significantly reduces the performance drop when omitting fusion layers during inference. On RTE and MRPC, multiple AF layers can be omitted while still performing on par with or better than a single-task adapter. We further find this robustness to be task dependent. Even AF with AdapterDrop shows a steep fall in performance on RTE and CoLA, while being relatively stable on MRPC and STS-B, even with most layers omitted.

# G Detailed Efficiency Measurements

In this section, we present detailed results of our efficiency measurements for the V100 and TitanX GPUs.

# G.1 Adapters

We present the efficiency results for adapters and fully fine-tuned models in Figure 6, where we plot the required time (absolute numbers) during training and inference. The relative speed of adapters compared to fully fine-tuned models is given in Table 9.

# G.2 AdapterDrop

Multi-task inference. In Figure 8, we plot the speed of adapters in a multi-task setting compared to fully fine-tuned models with sequential processing of inputs.
In Table 11, we present the relative speed of adapters in this setting and show the speedup gained with AdapterDrop for each dropped layer. The average speedup in Table 2 is calculated as the average speedup over the batch sizes 16, 32 and 64 in Table 11.

Training adapters with dropped layers. Table 5 shows the speedup of AdapterDrop when training a single adapter. The average speedup for training with AdapterDrop is $4.7\%$ per layer for the V100 and $4.5\%$ for the TitanX. This is the average result over batch sizes 16, 32, 64 and sequence lengths 64, 128, 256, and 512 (see Table 5).

# G.3 AdapterFusion

We plot the speed of AdapterFusion with different numbers of included adapters in Figure 9. In Table 10, we present the relative speed of AdapterFusion compared to a fully fine-tuned model and a model with one adapter. This also shows the computational overhead (slowdown) that results from adding more adapters to AdapterFusion.

# G.4 AdapterDrop for AdapterFusion

Table 4 shows the speedup gained with AdapterDrop for AdapterFusion during training and inference. Figure 10 shows the required time as a function of the dropped layers.

# H Parallel Implementation of AdapterFusion

AdapterHub's implementation of AdapterFusion passes through each task adapter sequentially. We hypothesized that better efficiency can be achieved with parallel processing of adapters. We implement the parallel computation of the different adapters by reformulating the linear layers as two convolutions.

The first convolution has a kernel size equal to the hidden dimension of the transformer and output channels equal to the number of adapters times the down-projection dimension of the adapters. The second convolution is a grouped convolution $^{18}$ which processes the channels in blocks the size of the down-projection dimension. It outputs channels equal to the number of adapters times the hidden dimension.
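The convolutions described above are equivalent to batched matrix multiplications over the stacked adapter weights. The NumPy sketch below checks that a single batched contraction reproduces the iterative per-adapter loop (toy dimensions, ReLU bottleneck; residual connections and the fusion step are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
n_adapters, hidden, down = 4, 16, 4
batch = 8

x = rng.normal(size=(batch, hidden))
W_down = rng.normal(size=(n_adapters, hidden, down))   # stacked down-projections
W_up = rng.normal(size=(n_adapters, down, hidden))     # stacked up-projections

# Iterative: pass the input through each adapter sequentially.
iterative = np.stack([np.maximum(x @ W_down[i], 0.0) @ W_up[i]
                      for i in range(n_adapters)])

# Parallel: one batched contraction over all adapters at once -- the einsum
# analogue of the two convolutions described above.
h = np.maximum(np.einsum("bh,nhd->nbd", x, W_down), 0.0)
parallel = np.einsum("nbd,ndh->nbh", h, W_up)

assert np.allclose(iterative, parallel)
```

As Appendix H notes, whether this parallel formulation actually runs faster depends on whether the GPU can absorb the extra parallelism, not on the algebra.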
| Adapters | Inference (V100) | Inference (TitanX) | Training (V100) | Training (TitanX) |
| --- | --- | --- | --- | --- |
| 2 | 3.0% | 3.1% | 6.3% | 6.4% |
| 4 | 4.0% | 4.1% | 6.8% | 6.8% |
| 8 | 5.2% | 5.2% | 7.3% | 7.3% |
| 16 | 6.3% | 6.3% | 7.8% | - |
+ +Table 4: The speedup for each dropped layer for AdapterFusion during training and inference. Measurements were conducted with a batch size of 32 and sequence length of 128. Missing values are due to insufficient GPU memory. + +
| Batch Size | Seq. Len | Speedup (V100) | Speedup (TitanX) |
| --- | --- | --- | --- |
| 16 | 64 | 4.6% | 4.4% |
| 16 | 128 | 4.6% | 4.6% |
| 16 | 256 | 4.8% | 4.6% |
| 16 | 512 | 4.7% | - |
| 32 | 64 | 4.6% | 4.5% |
| 32 | 128 | 4.7% | 4.5% |
| 32 | 256 | 4.6% | 4.7% |
| 32 | 512 | 4.8% | - |
| 64 | 64 | 4.7% | 4.5% |
| 64 | 128 | 4.6% | 4.5% |
| 64 | 256 | 4.7% | - |
| 64 | 512 | - | - |
We show in Figure 11 and in Table 12 that the iterative implementation is faster than the parallel implementation for larger input sizes (e.g., batch sizes greater than one). This indicates that once the input can no longer be processed entirely in parallel on the GPU (due to limited CUDA cores), the iterative implementation seems to be more efficient.

Table 5: Speedup for each dropped layer during training with AdapterDrop on the V100 and TitanX.
| | Standard (rate = 16) | Shared (rate = 1.33) | Shared (rate = 4) | Shared (rate = 16) |
| --- | --- | --- | --- | --- |
| SST-2 | 94.7 ±0.3 | 94.2 ±0.3 | 94.2 ±0.1 | 94.1 ±0.4 |
| QNLI | 93.0 ±0.2 | 92.4 ±0.1 | 93.1 ±0.1 | 90.6 ±1.4 |
| MNLI | 87.3 ±0.1 | 87.0 ±0.1 | 87.1 ±0.0 | 86.2 ±0.2 |
| QQP | 90.6 ±0.0 | 90.8 ±0.1 | 90.2 ±0.0 | 88.6 ±0.5 |
| CoLA | 62.6 ±0.9 | 60.3 ±1.6 | 60.8 ±0.4 | 57.2 ±1.0 |
| MRPC | 88.4 ±0.1 | 88.2 ±0.7 | 88.5 ±1.1 | 86.8 ±0.5 |
| RTE | 75.9 ±2.2 | 69.4 ±0.5 | 71.5 ±2.7 | 71.5 ±1.0 |
| STS-B | 90.3 ±0.1 | 89.5 ±0.1 | 89.7 ±0.3 | 89.0 ±0.7 |
| Average | 85.35 | 83.98 | 84.39 | 83.0 |
| Params | 884k | 884k | 295k | 74k |
+ +Table 6: Task performance scores of the standard approach with separate adapter weights vs. cross-layer parameter sharing. The compression rate denotes the factor by which 'FF Down' in Figure 1 compresses the representations. The number of parameters is given without classification heads. + +
| Fusion Dropout | 0% | 25% | 50% | 75% |
| --- | --- | --- | --- | --- |
| CoLA | 63.9 ±0.6 | 62.9 ±0.8 | 62.4 ±0.7 | 60.4 ±0.2 |
| MRPC | 88.4 ±0.1 | 89.2 ±0.5 | 89.2 ±0.4 | 89.3 ±0.1 |
| RTE | 85.4 ±0.7 | 82.8 ±1.9 | 82.1 ±0.3 | 80.9 ±1.1 |
| STS-B | 90.2 ±0.1 | 90.2 ±0.1 | 90.1 ±0.1 | 89.9 ±0.1 |
| Speedup (8 adapters) | - | 15.9% | 39.4% | 73.7% |
| Speedup (16 adapters) | - | 22.5% | 58.2% | 120.6% |
Table 7: Development scores of AdapterFusion (compression rate 16x) with or without fusion dropout during training. Fusion dropout of $50\%$ means that each adapter has a $50\%$ chance of not being used as input to the fusion layer. The speedup depends on the total number of adapters used in AdapterFusion (8 adapters in our setting here, 16 used by Pfeiffer et al. (2021a)).
| Dropped Layers | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Standard adapter | 100.0 | 98.5 | 97.1 | 95.3 | 92.0 | 89.0 | 82.2 | 74.6 | 64.5 | 54.5 | 49.3 | 43.3 |
| Specialized AdapterDrop (12 models) | 100.0 | 99.5 | 98.9 | 98.2 | 97.6 | 97.1 | 95.9 | 95.3 | 95.1 | 94.3 | 92.5 | 82.9 |
| Robust AdapterDrop | 98.5 | 97.7 | 97.3 | 96.8 | 96.1 | 95.4 | 94.5 | 93.3 | 92.2 | 89.9 | 85.9 | 62.0 |

Table 8: Model performances with AdapterDrop in relation to a standard adapter with no dropped layers. We report the percentage of retained task performance compared to the standard adapter with no dropped layers during evaluation. The results are averaged over all eight GLUE tasks. A value of 97.1 for specialized AdapterDrop with five dropped layers means that the model achieves $97.1\%$ of the performance compared to the standard adapter with no dropped layers. Performance scores for each task can be found in Figure 13.
| Seq. Len. | Batch Size | V100 Tr. (Houlsby) | V100 Tr. (Pfeiffer) | V100 Inf. (Houlsby) | V100 Inf. (Pfeiffer) | TitanX Tr. (Houlsby) | TitanX Tr. (Pfeiffer) | TitanX Inf. (Houlsby) | TitanX Inf. (Pfeiffer) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 64 | 16 | 0.98 | 1.70 | 0.92 | 0.94 | 1.61 | 1.69 | 0.93 | 0.94 |
| 64 | 32 | 1.70 | 1.81 | 0.94 | 0.95 | 1.48 | 1.55 | 0.93 | 0.94 |
| 64 | 64 | 1.46 | 1.54 | 0.94 | 0.95 | 1.40 | 1.46 | 0.94 | 0.94 |
| 64 | 128 | 1.48 | 1.55 | 0.95 | 0.96 | 1.37 | 1.42 | 0.94 | 0.94 |
| 128 | 16 | 1.48 | 1.57 | 0.94 | 0.95 | 1.45 | 1.52 | 0.93 | 0.94 |
| 128 | 32 | 1.53 | 1.60 | 0.94 | 0.95 | 1.38 | 1.44 | 0.94 | 0.95 |
| 128 | 64 | 1.47 | 1.53 | 0.95 | 0.96 | 1.35 | 1.40 | 0.94 | 0.95 |
| 128 | 128 | 1.42 | 1.48 | 0.95 | 0.96 | - | - | - | - |
| 256 | 16 | 1.42 | 1.49 | 0.94 | 0.95 | 1.34 | 1.38 | 0.94 | 0.95 |
| 256 | 32 | 1.40 | 1.46 | 0.95 | 0.96 | 1.31 | 1.36 | 0.94 | 0.96 |
| 256 | 64 | 1.40 | 1.45 | 0.95 | 0.96 | - | - | - | - |
| 256 | 128 | - | - | - | - | - | - | - | - |
| 512 | 16 | 1.36 | 1.41 | 0.96 | 0.96 | - | - | - | - |
| 512 | 32 | 1.33 | 1.37 | 0.96 | 0.96 | - | - | - | - |
| 512 | 64 | - | - | - | - | - | - | - | - |
| 512 | 128 | - | - | - | - | - | - | - | - |
+ +Table 9: Relative speed of adapters compared to fully fine-tuned models. Missing values are due to insufficient GPU memory. + +
| Seq. Len | Batch Size | V100 vs. FF (Tr.) | V100 vs. FF (Inf.) | V100 vs. Adap. (Tr.) | V100 vs. Adap. (Inf.) | V100 Slowdown (Tr.) | V100 Slowdown (Inf.) | TitanX vs. FF (Tr.) | TitanX vs. FF (Inf.) | TitanX vs. Adap. (Tr.) | TitanX vs. Adap. (Inf.) | TitanX Slowdown (Tr.) | TitanX Slowdown (Inf.) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 64 | 16 | 0.77 | 0.62 | 0.45 | 0.66 | 8.2% | 10.6% | 0.88 | 0.62 | 0.52 | 0.66 | 10.3% | 10.2% |
| 64 | 32 | 1.03 | 0.64 | 0.57 | 0.68 | 12.0% | 11.1% | 0.80 | 0.61 | 0.52 | 0.64 | 11.2% | 11.0% |
| 64 | 64 | 0.87 | 0.64 | 0.57 | 0.67 | 12.6% | 12.0% | 0.76 | 0.61 | 0.52 | 0.65 | 11.6% | 11.4% |
| 128 | 16 | 0.91 | 0.65 | 0.58 | 0.69 | 12.0% | 11.0% | 0.80 | 0.61 | 0.53 | 0.65 | 10.9% | 10.8% |
| 128 | 32 | 0.92 | 0.64 | 0.57 | 0.68 | 12.5% | 11.8% | 0.76 | 0.62 | 0.53 | 0.66 | 11.4% | 11.1% |
| 128 | 64 | 0.87 | 0.65 | 0.57 | 0.68 | 12.5% | 11.6% | - | - | - | - | - | - |
| 256 | 16 | 0.88 | 0.66 | 0.59 | 0.69 | 12.1% | 11.3% | 0.77 | 0.65 | 0.56 | 0.68 | 10.8% | 10.4% |
| 256 | 32 | 0.86 | 0.68 | 0.59 | 0.70 | 11.9% | 11.3% | - | - | - | - | - | - |
| 256 | 64 | - | - | - | - | - | - | - | - | - | - | - | - |
| 512 | 16 | 0.87 | 0.69 | 0.62 | 0.72 | 11.2% | 10.1% | - | - | - | - | - | - |
| 512 | 32 | - | - | - | - | - | - | - | - | - | - | - | - |
| 512 | 64 | - | - | - | - | - | - | - | - | - | - | - | - |
+ +Table 10: Relative speed of AdapterFusion for different sequence lengths and batch sizes. We compute the training (Tr.) speed and inference (Inf.) speed with two adapters in AdapterFusion. We compare this to: FF, a fully fine-tuned model; Adap, an adapter model (Pfeiffer architecture). The slowdown denotes the computational overhead of each additional adapter composed in AdapterFusion (calculated as the average slowdown for adding one adapter to AF consisting of 2-16 adapters). Missing values are due to insufficient GPU memory. + +![](images/98800071f50f5970d5da16115fd7e9612d83ed328e68940a4611f18a1b71be7b.jpg) +(a) V100 Inference + +![](images/0cf129d94e38093c820c65ddfb5f7da35ce2dc87ffb8ded909b92b051974896d.jpg) +(b) TitanX Inference + +![](images/a88539103bf783721d63710e31f71681ad21387826d3cd991184f40b3eea1719.jpg) +(c) V100 Training +Figure 6: The absolute time for each inference or training step. We compare a transformer model without adapters and an adapter model with Pfeiffer or Houlsby architectures. We note that for small inputs, i.e., batch size 1 or 8, the time does not increase with the sequence length because the GPU is not working at capacity. Figure (b) with batch size 1 shows the transition from working under and working at capacity. + +![](images/7499f3f5c338e49b86b54bfd197ec83de6ec12d750610cd5ebe00a691a9080a6.jpg) +(d) TitanX Training + +![](images/d5730d01e387205e6f71f7d3dd57a4c7d3d2d0bfcc71b4af6d76402aea22114e.jpg) +Figure 7: Performance of AF by the number of dropped AF layers. We show the results for AF and the used adapters (both with and without AdapterDrop), and compare the performance with a standard single task adapter. 
![](images/9fbc50cdad157818478e74d12f65be39096158e7bbc59e00bde0f171dba1f342.jpg)

![](images/8e52e86a167df85de053c97416ce92dc6268895c1a2798a97c8bc0b688acbbe6.jpg)

![](images/cdf94ece171caf38f82883daab3aeb3901c7f1c50d2bf523cc904f6063d7f51e.jpg)
| Device | Batch Size | Adapters | Inference | Speedup |
| --- | --- | --- | --- | --- |
| V100 | 1 | 2 | 1.25 | 2.6% |
| V100 | 1 | 4 | 1.97 | 3.7% |
| V100 | 1 | 8 | 2.80 | 4.9% |
| V100 | 1 | 16 | 2.97 | 6.5% |
| V100 | 16 | 2 | 1.13 | 4.1% |
| V100 | 16 | 4 | 1.14 | 6.5% |
| V100 | 16 | 8 | 1.20 | 7.7% |
| V100 | 16 | 16 | 1.16 | 8.4% |
| V100 | 32 | 2 | 1.08 | 4.5% |
| V100 | 32 | 4 | 1.14 | 6.6% |
| V100 | 32 | 8 | 1.11 | 7.9% |
| V100 | 32 | 16 | 1.11 | 8.5% |
| V100 | 64 | 2 | 1.08 | 4.3% |
| V100 | 64 | 4 | 1.05 | 6.7% |
| V100 | 64 | 8 | 1.06 | 7.9% |
| V100 | 64 | 16 | 1.06 | 8.4% |
| TitanX | 32 | 2 | 1.07 | 4.4% |
| TitanX | 32 | 4 | 1.09 | 6.6% |
| TitanX | 32 | 8 | 1.09 | 7.8% |
| TitanX | 32 | 16 | 1.06 | 8.4% |
| CPU | 1 | 2 | 0.98 | 4.2% |
| CPU | 1 | 4 | 1.03 | 6.5% |
| CPU | 1 | 8 | 1.05 | 7.7% |
| CPU | 1 | 16 | 1.06 | 8.4% |
+ +Table 11: The relative inference speed of simultaneous processing of multiple tasks with adapters compared to sequential processing of tasks with fully fine-tuned models. Gray columns show the speedup of AdapterDrop for every additional dropped layer. All measurements use a sequence length of 128. Batch size 1 for the V100 is an outlier in both speedup and relative speed compared to the other results due to the small input size (compare with Figure 8). + +
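The relative-speed and slowdown figures reported in Tables 10-12 are ratios of measured step times. A minimal sketch of how such numbers can be derived from raw timings; the exact averaging behind the reported "slowdown" is not spelled out here, so the geometric per-step rate below is an assumption:

```python
def relative_speed(baseline_time, model_time):
    """Relative speed of a model vs. a baseline: values above 1 mean the
    model processes the same workload faster than the baseline."""
    return baseline_time / model_time

def avg_slowdown_per_step(times_by_count):
    """Average multiplicative slowdown per unit (e.g., per extra fused
    adapter, or per dropped layer), from a dict: count -> step time.
    One plausible reading of the 'average slowdown' in the captions."""
    counts = sorted(times_by_count)
    per_step = [
        (times_by_count[hi] / times_by_count[lo]) ** (1.0 / (hi - lo)) - 1.0
        for lo, hi in zip(counts, counts[1:])
    ]
    return sum(per_step) / len(per_step)
```

For example, if fusing 2 adapters takes 1.0 s per step and fusing 4 adapters takes 1.21 s, the sketch reports a 10% slowdown per additional adapter.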
| Adapters | Seq. Len | Batch Size | Rel. Speed (V100) | Rel. Speed (TitanX) |
| --- | --- | --- | --- | --- |
| 2 | 100 | 1 | 0.93 | 0.94 |
| 3 | 100 | 1 | 0.89 | 0.88 |
| 5 | 100 | 1 | 0.77 | 0.76 |
| 10 | 100 | 1 | 0.60 | 1.29 |
| 2 | 100 | 16 | 1.02 | 1.44 |
| 3 | 100 | 16 | 1.12 | 1.58 |
| 5 | 100 | 16 | 1.17 | 1.80 |
| 10 | 100 | 16 | 1.27 | 2.14 |
| 2 | 100 | 32 | 1.01 | 1.48 |
| 3 | 100 | 32 | 1.17 | 1.62 |
| 5 | 100 | 32 | 1.23 | 1.85 |
| 10 | 100 | 32 | 1.32 | 2.24 |
| 2 | 200 | 1 | 0.93 | 1.24 |
| 3 | 200 | 1 | 0.88 | 1.37 |
| 5 | 200 | 1 | 0.77 | 1.55 |
| 10 | 200 | 1 | 0.52 | 1.87 |
| 2 | 200 | 16 | 1.01 | 1.46 |
| 3 | 200 | 16 | 1.17 | 1.59 |
| 5 | 200 | 16 | 1.23 | 1.82 |
| 10 | 200 | 16 | 1.32 | 2.21 |
| 2 | 200 | 32 | 1.00 | 1.11 |
| 3 | 200 | 32 | 1.18 | 1.17 |
| 5 | 200 | 32 | 1.26 | - |
| 10 | 200 | 32 | 1.34 | - |
| 2 | 300 | 1 | 0.93 | 1.37 |
| 3 | 300 | 1 | 0.88 | 1.50 |
| 5 | 300 | 1 | 0.91 | 1.70 |
| 10 | 300 | 1 | 0.94 | 2.03 |
| 2 | 300 | 16 | 1.00 | 1.48 |
| 3 | 300 | 16 | 1.16 | 1.63 |
| 5 | 300 | 16 | 1.22 | 1.88 |
| 10 | 300 | 16 | 1.32 | - |
| 2 | 300 | 32 | 1.00 | - |
| 3 | 300 | 32 | 1.20 | - |
| 5 | 300 | 32 | 1.27 | - |
| 10 | 300 | 32 | 1.36 | - |
| 2 | 400 | 1 | 1.04 | 1.39 |
| 3 | 400 | 1 | 1.09 | 1.51 |
| 5 | 400 | 1 | 1.10 | 1.74 |
| 10 | 400 | 1 | 1.10 | 2.08 |
| 2 | 400 | 16 | 1.00 | - |
| 3 | 400 | 16 | 1.18 | - |
| 5 | 400 | 16 | 1.25 | - |
| 10 | 400 | 16 | 1.34 | - |
| 2 | 400 | 32 | 1.00 | - |
| 3 | 400 | 32 | 1.20 | - |
| 5 | 400 | 32 | 1.27 | - |
| 10 | 400 | 32 | - | - |
Table 12: Relative speed of AdapterFusion with the iterative implementation versus the parallel implementation for different batch sizes, sequence lengths and numbers of adapters on the V100 and TitanX. The parallel implementation is faster when the input is sufficiently small (batch size 1 or 2 adapters), as the GPU is not working at capacity and can execute the adapters in parallel.

![](images/11d79d54df503e9d43132c60e297316bc1b5eb909dbe7b147eae485375b3a1c0.jpg)

![](images/d86ea30c0e28e12d9222b959e15bfeb67c5d6eee6f24f48fa36692151517ea79.jpg)

![](images/0ed1c3818acc3d0011be8246483201d1e452aff68bb47db6d198a3f0faad50b8.jpg)

![](images/fa301a74207234ca4dc98e2da55c8569ce08650732c4c0d43292336c6f46fa85.jpg)

Figure 8: The absolute time required for performing inference for multiple tasks on the same input. The measurements are conducted with a sequence length of 128. $N$ FF models denotes $N$ fully fine-tuned models executed sequentially; Parallelized denotes the time required by $N$ fully fine-tuned models running fully parallelized. Batch size 1 on the V100 is an outlier compared to the other results, with a smaller speedup for each dropped layer but a higher relative speed compared to the fine-tuned models, due to the small input size.

![](images/b6cf38d1a31287f696032dc005a652ba00a57bf7a07955f25d976802759716b3.jpg)

![](images/7064666a453d8f7a9849ec70f0789cc5fdae8bf8f0449cea204e485438f0ff2d.jpg)

(a) V100
(b) TitanX

![](images/8d017de02685be67c492199a6c8c203717947aa752f285918b36ee429f65d93b.jpg)

![](images/8956b7d2682880482b85c79f636ddf0697b9816c420702697a81e2de14868162.jpg)

Figure 9: Absolute time measurements for AdapterFusion at inference (left) and training (right) as a function of the number of adapters.
The measurements were conducted with a batch size of 32 (V100) and 16 (TitanX) and a sequence length of 128.

![](images/42146d3f23f6e42d8d22d3fcaccc8f14f2a9ef6867dfa0b39784b66ea9bca6a1.jpg)

![](images/c21711dbddd46ad491a1b4bd083ef404d5c9e2b9088208ed798ec55dcaae46e8.jpg)

Figure 10: Absolute time measurements for AdapterFusion with AdapterDrop at inference (left) and training (right) as a function of the number of dropped layers. The measurements were conducted with a batch size of 32 and a sequence length of 128. We additionally plot the time of an adapter (without AdapterDrop) and a model without adapters to provide a more thorough comparison.

![](images/627bfb0d71d4311bf012e7d077cfa3741e7d3e246833b2850c8f2cdf3368643a.jpg)
(a) V100

![](images/6eb0ee0e56f58066f56c653b7d8f761d2370d0fc9e92b75d7ccebc94f5a0da04.jpg)
(b) TitanX

Figure 11: The difference in inference time between the iterative and parallel implementations of AdapterFusion. Negative values indicate that the iterative implementation is faster. We calculate the difference as $t_i - t_p$, where $t_i$ and $t_p$ are the times for the iterative and parallel implementations, respectively. In Figure (a), the parallel implementation is faster when the input is sufficiently small, since the GPU is not working at capacity and can execute the adapters in parallel.
+ +![](images/b1ee77193e72a88555533411ff9474c15b2115d668889d4ac33919317b949874.jpg) + +![](images/0072dd3e5d9fe813f669e25049832d3ccc58762c38e2ec58addb4971332acb22.jpg) + +![](images/b9db4a235227346af34be0c6b92aac8df4ff39107443694bec7e8ac410006593.jpg) + +![](images/fb93917854df3e8f0bf5824d9e328177f8fe6dbce34b18edf6926ca61c67a5ef.jpg) + +![](images/5376885305bbf38a611274575e3942403a3f4d82a15f6bf7c6e7e3c5dc599e41.jpg) + +![](images/3b5ef371d7acd8226178a04c7d0fcbcc166606eeaaa3b62d10583b0922fd7ec4.jpg) +Steps + +![](images/cd70ec9eb769ac47599e1dba7ea52481394d0a6064edaf90d9fdd071a96bdf59.jpg) +Figure 12: Evaluation performance of fine-tuning RoBERTA-base in comparison with different initialization strategies for adapters (randomly initialized vs. pre-trained on masked language modeling task). Training was conducted for 10k steps with a learning rate of 5e-05 for RoBERTa-base and 0.0001 for adapters, respectively. + +![](images/4396bd99457e2c373145188c58b2356014d7ee42b4df36a534c51fb9451600d7.jpg) + +![](images/0ae730f3045190630eb5b52d16ab68a7e7a420cf31bc0a454be32db5600be53b.jpg) + +![](images/9900c0a7f03841706ec092758ab41c1bac243764fd3052134a861f8fe9bf0732.jpg) + +![](images/2849cd09ce8bbaa49edc8a28db59a79cd90656a587715d8ecb4ae8645b7dd6c5.jpg) + +![](images/ccca22990865db8bf4b6f08b95b692cb56f6ea63c7a1a53c92f6ae321ef87bd4.jpg) + +![](images/bf56f924029424492fc3fae1625e8bcbf41533587e7cc3339f691d0978035c2f.jpg) + +![](images/f3725945fd90d2bd43ba9d2d13c7615c5cd8a9f6eefa67b5ef293b7f878100a4.jpg) + +![](images/2e7023ac71c3308ef82706b9d1b31b6894522ccc162468d6254176d2be526427.jpg) +Figure 13: The AdapterDrop task performances for all eight GLUE tasks in relation to the dropped layers. 
'12 specialized adapters' refers to the performance of individual models trained for each AdapterDrop setting separately (i.e., 12 models); 'Standard adapter' refers to the adapter that is trained with no dropped layers; AdapterDrop training refers to the adapter that is trained with our proposed training procedure. + +![](images/7118cf81ff9ba09e2cececf713880517340c6c4f3a78dc0df3572e202b01ed9c.jpg) \ No newline at end of file diff --git a/adapterdropontheefficiencyofadaptersintransformers/images.zip b/adapterdropontheefficiencyofadaptersintransformers/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..ceaa7eaf85a9fdcb0a66560e2d3770e3d10b736f --- /dev/null +++ b/adapterdropontheefficiencyofadaptersintransformers/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9f7474e5644b0dde41ef1b0a0a2b1309efaa32324db27ded234345b647b4e3e3 +size 1255152 diff --git a/adapterdropontheefficiencyofadaptersintransformers/layout.json b/adapterdropontheefficiencyofadaptersintransformers/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..6ac09dc16f146c0866fc20fa2bf1fabc549b62b1 --- /dev/null +++ b/adapterdropontheefficiencyofadaptersintransformers/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cd62f23370e01fcad41060cd7e1319ab18afe0726d62019322be9be9ef9021a8 +size 467007 diff --git a/adaptivebridgebetweentrainingandinferencefordialoguegeneration/6fa3fe0e-62de-45f0-ba81-df46f9573b91_content_list.json b/adaptivebridgebetweentrainingandinferencefordialoguegeneration/6fa3fe0e-62de-45f0-ba81-df46f9573b91_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..8be5068be08f1f9a6e8dfe4176f4c5e658b66310 --- /dev/null +++ b/adaptivebridgebetweentrainingandinferencefordialoguegeneration/6fa3fe0e-62de-45f0-ba81-df46f9573b91_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:7b42dd9cb1cf80be3a6245b2141f9f227a8c412f2c552421c75220bf93f7ae23 +size 69251 diff --git a/adaptivebridgebetweentrainingandinferencefordialoguegeneration/6fa3fe0e-62de-45f0-ba81-df46f9573b91_model.json b/adaptivebridgebetweentrainingandinferencefordialoguegeneration/6fa3fe0e-62de-45f0-ba81-df46f9573b91_model.json new file mode 100644 index 0000000000000000000000000000000000000000..5b3fcac88c1d955e63e72a01fa53524077a146b8 --- /dev/null +++ b/adaptivebridgebetweentrainingandinferencefordialoguegeneration/6fa3fe0e-62de-45f0-ba81-df46f9573b91_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8dd99ffa73fb0a8ed3a9326ccebe337d99a68d40a3cd6aaebabca7f65f7a9002 +size 86675 diff --git a/adaptivebridgebetweentrainingandinferencefordialoguegeneration/6fa3fe0e-62de-45f0-ba81-df46f9573b91_origin.pdf b/adaptivebridgebetweentrainingandinferencefordialoguegeneration/6fa3fe0e-62de-45f0-ba81-df46f9573b91_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..32bfed8c00bc3bc383e42d791f639680904cec84 --- /dev/null +++ b/adaptivebridgebetweentrainingandinferencefordialoguegeneration/6fa3fe0e-62de-45f0-ba81-df46f9573b91_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:868c3dc1bf5f1426e4bc745fd8aa54897c486d1d006a7116436f9864e0c66dd9 +size 582603 diff --git a/adaptivebridgebetweentrainingandinferencefordialoguegeneration/full.md b/adaptivebridgebetweentrainingandinferencefordialoguegeneration/full.md new file mode 100644 index 0000000000000000000000000000000000000000..c4322f0930a0e3fed9e4ec74f85decbdbfadf4a8 --- /dev/null +++ b/adaptivebridgebetweentrainingandinferencefordialoguegeneration/full.md @@ -0,0 +1,297 @@ +# Adaptive Bridge between Training and Inference for Dialogue Generation + +Haoran Xu $^{1,2*}$ , Hainan Zhang $^{3\dagger}$ , Yanyan Zou $^{3}$ , Hongshen Chen $^{3}$ , Zhuoye Ding $^{3}$ , Yanyan Lan $^{4}$ + +$^{1}$ Institute of Computing Technology, Chinese Academy of Sciences, 
Beijing, China

$^{2}$ University of Chinese Academy of Sciences, Beijing, China

$^{3}$ Data Science Lab, JD.com, Beijing, China

$^{4}$ Institute of AI Industry Research, Tsinghua University, Beijing, China

xuhaoran18s@ict.ac.cn, zhanghainan1990@163.com, zouyanyan6@jd.com

ac@chenhongshen.com, dingzhuoye@jd.com, langyanyan@tsinghua.edu.cn

# Abstract

Although exposure bias has been widely studied in some NLP tasks, it faces unique challenges in dialogue response generation, a representative one-to-various generation scenario. In real human dialogue, there are many appropriate responses for the same context, differing not only in expression but also in topic. Therefore, due to the much larger gap between the various ground-truth responses and the generated synthetic response, exposure bias is more challenging in the dialogue generation task. Moreover, as MLE encourages the model to learn only the common words shared among different ground-truth responses while ignoring the interesting and specific parts, exposure bias may further lead to the common-response generation problem, e.g., "I don't know" and "HaHa?". In this paper, we propose a novel adaptive switching mechanism, which learns to automatically transition between ground-truth learning and generated learning according to a word-level matching score, such as the cosine similarity. Experimental results on both the Chinese STC dataset and the English Reddit dataset show that our adaptive method achieves a significant improvement in terms of both metric-based and human evaluation, compared with state-of-the-art exposure bias approaches. Further analysis on an NMT task also shows that our model can achieve a significant improvement.

# 1 Introduction

Auto-regressive models (ARMs) are widely used for natural language generation (NLG) tasks, such as machine translation (Sutskever et al., 2014; Wu et al., 2018), dialogue response generation (Li et al., 2017), image captioning (Lin et al., 2014; Vinyals et al., 2015) and video description (Donahue et al., 2015). They utilize the encoder-decoder framework to predict the next token conditioned
+ +# 1 Introduction + +Auto-regressive models(ARM) are widely used for natural language generation(NLG) tasks, such as machine translation (Sutskever et al., 2014; Wu et al., 2018), dialogue response generation (Li et al., 2017), image captioning (Lin et al., 2014; Vinyals et al., 2015) and video description (Donahue et al., 2015). They utilize the encoder-decoder framework to predict the next token conditioned + +
| **Dialogue 1** | |
| --- | --- |
| context | 听说广州已成避暑胜地 (I heard that Guangzhou has become a summer resort) |
| response 1 | 确实,这边很凉快。(Indeed, it's cool here.) |
| response 2 | 晚上睡觉都没开风扇了。(There is no need to turn on the fan at night.) |
| **Dialogue 2** | |
| context | 哈哈,看看可爱的小猫咪 (Ha ha, look at this lovely kitten) |
| response 1 | 这是什么品种的猫哇?好可爱,我也想要 (What kind of cat is this? So cute, I want it too.) |
| response 2 | 好想要一只这样的猫,可以陪我儿子玩 (I really want a cat like this to play with my son.) |
| response 3 | 哇,好可爱哇,我屋也有一只这样的小猫 (Wow, it's so cute, I have a kitten like this in my house.) |
Table 1: Two dialogues from the STC dataset; the red parts of the responses in Dialogue 2 are the common words.

on the previous tokens, and minimize the cross-entropy between the generation and the ground truths as their objective function. Specifically, at training time the ground truth is used as the previous tokens, which forces the model to directly learn the distribution of the ground truths. At inference, however, the previous tokens come from the ARM decoder itself, whose distribution differs from the input distribution seen at training time.

Although this discrepancy, known as exposure bias, has been studied in some classic NLG tasks such as neural machine translation (NMT) (Bengio et al., 2015; Venkatraman et al., 2015; Zhang et al., 2019), it poses unique challenges in dialogue response generation, a representative one-to-various generation scenario. In human dialogue, given the context, people can give many relevant and appropriate responses, not only with various expressions but also on different topics. Take Dialogue 1 in Table 1 as an example: given the context "I heard that Guangzhou has become a summer resort", responses 1 and 2 share the same topic but use different tokens. In this varied-expression situation, as in the NMT task, the model distribution can fit the data distribution relatively easily, even in the presence of exposure bias. However, when the responses span different topics, the data distribution is often hard for the model to fit, because it is highly divergent and covers the word distributions of several topics. Through our data analysis, we find that in the dialogue generation task the gap between the various ground-truth responses and the generated sentences is larger than in NMT tasks. We calculate overlap measures at the word level and the semantic level, i.e., BLEU and cosine similarity, between the generated sentence and the ground-truth sentences.
The results show that on the NMT WMT'14 dataset, the BLEU and similarity are 27.38 and 0.96, respectively, while on the dialogue Reddit dataset they are 2.17 and 0.81. The overlap measures for the dialogue generation task are thus significantly lower than those for the NMT task, which indicates the severity of the exposure bias problem in dialogue generation.

What's more, as Maximum Likelihood Estimation (MLE) encourages the model to learn only the common words among different ground-truth responses while ignoring the interesting and specific parts, exposure bias may aggravate the common-response problem of the generation model, due to the strict matching between the generated response and the ground-truth responses. Take Dialogue 2 in Table 1 as an example: response 1 is "What kind of cat is this? So cute, I want it too.", response 2 is "I really want a cat like this to play with my son." and response 3 is "Wow, it's so cute, I have a kitten like this in my house". If we train the model with word-level strict matching between the generated response and the ground truth, it can only learn the common words, i.e., "So cute, I want it", but ignores the specific parts, i.e., "What kind of cat is this?". Therefore, it is beneficial to improve the strict matching mechanism for the dialogue generation task.

In this paper, we propose a novel adaptive switch mechanism as a bridge (AdapBridge), which introduces the generator distribution into the training phase and learns to automatically transition between ground-truth learning and generated learning according to word-level matching scores, such as the cosine similarity. Specifically, at each training step we calculate the cosine similarity of each generated word with respect to all its ground truths. If the matching score is higher than a threshold, the generated word is fed to the decoder; otherwise, the ground truth is fed for training.
The threshold increases as the training epochs grow. With this adaptive sampling scheme, the switch mechanism considers the generation quality of every word, i.e., the relevance between the generated word and the ground truth, to decide whether to use generated learning.

We evaluate the proposed models on two public datasets, the Chinese STC and the English Reddit dataset. Experimental results show that our models significantly outperform the state-of-the-art exposure bias models with respect to both metric-based evaluations and human judgments. Further analysis on an NMT task also shows that our model achieves a significant improvement.

The main contributions of this paper include:

- We study the exposure bias problem in the dialogue generation task, a representative one-to-various generation scenario, and find that exposure bias may further lead to the common-response generation problem.
- We propose an adaptive switch mechanism with word-level matching scores to determine the source of the training input, in order to resolve the common-response problem.
- We evaluate AdapBridge on two public dialogue datasets and conduct rigorous experiments to demonstrate the effectiveness of our proposed models. Further analysis on an NMT task also shows that our model achieves a significant improvement.

# 2 Related Work

This section briefly introduces recent research progress related to this work.

To solve the exposure bias problem in autoregressive or seq2seq models (Sutskever et al., 2014; Welleck et al., 2019; Holtzman et al., 2019), Venkatraman et al. (2015) used Data as Demonstrator (DAD) to augment the training set with tokens predicted by the model, so as to make the training set match the test distribution. The Scheduled Sampling (SS) method proposed by Bengio et al.
(2015) attempted to randomly sample previously generated words to replace the ground-truth words as the model input during training. Zhang et al. (2019) explored this method further by sampling the previous words with decay, not only from a word-level oracle but also from a sentence-level oracle with a semantic metric. The main idea of this line of work is to introduce the model's prediction information into its input at training time, reducing the discrepancy between training and inference to alleviate the exposure bias problem. In comparison to those methods and related ideas (Qi et al., 2020; Goodman et al., 2020), our proposed method adaptively determines whether the model's input words during training are ground truth or predicted, by scoring each generated word.

![](images/60429db626b371a643b838cbd988d1f45fb804478f7766c410fcf6927aee42fa.jpg)
Figure 1: The illustration of our AdapBridge model.

Alternatives based on Reinforcement Learning (RL) (Williams, 1992) have been explored for generation tasks, in particular for NMT. Mixed Incremental Cross-Entropy Reinforce (MIXER) (Ranzato et al., 2016) leverages a hybrid loss function that combines cross-entropy and REINFORCE to directly optimize the metrics used at test time, such as BLEU or ROUGE. There are many other similar works (Shen et al., 2016; Wu et al., 2016; Shao et al., 2018). More recently, text generation via Generative Adversarial Networks (GANs) (Goodfellow et al., 2014), called Text GANs, has attracted the attention of researchers (Nie et al., 2019; Zhou et al., 2019; Wu et al., 2021; Scialom et al., 2020). These works frame the problem under the GAN paradigm, using RL-based (Williams, 1992) algorithms for gradient estimation, as text generation is discrete. However, neither RL nor Text GANs can avoid the high variance of gradient estimation caused by sparse rewards, which makes the training process unstable and limits improvements.
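The adaptive switch introduced above — feed the decoder its own generated word only when that word's similarity to the ground truth clears a threshold that grows over training — can be sketched as follows. This is only an illustration: the toy embedding table, the position-wise matching against a single reference, and the linear threshold schedule constants are assumptions, not the paper's exact formulation.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def beta_schedule(epoch, beta0=0.5, step=0.005, beta_max=0.95):
    """Threshold that increases with the training epoch; the constants are
    illustrative, the paper only states that the threshold grows."""
    return min(beta0 + step * epoch, beta_max)

def adaptive_switch(generated, reference, embed, beta):
    """Build the decoder input for the next training step: keep the model's
    own word where it matches the ground-truth word closely enough,
    otherwise fall back to teacher forcing (the ground-truth word)."""
    mixed = []
    for gen_tok, ref_tok in zip(generated, reference):
        score = cosine(embed[gen_tok], embed[ref_tok])
        mixed.append(gen_tok if score > beta else ref_tok)
    return mixed
```

With a toy embedding table where "kitten" is close to "cat" but "dog" is not, the switch keeps the generated "kitten" and replaces "dog" with the ground-truth word.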
Different from these traditional methods, our proposed model adaptively determines whether the current input word comes from the ground truth or from the generation, using word-level matching scores.

# 3 Proposed Method

Given a context sentence $X^{k} = \{x_{1}^{k}, x_{2}^{k}, \dots, x_{S_{k}}^{k}\}$ and a target response sentence $Y^{k} = \{y_{1}^{k}, y_{2}^{k}, \dots, y_{T_{k}}^{k}\}$, where $S_{k}$ and $T_{k}$ are the word lengths of the context and the response, respectively, a dialogue generation model based on the sequence-to-sequence (Seq2Seq) (Sutskever et al., 2014) framework directly models the response probability:

$$
P \left(Y ^ {k} \mid X ^ {k}, \theta\right) = \prod_ {t = 1} ^ {T _ {k}} p \left(y _ {t} ^ {k} \mid y _ {< t} ^ {k}, X ^ {k}, \theta\right) \tag {1}
$$

where $\theta$ are the parameters of the model and $y_{<t}^{k}$ denotes the words before position $t$. $\mathbb{1}(\cdot > \beta)$ is an indicator function: its output is 1 if the input is greater than $\beta$ and 0 otherwise. Both $\alpha$ and $\beta$ increase as the training epochs grow.

In each training step, the decoder first predicts the previous $t-1$ words $y_{<t}^{k*}$; for each generated word $s_{i}$, a gate $P_{i} = \mathbb{1}_{>\beta}(s_{i})$ is computed from its word-level matching score; the decoder input is then formed as

$$
\hat{y}_{<t}^{k} \leftarrow y_{<t}^{k*} \otimes P + y_{<t}^{k} \otimes (1 - P)
$$

when the adaptive switch is applied, and $\hat{y}_{<t}^{k} \leftarrow y_{<t}^{k}$ otherwise; $\hat{y}_{<t}^{k}$ is returned as the input for predicting $y_{t}^{k}$.

# 4 Experiments

# 4.1.1 Datasets

The Chinese STC dataset, in which each context has more than one response, consists of 4,391,266, 23,532 and 21,161 dialogue context-response pairs for the training, validation and testing sets, respectively. We remove the pairs whose context contains the response or vice versa, and obtain 4,295,557, 23,039 and 20,749 pairs for the three sets. The average number of responses per context in STC is 19.7. The English Reddit dialogue corpus is extracted from the Reddit comments and posts by the script$^{2}$, named Reddit. The original data consists of 6 million dialogues from all even months of 2011. We use the official script to tokenize, and duplicates and sentences with length less than 3 or longer than 64 are removed.
If the number of responses for one context is more than 20, we randomly select 20 responses and ignore the others; the average number of responses per context in Reddit is 6.2. Finally, we randomly split the data into training, validation and testing sets, which contain 1,107,860, 23,183 and 12,429 pairs, respectively.

# 4.1.2 Baselines and Parameters Setting

Three baselines are used for comparison: the Transformer-based model (Vaswani et al., 2017), and Random Sampling with word-level (RS-Word) and sentence-level (RS-Sentence) oracles (Zhang et al., 2019).

For STC, we use Chinese words as input and set the vocabulary size to 10,599. For Reddit, context-response pairs are encoded using byte-pair encoding (BPE) (Sennrich et al., 2016) with a vocabulary of 11,527 tokens. For a fair comparison among all baseline models and our model, the dimension of all word embeddings is 512, and the beam size at test time is 5. The Transformer model has 6 layers in both the encoder and the decoder, with 8 heads in multi-head attention. All parameters are initialized from the uniform distribution over $[-0.1, 0.1]$. We adopt the Adam optimizer (Kingma and Ba, 2015) with $\beta_1 = 0.9$, $\beta_2 = 0.98$ and $\epsilon = 10^{-8}$. We set the learning rate to 0.0007 and the maximum number of tokens in a batch to 8192 with an update frequency of 2. We run all models on 4 Tesla P40 GPU cards with PyTorch$^{3}$. The code will be released upon acceptance of this paper.

# 4.1.3 Evaluation Measures

Both quantitative metrics and human judgments are used in our experiments. The quantitative metrics include traditional ones, such as PPL and the BLEU score (Papineni et al., 2002), and the recently proposed Distinct metric (Li et al., 2016), which evaluates the diversity of the generated responses by counting the distinct unigrams and bigrams they contain. We also evaluate each generated response by calculating its BLEU score with
We also evaluate each generated response by calculating BLEU score with + +all reference responses, and use the highest BLEU score to represent the quality of generated response. The average of all highest BLEU score in the testing set is named AH-BLEU. In addition, BLEU score is calculated by using the toolkit of NLTK4. + +For human evaluation, given 300 randomly sampled contexts and their responses which are generated by different models, three annotators (all CS majors students) are required to give the score of those context-response pairs, e.g. 3, 2, 1 means relevant, common and no-relevant, respectively, based on the coherence of the generated responses with respect to the contexts. The mean score is the average of all scores given by the three annotators with context-response pairs generated by a model. Meanwhile, in order to get the relative score of different models, we also evaluate the ground-truth context-response pairs by human evaluation. + +# 4.2 Experimental Results + +In this section, we demonstrate our experimental results on the two public datasets. + +# 4.2.1 Metric-based Evaluation + +Table 3 shows the quantitative evaluation results. From this table, we can see that models with switch mechanism, such as RS-Word, RS-Sentence and AdapBridge, outperform the traditional Transformer-based model in terms of BLEU, Distinct-1 and Distinct-2 evaluations. The results show that the switch mechanism plays an important role in the dialogue generation task. + +RS-Word and RS-Sentence both replace the ground truth tokens by the generated tokens with a random scheduled sampling. However, their performances are both worse than our proposed model, as our model considers the relevance between the generated words and the ground truth with word-level matching scores. For the BLEU score on STC dataset as an example, the BLEU-4 score of AdapBridge is 2.17, which is better than that of RS-Word and RS-Sentence, i.e., 2.05 and 2.12. 
In particular, our model achieves the best AH-BLEU-2 score on both datasets, a significant performance gain, which shows that the responses of our model have higher quality than those of the other baselines.

The diversity of the responses is evaluated by the Distinct score. As shown in Table 3, our AdapBridge achieves significant performance gains. Taking the Reddit results in Table 3 as an example, the proposed AdapBridge model improves the
STC dataset:

| Model | PPL | BLEU-2 (%) | BLEU-4 (%) | DIS-1 (%) | DIS-2 (%) | AH-BLEU-2 (%) |
| --- | --- | --- | --- | --- | --- | --- |
| Transformer | 28.86 | 3.74 | 1.37 | 0.23 | 0.90 | 14.43 |
| RS-Word Oracle | 28.91 | 5.12 | 2.05 | 0.33 | 1.25 | 15.21 |
| RS-Sentence Oracle | 26.75 | 5.50 | 2.12 | 0.35 | 1.38 | 15.52 |
| AdapBridge | 29.36 | 5.35 | 2.17 | 0.43 | 1.74 | 16.38 |

Reddit dataset:

| Model | PPL | BLEU-2 (%) | BLEU-4 (%) | DIS-1 (%) | DIS-2 (%) | AH-BLEU-2 (%) |
| --- | --- | --- | --- | --- | --- | --- |
| Transformer | 40.83 | 3.99 | 0.77 | 0.79 | 2.91 | 7.03 |
| RS-Word Oracle | 43.11 | 3.78 | 0.81 | 1.42 | 5.19 | 7.43 |
| RS-Sentence Oracle | 40.72 | 3.49 | 0.76 | 1.33 | 5.08 | 7.05 |
| AdapBridge | 48.01 | 3.56 | 0.83 | 1.56 | 5.56 | 7.60 |
Table 3: The metric-based evaluation results on the STC and Reddit datasets. DIS represents the Distinct score and AH-BLEU-2 represents the average of all highest BLEU-2 scores.
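The DIS and AH-BLEU-2 columns of Table 3 can be sketched in a few lines. The Distinct-N definition follows Li et al. (2016) (distinct n-grams over total n-grams, reported in the table as percentages); the BLEU-2 here is a simplified stand-in for the NLTK scorer the paper actually uses:

```python
import math
from collections import Counter

def ngram_counts(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def distinct_n(responses, n):
    """Distinct-N (Li et al., 2016): number of distinct n-grams divided by
    the total number of n-grams over all generated responses."""
    total, seen = 0, set()
    for tokens in responses:
        grams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
        total += len(grams)
        seen.update(grams)
    return len(seen) / total if total else 0.0

def bleu2(candidate, reference):
    """Simplified sentence-level BLEU-2: geometric mean of clipped unigram
    and bigram precision with a brevity penalty."""
    precisions = []
    for n in (1, 2):
        cand, ref = ngram_counts(candidate, n), ngram_counts(reference, n)
        total = sum(cand.values())
        clipped = sum(min(c, ref[g]) for g, c in cand.items())
        if clipped == 0:
            return 0.0
        precisions.append(clipped / total)
    bp = 1.0 if len(candidate) >= len(reference) else \
        math.exp(1 - len(reference) / len(candidate))
    return bp * math.exp(sum(math.log(p) for p in precisions) / len(precisions))

def ah_bleu(candidates, references_per_candidate):
    """AH-BLEU: the highest BLEU of each generated response against any of
    its reference responses, averaged over the test set."""
    best = [max(bleu2(c, refs_i) for refs_i in refs)
            for c, refs in zip(candidates, references_per_candidate)]
    return sum(best) / len(best)
```

The multi-reference maximum is what distinguishes AH-BLEU from plain corpus BLEU: a response only needs to match one of the many ground-truth replies well to score high.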
STC dataset:

| Model | 3 (%) | 2 (%) | 1 (%) | Mean |
| --- | --- | --- | --- | --- |
| Ground Truth | 82.23 | 13.56 | 4.23 | 2.78 |
| Transformer | 48.67 | 23.11 | 28.22 | 2.20 |
| RS-Word | 56.33 | 20.89 | 22.78 | 2.34 |
| RS-Sentence | 55.33 | 22.67 | 22.00 | 2.33 |
| AdapBridge | 59.56 | 24.33 | 16.11 | 2.43 |

Reddit dataset:

| Model | 3 (%) | 2 (%) | 1 (%) | Mean |
| --- | --- | --- | --- | --- |
| Ground Truth | 79.00 | 15.67 | 5.33 | 2.74 |
| Transformer | 49.78 | 21.89 | 28.33 | 2.21 |
| RS-Word | 52.44 | 23.00 | 24.56 | 2.28 |
| RS-Sentence | 53.11 | 24.67 | 22.22 | 2.31 |
| AdapBridge | 55.67 | 28.33 | 16.00 | 2.40 |
Table 4: The human evaluation results on STC and Reddit.

Transformer, RS-Word and RS-Sentence models by 2.65, 0.37 and 0.48 Distinct-2 points, respectively. We can also note that our model has the highest Distinct score on both the STC and Reddit datasets, which indicates that it generates more diverse responses and avoids generating common responses. In summary, our proposed AdapBridge model is able to generate high-quality and diverse responses compared with the baselines. We also conducted a significance test, which shows that the improvements of our model are significant on both datasets ($p$-value $< 0.01$).

# 4.2.2 Human Evaluation

The human evaluation results are shown in Table 4. The percentages of relevant, common and not-relevant responses are reported to evaluate the quality of the responses generated by the different models. The results show that our AdapBridge obtains the highest score in the human evaluation. Taking STC as an example, compared with the Transformer, RS-Word and RS-Sentence models, AdapBridge achieves relative gains of 22.38%, 5.73% and 7.65% on the relevant score. For the Mean score, we observe that AdapBridge generates the most relevant responses and the fewest not-relevant responses, which indicates that the responses generated by our model are attractive to the annotators. Again, a significance test shows that the improvements of our model are significant on both datasets ($p$-value $< 0.01$).

# 4.3 Case Study

In this section, we conduct case studies to show that our model generates more relevant and diverse responses than the other baseline models.

We give two examples in Table 5. In Example 1, the response of the Transformer is "Is this a fish or a fish?", which is an unreasonable sentence, as the words in the positions of the two "fish" should differ according to common sense.
The RS-Word response repeats "want to eat fish" twice, which is part of the context. Although the RS-Sentence response "How lovely! I want to eat, too." is relevant, it conforms
| **Example 1** | |
| --- | --- |
| context | 你不是天天嚷着要吃鱼嘛?鱼来了 (Don't you want to eat fish every day? The fish is coming.) |
| Transformer | 这是鱼还是鱼啊?(Is this a fish or a fish?) |
| RS-Word | 要吃鱼,要吃鱼。(Want to eat fish, want to eat fish.) |
| RS-Sentence | 好可爱!我也想吃。(How lovely! I want to eat, too.) |
| AdapBridge | 这是什么鱼?(What kind of fish is this?) |
| **Example 2** | |
| context | 水城威尼斯,梦里的城市。(Venice, the city of water, the city of dreams.) |
| Transformer | 梦里的城市,梦里的城市。(City in dreams, city in dreams.) |
| RS-Word | 好美的城市,好想去。(Beautiful city. I really want to go.) |
| RS-Sentence | 这是哪呀!我也想去。(Where is this? I want to go too.) |
| AdapBridge | 想和我的爸妈一起去。(I want to go here with my parents.) |
to a common response paradigm, such as "how . . . I want it, too" or "what's this, I want to . . .". If the context contains food, animals, locations, etc., such responses all seem appropriate, which is not attractive to humans, while the response generated by our AdapBridge is more specific and relevant, i.e., "What kind of fish is this?". We can also see a similar phenomenon in Example 2 of Table 5: given the context "Venice, the city of water, the city of dreams.", Transformer repeats content that comes from the context, and the responses of RS-Word and RS-Sentence are both common responses as mentioned above. Compared with the responses generated by the baseline models, the response of AdapBridge, "I want to go here with my parents.", is more relevant and attractive. Therefore, these results indicate that our proposed model can generate high-quality and attractive responses with the adaptive switch mechanism.

Table 5: Two examples of generated responses on STC.

# 4.4 AdapBridge on NMT

The method we propose can also be easily adapted to neural machine translation (NMT). With this task, we want to investigate whether AdapBridge can help improve performance on NMT, a classic natural language generation task. We perform experiments on the WMT'14 English→German (En→De) dataset, which contains 3,900,502, 39,414, and 3,003 sentences in the training, validation, and test sets, respectively. We train the Transformer-based model with the same settings described in Section 4.1.2, and then measure translation quality with BLEU. The evaluation results are listed in Table 6.

| Model | BLEU-2 (%) | BLEU-4 (%) |
| --- | --- | --- |
| Transformer | 43.40 | 26.43 |
| RS-Word | 43.66 | 26.84 |
| RS-Sentence | 44.08 | 27.21 |
| AdapBridge | 43.99 | 27.38 |

Table 6: BLEU scores on the En→De NMT task.

From the results, we can see that our method also achieves significant performance gains, improving the Transformer-based model by 0.95 BLEU-4 points on average. For the BLEU-2 score, our model is slightly lower than the RS-Sentence model, consistent with the results in Table 3; this can be attributed to the sentence-level information of the RS-Sentence oracles. To analyze the gap between the ground truth and the generated sentences, we calculate the cosine similarity between the hidden representations of ground-truth and generated sentences with a trained BERT model (Wolf et al., 2020), obtaining similarity scores of 0.96 and 0.81 on the WMT'14 and Reddit datasets, respectively. At the same time, we also notice that the BLEU score on NMT is much higher than that on the dialogue generation task. These overlap measures indicate the severity of the exposure bias problem in dialogue generation, as analyzed in Section 1.

# 5 Conclusion

In this paper, we propose a novel adaptive switch mechanism with word-level matching scores, named AdapBridge, to address the problem of exposure bias in the dialogue generation task. Our core idea is to use the word-level matching scores to determine whether the input at each step of training comes from the ground truth or from the model's prediction. Experimental results show that our model significantly outperforms previous baseline models. Further analysis on NMT also indicates that our model can achieve significant improvements on different generation tasks. In future work, we plan to further design different scoring methods, e.g., BERTScore or BLEU, to guide the model to select better words. It would also be interesting to extend our AdapBridge model to other generation tasks, such as abstractive summarization.

# Acknowledgements

This work is supported by the Beijing Academy of Artificial Intelligence (BAAI) and the National Natural Science Foundation of China (NSFC) (No. 61773362).

# References

Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer.
2015. Scheduled sampling for sequence prediction with recurrent neural networks. In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 1, pages 1171-1179.
Jeffrey Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, and Trevor Darrell. 2015. Long-term recurrent convolutional networks for visual recognition and description. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2625-2634.
Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In NIPS, pages 2672-2680.
Sebastian Goodman, Nan Ding, and Radu Soricut. 2020. TeaForN: Teacher-forcing with n-grams. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8704-8717.
Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degeneration. In International Conference on Learning Representations.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110-119, San Diego, California. Association for Computational Linguistics.
Jiwei Li, Will Monroe, Tianlin Shi, Sébastien Jean, Alan Ritter, and Dan Jurafsky. 2017. Adversarial learning for neural dialogue generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2157-2169.
Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. 2014. Microsoft COCO: Common objects in context. In European Conference on Computer Vision, pages 740-755. Springer.
Weili Nie, Nina Narodytska, and Ankit Patel. 2019. RelGAN: Relational generative adversarial networks for text generation. In International Conference on Learning Representations.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318.
Weizhen Qi, Yu Yan, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang, and Ming Zhou. 2020. ProphetNet: Predicting future n-gram for sequence-to-sequence pre-training. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, pages 2401-2410.
Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level training with recurrent neural networks. In 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings.
Thomas Scialom, Paul-Alexis Dray, Sylvain Lamprier, Benjamin Piwowarski, and Jacopo Staiano. 2020. ColdGANs: Taming language GANs with cautious sampling strategies. In Advances in Neural Information Processing Systems, volume 33, pages 18978-18989. Curran Associates, Inc.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Linguistics.
Chenze Shao, Xilin Chen, and Yang Feng. 2018. Greedy search with probabilistic n-gram matching for neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4778-4784.
Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Minimum risk training for neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1683-1692.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, volume 27. Curran Associates, Inc.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS.
Arun Venkatraman, Martial Hebert, and J. Bagnell. 2015. Improving multi-step prediction of learned time series models. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 29.
Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural image caption generator. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3156-3164.
Sean Welleck, Ilia Kulikov, Stephen Roller, Emily Dinan, Kyunghyun Cho, and Jason Weston. 2019. Neural text generation with unlikelihood training. In International Conference on Learning Representations.
Ronald J. Williams. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229-256.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
Lijun Wu, Yingce Xia, Fei Tian, Li Zhao, Tao Qin, Jianhuang Lai, and Tie-Yan Liu. 2018. Adversarial neural machine translation. In Asian Conference on Machine Learning, pages 534-549. PMLR.
Qingyang Wu, Lei Li, and Zhou Yu. 2021. TextGAIL: Generative adversarial imitation learning for text generation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 14067-14075.
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144.
Wen Zhang, Yang Feng, Fandong Meng, Di You, and Qun Liu. 2019. Bridging the gap between training and inference for neural machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4334-4343, Florence, Italy. Association for Computational Linguistics.
Wangchunshu Zhou, Tao Ge, Ke Xu, Furu Wei, and Ming Zhou. 2019. Self-adversarial learning with comparative discrimination for text generation. In International Conference on Learning Representations.
# Adaptive Information Seeking for Open-Domain Question Answering

Yunchang Zhu†§, Liang Pang†*, Yanyan Lan◇*, Huawei Shen†§, Xueqi Cheng†§

†Data Intelligence System Research Center and CAS Key Lab of Network Data Science and Technology, Institute of Computing Technology, Chinese Academy of Sciences

§University of Chinese Academy of Sciences

◇Institute for AI Industry Research, Tsinghua University

{zhuyunchang17s, pangliang, shenhuawei, cxq}@ict.ac.cn

lanyanyan@tsinghua.edu.cn

# Abstract

Information seeking is an essential step for open-domain question answering to efficiently gather evidence from a
large corpus. Recently, iterative approaches have been proven to be effective for complex questions, by recursively retrieving new evidence at each step. However, almost all existing iterative approaches use predefined strategies, either applying the same retrieval function multiple times or fixing the order of different retrieval functions, which cannot fulfill the diverse requirements of various questions. In this paper, we propose a novel adaptive information-seeking strategy for open-domain question answering, namely AISO. Specifically, the whole retrieval and answer process is modeled as a partially observed Markov decision process, where three types of retrieval operations (e.g., BM25, DPR, and hyperlink) and one answer operation are defined as actions. According to the learned policy, AISO can adaptively select a proper retrieval action to seek the missing evidence at each step, based on the collected evidence and the reformulated query, or directly output the answer when the evidence set is sufficient for the question. Experiments on SQuAD Open and HotpotQA fullwiki, which serve as single-hop and multi-hop open-domain QA benchmarks, show that AISO outperforms all baseline methods with predefined strategies in terms of both retrieval and answer evaluations.

# 1 Introduction

Open-domain question answering (QA) (Voorhees et al., 1999) is the task of answering questions using a large collection of texts (e.g., Wikipedia). It relies on a powerful information-seeking method to efficiently retrieve evidence from the given large corpus.

Figure 1: An example derived from the HotpotQA development set, for the question "What movie directed by Pitof in 2004 has a tie-in electronic game?". P1 (Pitof), P2 (Catwoman) and P3 (Catwoman (video game)) are the most relevant passages, of which P2 and P3 are supporting passages essential to answering the question. Except for the adaptive strategy in the last row, fixed-strategy methods, such as applying BM25 or dense retrieval multiple times, or first using BM25 and then entity linking, fail because the rank of the remaining supporting passages is larger than 1k. The number between two arrows indicates the highest rank of the remaining supporting passages in the retrieval list, unless ranked first.

Traditional open-domain QA approaches mainly follow the two-stage retriever-reader pipeline (Chen et al., 2017; Yang et al., 2018; Karpukhin et al., 2020), in which the retriever uses a determinate sparse or dense retrieval function to retrieve evidence, independently from the reading stage. But these approaches have limitations in answering complex questions, which need multi-hop or logical reasoning (Xiong et al., 2021).

To tackle this issue, iterative approaches have been proposed to recurrently retrieve passages and reformulate the query based on the original question and the previously collected passages. Nevertheless, all of these approaches adopt fixed information-seeking strategies in the iterative process. For example, some works employ a single retrieval function multiple times (Das et al., 2019a; Qi et al., 2019; Xiong et al., 2021), and other works use a pre-defined sequence of retrieval functions (Asai et al., 2020; Dhingra et al., 2020).
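The fixed iterative paradigm described above can be sketched as a loop that applies one retrieval function repeatedly, reformulating the query from the evidence gathered so far. This is an illustrative toy, not any cited system's code; `retrieve`, `reformulate`, and `reader` are hypothetical stand-ins:

```python
def fixed_strategy_qa(question, retrieve, reformulate, reader, max_steps=4):
    """Sketch of a fixed iterative strategy: the SAME retrieval
    function is applied at every step, regardless of the question."""
    evidence, query = [], question
    for _ in range(max_steps):
        passage = retrieve(query)              # always the same retriever
        evidence.append(passage)
        query = reformulate(question, evidence)  # query update from evidence
    return reader(question, evidence)
```

The adaptive strategy proposed in this paper differs exactly at the `retrieve(query)` line: instead of a fixed function, a learned policy picks which retrieval function (or the answer action) to invoke at each step.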
However, fixed information-seeking strategies cannot meet the diverse requirements of various questions. Taking Figure 1 as an example, the answer to the question is 'Catwoman' in P3. Due to the lack of essential supporting passages, simply applying BM25/dense retrieval (DR) multiple times (strategy 1 (Qi et al., 2019) or 2 (Xiong et al., 2021)), or using the mixed but fixed strategy (strategy 3 (Asai et al., 2020)), cannot answer the question. Specifically, it is hard for Qi et al. (2019) to generate the ideal query 'Catwoman game' by considering P1 or P2; thus BM25 (Robertson and Zaragoza, 2009) suffers from the mismatch problem and fails to find the next supporting passage P3. The representation learning of salient but rare phrases (e.g., 'Pitof') remains a challenging problem (Karpukhin et al., 2020), which may affect the effectiveness of dense retrieval, i.e., the supporting passage P3 is ranked 65, while P1 and P2 do not appear in the top-1000 list at the first step. Furthermore, link retrieval functions fail when the current passage, e.g., P2, has no valid entity links.

Motivated by the above observations, we propose an Adaptive Information-Seeking approach for Open-domain QA, namely AISO. First, the task of open-domain QA is formulated as a partially observed Markov decision process (POMDP) to reflect the interactive characteristics between the QA model (i.e., the agent) and the intractable large-scale corpus (i.e., the environment). The agent performs an action according to its state (belief module) and the policy it has learned (policy module). Specifically, the belief module of the agent maintains a set of evidence to form its state. Moreover, there are two groups of actions for the policy module to choose from: 1) retrieval actions, which consist of the type of retrieval function and the reformulated query for requesting evidence, and 2) answer actions, which return a piece of text to answer the question and complete the process.
Thus, at each step, the agent emits an action to the environment, which returns a passage as an observation back to the agent. The agent updates the evidence set and generates the next action, step by step, until the evidence set is sufficient to trigger the answer action. To learn such a strategy, we train the policy via imitation learning by cloning the behavior of an oracle online, which avoids the hassle of designing reward functions and solves the POMDP in the fashion of supervised learning.

Our experimental results show that our approach achieves better retrieval and answering performance than state-of-the-art approaches on SQuAD Open and HotpotQA fullwiki, which are representative single-hop and multi-hop datasets for open-domain QA. Furthermore, AISO significantly reduces the number of reading steps in the inference stage.

In summary, our contributions include:

- To the best of our knowledge, we are the first to introduce an adaptive information-seeking strategy to the open-domain QA task;
- Modeling adaptive information seeking as a POMDP, we propose AISO, which learns its policy via imitation learning and has great potential for extension;
- The proposed AISO achieves state-of-the-art performance on two public datasets and won first place on the HotpotQA fullwiki leaderboard. Our code is available at https://github.com/zycdev/AISO.

# 2 Related Work

Traditional approaches to open-domain QA mainly follow the two-stage retriever-reader pipeline (Chen et al., 2017): a retriever first gathers relevant passages as evidence candidates, then a reader reads the retrieved candidates to form an answer. In the retrieval stage, most approaches employ a determinate retrieval function and treat each passage independently (Wang et al., 2018; Lin et al., 2018; Lee et al., 2018; Yang et al., 2018; Pang et al., 2019; Lee et al., 2019; Guu et al., 2020; Karpukhin et al., 2020; Izacard and Grave, 2021).
As an extension, some approaches further consider the relations between passages through hyperlinks or entity links and extend the evidence with the linked neighbor passages (Nie et al., 2019; Das et al., 2019b; Zhao et al., 2020). However, pipeline approaches retrieve evidence independently of the reader, which 1) introduces less-relevant evidence for the question, and 2) makes it hard to model complex questions that involve high-order relationships between question and evidence.

Instead, recent iterative approaches sequentially retrieve new passages by updating the query inputted to a specific retrieval function at each step, conditioned on the information already gathered. At each step, Das et al. (2019a); Feldman and El-Yaniv (2019); Xiong et al. (2021) reformulate the dense query vector in a latent space, while Ding et al. (2019); Qi et al. (2019); Zhang et al. (2020); Qi et al. (2020) update the natural-language query. After the first-step retrieval using TF-IDF, Asai et al. (2020) and Li et al. (2021) recursively select subsequent supporting passages on top of a hyperlinked passage graph. Nevertheless, all of these approaches adopt fixed information-seeking strategies, employing the same retrieval function multiple times (Das et al., 2019a; Feldman and El-Yaniv, 2019; Xiong et al., 2021; Ding et al., 2019; Qi et al., 2019; Zhang et al., 2020; Qi et al., 2020) or a pre-designated sequence of retrieval functions (Asai et al., 2020; Li et al., 2021). Due to the diversity of questions, these fixed strategies established in advance may not be optimal for all questions, or may even fail to collect the evidence.

![](images/c16af0bb0f1a1c6559326284342bd0cbebd9650e10f441b19bb7a297d645d886.jpg)
Figure 2: The overview of AISO.

# 3 Method

In this section, we first formulate the open-domain QA task as a partially observed Markov decision process (POMDP) and introduce the dynamics of the environment.
Then, we elaborate on how the agent interacts with the environment to seek evidence and answer a question. Finally, to solve the POMDP, we describe how to train the agent via imitation learning.

# 3.1 Open-Domain QA as a POMDP

Given a question $q$ and a large corpus $\mathcal{P}$ composed of passages, the task of open-domain QA is to collect a set of evidence $E\subset \mathcal{P}$ and answer the question based on the gathered evidence.

The fashion of iterative evidence gathering, proven effective by previous works (Das et al., 2019a; Asai et al., 2020; Xiong et al., 2021), is essentially a sequential decision-making process. Besides, since the corpus is large, ranging from millions of passages (e.g., Wikipedia) to billions (e.g., the Web), and the input length of a QA model is limited, the QA model can only observe a part of the corpus. For these two reasons, we model open-domain QA as a partially observed Markov decision process.

In the POMDP we designed, as shown in Figure 2, the agent is the QA model, which needs to issue actions to seek evidence from the large-scale corpus hidden in the environment and finally respond to the question. By executing the received action, the environment returns a retrieved passage to the agent as an observation of the corpus. Formally, the POMDP is defined by $(S, \mathcal{A}, \mathcal{O}, \Omega, Z, R)$, where $R$ is the reward function.

Actions: At timestep $t = 0,1,\dots ,T$, the action $a_{t}$ in the action space $\mathcal{A} = \mathcal{F}\times \mathcal{U}$ is a request for an executable function $f\in \mathcal{F}$, expressed as $\langle f,u\rangle$, where $u\in \mathcal{U}$ is the text argument that gets passed to $f$.
Therefore, the agent consists of two modules: belief module $\Phi$ that generates the belief state $b_t = \Phi(h_t)$ from the experience $h_t$ , and policy module $\Pi$ that prescribes the action $a_t = \Pi(b_t)$ to take for current belief state $b_t$ . + +Both belief and policy modules are constructed based on pretrained Transformer encoders (Clark et al., 2020), respectively denoted as $\Psi^{belief}$ and $\Psi^{policy}$ , which encode each inputted token into a $d$ -dimensional contextual representation. The input of both encoders is a belief state, formatted as "[CLS] [YES] [NO] [NONE] question [SEP] title $_o$ [SOP] content $_o$ [SEP] title $_1$ [SOP] ... content $_{|E|$ [SEP]", where the subscript $_o$ denotes the observation passage, and the others passages come from the collected evidence set $E$ , [SOP] is a special token to separate the title and con + +tent of a passage, [YES] and [NO] are used to indicate yes/no answer, and [NONE] is generally used to indicate that there is no desired answer/query/evidence. In this way, the self-attention mechanism across the concatenated sequence allows each passage in the input to interact with others, which has been shown crucial for multi-hop reasoning (Wang et al., 2019a). + +# 3.2.1 Belief Module + +The belief module $\Phi$ transforms the agent's experience $h_t$ into a belief state $b_t$ by maintaining a set of evidence $E_{t-1}$ . At the end of the process, the evidence set $E$ is expected to contain sufficient evidence necessary to answer the question and no irrelevant passage. In the iterative process, the agent believes that all the passages in $E$ may help answer the question. In other words, those passages that were observed but excluded from the evidence set, i.e., $o_{1:t-1} \setminus E_{t-1}$ , are believed to be irrelevant to the question. 
+ +For simplicity, assuming that the negative passages $o_{1:t-1} \setminus E_{t-1}$ and action history $a_{ \phi \left(p _ {0} \mid b _ {t}\right), p _ {i} \in C _ {t} \right\}. \tag {2} +$$ + +It is worth noting that these evidence candidates are scored jointly since encoded together in the same input, different from conventional rerankers that score separately. + +# 3.2.2 Policy Module + +The policy module $\Pi$ decides the next action $a_{t}$ to be taken based on the current belief state $b_{t}$ . In this paper, we equipped the agent with three retrieval functions and one answer function, which means that the action space $\mathcal{A}$ consists of three types of retrieval actions and one type of answer actions. However, unlike the finite space of executable functions $\mathcal{F}$ , the space of function arguments $\mathcal{U}$ includes all possible natural-language queries and answers. To narrow the search space, for each executable function, we employ a suggester to propose a plausible query or answer as the argument passed to the function. Finally, we apply an action scoring function in the narrowed action space and select the action with the highest score. + +Equipped Functions Formally, the space of executable functions is defined as $\mathcal{F} = \{f_s, f_d, f_l, f_o\}$ . + +Among them, except $f_{o}$ is the answer function used to reply to the question, the rest are three distinct off-the-shelf retrieval functions (RF) used to explore the corpus. $f_{s}$ is a sparse RF, implemented as BM25 (Robertson and Zaragoza, 2009). It performs well when the query is concise and contains highly selective keywords but often fails to capture the semantics of the query. $f_{d}$ is a dense RF, implemented as MDR (Xiong et al., 2021) for multi-hop questions, and DPR (Karpukhin et al., 2020) for single-hop questions. Dense RFs can capture lexical variations and semantic relationships, but they struggle when encountering out-of-vocabulary words. 
$f_{l}$ is a link RF, implemented as hyperlink. When hyperlink markings are available in a source passage, it can readily map a query (i.e., anchor text) to the target passage. + +Argument Generation The space of function arguments $\mathcal{U}$ , composed of textual queries and answers, is too large to perform an exhaustive search due to the complexity of natural language. To reduce the search complexity, inspired by Yao et al. (2020), we employ four argument generators to generate the most plausible query/answer for the equipped functions. + +$g_{o}$ is a trainable reading comprehension model for $f_{o}$ . It is a span extractor built upon the contextual representations outputted by the encoder $\Psi^{policy}$ . Like conventional extractive reading comprehension models (Yang et al., 2018; Clark et al., 2020), $g_{o}$ uses the contextual representations to + +calculate the start and end positions of the most plausible answer $u_{o}$ . If the current context $C_t$ is insufficient to answer the question, the special token [NONE] will be extracted. + +$g_{s}$ is a query reformulation model for $f_{s}$ . In this work, we directly employ the well-trained query reformulator from Qi et al. (2019) for multi-hop questions, which takes the belief state $b_{t}$ as input and outputs a span of the input sequence as the sparse query $u_{s}$ . As for single-hop questions, since there exists no off-the-shelf multi-step query reformulator, we leave $g_{s}$ as an identity function that returns the original question directly. In this case, requesting the same RF multiple times is equivalent to traverse the retrieval list of original question. + +$g_{d}$ is a query reformulator for $f_{d}$ . For multi-hop questions, $g_{d}$ concatenates the question $q$ and the passage with the highest score in evidence set $E_{t}$ as the dense query $u_{d}$ , the same as the input of MDR (Xiong et al., 2021). If $E_{t}$ is empty, $u_{d}$ is equal to the question $q$ . 
Similar to $g_{s}$, $g_{d}$ for single-hop questions also leaves the original question unchanged.

$g_{l}$ is a trainable multi-class classifier for $f_{l}$. It selects the most promising anchor text from the belief state $b_{t}$. To enable rejecting all anchors, [NONE] is also treated as a candidate anchor. $g_{l}$ shares the encoder $\Psi^{policy}$, where each anchor is represented by the average of the contextual representations of its tokens. Upon $\Psi^{policy}$, we use a linear layer to project the hidden representations of candidate anchors to real values and select the anchor with the highest value as the link query $u_{l}$.

In this way, the action space is narrowed down to $\check{A} = \{\langle f_s,u_s\rangle ,\langle f_d,u_d\rangle ,\langle f_l,u_l\rangle ,\langle f_o,u_o\rangle \}$.

Action Selection The action scoring function $\pi$ is also built upon the output of $\Psi^{policy}$. To score an action $\langle f, u \rangle$ for the current belief state $b_{t}$, an additional two-layer $(3d \times 4d \times 1)$ MLP, with a ReLU activation in between, projects the concatenated representation of $b_{t}$, the executable function $f$, and the function argument $u$, i.e., $\mathbf{v}_{[\mathrm{CLS}]}$, $\mathbf{w}_{f}$, and $\mathbf{v}_{u}$, into a real value. $\mathbf{w}_{f} \in \mathbb{R}^{d}$ is a trainable embedding for each executable function, of the same dimension as the token embedding. $\mathbf{v}_{u}$ is specific to each function. Since $u_{s}$, $u_{l}$, and $u_{o}$ are explicit text spans in $b_{t}$, their $\mathbf{v}_{u}$ are the averages of their token representations. As for $u_{d}$, if $g_{d}$ does not expand the original question, $\mathbf{v}_{u_{d}}$ is the contextual representation of [NONE]. Otherwise, $\mathbf{v}_{u_{d}}$ is the representation of the [SOP] token of the passage concatenated to the question.
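A minimal sketch of this scoring-and-argmax step over the four candidate pairs (the toy `mlp` callable and all names are ours; the paper's scorer is a trained $3d \times 4d \times 1$ MLP):

```python
import numpy as np

def select_action(belief_vec, func_embs, arg_vecs, mlp):
    """Score each <function, argument> pair in the narrowed action
    space and return the index of the highest-scoring action.

    belief_vec ~ v_[CLS], func_embs ~ w_f, arg_vecs ~ v_u; `mlp` maps
    the concatenated vector to a scalar score. Illustrative only.
    """
    scores = [mlp(np.concatenate([belief_vec, w_f, v_u]))
              for w_f, v_u in zip(func_embs, arg_vecs)]
    return int(np.argmax(scores))
```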
In short, the next action is selected from the narrowed action space $\check{A}$ by the scoring function $\pi$,

$$
a _ {t} = \Pi (b _ {t}) = \underset {a \in \check {A}} {\arg \max } \pi (a \mid b _ {t}). \tag {3}
$$

# 3.3 Training

In the agent, in addition to the encoders $\Psi^{belief}$ and $\Psi^{policy}$, we need to train the evidence scoring function $\phi$, the link classifier $g_{l}$, the answer extractor $g_{o}$, and the action scoring function $\pi$, whose losses are $L_{\phi}, L_{l}, L_{o}$, and $L_{\pi}$. Since the policy module depends on the belief module, we train the agent jointly using the following loss function,

$$
L = L _ {\phi} + L _ {l} + L _ {o} + L _ {\pi}. \tag {4}
$$

Unlike $\phi$, $g_{l}$, and $g_{o}$, which can be trained with supervised learning from human annotations in QA datasets, the supervision signal for $\pi$ is hard to derive directly from QA datasets. Even though policies are usually trained via reinforcement learning, reinforcement learning algorithms (Sutton et al., 2000; Mnih et al., 2015) are often sensitive to the quality of reward functions. For a complex task, the reward function $R$ is often hard to specify and laborious to tune. Inspired by Choudhury et al. (2017), we explore the use of imitation learning (IL) by querying a model-based oracle online and imitating the action $a^{\star}$ chosen by the oracle, which avoids the hassle of designing $R$ and solves the POMDP in the fashion of supervised learning. Thus, the loss of $\pi$ is defined as the cross-entropy,

$$
L _ {\pi} = - \log \frac {e ^ {\pi (a ^ {\star} | b)}}{\sum_ {a \in \check {A}} e ^ {\pi (a | b)}}, \tag {5}
$$

where $b$ is the belief state of the agent.

The link classifier $g_{l}$ and the answer extractor $g_{o}$ are also optimized with multi-class cross-entropy losses.
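A plain-Python sketch of the imitation loss in Eq. (5), using a numerically stabilized log-sum-exp (names are ours, not from the released code):

```python
import math

def policy_loss(action_scores, oracle_index):
    """Cross-entropy imitation loss of Eq. (5): the negative
    log-probability of the oracle's action a* under a softmax over
    the scores of the narrowed action space."""
    m = max(action_scores)  # shift for numerical stability
    log_z = m + math.log(sum(math.exp(s - m) for s in action_scores))
    return log_z - action_scores[oracle_index]
```

With two equally scored actions, the loss is $\log 2$ regardless of which action the oracle picked.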
For $g_{l}$ , denoting its loss as $L_{l}$ , the classification label is set to the anchor text that links to a gold supporting passage, if there is no such anchor, then the pseudo hyperlink [NONE] is labeled. $g_{o}$ is trained as a classifier of start and end position following previous work (Clark et al., 2020), denoting its loss as $L_{o}$ . Considering the belief state $b = \langle q, \{p_{1}, p_{2}, \dots, p_{|C|}\} \rangle$ , the ListMLE (Xia et al., 2008) ranking loss of the evidence scoring function $\phi$ is defined as the negative log likelihood of the ground truth permutation, + +$$ +L _ {\phi} (\boldsymbol {y}, b) = - \log P \left(\tau_ {\boldsymbol {y}} \mid \left\{\phi \left(p _ {i} | b\right) \right\} _ {i = 0} ^ {| C |}\right), \tag {6} +$$ + +where $\pmb{y}$ is the relevance label of $\{p_0,p_1,\dots ,p_{|C|}\}$ and $\tau_{\pmb{y}}$ is their ground truth permutation. To learn the dynamic threshold $\phi (p_0|b)$ , we set the relevance label of the pseudo passage $p_0$ to $\pmb{y}_0 = 0.5$ . And passages in $C$ are labeled as $1 / 0$ according to whether they are gold supporting passages. + +Model-based Oracle The model-based oracle has full access to the environment and can foresee the gold evidence and answer of every question, which means that the oracle can infer the rank of a supporting passage in the retrieval list of any retrieval action. Thus, given a state, the oracle can easily select a near-optimal one from candidate actions according to a greedy policy $\pi^{\star}$ . Specifically, if all gold evidence is collected and the argument of an answer action is a correct answer, the oracle will select the answer action. Otherwise, the oracle will use a greedy algorithm to select the retrieval action that helps to gather a missing passage of evidence in the fewest steps. + +Belief States Sampling We train the agent on sampled belief states instead of long trajectories. In every epoch, one belief state is sampled for each question. 
To sample a belief state $\langle q,C\rangle$, we first uniformly sample a subset of $q$'s gold evidence as $C$, which may be empty. However, at test time the candidate evidence set $C$ never contains only gold evidence. To alleviate the mismatch of the state distribution between training and testing, we inject a few negative passages into $C$ and shuffle them. We treat the first passage in the candidate set as the observation, and the others as evidence collected before.

The distribution of injected negative passages can affect test performance. In this work, to keep it simple, we sample 0-2 passages from all top-ranked negative passages in the retrieval lists of $f_{s}$, $f_{d}$, and $f_{l}$.

# 4 Experiments

We evaluate AISO and baselines on two Wikipedia-sourced benchmarks. We first introduce the experimental setups, then describe the experimental results on evidence gathering and question answering. Furthermore, detailed analyses are discussed.

# 4.1 Experimental Setup

Data HotpotQA (Yang et al., 2018) is a multi-hop QA benchmark. We focus on its fullwiki (open-domain) setting$^1$. It requires gathering two supporting passages (paragraphs) to answer a question, given the introductory (first) paragraphs of 5M Wikipedia articles dumped on October 1, 2017.

SQuAD Open (Chen et al., 2017) is a single-hop QA benchmark, whose questions are from the SQuAD dataset (Rajpurkar et al., 2016) and can be answered based on a single passage. We preprocess the Wikipedia dump of December 21, 2016 and extract hyperlinks using WikiExtractor$^2$. Following Karpukhin et al. (2020), we split articles into disjoint passages, resulting in 20M passages in total. We add two extra hyperlinks to each passage, one linking to its previous passage in the article, the other to the next passage.
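The two extra sequential hyperlinks added during preprocessing can be sketched as (illustrative field names, not the actual preprocessing code):

```python
def add_sequential_links(passages):
    """Attach prev/next hyperlinks to each passage of an article, as
    described for the SQuAD Open corpus preprocessing (sketch)."""
    for i, p in enumerate(passages):
        p["links"] = []
        if i > 0:
            p["links"].append(passages[i - 1]["id"])   # previous passage
        if i + 1 < len(passages):
            p["links"].append(passages[i + 1]["id"])   # next passage
    return passages
```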
Metrics To test whether the top-2 passages in the evidence set exactly cover both gold supporting passages, we use Supporting Passage Exact Match (P EM) as the evaluation metric, following Asai et al. (2020). To test the performance of answer extraction, we use EM and F1 as our metrics, following Yang et al. (2018).

Implementation Details For sparse retrieval, we index all passages in the corpus with Elasticsearch and implement BM25 following Qi et al. $(2019)^{3}$. For dense retrieval, we leverage the trained passage encoder and query encoder from Karpukhin et al. $(2020)^{4}$ and Xiong et al. $(2021)^{5}$ and index all passage vectors using FAISS (Johnson et al., 2019) offline. During training, we use the HNSW-based index for efficient low-latency retrieval; at test time, we use the exact inner-product search index for better retrieval results. For link retrieval, we use the filtered hyperlinks, whose targets must be other articles in this dump.

Based on Huggingface Transformers (Wolf et al., 2020), we use ELECTRA (Clark et al., 2020) $(d = 768 / 1024$ for base/large) $^{6}$ as the initialization for our encoders $\Psi^{belief}$ and $\Psi^{policy}$. The maximum number of passages input into the encoders is set to 3 and the length of input tokens is limited to
| Strategy | Method | P EM | # read |
| --- | --- | --- | --- |
| $f_s$ | BM25 | 11.11 | 2 |
| | BM25 + Reranker | 29.60 | 20 |
| $f_d$ | DPR (Karpukhin et al., 2020) | 14.18 | 2 |
| $f_s \circ f_l$ | Semantic Retrieval*◇ | 69.35 | 39.4 |
| | Entity Centric IR*○ | 34.90 | - |
| $f_s \circ f_s$ | GoldEn Retriever | 47.77 | 10 |
| $f_d \circ f_d$ | MDR (Xiong et al., 2021) | 64.52 | 2 |
| | MDR + Reranker†* | 81.20 | ≥200 |
| | Baleen†* (Khattab et al., 2021) | 86.70 | - |
| $f_s^n$ | CogQA* (Ding et al., 2019) | 57.80 | - |
| | DDRQA†* (Zhang et al., 2020) | 79.80 | - |
| | IRRR†* (Qi et al., 2020) | 84.10 | ≥150 |
| $f_s \circ f_l^{n-1}$ | GRR†* (Asai et al., 2020) | 75.70 | ≥500 |
| | HopRetriever†* (Li et al., 2021) | 82.54 | ≥500 |
| | HopRetriever-plus†* | 86.94 | >500 |
| | TPRR†* (Xinyu et al., 2021) | 86.19 | ≥500 |
| $(f_s \Vert f_d)^n$ | DrKit* (Dhingra et al., 2020) | 38.30 | - |
| $(f_s \mid f_d \mid f_l)^n_{\Pi}$ | AISO$_{\mathrm{base}}$ | 85.69 | 36.7 |
| | AISO$_{\mathrm{large}}$ | 88.17 | 35.7 |
Table 1: Evidence gathering performance and reading cost on the HotpotQA fullwiki development set. The symbol $\dagger$ denotes that the baseline method uses the large version of pretrained language models, comparable to our $\mathrm{AISO}_{\mathrm{large}}$. The results with $*$ are from published papers; the others are our implementations. The symbol $\circ$ denotes sequentially applying RFs, $f^n$ denotes applying the RF $f$ multiple times, $\Vert$ denotes combining the results of different RFs, and $(\cdot|\cdot)_{\Pi}$ means choosing one of the RFs according to the policy $\Pi$. $\diamond$: (Nie et al., 2019), $\heartsuit$: (Qi et al., 2019), $\clubsuit$: (Qi et al., 2019)

512. To avoid high-confidence passages being truncated, we input the evidence passages in descending order of their belief scores from the previous step.

To accelerate model training, for the first 24 epochs $\Psi^{belief}$ and $\Psi^{policy}$ share parameters; for the next 6 epochs, they are trained separately. The batch size is 32. We use Adam optimization with learning rate $2 \times 10^{-5}$. To select the best agent (QA model), we first save several checkpoints that perform well on heuristic single-step metrics, such as action accuracy. Then we choose the one that performs best over the whole process on the development set. At test time, the number of interaction steps is limited to $T$. We set the maximum number of steps to $T = 1000$ if not specified. Once the agent has exhausted its step budget, it is forced to answer the question.

# 4.2 Results

Evidence Gathering We first evaluate the performance and reading cost of evidence gathering, illustrating the effectiveness and efficiency of AISO. In Table 1, we split evidence gathering methods into different groups according to their
| Method | Dev Ans EM/F1 | Dev Sup EM/F1 | Dev Joint EM/F1 | Test Ans EM/F1 | Test Sup EM/F1 | Test Joint EM/F1 |
| --- | --- | --- | --- | --- | --- | --- |
| Semantic Retrieval (Nie et al., 2019) | 46.5 / 58.8 | 39.9 / 71.5 | 26.6 / 49.2 | 45.3 / 57.3 | 38.7 / 70.8 | 25.1 / 47.6 |
| GoldEn Retriever (Qi et al., 2019) | - | - | - | 37.9 / 49.8 | 30.7 / 64.6 | 18.0 / 39.1 |
| CogQA (Ding et al., 2019) | 37.6 / 49.4 | 23.1 / 58.5 | 12.2 / 35.3 | 37.1 / 48.9 | 22.8 / 57.7 | 12.4 / 34.9 |
| DDRQA† (Zhang et al., 2020) | 62.9 / 76.9 | 51.3 / 79.1 | - | 62.5 / 75.9 | 51.0 / 78.9 | 36.0 / 63.9 |
| IRRR+†* (Qi et al., 2020) | - | - | - | 66.3 / 79.9 | 57.2 / 82.6 | 43.1 / 69.8 |
| MUPPET (Feldman and El-Yaniv, 2019) | 31.1 / 40.4 | 17.0 / 47.7 | 11.8 / 27.6 | 30.6 / 40.3 | 16.7 / 47.3 | 10.9 / 27.0 |
| MDR† (Xiong et al., 2021) | 62.3 / 75.1 | 56.5 / 79.4 | 42.1 / 66.3 | 62.3 / 75.3 | 57.5 / 80.9 | 41.8 / 66.6 |
| GRR† (Asai et al., 2020) | 60.5 / 73.3 | 49.2 / 76.1 | 35.8 / 61.4 | 60.0 / 73.0 | 49.1 / 76.4 | 35.4 / 61.2 |
| HopRetriever† (Li et al., 2021) | 62.2 / 75.2 | 52.5 / 78.9 | 37.8 / 64.5 | 60.8 / 73.9 | 53.1 / 79.3 | 38.0 / 63.9 |
| HopRetriever-plus† (Li et al., 2021) | 66.6 / 79.2 | 56.0 / 81.8 | 42.0 / 69.0 | 64.8 / 77.8 | 56.1 / 81.8 | 41.0 / 67.8 |
| EBS-Large* | - | - | - | 66.2 / 79.3 | 57.3 / 84.0 | 42.0 / 70.0 |
| TPRR†* (Xinyu et al., 2021) | 67.3 / 80.1 | 60.2 / 84.5 | 45.3 / 71.4 | 67.0 / 79.5 | 59.4 / 84.3 | 44.4 / 70.8 |
| AISO$_{\mathrm{base}}$ | 63.5 / 76.5 | 55.1 / 81.9 | 40.2 / 66.9 | - | - | - |
| AISO$_{\mathrm{large}}$ | 68.1 / 80.9 | 61.5 / 86.5 | 45.9 / 72.5 | 67.5 / 80.5 | 61.2 / 86.0 | 44.9 / 72.0 |
Table 2: Answer extraction and supporting sentence identification performance on HotpotQA fullwiki. The methods with $\dagger$ use the large version of pretrained language models comparable to $\mathrm{AISO}_{\mathrm{large}}$. The results marked with $*$ are from the official leaderboard; the others originate from published papers.
| Method | EM | F1 | # read |
| --- | --- | --- | --- |
| DrQA (Chen et al., 2017) | 27.1 | - | 5 |
| Multi-passage BERT (Wang et al., 2019b) | 53.0 | 60.9 | 100 |
| DPR (Karpukhin et al., 2020) | 29.8 | - | 100 |
| BM25+DPR (Karpukhin et al., 2020) | 36.7 | - | 100 |
| Multi-step Reasoner (Das et al., 2019a) | 31.9 | 39.2 | 5 |
| MUPPET (Feldman and El-Yaniv, 2019) | 39.3 | 46.2 | 45 |
| GRR† (Asai et al., 2020) | 56.5 | 63.8 | ≥ 500 |
| SPARTA† (Zhao et al., 2021) | 59.3 | 66.5 | - |
| IRRR† (Qi et al., 2020) | 56.8 | 63.2 | ≥ 150 |
| AISO$_{\mathrm{large}}$ | 59.5 | 67.6 | 24.8 |
Table 3: Question answering performance on the SQuAD Open benchmark. † denotes methods that use large pretrained language models comparable to $\mathrm{AISO}_{\mathrm{large}}$.

strategies. Moreover, the first three groups are the traditional pipeline approaches, and the others are iterative approaches.

For effectiveness, we can conclude that 1) almost all the iterative approaches perform better than the pipeline methods, and 2) the proposed adaptive information-seeking approach $\mathrm{AISO}_{\mathrm{large}}$ outperforms all previous methods and achieves state-of-the-art performance. Moreover, our $\mathrm{AISO}_{\mathrm{base}}$ model outperforms some baselines that use the large version of pretrained language models, such as HopRetriever, GRR, IRRR, DDRQA, and MDR.

For efficiency, the cost of answering an open-domain question includes the retrieval cost and the reading cost. Since the cost of reading a passage along with the question online is much greater than the cost of a search, the total cost is linear in # read, reported in the last column of Table 1. # read means the total number of passages read along with the question throughout the process, which equals the adaptive number of steps. We find that the number of passages read by AISO is about 35, far smaller than that of the competitive baselines (P EM $> 80$), which need to read at least 150 passages. That is to say, our AISO model is efficient in practice.

Question Answering Benefiting from high-performance evidence gathering, as shown in Tables 2 and 3, AISO outperforms all existing methods across the evaluation metrics on the HotpotQA fullwiki and SQuAD Open benchmarks. This demonstrates that AISO is applicable to both multi-hop and single-hop questions. Notably, on the HotpotQA fullwiki blind test set$^7$, $\mathrm{AISO}_{\mathrm{large}}$ significantly outperforms the second-place TPRR (Xinyu et al., 2021) by $2.02\%$ in Sup F1 (supporting sentence identification) and $1.69\%$ in Joint F1.
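For reference, simplified versions of the two evaluation metrics (P EM and token-level answer F1) might look like the following; note that the official evaluation scripts also normalize case and punctuation, which is omitted here:

```python
from collections import Counter

def passage_em(top2, gold_pair):
    """Supporting Passage EM: the top-2 evidence passages exactly
    cover both gold supporting passages (order-insensitive)."""
    return float(set(top2) == set(gold_pair))

def answer_f1(prediction, gold):
    """Token-level answer F1 in the style of the HotpotQA evaluation."""
    pred, ref = prediction.split(), gold.split()
    common = Counter(pred) & Counter(ref)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)
```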
# 4.3 Analysis

We conduct a detailed analysis of $\mathrm{AISO}_{\mathrm{base}}$ on the HotpotQA fullwiki development set.

The effect of the belief and policy module As shown in the second part of Table 4, we examine variants of AISO with the oracle evidence scoring function $\phi^{\star}$ or the oracle action scoring function $\pi^{\star}$, which are key components of the belief
| Model | P EM | Ans F1 | # read |
| --- | --- | --- | --- |
| AISO$_{\mathrm{base}}$ | 85.69 | 76.45 | 36.64 |
| w. $\phi^{\star}$ | 97.52 | 79.99 | 40.01 |
| w. $\phi^{\star} + \pi^{\star}$ | 98.88 | 80.34 | 8.92 |
| $f_s^t$ | 68.51 | 67.33 | 58.74 |
| $f_d^t$ | 79.80 | 72.91 | 68.63 |
| $(f_d \mid f_l)_{\Pi}^n$ | 83.97 | 74.93 | 61.41 |
| $(f_s \mid f_l)_{\Pi}^n$ | 82.44 | 74.44 | 37.76 |
| $(f_s \mid f_d)_{\Pi}^n$ | 79.66 | 73.36 | 42.01 |
Table 4: Analysis experiments on HotpotQA fullwiki.

and policy module. When we replace our learned evidence scoring function with $\phi^{\star}$, which identifies supporting passages perfectly, the performance increases a lot while the reading cost does not change much. This means that the belief module has a greater impact on performance than on cost. If we further replace the learned $\pi$ with $\pi^{\star}$, the cost decreases a lot. This shows that a good policy can greatly improve efficiency.

The impact of retrieval functions As shown in the last part of Table 4, using a single RF, such as $f_{s}^{t}$ or $f_{d}^{t}$, leads to poor performance and low efficiency. Moreover, removing any RF degrades performance, which illustrates that all RFs contribute to performance. Specifically, although the link RF $f_{l}$ cannot be used alone, it contributes the most to performance and efficiency. Besides, the sparse RF $f_{s}$ may be better at shortening the information-seeking process than the dense RF $f_{d}$, since removing $f_{s}$ from the action space increases the number of read passages from 36.64 to 61.41. We conjecture this is because $f_{s}$ can rank the evidence that matches the salient query very high.

The impact of the maximum number of steps As shown in Figure 3, as the step limit $T$ is relaxed, $\mathrm{AISO}_{\mathrm{base}}$ can filter out negative passages and eventually observe low-ranked evidence through more steps, so its performance improves and tends to converge. However, the cost is more paragraphs to read. Besides, once $T$ exceeds 1000, only a few questions (about $1\%$) can benefit from the subsequent steps.

The ability to recover from mistakes We count three types of mistakes in gathering evidence on the HotpotQA development set.
In the process of collecting evidence for 7405 questions, false evidence was added to the evidence set for 1061 questions, true evidence was missed for 449 questions, and

![](images/350f7921757e7a2541fc68291b08d7674b7424ab7b4dfa2bcb4107e917359f13.jpg)
Figure 3: Performance and cost of $\mathrm{AISO_{base}}$ on the HotpotQA development set with different step limits.

true evidence was deleted from the evidence set for 131 questions. We find that AISO recovered from $17.7\%$, $43.9\%$, and $35.9\%$ of these three types of errors respectively, which implies that even without beam search, $\mathrm{AISO}_{\mathrm{base}}$ can make up for previous mistakes to some extent. Besides, we can see that false evidence is the most harmful to evidence gathering and the most difficult to remedy.

# 5 Conclusion and Future Work

This work presents an adaptive information-seeking approach for open-domain question answering, called AISO. It models the open-domain QA task as a POMDP, where the environment contains a large corpus and the agent sequentially selects retrieval functions and reformulates queries to collect the evidence. AISO achieves state-of-the-art results on two public datasets, which demonstrates the necessity of different retrieval functions for different questions. In the future, we will explore other adaptive retrieval strategies, such as directly optimizing various information-seeking metrics with reinforcement learning techniques.

# Ethical Considerations

We honor and support the ACL Code of Ethics. This paper focuses on information seeking and question answering, which aims to answer questions in the open-domain setting. It can be widely used in search engines and QA systems, and can help people find information more accurately and efficiently. The datasets we used in this paper are all from previously published works and do not involve privacy or ethical issues.
+ +# Acknowledgements + +This work was supported by National Natural Science Foundation of China (NSFC) under Grants No. 61906180, No. 61773362 and No. 91746301, National Key R&D Program of China under Grants 2020AAA0105200. The authors would like to thank Changying Hao for valuable suggestions on this work. + +# References + +Akari Asai, Kazuma Hashimoto, Hannaneh Hajishirzi, Richard Socher, and Caiming Xiong. 2020. Learning to retrieve reasoning paths over wikipedia graph for question answering. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. +Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open-domain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870-1879, Vancouver, Canada. Association for Computational Linguistics. +Sanjiban Choudhury, Ashish Kapoor, Gireeja Ranade, Sebastian A. Scherer, and Debadeepta Dey. 2017. Adaptive information gathering via imitation learning. In Robotics: Science and Systems 2017, volume 13. +Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: pretraining text encoders as discriminators rather than generators. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. +Rajarshi Das, Shehzaad Dhuliawala, Manzil Zaheer, and Andrew McCallum. 2019a. Multi-step retriever-reader interaction for scalable open-domain question answering. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net. +Rajarshi Das, Ameya Godbole, Dilip Kavarthapu, Zhiyu Gong, Abhishek Singhal, Mo Yu, Xiaoxiao Guo, Tian Gao, Hamed Zamani, Manzil Zaheer, and Andrew McCallum. 2019b. Multi-step entity-centric information retrieval for multi-hop question answering. 
In Proceedings of the 2nd Workshop on Machine Reading for Question Answering, pages 113-118, Hong Kong, China. Association for Computational Linguistics.
Bhuwan Dhingra, Manzil Zaheer, Vidhisha Balachandran, Graham Neubig, Ruslan Salakhutdinov, and William W. Cohen. 2020. Differentiable reasoning over a virtual knowledge base. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
Ming Ding, Chang Zhou, Qibin Chen, Hongxia Yang, and Jie Tang. 2019. Cognitive graph for multi-hop reading comprehension at scale. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2694-2703, Florence, Italy. Association for Computational Linguistics.
Yair Feldman and Ran El-Yaniv. 2019. Multi-hop paragraph retrieval for open-domain question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2296-2309, Florence, Italy. Association for Computational Linguistics.
Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. Retrieval augmented language model pre-training. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pages 3929-3938. PMLR.
Gautier Izacard and Edouard Grave. 2021. Leveraging passage retrieval with generative models for open domain question answering. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 874-880, Online. Association for Computational Linguistics.
Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2019. Billion-scale similarity search with GPUs. IEEE Transactions on Big Data.
Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering.
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769-6781, Online. Association for Computational Linguistics. +Omar Khattab, Christopher Potts, and Matei Zaharia. 2021. Baleen: Robust multi-hop reasoning at scale via condensed retrieval. arXiv preprint arXiv:2101.00436. +Jinhyuk Lee, Seongjun Yun, Hyunjae Kim, Miyoung Ko, and Jaewoo Kang. 2018. Ranking paragraphs for improving answer recall in open-domain question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 565-569, Brussels, Belgium. Association for Computational Linguistics. +Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6086–6096, Florence, Italy. Association for Computational Linguistics. +Shaobo Li, Xiaoguang Li, Lifeng Shang, Xin Jiang, Qun Liu, Chengjie Sun, Zhenzhou Ji, and Bingquan Liu. 2021. Hopretriever: Retrieve hops over wikipedia + +to answer complex questions. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 13279-13287. +Yankai Lin, Haozhe Ji, Zhiyuan Liu, and Maosong Sun. 2018. Denoising distantly supervised open-domain question answering. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1736-1745, Melbourne, Australia. Association for Computational Linguistics. +Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. 2015. Human-level control through deep reinforcement learning. nature, 518(7540):529-533. +Yixin Nie, Songhe Wang, and Mohit Bansal. 2019. Revealing the importance of semantic retrieval for machine reading at scale. 
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2553-2566, Hong Kong, China. Association for Computational Linguistics. +Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Lixin Su, and Xueqi Cheng. 2019. Has-qa: Hierarchical answer spans model for open-domain question answering. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6875-6882. +Peng Qi, Haejun Lee, Oghenetegiri Sido, Christopher D Manning, et al. 2020. Retrieve, rerank, read, then iterate: Answering open-domain questions of arbitrary complexity from text. arXiv preprint arXiv:2010.12527. +Peng Qi, Xiaowen Lin, Leo Mehr, Zijian Wang, and Christopher D. Manning. 2019. Answering complex open-domain questions through iterative query generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2590-2602, Hong Kong, China. Association for Computational Linguistics. +Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383-2392, Austin, Texas. Association for Computational Linguistics. +Stephen Robertson and Hugo Zaragoza. 2009. The probabilistic relevance framework: Bm25 and beyond. Found. Trends Inf. Retr., 3(4):333-389. +Richard S Sutton, David A McAllester, Satinder P Singh, and Yishay Mansour. 2000. Policy gradient methods for reinforcement learning with function approximation. In Advances in neural information processing systems, pages 1057-1063. + +Ellen M Voorhees et al. 1999. The trec-8 question answering track report. In Trec, volume 99, pages 77-82. Citeseer. 
+Haoyu Wang, Mo Yu, Xiaoxiao Guo, Rajarshi Das, Wenhan Xiong, and Tian Gao. 2019a. Do multi-hop readers dream of reasoning chains? In Proceedings of the 2nd Workshop on Machine Reading for Question Answering, pages 91-97, Hong Kong, China. Association for Computational Linguistics. +Shuohang Wang, Mo Yu, Xiaoxiao Guo, Zhiguo Wang, Tim Klinger, Wei Zhang, Shiyu Chang, Gerry Tesauro, Bowen Zhou, and Jing Jiang. 2018. R 3: Reinforced ranker-reader for open-domain question answering. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32. +Zhiguo Wang, Patrick Ng, Xiaofei Ma, Ramesh Nallapati, and Bing Xiang. 2019b. Multi-passage BERT: A globally normalized BERT model for open-domain question answering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5878-5882, Hong Kong, China. Association for Computational Linguistics. +Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics. +Fen Xia, Tie-Yan Liu, Jue Wang, Wensheng Zhang, and Hang Li. 2008. Listwise approach to learning to rank: theory and algorithm. In Machine Learning, Proceedings of the Twenty-Fifth International Conference (ICML 2008), Helsinki, Finland, June 5-9, 2008, volume 307 of ACM International Conference Proceeding Series, pages 1192-1199. ACM. 
# Adaptive Proposal Generation Network for Temporal Sentence Localization in Videos

Daizong Liu $^{1,2*}$ , Xiaoye Qu $^{3*}$ , Jianfeng Dong $^{4}$ , Pan Zhou $^{1\dagger}$

$^{1}$ The Hubei Engineering Research Center on Big Data Security, School of Cyber Science and Engineering,
Huazhong University of Science and Technology

$^{2}$ School of Electronic Information and Communication, Huazhong University of Science and Technology

$^{3}$ Huawei Cloud  $^{4}$ Zhejiang Gongshang University

{dzliu,panzhou}@hust.edu.cn, quxiaoye@huawei.com, dongjf24@gmail.com

# Abstract

We address the problem of temporal sentence localization in videos (TSLV). Traditional methods follow a top-down framework which localizes the target segment with pre-defined segment proposals. Although they have achieved decent performance, the proposals are handcrafted and redundant. Recently, the bottom-up framework has attracted increasing attention due to its superior efficiency: it directly predicts, for each frame, the probability of being a segment boundary. However, the performance of bottom-up models is inferior to their top-down counterparts, as they fail to exploit the segment-level interaction. In this paper, we propose an Adaptive Proposal Generation Network (APGN) to maintain the segment-level interaction while improving efficiency. Specifically, we first perform a foreground-background classification upon the video and regress on the foreground frames to adaptively generate proposals. In this way, the handcrafted proposal design is discarded and the number of redundant proposals is decreased. Then, a proposal consolidation module is further developed to enhance the semantics of the generated proposals. Finally, we locate the target moments with these generated proposals following the top-down framework. Extensive experiments on three challenging benchmarks show that our proposed APGN significantly outperforms previous state-of-the-art methods.

# 1 Introduction

Temporal sentence localization in videos is an important yet challenging task in natural language processing, which has drawn increasing attention over the last few years due to its vast potential applications in information retrieval (Dong et al., 2019; Yang et al., 2020) and human-computer interaction (Singha et al., 2018).
It aims to ground the most relevant video segment according to a given sentence query. As shown in Figure 1 (a), most of the video content is irrelevant to the query (background) while only a short segment matches it (foreground). Therefore, video and query information need to be deeply incorporated to distinguish the fine-grained details of different video segments.

![](images/06b2fadc2a176776cbd51b58d95132b614c8a682524feda4e82a02e1886c7a01.jpg)

![](images/910145ee7cf6d08baf0b59154a48a3075f87aae8ba7e8928a20c658e404b413d.jpg)

![](images/ed7dc48a2d260a91b82ca69712feb30a2df5eb2a0d0793eed63f669d43313274.jpg)
Figure 1: (a) An example of temporal sentence localization in videos. (b) The Top-Down framework predicts the confidence scores of a large number of pre-defined proposals for ranking. (c) The Bottom-Up framework regresses the probabilities of all frames as start or end boundaries.

Most previous works (Gao et al., 2017; Chen et al., 2018; Zhang et al., 2019; Yuan et al., 2019a; Zhang et al., 2020b; Liu et al., 2021, 2020a,b) follow the top-down framework, which pre-defines a large set of segment candidates (a.k.a. proposals) in the video with sliding windows and measures the similarity between the query and each candidate. The best segment is then selected according to the similarity. Although these methods achieve significant performance, they are sensitive to the proposal quality and exhibit slow localization speed due to the redundant proposals. Recently, several works (Rodriguez et al., 2020; Zhang et al., 2020a; Yuan et al., 2019b) exploit the bottom-up framework, which directly predicts the probability of each frame being the start or end boundary of the segment. These methods are proposal-free and much more efficient. However, they neglect the rich information between the start and end boundaries and fail to capture the segment-level interaction.
Thus, the performance of bottom-up models has so far lagged behind that of their top-down counterparts.

To avoid the inherent drawbacks of proposal design in the top-down framework while maintaining localization performance, in this paper we propose an adaptive proposal generation network (APGN) for efficient and effective localization. Firstly, we perform boundary regression on the foreground frames to generate proposals, where the foreground frames are obtained by a foreground-background classification over the entire video. In this way, the noisy responses on the background frames are attenuated, and the generated proposals are more adaptive and discriminative than the pre-defined ones. Secondly, we perform proposal ranking to select the target segment in a top-down manner upon these generated proposals. As the number of proposals is much smaller than in pre-defined methods, the ranking stage is more efficient. Furthermore, we additionally consider the proposal-wise relations to distinguish their fine-grained semantic details before the proposal ranking stage.

To achieve the above framework, APGN first generates query-guided video representations after encoding the video and query features, and then predicts the foreground frames using a binary classification module. Subsequently, a regression module is utilized to generate a proposal on each foreground frame by regressing the distances from itself to the start and end segment boundaries. After that, each generated proposal carries independent, coarse semantics. To capture higher-level interactions among proposals, we encode proposal-wise features by incorporating both positional and semantic information, and represent these proposals as nodes of a proposal graph for reasoning about the correlations among them. Consequently, each updated proposal obtains more fine-grained details for the following boundary refinement process.
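As a rough illustration of this two-stage flow (foreground-background classification, then boundary regression only on foreground frames), consider the following minimal Python sketch. The function name, the 0.5 threshold, and the data layout are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def generate_proposals(fg_probs, boundaries, threshold=0.5):
    """Illustrative sketch (not the authors' code): keep only frames
    classified as foreground, and turn each one's regressed distances
    to the segment start/end into an adaptive proposal (t, start, end)."""
    T = len(fg_probs)
    proposals = []
    for t in range(T):
        if fg_probs[t] >= threshold:      # foreground-background classification
            l_s, l_e = boundaries[t]      # regressed distances from frame t
            proposals.append((t, max(0, t - l_s), min(T - 1, t + l_e)))
    return proposals                       # far fewer than the T^2 pre-defined windows

# e.g. 5 frames, only frames 2 and 3 classified as foreground
fg_probs = np.array([0.1, 0.2, 0.9, 0.8, 0.3])
boundaries = [(0, 0), (0, 0), (1, 2), (2, 1), (0, 0)]
print(generate_proposals(fg_probs, boundaries))  # [(2, 1, 4), (3, 1, 4)]
```

Note how frames with low foreground probability contribute no proposal at all, which is what removes the redundant regression on background frames.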
Our contributions are summarized as follows:

- We propose an adaptive proposal generation network (APGN) for the TSLV task, which adaptively generates discriminative proposals without handcrafted design, thus making localization both effective and efficient.
- To further refine the semantics of the generated proposals, we introduce a proposal graph to consolidate proposal-wise features by reasoning about their higher-order relations.
- We conduct experiments on three challenging datasets (ActivityNet Captions, TACoS, and Charades-STA), and the results show that our proposed APGN significantly outperforms the existing state-of-the-art methods.

# 2 Related Work

Temporal sentence localization in videos is a task introduced recently (Gao et al., 2017; Anne Hendricks et al., 2017), which aims to localize the most relevant video segment from a video with sentence descriptions. Various algorithms (Anne Hendricks et al., 2017; Gao et al., 2017; Chen et al., 2018; Zhang et al., 2019; Yuan et al., 2019a; Zhang et al., 2020b; Qu et al., 2020; Yang et al., 2021) have been proposed within the top-down framework, which first samples candidate segments from a video, then integrates the sentence representation with those video segments individually and evaluates their matching relationships. Some of them (Anne Hendricks et al., 2017; Gao et al., 2017) propose to use sliding windows as proposals and then perform a comparison between each proposal and the input query in a joint multi-modal embedding space. To improve the quality of the proposals, (Zhang et al., 2019; Yuan et al., 2019a) pre-cut the video on each frame at multiple pre-defined temporal scales, and directly integrate sentence information with fine-grained video clips for scoring. (Zhang et al., 2020b) further build a 2D temporal map to construct all possible segment candidates by treating each frame as a start or end boundary, and match their semantics with the query information.
Although these methods achieve great performance, they are severely limited by the heavy computation on proposal matching/ranking, and are sensitive to the quality of the pre-defined proposals.

Recently, many methods (Rodriguez et al., 2020; Chen et al., 2020; Yuan et al., 2019b; Mun et al., 2020; Zeng et al., 2020; Zhang et al., 2020a; Nan et al., 2021) propose to utilize the bottom-up framework to overcome the above drawbacks. They do not rely on segment proposals and directly select the starting and ending frames by leveraging cross-modal interactions between video and query. Specifically, they predict two probabilities at each frame, which indicate whether this frame is a start or end frame of the ground-truth video segment. Although these methods perform segment localization more efficiently, they lose the segment-level interaction, and the redundant regression on background frames may introduce disturbing noise into the boundary decision, leading to worse localization performance than top-down methods.

In this paper, we propose to preserve the segment-level interaction while speeding up the localization efficiency. Specifically, we design a binary classification module on the entire video to filter out the background responses, which helps the model focus more on the discriminative frames.

![](images/c2c7ebde98bd62cd5f28f9be70912de2221b3520fc32b28dfa77a7f335f5fc45.jpg)
Figure 2: Overall architecture of APGN. (a) Given a video and a query, we first encode and interact them to obtain query-guided video features. (b) Then, along with regressing boundaries on each frame, we perform foreground-background classification to identify the foreground frames, whose corresponding predicted boundaries are further taken as the generated segment proposals. (c) We further encode each proposal and refine them using a graph convolutional network. (d) At last, we predict the confidence score and boundary offset for each proposal.
At the same time, we replace the pre-defined proposals with the generated ones and utilize a proposal graph for refinement.

# 3 The Proposed Method

# 3.1 Overview

Given an untrimmed video $V$ and a sentence query $Q$ , the TSLV task aims to localize the start and end timestamps $(\tau_s, \tau_e)$ of the specific video segment referred to by the sentence query. We focus on addressing this task by adaptively generating proposals. To this end, we propose a binary classification module to filter out the redundant responses on background frames. Then, each foreground frame with its regressed start-end boundaries is taken as a generated segment proposal. In this way, the number of generated proposals is much smaller than the number of pre-defined ones, making the model more efficient. Besides, a proposal graph is further developed to refine the proposal features by learning their higher-level interactions. Finally, a confidence score and a boundary offset are predicted for each proposal. Figure 2 illustrates the overall architecture of our APGN.

# 3.2 Feature Encoders

Video encoder. Given a video $V$ , we represent it as $V = \{v_{t}\}_{t=1}^{T}$ , where $v_{t}$ is the $t$ -th frame and $T$ is the length of the entire video. We first extract features with a pre-trained network, and then employ a self-attention (Vaswani et al., 2017) module to capture the long-range dependencies among video frames. We also utilize a Bi-GRU (Chung et al., 2014) to learn the sequential characteristics. The final video features are denoted as $\mathbf{V} = \{\mathbf{v}_t\}_{t=1}^T \in \mathbb{R}^{T \times D}$ , where $D$ is the feature dimension.

Query encoder. Given a query $Q = \{q_{n}\}_{n=1}^{N}$ , $q_{n}$ denotes the $n$ -th word and $N$ is the length of the query.
Following previous works (Zhang et al., 2019; Zeng et al., 2020), we first generate the word-level embeddings using Glove (Pennington et al., 2014), and also employ a self-attention module and a Bi-GRU layer to further encode the query features as $\mathbf{Q} = \{\mathbf{q}_{n}\}_{n=1}^{N} \in \mathbb{R}^{N \times D}$ .

Video-Query interaction. After obtaining the encoded features $V, Q$ , we utilize a co-attention mechanism (Lu et al., 2019) to capture the cross-modal interactions between the video and query features. Specifically, we first calculate the similarity scores between $V$ and $Q$ as:

$$
\boldsymbol{S} = \boldsymbol{V}\left(\boldsymbol{Q}\boldsymbol{W}_{S}\right)^{\mathrm{T}} \in \mathbb{R}^{T\times N}, \tag{1}
$$

where $\mathbf{W}_S\in \mathbb{R}^{D\times D}$ projects the query features into the same latent space as the video. Then, we compute two attention weights as:

$$
\boldsymbol{A} = \boldsymbol{S}_{r}(\boldsymbol{Q}\boldsymbol{W}_{S}) \in \mathbb{R}^{T\times D}, \quad \boldsymbol{B} = \boldsymbol{S}_{r}\boldsymbol{S}_{c}^{\mathrm{T}}\boldsymbol{V} \in \mathbb{R}^{T\times D}, \tag{2}
$$

where $S_{r}$ and $S_{c}$ are the row- and column-wise softmax results of $S$ , respectively. We compose the final query-guided video representation by learning its sequential features as follows:

$$
\widetilde{\boldsymbol{V}} = \operatorname{BiGRU}\left(\left[\boldsymbol{V}; \boldsymbol{A}; \boldsymbol{V}\odot\boldsymbol{A}; \boldsymbol{V}\odot\boldsymbol{B}\right]\right) \in \mathbb{R}^{T\times D}, \tag{3}
$$

where $\widetilde{\boldsymbol{V}} = \{\widetilde{\boldsymbol{v}}_t\}_{t=1}^T$ , $\operatorname{BiGRU}(\cdot)$ denotes the Bi-GRU layers, $[;]$ is the concatenation operation, and $\odot$ is the element-wise multiplication.
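Concretely, Eqs. (1)-(2) amount to a few matrix products over the video and query features. The NumPy sketch below illustrates the shapes with random features (the learned $W_S$ and the Bi-GRU of Eq. (3) are replaced by a random matrix and a plain concatenation, so this is a shape-level illustration only, not the trained model):

```python
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def video_query_coattention(V, Q, W_S):
    """Sketch of Eqs. (1)-(3) up to the Bi-GRU: V is (T, D) video
    features, Q is (N, D) query features, W_S a learned (D, D) matrix."""
    Qp = Q @ W_S                 # project the query into the video space
    S = V @ Qp.T                 # Eq. (1): (T, N) similarity scores
    S_r = softmax(S, axis=1)     # row-wise softmax
    S_c = softmax(S, axis=0)     # column-wise softmax
    A = S_r @ Qp                 # Eq. (2): (T, D) query-attended features
    B = S_r @ S_c.T @ V          # Eq. (2): (T, D) video-attended features
    # Eq. (3) would feed this (T, 4D) concatenation to a Bi-GRU
    return np.concatenate([V, A, V * A, V * B], axis=1)

rng = np.random.default_rng(0)
T, N, D = 6, 4, 8
fused = video_query_coattention(rng.normal(size=(T, D)),
                                rng.normal(size=(N, D)),
                                rng.normal(size=(D, D)))
print(fused.shape)  # (6, 32)
```

The row-wise softmax normalizes over query words for each frame, while the column-wise softmax normalizes over frames for each word; composing them in $B$ routes video information through the query alignment.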
# 3.3 Proposal Generation

Given the query-guided video features $\widetilde{\pmb{V}}$ , we aim to generate a proposal tuple $(t,l_s^t,l_e^t)$ based on each foreground frame $v_{t}$ , where $l_s^t, l_e^t$ denote the distances from frame $v_{t}$ to the starting and ending segment boundaries, respectively. To this end, we first perform binary classification on all frames to distinguish the foreground and background frames, and then treat the foreground ones as positive samples and regress the segment boundaries on these frames as generated proposals.

Foreground-Background classification. In the TSLV task, most videos are more than two minutes long, while the lengths of the annotated target segments only range from several seconds to one minute (e.g., on the ActivityNet Captions dataset). Therefore, there is much noise from the background frames which may disturb accurate segment localization. To alleviate this, we first classify the background frames and filter out their responses in the later regression. By distinguishing the foreground and background frames with annotations, we design a binary classification module with three fully-connected (FC) layers to predict the class $y_{t}$ of each video frame. Considering the unbalanced foreground/background distribution, we formulate a balanced binary cross-entropy loss as:

$$
\mathcal{L}_{\text{class}} = -\sum_{t=1}^{T_{\text{back}}} \frac{T_{\text{back}}}{T} \log\left(y_{t}\right) - \sum_{t=1}^{T_{\text{fore}}} \frac{T_{\text{fore}}}{T} \log\left(1-y_{t}\right), \tag{4}
$$

where $T_{\text{fore}}, T_{\text{back}}$ are the numbers of foreground and background frames, and $T$ is the total number of video frames. Therefore, we can differentiate between foreground and background frames during both training and testing.

Boundary regression.
With the query-guided video representation $\widetilde{V}$ and the predicted binary 0-1 sequence, we then design a boundary regression module to predict the distance from each foreground frame to the start (or end) frame of the video segment that corresponds to the query. We implement this module with three 1D convolution layers with two output channels. Given the predicted distance pair $(l_s^t,l_e^t)$ and the ground-truth distances $(g_s^t,g_e^t)$ , we define the regression loss as:

$$
\mathcal{L}_{reg} = \frac{1}{T_{fore}} \sum_{t=1}^{T_{fore}} \left(1 - \operatorname{IoU}\left(\left(t, l_{s}^{t}, l_{e}^{t}\right), \left(t, g_{s}^{t}, g_{e}^{t}\right)\right)\right), \tag{5}
$$

where $\mathrm{IoU}(\cdot)$ computes the Intersection over Union (IoU) score between the predicted segment and its ground-truth. After that, we can represent the generated proposals as tuples $\{(t,l_s^t,l_e^t)\}_{t = 1}^{T_{\text{fore}}}$ based on the regression results of the foreground frames.

![](images/a8c660b6dba8f909e8e23e0aecebb93d7174be4a8a3bb972eede6c27a4ed7281.jpg)
Figure 3: To distinguish the above three proposals, both positional and semantic relations among the proposals need to be considered.

# 3.4 Proposal Consolidation

So far, we have generated a number of proposals that is significantly smaller than the pre-defined ones in the existing top-down framework, making the final scoring and ranking process much more efficient. To further refine the proposal features for more accurate segment localization, we explicitly model higher-order interactions between the generated proposals to learn their relations. As shown in Figure 3, proposal 1 and proposal 2 contain the same semantics of "blue" and "hops"; we need to model their positional distance to distinguish them and refine their features for better understanding the phrase "second time".
Also, for the proposals (proposals 2 and 3) which are local neighbors, we have to learn their semantic distance to refine their representations. Therefore, in our APGN, we first encode each proposal feature with both positional embeddings and frame-wise semantic features, and then define a graph convolutional network (GCN) over the proposals for proposal refinement.

Proposal encoder. For each proposal tuple $(t, l_s^t, l_e^t)$ , we represent its segment boundary as $(t - l_s^t, t + l_e^t)$ . Before aggregating the features of the frames contained within this segment boundary, we first concatenate a position embedding $\boldsymbol{emb}_t^{pos}$ to each frame-wise feature $\widetilde{\boldsymbol{v}}_t$ , in order to inject position information for frame $t$ as follows:

$$
\widetilde{\boldsymbol{v}}_{t}^{\prime} = \left[\widetilde{\boldsymbol{v}}_{t}; \boldsymbol{emb}_{t}^{\text{pos}}\right] \in \mathbb{R}^{1\times (D+d)}, \tag{6}
$$

where $emb_t^{pos}$ denotes the position embedding of the $t$ -th position, and $d$ is the dimension of $emb_t^{pos}$ . We follow (Vaswani et al., 2017) and use sine and cosine functions of different frequencies to compose the position embeddings:

$$
\boldsymbol{emb}_{t}^{\text{pos}}[2j] = \sin\left(\frac{t}{10000^{2j/d}}\right), \tag{7}
$$

$$
\boldsymbol{emb}_{t}^{\text{pos}}[2j+1] = \cos\left(\frac{t}{10000^{2j/d}}\right), \tag{8}
$$

where $2j$ and $2j+1$ are the even and odd indices of the position embedding. In this way, each dimension of the positional encoding corresponds to
Given the frame features $\{\widetilde{\pmb{v}}_t^{\prime}\}_{t = 1}^{T_{\text{fore}}}$ and a proposal segment $(t - l_s^t,t + l_e^t)$ we encode the vector feature $\pmb{p}_t$ of $t$ -th proposal by aggregating the features of the contained frames in the segment as: + +$$ +\boldsymbol {p} _ {t} = \operatorname {M L P} _ {2} (\operatorname {P o o l} \left(\operatorname {M L P} _ {1} \left([ \widetilde {\boldsymbol {v}} _ {\lceil t - l _ {s} ^ {t} \rceil}, \dots , \widetilde {\boldsymbol {v}} _ {\lceil t + l _ {e} ^ {t} \rceil} ]\right)\right)), \tag {9} +$$ + +where each MLP has two FC layers, $\mathrm{Pool}(\cdot)$ denotes the max-pooling. The frames from each proposal are independently processed by $\mathrm{MLP}_1$ before being pooled (channel-wise) to a single feature vector and passed to $\mathrm{MLP}_2$ where information from different frames are further combined. Thus, we can represent the encoded proposal feature as $p_t \in \mathbb{R}^{1 \times (D + d)}$ . + +Proposal graph. We construct a graph over the proposal features $\{\pmb{p}_t\}_{t=1}^{T_{\text{fore}}}$ , where each node of the graph is a proposal associated with both positions and semantic features. We full connect all node pairs, and define relations between each proposal-pair $(\pmb{p}_t, \pmb{p}_{t'})$ for edge convolution (Wang et al., 2018) as: + +$$ +\boldsymbol {e} _ {t, t ^ {\prime}} = \operatorname {R e l u} \left(\boldsymbol {p} _ {t} \boldsymbol {\theta} _ {1} + \left(\boldsymbol {p} _ {t ^ {\prime}} - \boldsymbol {p} _ {t}\right) \boldsymbol {\theta} _ {2}\right), \tag {10} +$$ + +where $\theta_{1}$ and $\theta_{2}$ are learnable parameters. We update each proposal feature $p_t$ to $\widehat{p}_t$ as follows: + +$$ +\widehat {\boldsymbol {p}} _ {t} = \operatorname {M a x P o o l} \left(\boldsymbol {e} _ {t}\right), \quad \boldsymbol {e} _ {t} = \left\{\boldsymbol {e} _ {t, t ^ {\prime}} \right\} _ {t ^ {\prime} = 1} ^ {T _ {\text {f o r e}}}. 
\tag{11}
$$

This GCN module consists of $k$ stacked graph convolutional layers. After the above proposal consolidation with the graph, we are able to learn refined proposal features.

# 3.5 Localization Head

After proposal consolidation, we feed the refined features $\widehat{P} = \{\widehat{p}_t\}_{t=1}^{T_{fore}}$ into two separate heads to predict their confidence scores and boundary offsets for proposal ranking and refinement. Specifically, we employ two MLPs on each feature $\widehat{p}_t$ as:

$$
r_{t} = \operatorname{Sigmoid}\left(\operatorname{MLP}_{3}\left(\widehat{\boldsymbol{p}}_{t}\right)\right), \tag{12}
$$

$$
\left(\delta_{s}^{t}, \delta_{e}^{t}\right) = \operatorname{MLP}_{4}\left(\widehat{\boldsymbol{p}}_{t}\right), \tag{13}
$$

where $r_t \in (0,1)$ is the confidence score, and $(\delta_s^t, \delta_e^t)$ are the offsets. Therefore, the final predicted segment of proposal $t$ can be represented as $(t - l_s^t + \delta_s^t, t + l_e^t + \delta_e^t)$ . To learn the confidence scoring rule, we first compute the IoU score $o_t$ between each proposal segment and the ground-truth $(\tau_s, \tau_e)$ ; then we adopt the alignment loss function below:

$$
\mathcal{L}_{\text{align}} = -\frac{1}{T_{\text{fore}}} \sum_{t=1}^{T_{\text{fore}}} \left[o_{t} \log\left(r_{t}\right) + \left(1-o_{t}\right) \log\left(1-r_{t}\right)\right]. \tag{14}
$$

Given the ground-truth boundary offsets $(\hat{\delta}_s^t,\hat{\delta}_e^t)$ of proposal $t$ , we also fine-tune the offsets with a boundary loss:

$$
\mathcal{L}_{b} = \frac{1}{T_{\text{fore}}} \sum_{t=1}^{T_{\text{fore}}} \left[\mathrm{SL}_{1}\left(\hat{\delta}_{s}^{t} - \delta_{s}^{t}\right) + \mathrm{SL}_{1}\left(\hat{\delta}_{e}^{t} - \delta_{e}^{t}\right)\right], \tag{15}
$$

where $\mathrm{SL}_1(\cdot)$ denotes the smooth L1 loss function.
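To make the proposal-graph update of Sec. 3.4 concrete, the edge convolution of Eqs. (10)-(11) can be sketched as below. This is an illustrative NumPy sketch over assumed dense $(M, F)$ proposal features, not the authors' implementation:

```python
import numpy as np

def edge_conv_layer(P, theta1, theta2):
    """Sketch of Eqs. (10)-(11): P holds (M, F) proposal features;
    theta1, theta2 are learned (F, F) weight matrices."""
    self_term = P @ theta1                # p_t * theta1 for every node
    diff = P[None, :, :] - P[:, None, :]  # (M, M, F): p_t' - p_t on each edge
    # Eq. (10): e_{t,t'} = ReLU(p_t theta1 + (p_t' - p_t) theta2)
    edges = np.maximum(0.0, self_term[:, None, :] + diff @ theta2)
    # Eq. (11): max-pool each node's edge features over all neighbours t'
    return edges.max(axis=1)

# sanity check: with theta2 = 0 and theta1 = I, non-negative features pass through
P = np.array([[1.0, 2.0], [3.0, 0.5]])
out = edge_conv_layer(P, np.eye(2), np.zeros((2, 2)))
print(out)  # equals P
```

Because the graph is fully connected, every proposal aggregates its strongest pairwise relation per channel, which is what lets positionally or semantically close proposals sharpen each other's features.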
At last, our APGN model is trained end-to-end from scratch using the multi-task loss:

$$
\mathcal{L} = \lambda_{1} \cdot \mathcal{L}_{\text{class}} + \lambda_{2} \cdot \mathcal{L}_{\text{reg}} + \lambda_{3} \cdot \mathcal{L}_{\text{align}} + \lambda_{4} \cdot \mathcal{L}_{b}. \tag{16}
$$

# 4 Experiments

# 4.1 Datasets and Evaluation

ActivityNet Captions. This large dataset (Krishna et al., 2017) contains 20k videos with 100k language descriptions, and focuses on complicated human activities in daily life. Following the public split, we use 37,417, 17,505, and 17,031 sentence-video pairs for training, validation, and testing, respectively.

TACoS. This dataset (Regneri et al., 2013) collects 127 long videos, which are mainly about cooking scenarios and thus lack diversity. We use the same split as (Gao et al., 2017), with 10,146, 4,589, and 4,083 sentence-video pairs for training, validation, and testing, respectively.

Charades-STA. This dataset (Gao et al., 2017) consists of 9,848 videos of daily indoor activities. There are 12,408 sentence-video pairs for training and 3,720 pairs for testing.

Evaluation Metric. Following (Zhang et al., 2019; Zeng et al., 2020), we adopt "R@n, IoU=m" as our evaluation metric, defined as the percentage of test samples for which at least one of the top-n selected moments has an IoU larger than m.

# 4.2 Implementation Details

Following (Zhang et al., 2020b; Zeng et al., 2020), for the video input we apply a pre-trained C3D network on all three datasets to obtain embedded features. We also extract the I3D (Carreira and Zisserman, 2017) and VGG (Simonyan and Zisserman,
| Method | Feature | R@1, IoU=0.5 | R@1, IoU=0.7 | R@5, IoU=0.5 | R@5, IoU=0.7 |
| --- | --- | --- | --- | --- | --- |
| TGN | C3D | 28.47 | - | 43.33 | - |
| CTRL | C3D | 29.01 | 10.34 | 59.17 | 37.54 |
| QSPN | C3D | 33.26 | 13.43 | 62.39 | 40.78 |
| CBP | C3D | 35.76 | 17.80 | 65.89 | 46.20 |
| SCDM | C3D | 36.75 | 19.86 | 64.99 | 41.53 |
| GDP | C3D | 39.27 | - | - | - |
| LGI | C3D | 41.51 | 23.07 | - | - |
| VSLNet | C3D | 43.22 | 26.16 | - | - |
| CMIN | C3D | 43.40 | 23.88 | 67.95 | 50.73 |
| DRN | C3D | 45.45 | 24.36 | 77.97 | 50.30 |
| 2DTAN | C3D | 44.51 | 26.54 | 77.13 | 61.96 |
| APGN | C3D | 48.92 | 28.64 | 78.87 | 63.19 |
2014) features on Charades-STA. After that, we apply PCA to reduce the feature dimension to 500 to decrease the number of model parameters. We set the video length to 200 for ActivityNet Captions and TACoS, and 64 for Charades-STA. For the sentence input, we utilize the Glove model to embed each word into 300-dimensional features. The dimension $D$ is set to 512 and $d$ is set to 256. The number of graph layers is $k = 2$. We set the batch size to 64. We train our model with an Adam optimizer for 100 epochs. The initial learning rate is set to 0.0001 and is divided by 10 when the loss reaches a plateau. $\lambda_1, \lambda_2, \lambda_3, \lambda_4$ in the loss function are 0.1, 1, 1, 1, decided by the weight magnitudes.

# 4.3 Performance Comparison

Compared methods. We compare our proposed APGN with state-of-the-art methods. We group them into: (1) top-down methods: TGN (Chen et al., 2018), CTRL (Gao et al., 2017), QSPN (Xu et al., 2019), CBP (Wang et al., 2020), SCDM (Yuan et al., 2019a), CMIN (Zhang et al., 2019), and 2DTAN (Zhang et al., 2020b); (2) bottom-up methods: GDP (Chen et al., 2020), LGI (Mun et al., 2020), VSLNet (Zhang et al., 2020a), and DRN (Zeng et al., 2020).

Quantitative comparison. As shown in Tables 1, 2 and 3, our APGN outperforms all the existing methods by a large margin. Specifically, on the ActivityNet Captions dataset, compared to the previous best top-down method 2DTAN, we do not rely on large numbers of pre-defined proposals and outperform it by $4.41\%$ , $2.10\%$ , $1.74\%$ , and $1.23\%$ on the respective metrics. Compared to the previous best bottom-up method DRN, our APGN brings significant improvements of $4.28\%$ and $12.89\%$ in the strict "R@1, IoU=0.7" and "R@5, IoU=0.7" metrics, respectively. Al

Table 1: Performance compared with the state-of-the-art TSLV models on the ActivityNet Captions dataset.
| Method | Feature | R@1, IoU=0.3 | R@1, IoU=0.5 | R@5, IoU=0.3 | R@5, IoU=0.5 |
| --- | --- | --- | --- | --- | --- |
| TGN | C3D | 21.77 | 18.90 | 39.06 | 31.02 |
| CTRL | C3D | 18.32 | 13.30 | 36.69 | 25.42 |
| QSPN | C3D | 20.15 | 15.23 | 36.72 | 25.30 |
| CBP | C3D | 27.31 | 24.79 | 43.64 | 37.40 |
| SCDM | C3D | 26.11 | 21.17 | 40.16 | 32.18 |
| GDP | C3D | 24.14 | - | - | - |
| VSLNet | C3D | 29.61 | 24.27 | - | - |
| CMIN | C3D | 24.64 | 18.05 | 38.46 | 27.02 |
| DRN | C3D | - | 23.17 | - | 33.36 |
| 2DTAN | C3D | 37.29 | 25.32 | 57.81 | 45.04 |
| APGN | C3D | 40.47 | 27.86 | 59.98 | 47.12 |
Table 2: Performance compared with the state-of-the-art TSLV models on the TACoS dataset.
| Method | Feature | R@1, IoU=0.5 | R@1, IoU=0.7 | R@5, IoU=0.5 | R@5, IoU=0.7 |
| --- | --- | --- | --- | --- | --- |
| 2DTAN | VGG | 39.81 | 23.25 | 79.33 | 51.15 |
| APGN | VGG | 44.23 | 25.64 | 89.51 | 57.87 |
| CTRL | C3D | 23.63 | 8.89 | 58.92 | 29.57 |
| QSPN | C3D | 35.60 | 15.80 | 79.40 | 45.40 |
| CBP | C3D | 36.80 | 18.87 | 70.94 | 50.19 |
| GDP | C3D | 39.47 | 18.49 | - | - |
| APGN | C3D | 48.20 | 29.37 | 89.05 | 58.49 |
| DRN | I3D | 53.09 | 31.75 | 89.06 | 60.05 |
| SCDM | I3D | 54.44 | 33.43 | 74.43 | 58.08 |
| LGI | I3D | 59.46 | 35.48 | - | - |
| APGN | I3D | 62.58 | 38.86 | 91.24 | 62.11 |
Table 3: Performance compared with the state-of-the-art TSLV models on the Charades-STA dataset.

though TACoS suffers from similar kitchen backgrounds and cooking objects among its videos, it is worth noting that our APGN still achieves significant improvements. On the Charades-STA dataset, for fair comparison with other methods, we perform experiments with the same features (i.e., VGG, C3D, and I3D) reported in their papers. The results show that our APGN reaches the highest scores over all evaluation metrics.

Comparison on efficiency. We compare the efficiency of our APGN with previous methods on a single Nvidia Titan XP GPU on the TACoS dataset. As shown in Table 4, we achieve much faster processing speed with relatively fewer learnable parameters. The reason is mainly two-fold: first, APGN generates proposals without processing overlapped sliding windows as CTRL does, and generates fewer proposals than pre-defined methods such as 2DTAN and CMIN, and is thus more efficient; second, APGN does not apply many convolution layers like 2DTAN or multi-level feature fusion modules like DRN for cross-modal interaction, and thus has fewer parameters.
| Method | ACRN | CTRL | TGN | 2DTAN | CMIN | DRN | APGN |
| --- | --- | --- | --- | --- | --- | --- | --- |
| VPS ↑ | 0.23 | 0.45 | 1.09 | 1.75 | 81.29 | 133.38 | 146.67 |
| Para. ↓ | 128 | 22 | 166 | 363 | 78 | 214 | 91 |
Table 4: Efficiency comparison in terms of videos per second (VPS) and parameters (Para.); our APGN is much more efficient.
| Model | class. | reg. | p.e. | graph | R@1, IoU=0.5 | R@1, IoU=0.7 |
| --- | --- | --- | --- | --- | --- | --- |
| ① | × | × | × | × | 39.16 | 19.68 |
| ② | ✓ | × | × | × | 40.84 | 21.30 |
| ③ | ✓ | ✓ | × | × | 42.77 | 23.52 |
| ④ | ✓ | ✓ | ✓ | × | 43.95 | 24.66 |
| ⑤ | ✓ | ✓ | × | ✓ | 45.81 | 26.34 |
| ⑥ | ✓ | ✓ | ✓ | ✓ | 48.92 | 28.64 |
# 4.4 Ablation Study

Main ablation. As shown in Table 5, we verify the contribution of each part of our model. Starting from the backbone model (Figure 2 (a)), we first implement the baseline model ① by directly adding the top-down localization head (Figure 2 (d)). In this model, we adopt pre-defined proposals as in (Zhang et al., 2019). After adding the binary classification module in ②, we find that the classification module effectively filters out redundant pre-defined proposals on the large number of background frames. When further applying adaptive proposal generation in ③, the generated proposals perform better than the pre-defined ones in ②. Note that, in ③, we directly encode proposal-wise features by max-pooling, and the classification module also contributes to filtering out the negative generated proposals. To capture more fine-grained semantics for proposal refinement, we introduce a proposal encoder (model ④) for discriminative feature aggregation and a proposal graph (model ⑤) for proposal-wise feature interaction. Although each of them alone brings only about a $1 - 3\%$ improvement, the performance increases significantly when utilizing both of them (model ⑥).

Investigation on the video/query encoder. To investigate whether a Transformer (Vaswani et al., 2017) can boost our APGN, we replace the GRU in the video/query encoder with a simple Transformer and find some improvements. However, it brings

Table 5: Main ablation studies on the ActivityNet Captions dataset, where 'class.' and 'reg.' denote the classification and regression modules (Sec. 3.3), 'p.e.' denotes the proposal encoder (Sec. 3.4), and 'graph' denotes the proposal graph (Sec. 3.4).
| Components | VPS ↑ | Para. ↓ | R@1, IoU=0.5 | R@1, IoU=0.7 |
| --- | --- | --- | --- | --- |
| w/ GRU | 146.67 | 91 | 48.92 | 28.64 |
| w/ Transformer | 129.38 | 138 | 50.11 | 29.43 |
+ +Table 6: Investigation on video and query encoders on ActivityNet Caption dataset. + +
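The Para. gap between the two encoders in Table 6 follows from ordinary layer arithmetic. The rough per-layer counts below are illustrative sketches (a PyTorch-style GRU gate layout with two bias vectors is assumed, and LayerNorm and embedding parameters are omitted), not the paper's exact accounting:

```python
def gru_params(d_in, d_h):
    """Per-layer parameter count of a GRU: 3 gates, input and recurrent
    weight matrices, and two bias vectors (PyTorch convention assumed)."""
    return 3 * (d_in * d_h + d_h * d_h + 2 * d_h)

def transformer_layer_params(d, d_ff):
    """Per-layer count of a Transformer encoder layer: Q/K/V/O projections
    with biases plus a two-layer feed-forward block (LayerNorms omitted)."""
    attn = 4 * (d * d + d)
    ffn = d * d_ff + d_ff + d_ff * d + d
    return attn + ffn
```

Even at equal hidden sizes, the extra projections and wide feed-forward block make the Transformer layer noticeably heavier, which matches the Para. and VPS trade-off reported above.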
| Components | Module | R@1, IoU=0.5 | R@1, IoU=0.7 |
| --- | --- | --- | --- |
| binary classification | w/o balanced loss | 46.88 | 27.13 |
| binary classification | w/ balanced loss | 48.92 | 28.64 |
+ +Table 7: Investigation on binary classification on ActivityNet Caption dataset. + +
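The balanced loss compared in Table 7 reweights the foreground/background terms so that the abundant background frames do not dominate training. The paper does not spell out its weighting in this section, so the inverse-frequency form below is an assumption, not APGN's exact loss:

```python
import math

def balanced_bce(probs, labels, eps=1e-7):
    """Binary cross-entropy with inverse-frequency class weights.
    NOTE: this weighting scheme is an assumption for illustration; it keeps
    the few foreground frames from being drowned out by background frames."""
    n_pos = max(sum(labels), 1)
    n_neg = max(len(labels) - sum(labels), 1)
    w_pos = len(labels) / (2.0 * n_pos)  # rarer class gets the larger weight
    w_neg = len(labels) / (2.0 * n_neg)
    loss = 0.0
    for p, y in zip(probs, labels):
        p = min(max(p, eps), 1.0 - eps)  # clamp for numerical stability
        loss -= w_pos * y * math.log(p) + w_neg * (1 - y) * math.log(1.0 - p)
    return loss / len(labels)
```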
| Components | Module | R@1, IoU=0.5 | R@1, IoU=0.7 |
| --- | --- | --- | --- |
| proposal encoder | w/o position | 46.46 | 26.69 |
| proposal encoder | w/ position | 48.92 | 28.64 |
| proposal encoder | w/ mean pooling | 47.41 | 27.86 |
| proposal encoder | w/ max pooling | 48.92 | 28.64 |
Table 8: Investigation on proposal encoder on ActivityNet Caption dataset.

a larger number of model parameters and a lower speed.

Effect of unbalanced loss. In the binary classification module, we reformulate the typical loss function into a balanced one. As shown in Table 7, the model w/ balanced loss achieves a large improvement (2.04%, 1.51%) over the w/o variant, which demonstrates that it is important to account for the unbalanced class distribution in the classification process.

Investigation on proposal encoder. In the proposal encoder, we discard the positional embedding as w/o position, and also replace the max-pooling with mean-pooling as w/ mean pooling. From Table 8, we observe that the positional embedding helps to learn temporal distance (boost of $2.46\%$, $1.95\%$), and max-pooling aggregates more discriminative features than mean-pooling (boost of $1.49\%$, $0.78\%$).

Investigation on proposal graph. In Table 9, we also analyze the proposal graph. Compared to the w/ edge convolution model (Wang et al., 2018), w/ edge attention directly utilizes co-attention (Lu et al., 2016) to compute the similarity of each node pair and updates the nodes by a weighted summation strategy, which performs worse than edge convolution.

Number of graph layers. As shown in Table 9, the model achieves the best result with 2 graph layers, and the performance drops when the number of
| Components | Module | R@1, IoU=0.5 | R@1, IoU=0.7 |
| --- | --- | --- | --- |
| proposal graph | w/ edge attention | 46.63 | 26.90 |
| proposal graph | w/ edge convolution | 48.92 | 28.64 |
| graph layer | 1 layer | 47.60 | 27.57 |
| graph layer | 2 layers | 48.92 | 28.64 |
| graph layer | 3 layers | 48.83 | 28.39 |
+ +Table 9: Investigation on proposal graph on ActivityNet Caption dataset. + +
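The w/ edge convolution variant in Table 9 follows EdgeConv-style updates (Wang et al., 2018), in which each node aggregates messages built from neighbor differences by max-pooling. A scalar-feature sketch is below; real proposals carry vector features, and a learned MLP replaces the fixed linear map assumed here:

```python
def edge_conv(feats, neighbors, theta=1.0, phi=0.5):
    """One EdgeConv-style layer over scalar node features.
    feats: list of node features; neighbors: dict node id -> neighbor ids.
    The message for edge (i, j) is theta * (x_j - x_i) + phi * x_i, and each
    node keeps the max-pooled message over its neighborhood."""
    out = []
    for i, x_i in enumerate(feats):
        msgs = [theta * (feats[j] - x_i) + phi * x_i for j in neighbors[i]]
        out.append(max(msgs) if msgs else phi * x_i)
    return out
```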
| Methods | Localization Type | R@1, IoU=0.5 | R@1, IoU=0.7 |
| --- | --- | --- | --- |
| SCDM | top-down | 36.75 | 19.86 |
| SCDM + ours | top-down | 43.86 | 26.42 |
| CMIN | top-down | 43.40 | 23.88 |
| CMIN + ours | top-down | 50.33 | 29.75 |
| LGI | bottom-up | 41.51 | 23.07 |
| LGI + ours | bottom-up | 49.20 | 30.64 |
| DRN | bottom-up | 45.45 | 24.36 |
| DRN + ours | bottom-up | 53.72 | 31.01 |
Table 10: Our proposed adaptive proposal generation can serve as a "plug-and-play" module for existing methods. The experiments are conducted on the ActivityNet Captions dataset.

layers grows. Our analysis is that more graph layers result in the over-smoothing problem (Li et al., 2018), since the propagation between the nodes accumulates.

Plug-and-play. Our proposed adaptive proposal generation can serve as a plug-and-play module for existing methods. As shown in Table 10, for top-down methods, we keep their feature encoders and video-query interaction, and add the proposal generation and proposal consolidation before the localization heads. For bottom-up methods, we first replace their regression heads with our proposal generation process and then add the proposal consolidation process. The results show that our proposal generation and proposal consolidation bring large improvements to both types of methods.

# 4.5 Qualitative Results

To qualitatively validate the effectiveness of our APGN, we display two typical examples in Figure 4. It is challenging to accurately localize the semantics of "for a second time" in the first video, because there are two separate segments corresponding to the same object "girl in the blue dress" performing the same activity "hops". For comparison, the previous method DRN fails to understand the meaning of the phrase "second time", and grounds both segments.

![](images/19b60b2a25b2a7d7d3a75fcc506da9367b8bb5abe555ed6d8cdf44aa51702e66.jpg)

![](images/1d0f3b7aaf3ff00580d655c4ec0a0aba2de1e4e8d617067b55701963a52ae4e2.jpg)
Figure 4: Typical examples of the localization results on the ActivityNet Caption dataset.

By contrast, our method has a strong ability to distinguish these two segments in the temporal dimension thanks to the positional embedding in the developed proposal graph, and thus achieves more accurate localization results.
Furthermore, we also display the foreground/background class of each frame in this video. With the help of the proposal consolidation module, the segment proposals of "first time" are filtered out, and all of the final top-10 ranked positive frames fall in the target segment.

# 5 Conclusion

In this paper, we introduce APGN, a new method for temporal sentence localization in videos. Our core idea is to adaptively generate discriminative proposals and achieve both effective and efficient localization. That is, we first introduce binary classification before the boundary regression to distinguish the background frames, which helps to filter out the corresponding noisy responses. Then, the regressed boundaries on the predicted foreground frames are taken as segment proposals, which greatly reduces the number of poor-quality proposals compared to the pre-defined ones in the top-down framework. We further learn higher-level feature interactions between the generated proposals for refinement via a graph convolutional network. Our framework achieves state-of-the-art performance on three challenging benchmarks, demonstrating the effectiveness of our proposed APGN.

# 6 Acknowledgments

This work was supported in part by the National Key Research and Development Program of China under No. 2018YFB1404102, and the National Natural Science Foundation of China under No. 61972448.

# References

Lisa Anne Hendricks, Oliver Wang, Eli Shechtman, Josef Sivic, Trevor Darrell, and Bryan Russell. 2017. Localizing moments in video with natural language. In Proceedings of the IEEE International Conference on Computer Vision (ICCV).
Joao Carreira and Andrew Zisserman. 2017. Quo vadis, action recognition? A new model and the Kinetics dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Jingyuan Chen, Xinpeng Chen, Lin Ma, Zequn Jie, and Tat-Seng Chua. 2018. Temporally grounding natural sentence in video.
In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). +Long Chen, Chujie Lu, Siliang Tang, Jun Xiao, Dong Zhang, Chilie Tan, and Xiaolin Li. 2020. Rethinking the bottom-up framework for query-based video localization. In Proceedings of the AAAI Conference on Artificial Intelligence. +Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. In Advances in Neural Information Processing Systems (NIPS). +Jianfeng Dong, Xirong Li, Chaoxi Xu, Shouling Ji, and Xun Wang. 2019. Dual encoding for zero-example video retrieval. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). +Jiyang Gao, Chen Sun, Zhenheng Yang, and Ram Nevatia. 2017. Tall: Temporal activity localization via language query. In Proceedings of the IEEE International Conference on Computer Vision (ICCV). +Ranjay Krishna, Kenji Hata, Frederic Ren, Li Fei-Fei, and Juan Carlos Niebles. 2017. Dense-captioning events in videos. In Proceedings of the IEEE International Conference on Computer Vision (ICCV). +Qimai Li, Zhichao Han, and Xiao-Ming Wu. 2018. Deeper insights into graph convolutional networks for semi-supervised learning. In Proceedings of the AAAI Conference on Artificial Intelligence. +Daizong Liu, Xiaoye Qu, Jianfeng Dong, and Pan Zhou. 2020a. Reasoning step-by-step: Temporal + +sentence localization in videos via deep rectification-modulation network. In Proceedings of the 28th International Conference on Computational Linguistics, pages 1841-1851. +Daizong Liu, Xiaoye Qu, Jianfeng Dong, Pan Zhou, Yu Cheng, Wei Wei, Zichuan Xu, and Yulai Xie. 2021. Context-aware biaffine localizing network for temporal sentence grounding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 11235-11244. +Daizong Liu, Xiaoye Qu, Xiao-Yang Liu, Jianfeng Dong, Pan Zhou, and Zichuan Xu. 2020b. 
Jointly cross-and self-modal graph attention network for query-based moment localization. In Proceedings of the ACM International Conference on Multimedia (ACM MM). +Chujie Lu, Long Chen, Chilie Tan, Xiaolin Li, and Jun Xiao. 2019. DEBUG: A dense bottom-up grounding approach for natural language video localization. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). +Jiasen Lu, Jianwei Yang, Dhruv Batra, and Devi Parikh. 2016. Hierarchical question-image co-attention for visual question answering. In Advances in Neural Information Processing Systems (NIPS). +Jonghwan Mun, Minsu Cho, and Bohyung Han. 2020. Local-global video-text interactions for temporal grounding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). +Guoshun Nan, Rui Qiao, Yao Xiao, Jun Liu, Sicong Leng, Hao Zhang, and Wei Lu. 2021. Interventional video grounding with dual contrastive learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). +Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). +Xiaoye Qu, Pengwei Tang, Zhikang Zou, Yu Cheng, Jianfeng Dong, Pan Zhou, and Zichuan Xu. 2020. Fine-grained iterative attention network for temporal language localization in videos. In Proceedings of the 28th ACM International Conference on Multimedia, pages 4280-4288. +Michaela Regneri, Marcus Rohrbach, Dominikus Wetzel, Stefan Thater, Bernt Schiele, and Manfred Pinkal. 2013. Grounding action descriptions in videos. Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). +Cristian Rodriguez, Edison Marrese-Taylor, Fatehneh Sadat Saleh, Hongdong Li, and Stephen Gould. 2020. 
Proposal-free temporal moment localization + +of a natural-language query in video using guided attention. In The IEEE Winter Conference on Applications of Computer Vision (WACV). +Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. +Joyeeta Singha, Amarjit Roy, and Rahul Hussain Laskar. 2018. Dynamic hand gesture recognition using vision-based approach for human-computer interaction. Neural Computing and Applications. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems (NIPS). +Jingwen Wang, Lin Ma, and Wenhao Jiang. 2020. Temporally grounding language queries in videos by contextual boundary-aware prediction. In Proceedings of the AAAI Conference on Artificial Intelligence. +Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E Sarma, Michael M Bronstein, and Justin M Solomon. 2018. Dynamic graph cnn for learning on point clouds. ACM Transactions on Graphics. +Huijuan Xu, Kun He, Bryan A Plummer, Leonid Sigal, Stan Sclaroff, and Kate Saenko. 2019. Multi-level language and vision integration for text-to-clip retrieval. In Proceedings of the AAAI Conference on Artificial Intelligence. +Xun Yang, Jianfeng Dong, Yixin Cao, Xun Wang, Meng Wang, and Tat-Seng Chua. 2020. Tree-augmented cross-modal encoding for complex-query video retrieval. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR), pages 1339-1348. +Xun Yang, Fuli Feng, Wei Ji, Meng Wang, and TatSeng Chua. 2021. Deconfounded video moment retrieval with causal intervention. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR). +Yitian Yuan, Lin Ma, Jingwen Wang, Wei Liu, and Wenwu Zhu. 2019a. 
Semantic conditioned dynamic modulation for temporal sentence grounding in videos. In Advances in Neural Information Processing Systems (NIPS). +Yitian Yuan, Tao Mei, and Wenwu Zhu. 2019b. To find where you talk: Temporal sentence localization in video with attention based location regression. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33. +Runhao Zeng, Haoming Xu, Wenbing Huang, Peihao Chen, Mingkui Tan, and Chuang Gan. 2020. Dense regression network for video grounding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). + +Hao Zhang, Aixin Sun, Wei Jing, and Joey Tianyi Zhou. 2020a. Span-based localizing network for natural language video localization. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). +Songyang Zhang, Houwen Peng, Jianlong Fu, and Jiebo Luo. 2020b. Learning 2d temporal adjacent networks for moment localization with natural language. In Proceedings of the AAAI Conference on Artificial Intelligence. +Zhu Zhang, Zhijie Lin, Zhou Zhao, and Zhenxin Xiao. 2019. Cross-modal interaction networks for query-based moment retrieval in videos. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR). 
\ No newline at end of file diff --git a/adaptiveproposalgenerationnetworkfortemporalsentencelocalizationinvideos/images.zip b/adaptiveproposalgenerationnetworkfortemporalsentencelocalizationinvideos/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..aa29ff3bea75dbf60275ee962299fe7723422f3a --- /dev/null +++ b/adaptiveproposalgenerationnetworkfortemporalsentencelocalizationinvideos/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4eeabf8185bca38cbd502fc259ea357e51d624d14711b88f33a30c6f3ae841f7 +size 593549 diff --git a/adaptiveproposalgenerationnetworkfortemporalsentencelocalizationinvideos/layout.json b/adaptiveproposalgenerationnetworkfortemporalsentencelocalizationinvideos/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..d377a89a864a470b9959b17c6d172e9de49be6fd --- /dev/null +++ b/adaptiveproposalgenerationnetworkfortemporalsentencelocalizationinvideos/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d177910cc2d6cfb16031ce3166d4c3418961590a715b75596a6e13e707d983b1 +size 370906 diff --git a/adversarialattackagainstcrosslingualknowledgegraphalignment/e36b4ac8-9a38-4219-858d-047e5ed2caa4_content_list.json b/adversarialattackagainstcrosslingualknowledgegraphalignment/e36b4ac8-9a38-4219-858d-047e5ed2caa4_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..3e9e4fec4bfd74f5c45a0bbdec7abed7f58b17b4 --- /dev/null +++ b/adversarialattackagainstcrosslingualknowledgegraphalignment/e36b4ac8-9a38-4219-858d-047e5ed2caa4_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d048afd58b71b82eaefbe03e8fe0584791a382cc8a38cce95548a1ee245f17c5 +size 132225 diff --git a/adversarialattackagainstcrosslingualknowledgegraphalignment/e36b4ac8-9a38-4219-858d-047e5ed2caa4_model.json b/adversarialattackagainstcrosslingualknowledgegraphalignment/e36b4ac8-9a38-4219-858d-047e5ed2caa4_model.json new 
file mode 100644 index 0000000000000000000000000000000000000000..c6fcea3289c6298da6d68009c2f480f2e2df0f66 --- /dev/null +++ b/adversarialattackagainstcrosslingualknowledgegraphalignment/e36b4ac8-9a38-4219-858d-047e5ed2caa4_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c4b5bc9b93b4f5dcf8084136e8046743f986da4cf6477e6e0398bfda12bc5982 +size 176004 diff --git a/adversarialattackagainstcrosslingualknowledgegraphalignment/e36b4ac8-9a38-4219-858d-047e5ed2caa4_origin.pdf b/adversarialattackagainstcrosslingualknowledgegraphalignment/e36b4ac8-9a38-4219-858d-047e5ed2caa4_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..6169b5d2aadd01a6eb1a36d6e5b0e8abed1bff23 --- /dev/null +++ b/adversarialattackagainstcrosslingualknowledgegraphalignment/e36b4ac8-9a38-4219-858d-047e5ed2caa4_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c0f7ac9e7f501bf80ef4f6c561fef4f191796914c875f0bea9a1633b55dd0892 +size 696842 diff --git a/adversarialattackagainstcrosslingualknowledgegraphalignment/full.md b/adversarialattackagainstcrosslingualknowledgegraphalignment/full.md new file mode 100644 index 0000000000000000000000000000000000000000..e23d6ff21f1f00afdc471edc5d8ff8b7338c0ecf --- /dev/null +++ b/adversarialattackagainstcrosslingualknowledgegraphalignment/full.md @@ -0,0 +1,528 @@ +# Adversarial Attack against Cross-lingual Knowledge Graph Alignment + +Zeru Zhang $^{1}$ , Zijie Zhang $^{1}$ , Yang Zhou $^{1}$ , Lingfei Wu $^{2}$ , Sixing Wu $^{3}$ , Xiaoying Han $^{1}$ , Dejing Dou $^{4,5}$ , Tianshi Che $^{1}$ , Da Yan $^{6}$ + +1Auburn University, 2JD.COM Silicon Valley Research Center, 3Peking University, + +4University of Oregon, 5Baidu Research, 6University of Alabama at Birmingham + +{zeruzhang, zijiezhang, yangzhou, xzh0003, tianshiche}@auburn.edu, + +lwu@email.wm.edu, wusixing@pku.edu.cn, dou@cs.uoregon.edu, yanda@uab.edu + +# Abstract + +Recent literature has shown that knowledge graph
(KG) learning models are highly vulnerable to adversarial attacks. However, there is still a paucity of vulnerability analyses of cross-lingual entity alignment under adversarial attacks. This paper proposes an adversarial attack model with two novel attack techniques to perturb the KG structure and degrade the quality of deep cross-lingual entity alignment. First, an entity density maximization method is employed to hide the attacked entities in dense regions in two KGs, such that the derived perturbations are unnoticeable. Second, an attack signal amplification method is developed to reduce the gradient vanishing issues in the process of adversarial attacks for further improving the attack effectiveness.

# 1 Introduction

Today, multilingual knowledge graphs (KGs), such as WordNet (Miller, 1992), DBpedia (Auer et al., 2007), YAGO (Hoffart et al., 2011), and ConceptNet (Speer et al., 2017), are becoming essential sources of knowledge for various AI-related applications, e.g., personal assistants, medical diagnosis, and online question answering. Cross-lingual entity alignment between multilingual KGs is a powerful tool that aligns the same entities in different monolingual KGs, automatically synchronizes different language-specific KGs, and transforms the understanding of these ubiquitous multilingual KGs (Xu et al., 2020b; Sun et al., 2020a; Berrendorf et al., 2021b,a).

Unfortunately, real-world KGs are typically noisy due to two main reasons: (1) massive fake information injected by malicious parties and users on online encyclopedia websites (e.g., Wikipedia (Wik) and Answers.com (Ans)), social networks (e.g., Twitter and Facebook), online communities (e.g., Reddit and Yahoo Answers), news websites, and search engines that usually serve as data sources of the KGs; and (2) direct adversarial attacks on the KGs.
Google Knowledge Graph has been criticized for providing answers without source attribution or citation, thus undermining people's ability to verify information and to develop well-informed opinions (Dewey, 2016).

Recent studies have shown that KG learning models remain highly sensitive to adversarial attacks, i.e., carefully designed small perturbations in KGs can cause the models to produce wrong prediction results, including knowledge graph embedding (Minervini et al., 2017; Pujara et al., 2017; Pezeshkpour et al., 2019; Zhang et al., 2019; Banerjee et al., 2021) and knowledge graph-based dialogue generation (Xu et al., 2020a). However, existing techniques focus on adversarial attacks against single-KG learning tasks. These techniques cannot be directly utilized to attack cross-lingual entity alignment models, as the latter must analyze relations both within and across KGs. Two critical questions remain unsolved: (1) Can small perturbations on KGs defeat cross-lingual entity alignment models? (2) How can effective and unnoticeable perturbations against cross-lingual entity alignment be designed?

The majority of cross-lingual entity alignment techniques aim to train the model by minimizing the distance between pre-aligned entity pairs in training data, such that the corresponding entity embeddings across KGs are close to each other, and the entity pairs with the smallest distance in test data are output as alignment results (Mao et al., 2020a; Wu et al., 2020b; Mao et al., 2020b; Tang et al., 2020; Yan et al., 2021; Zhu et al., 2021; Mao et al., 2021; Pei et al., 2020).
In terms of the distribution of entities in a KG, one idea for perturbing an entity unobtrusively is to add/delete relations so as to move the entity into a dense region of the KG with many similar entities, such that it is non-trivial to recognize the modified entity among its many similar neighbors.

Existing gradient-based adversarial attack methods (Goodfellow et al., 2015; Madry et al., 2018) search for the weakest input features to attack by calculating the loss gradient. However, the vanishing gradient problem is often encountered when training neural networks with poor backward signal propagation, and this leads to attack failures (Athalye et al., 2018). Can we enhance the attack signal propagation to improve the attack effectiveness?

In this work, an entity density estimation and maximization method is employed to first estimate the distribution of entities in KGs. Based on the estimated KG distributions, the entities to be attacked are then moved to dense regions in the two KGs by maximizing their densities. The attacked entities are hidden in dense regions so that they are surrounded by, and indistinguishable from, many similar neighbors. In addition, this surrounding makes it difficult to identify the correctly aligned entity pairs among the many similar candidate entities.

We comprehensively study how poor signal propagation in neural networks leads to vanishing gradients in adversarial attacks on cross-lingual entity alignment. An attack signal amplification method is developed to secure informative attack signals with both a well-conditioned Jacobian and competent signal propagation from the alignment loss. This reduces the gradient vanishing issues in the process of adversarial attacks and further improves the attack effectiveness.
Extensive experiments over real-world KG datasets validate the superior attack performance of the EAA model against several state-of-the-art cross-lingual entity alignment models. To the best of our knowledge, this work is the first to study adversarial attacks on cross-lingual entity alignment.

# 2 Problem Formulation

Consider two input knowledge graphs $G^{1}$ and $G^{2}$, each denoted as $G^{k} = (E^{k}, R^{k}, T^{k})$ ($1 \leq k \leq 2$), where $E^{k} = \{e_{1}^{k}, \dots, e_{N^{k}}^{k}\}$ is the set of $N^{k}$ entities, $R^{k} = \{r_{ij}^{k} = (e_{i}^{k}, e_{j}^{k}) : 1 \leq i, j \leq N^{k}, i \neq j\}$ is the set of relations, and $T^{k} = E^{k} \times R^{k} \times E^{k}$ is the set of triples. Each triple $t_{l}^{k} = (e_{i}^{k}, r_{ij}^{k}, e_{j}^{k}) \in T^{k}$ in $G^{k}$ denotes head entity $e_{i}^{k}$ connected to tail entity $e_{j}^{k}$ through relation $r_{ij}^{k}$. $\mathbf{A}^{k}$ is an $N^{k} \times N^{k}$ adjacency matrix that denotes the structure information of $G^{k}$. By using knowledge graph embedding (KGE), each triple can be represented as $(\mathbf{e}_i^k, \mathbf{r}_{ij}^k, \mathbf{e}_j^k)$, where the boldfaced $\mathbf{e}_i^k$, $\mathbf{r}_{ij}^k$, and $\mathbf{e}_j^k$ denote the embedding vectors of head $e_i^k$, relation $r_{ij}^k$, and tail $e_j^k$, respectively.

$D$ contains a set of pre-aligned entity pairs $D = \{(e_i^1, e_j^2) \mid e_i^1 \leftrightarrow e_j^2, e_i^1 \in E^1, e_j^2 \in E^2\}$, where $e_i^1 \leftrightarrow e_j^2$ indicates that the two entities $e_i^1$ and $e_j^2$ are equivalent in different language-specific KGs. Cross-lingual entity alignment aims to utilize $D$ as training data to identify the one-to-one entity alignments between entities $e_i^1$ and $e_j^2$ of the two cross-lingual KGs $G^{1}$ and $G^{2}$ in the test data.
Most existing cross-lingual entity alignment models are supervised learning methods that minimize the distances (or maximize the similarities) between the embeddings of pre-aligned entity pairs $e_i^1$ and $e_j^2$ in $D$ (Wang et al., 2018; Sun et al., 2020d; Wu et al., 2020b; Pei et al., 2020; Tang et al., 2020; Yan et al., 2021). The entity pairs $e_i^1$ and $e_j^2$ in the test data with the largest similarities are selected as the alignment results. The following loss function is minimized to learn a KGE model $h: e_i^k \in E^k \mapsto \mathbf{e}_i^k$; $h$ is often implemented as a graph convolutional network (GCN) for deep KGE.

$$
\min_{h} \mathcal{L} = -\sum_{(e_i^1, e_j^2) \in D} \log \sigma\left(\left(\mathbf{e}_i^1\right)^{T} \cdot \mathbf{e}_j^2\right) + \sum_{(e_{i'}^1, e_{j'}^2) \notin D} \log \sigma\left(\left(\mathbf{e}_{i'}^1\right)^{T} \cdot \mathbf{e}_{j'}^2\right) \tag{1}
$$

where $(e_i^1, e_j^2)$ and $(e_{i'}^1, e_{j'}^2)$ are positive and negative entity pairs, $(\mathbf{e}_i^1)^{T}$ is the transpose of $\mathbf{e}_i^1$, $\sigma(\cdot)$ is the sigmoid function, and the inner product $\cdot$ measures the similarity between two embedding vectors.

Given a trained deep KGE model $\mathbf{e}_i^k = h(e_i^k)$, an adversarial attacker aims to maximally degrade the alignment performance of $h$ by injecting effective and unnoticeable relation perturbations (including relation additions and deletions) into the two clean KGs $G^{k}$ ($1 \leq k \leq 2$), leading to two perturbed KGs $\hat{G}^{k} = (\hat{E}^{k}, \hat{R}^{k}, \hat{T}^{k})$:

$$
\max_{\hat{\mathbf{A}}^{k}} \mathcal{L} \quad \text{s.t. }
\left\| \hat{\mathbf{A}}^{k} - \mathbf{A}^{k} \right\| \leq \Delta, \quad 1 \leq k \leq 2 \tag{2}
$$

where $\mathbf{A}^k$ and $\hat{\mathbf{A}}^k$ are the clean and perturbed adjacency matrices, respectively, and $\Delta$ is the allowed attack budget, i.e., the number of allowed relation modifications.

# 3 Unnoticeable Adversarial Attacks

Existing GCN-based entity alignment methods often initialize entity features randomly or with pre-trained word embeddings of entity names, and utilize the adjacency matrices of the KGs to learn the entity embeddings (Wang et al., 2018; Sun et al., 2020d; Wu et al., 2020b; Yan et al., 2021). Thus, the embedding of an entity mainly depends on the embeddings of its neighbor entities. In order to modify the embedding of a target entity for the purpose of adversarial attacks, we need to remove some positive (i.e., existing) relations and add some negative (i.e., non-existing) relations between the target entity and its neighbors in the adjacency matrix, thereby degrading the accuracy of entity embedding and alignment. We use the $i^{th}$ row of the adjacency matrix $\mathbf{A}^k$ (i.e., $\mathbf{A}_i^k$) to represent the structure features of each entity $e_i^k$ and analyze the impact of each structure feature (i.e., positive or negative relation) on the alignment accuracy.

As shown in Figure 1, assume that $\mathbf{e}_i^1$ and $\mathbf{e}_j^2$ are pre-aligned entity embeddings. If we hide the entity $e_i^1$ in a dense region with many similar entities $e_k^1$ by modifying its associated relations, this surrounding makes it difficult to differentiate $e_i^1$ from the similar $e_k^1$s and to identify the correctly aligned pair $\mathbf{e}_i^1$ and $\mathbf{e}_j^2$ among the many similar candidate entities.
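The training objective of Eq. (1) and the attacker's goal of maximizing it under the budget of Eq. (2) can be sketched numerically; plain Python lists stand in here for the GCN-produced embeddings:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def alignment_loss(emb1, emb2, pos_pairs, neg_pairs):
    """Eq. (1): pull pre-aligned pairs together and push negative pairs apart.
    emb1/emb2: dicts mapping entity id -> embedding vector (list of floats)."""
    loss = -sum(math.log(sigmoid(dot(emb1[i], emb2[j]))) for i, j in pos_pairs)
    loss += sum(math.log(sigmoid(dot(emb1[i], emb2[j]))) for i, j in neg_pairs)
    return loss
```

An attacker operating under Eq. (2) searches for structure perturbations, within the budget $\Delta$, that drive this same quantity up.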
In addition, if another pair of entity embeddings $\mathbf{e}_k^1$ and $\mathbf{e}_j^2$ is more similar than the pre-aligned pair $\mathbf{e}_i^1$ and $\mathbf{e}_j^2$, i.e., $(\mathbf{e}_k^1)^T \cdot \mathbf{e}_j^2 > (\mathbf{e}_i^1)^T \cdot \mathbf{e}_j^2$, then we will obtain an incorrect alignment result $(e_k^1, e_j^2)$.

In this work, we leverage our proposed kernel density estimation method (Zhang et al., 2020b) to estimate the distribution of the perturbed KGs and maximize the distance between pre-aligned entity pairs, both degrading the performance of entity alignment and hiding the attacked entities in dense regions of the two KGs. Kernel density estimation essentially estimates a probability density function (PDF) $f(x)$ of a random variable $x$ to reveal the intrinsic distribution of $x$ (Parzen, 1962). Let $\mathbf{x}^k$ be an $N^k$-dimensional random variable denoting the structure features of all entities $\{\mathbf{A}_1^k, \dots, \mathbf{A}_{N^k}^k\}$ in KG $G^{k}$, for which we estimate the PDF

![](images/77494f6740b090705f90549e3effcae9a2a5e42186ee9de6f9fe7340ea539b54.jpg)
Figure 1: Unnoticeable Adversarial Attacks

$$
f\left(\mathbf{x}^{k}\right) = \frac{1}{N^{k} \det(\mathbf{B})} \sum_{i=1}^{N^{k}} \mathcal{K}\left(\mathbf{B}^{-1}\left(\mathbf{x}^{k} - \mathbf{A}_{i}^{k}\right)\right) \tag{3}
$$

where $\operatorname{det}(\cdot)$ denotes the determinant operation. $\mathbf{B} > 0$ is a bandwidth to be estimated. It is an $N^{k} \times N^{k}$ diagonal matrix $\mathbf{B} = \operatorname{diag}(b_1, \dots, b_{N^k})$, which has a strong influence on the density estimate $f(\mathbf{x}^{k})$; a good $\mathbf{B}$ should be as small as the data allow. $\mathcal{K}$ is a product symmetric kernel that satisfies $\int \mathcal{K}(x)dx = 1$ and $\int x\mathcal{K}(x)dx = 0$. The vector form of $f(\mathbf{x}^k)$ can be rewritten in element form, where $\mathbf{x}_j^k$ denotes the $j^{th}$ dimension of $\mathbf{x}^k$:

$$
f\left(\mathbf{x}^{k}\right) = \frac{1}{N^{k}} \sum_{i=1}^{N^{k}} \prod_{j=1}^{N^{k}} \frac{1}{b_{j}} \mathcal{K}\left(\frac{\mathbf{x}_{j}^{k} - \mathbf{A}_{ij}^{k}}{b_{j}}\right) \tag{4}
$$

We then calculate the derivative $\frac{\partial f(\mathbf{x}^k)}{\partial b_j}$ with respect to each $b_{j}$ in $\mathbf{B}$, where $\mathcal{K}'$ denotes the derivative of the kernel:

$$
\frac{\partial f(\mathbf{x}^{k})}{\partial b_{j}} = \frac{1}{N^{k}} \sum_{i=1}^{N^{k}} \frac{\partial\left[\prod_{l=1}^{N^{k}} \frac{1}{b_{l}} \mathcal{K}\left(\frac{\mathbf{x}_{l}^{k} - \mathbf{A}_{il}^{k}}{b_{l}}\right)\right]}{\partial b_{j}} = -\frac{1}{N^{k}} \sum_{i=1}^{N^{k}} \left(\frac{1}{b_{j}} + \frac{\mathbf{x}_{j}^{k} - \mathbf{A}_{ij}^{k}}{b_{j}^{2}} \frac{\mathcal{K}'\left(\frac{\mathbf{x}_{j}^{k} - \mathbf{A}_{ij}^{k}}{b_{j}}\right)}{\mathcal{K}\left(\frac{\mathbf{x}_{j}^{k} - \mathbf{A}_{ij}^{k}}{b_{j}}\right)}\right) \prod_{l=1}^{N^{k}} \frac{1}{b_{l}} \mathcal{K}\left(\frac{\mathbf{x}_{l}^{k} - \mathbf{A}_{il}^{k}}{b_{l}}\right) \tag{5}
$$

We make use of a greedy search method to determine the bandwidths in the kernel density estimation. For a non-trivial/trivial dimension $j$, updating the bandwidth $b_{j}$ has a strong/weak influence on $f(\mathbf{x}^k)$. We greedily reduce $b_{j}$ along the sequence $b_{0}, b_{0}s, b_{0}s^{2}, \dots$ for a parameter $0 < s < 1$, until $b_{j}$ is smaller than a certain threshold $\tau_{j}$, to validate whether a small update to $b_{j}$ leads to a large update of $f(\mathbf{x}^k)$.
We use an initial $\mathbf{B} = \mathrm{diag}(b_0, \dots, b_0)$ with a large $b_0$ to estimate $\frac{\partial f(\mathbf{x}^k)}{\partial b_j}$ , and reduce $b_j$ when $\frac{\partial f(\mathbf{x}^k)}{\partial b_j}$ is larger than a certain threshold. Since the derivative is an average of per-entity terms, it can be written as

$$
\frac{\partial f\left(\mathbf{x}^{k}\right)}{\partial b_{j}} = \frac{1}{N^{k}} \sum_{i=1}^{N^{k}} \frac{\partial\left[\prod_{l=1}^{N^{k}} \frac{1}{b_{l}} \mathcal{K}\left(\frac{\mathbf{x}_{l}^{k} - \mathbf{A}_{il}^{k}}{b_{l}}\right)\right]}{\partial b_{j}} = \frac{1}{N^{k}} \sum_{i=1}^{N^{k}} \frac{\partial f\left(\mathbf{x}_{i}^{k}\right)}{\partial b_{j}} \tag{6}
$$

where $\frac{\partial f(\mathbf{x}_i^k)}{\partial b_j}$ denotes the contribution of entity $i$ . We derive the corresponding variance $\operatorname{Var}\left(\frac{\partial f(\mathbf{x}^k)}{\partial b_j}\right)$ as follows.

$$
\operatorname{Var}\left(\frac{\partial f\left(\mathbf{x}^{k}\right)}{\partial b_{j}}\right) = \operatorname{Var}\left(\frac{1}{N^{k}} \sum_{i=1}^{N^{k}} \frac{\partial f\left(\mathbf{x}_{i}^{k}\right)}{\partial b_{j}}\right) \tag{7}
$$

With the bandwidth $\mathbf{B}$ estimated by Algorithm 1, we can calculate the density $f(\mathbf{x}^k)$ of $\mathbf{x}^k$ via Eq. (3). The perturbation process maximizes the following attack loss $\mathcal{L}_A$ to produce unnoticeable perturbations, in terms of the density estimates $f(\mathbf{x}^{1})$ and $f(\mathbf{x}^{2})$ in the two KGs $G^{1}$ and $G^{2}$ .
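To make the estimator concrete, the element form of Eq. (4) can be sketched in a few lines of numpy; the Gaussian kernel and the toy data below are illustrative assumptions, not part of the original method.

```python
import numpy as np

def gaussian_kernel(u):
    """A symmetric kernel K with integral 1 and zero mean (Gaussian, assumed)."""
    return np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)

def product_kde(x, A, b):
    """Element-form product-kernel density estimate of Eq. (4).

    x: query feature vector, shape (d,)
    A: structure features of all entities, shape (n_entities, d)
    b: per-dimension bandwidths (the diagonal of B), shape (d,)
    """
    # (x_j - A_ij) / b_j for every entity i and dimension j
    u = (x[None, :] - A) / b[None, :]
    # product over dimensions of (1/b_j) K(u_ij), then mean over entities
    per_entity = np.prod(gaussian_kernel(u) / b[None, :], axis=1)
    return per_entity.mean()

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 3))   # 50 entities, 3 structure dimensions (toy data)
b = np.full(3, 0.5)            # diagonal bandwidth matrix B
density = product_kde(A[0], A, b)
```

The product kernel makes the per-dimension bandwidths $b_j$ explicit, which is what the greedy search below adjusts one dimension at a time.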
$$
\begin{aligned}
\max_{\hat{\mathbf{A}}^{k}} \mathcal{L}_{A} =\; & \sum_{(e_{i}^{1}, e_{j}^{2}) \in D} \left[-\log \sigma\left(\left(\hat{\mathbf{e}}_{i}^{1}\right)^{T} \cdot \hat{\mathbf{e}}_{j}^{2}\right) + f\left(\hat{\mathbf{A}}_{i}^{1}\right) + f\left(\hat{\mathbf{A}}_{j}^{2}\right)\right] \\
& + \sum_{(e_{i'}^{1}, e_{j'}^{2}) \notin D} \log \sigma\left(\left(\hat{\mathbf{e}}_{i'}^{1}\right)^{T} \cdot \hat{\mathbf{e}}_{j'}^{2}\right) \\
& \text{s.t. } |\hat{\mathbf{A}}_{i}^{k} - \mathbf{A}_{i}^{k}| \leq \Delta,\; 1 \leq k \leq 2
\end{aligned} \tag{8}
$$

where $\hat{\mathbf{A}}_i^1 = \mathbf{A}_i^1 +\delta_i^1$ (and $\hat{\mathbf{A}}_j^2 = \mathbf{A}_j^2 +\delta_j^2$ ) denote the perturbed versions of the clean structure features $\mathbf{A}_i^1$ (and $\mathbf{A}_j^2$ ) in $G^{1}$ (and $G^{2}$ ), obtained by adding a small amount of relation perturbations $\delta_i^1$ (and $\delta_j^2$ ), such that $\hat{\mathbf{e}}_i^1$ is far away from $\hat{\mathbf{e}}_j^2$ and thus the alignment accuracy is decreased.

# Algorithm 1 Kernel Density Estimation

Input: KG $G^{k} = (E^{k}, R^{k}, T^{k})$ , parameter $0 < s < 1$ , initial bandwidth $b_{0}$ , and parameter $c$ .

Output: Bandwidth matrix $\mathbf{B}$ .

1: Initialize all $b_{1}, \dots, b_{N^{k}}$ with $b_{0}$ ;
2: for each $j = 1$ to $N^k$ do
3: repeat
4: Estimate the derivative $\frac{\partial f(\mathbf{x}^k)}{\partial b_j}$ and the variance $\operatorname{Var}\left(\frac{\partial f(\mathbf{x}^k)}{\partial b_j}\right)$ ;
5: Compute $\tau_{j} = \sqrt{2\cdot\operatorname{Var}\left(\frac{\partial f(\mathbf{x}^{k})}{\partial b_{j}}\right)\cdot\log(cN^{k})}$ ;
6: if $\left|\frac{\partial f(\mathbf{x}^k)}{\partial b_j}\right| > \tau_j$ then update $b_j = b_j s$ ;
7: until $\left| \frac{\partial f(\mathbf{x}^k)}{\partial b_j} \right| \leq \tau_j$ ;
8: Return $\mathbf{B}$ .
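A minimal sketch of the greedy bandwidth search of Algorithm 1; the derivative of Eq. (5) is replaced by a finite-difference estimate, and the variance of Eq. (7) is approximated by the sample variance of the per-entity terms. Both substitutions, and the toy data, are assumptions for illustration.

```python
import numpy as np

def gaussian_kernel(u):
    return np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)

def kde(x, A, b):
    """Product-kernel density estimate f(x) of Eq. (4)."""
    u = (x[None, :] - A) / b[None, :]
    return np.prod(gaussian_kernel(u) / b[None, :], axis=1).mean()

def greedy_bandwidths(A, b0=5.0, s=0.5, c=2.0, min_b=1e-3):
    """Greedy per-dimension bandwidth search in the spirit of Algorithm 1."""
    n, d = A.shape
    b = np.full(d, b0)
    for j in range(d):
        while b[j] > min_b:
            eps = 1e-4 * b[j]
            bp = b.copy()
            bp[j] += eps
            # finite-difference derivative of f w.r.t. b_j at each data point
            grads = np.array([(kde(x, A, bp) - kde(x, A, b)) / eps for x in A])
            # threshold tau_j from step 5, with the variance of the estimator
            tau = np.sqrt(2.0 * grads.var() / n * np.log(c * n))
            if abs(grads.mean()) > tau:
                b[j] *= s      # a small change in b_j still moves f: keep shrinking
            else:
                break
    return b

rng = np.random.default_rng(1)
A = rng.normal(size=(30, 2))   # toy structure features
B = greedy_bandwidths(A)
```

The loop mirrors the paper's "as small as the data allow" criterion: each $b_j$ is shrunk geometrically until the derivative signal drops below $\tau_j$.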
In addition, we push $e_i^1$ and $e_j^2$ into dense regions to generate $\hat{e}_i^1$ and $\hat{e}_j^2$ , by maximizing $f(\hat{\mathbf{A}}_i^1)$ and $f(\hat{\mathbf{A}}_j^2)$ , such that $\hat{e}_i^1$ and $\hat{e}_j^2$ are indistinguishable from their neighbors in the perturbed KGs. This reduces the possibility of perturbation detection by humans or defender programs.

We leverage the Projected Gradient Descent (PGD) technique (Madry et al., 2018) to produce the perturbed adjacency matrices $\hat{\mathbf{A}}^1$ and $\hat{\mathbf{A}}^2$ of the two KGs $G^{1}$ and $G^{2}$ .

$$
\begin{aligned}
\left(\mathbf{A}_{i}^{1}\right)^{(t+1)} &= \Pi_{\triangle^{1}} \operatorname{sgn}\left[\operatorname{ReLU}\left(\nabla_{(\mathbf{A}_{i}^{1})^{t}} \mathcal{L}_{A}\right)\right] \\
\left(\mathbf{A}_{j}^{2}\right)^{(t+1)} &= \Pi_{\triangle^{2}} \operatorname{sgn}\left[\operatorname{ReLU}\left(\nabla_{(\mathbf{A}_{j}^{2})^{t}} \mathcal{L}_{A}\right)\right], \quad t = 1, \dots, T
\end{aligned} \tag{9}
$$

where $(\mathbf{A}_i^1)^{(t + 1)}$ and $(\mathbf{A}_j^2)^{(t + 1)}$ denote the perturbations of $\mathbf{A}_i^1$ and $\mathbf{A}_j^2$ derived at step $t$ . $\epsilon$ specifies the budget of allowed perturbed relations for each attacked entity. $\triangle^k = \{(\delta^k)^t \mid \mathbf{1}^T (\delta^k)^t\leq \epsilon ,(\delta^k)^t\in \{0,1\}^{N^k}\}$ , where $(\delta^k)^t$ records which entries of $(\mathbf{A}_i^k)^t$ differ from $\mathbf{A}_i^k$ , represents the constraint set of the projection operator $\Pi$ , i.e., it encodes whether a relation in $\mathbf{A}_i^k$ is modified or not. The composition of the ReLU and sign operators guarantees $(\mathbf{A}_i^1)^t\in \{0,1\}^{N^1}$ and $(\mathbf{A}_j^2)^t\in \{0,1\}^{N^2}$ , as it adds (or removes) a relation, or keeps it unchanged, according to whether the corresponding derivative in the gradient is positive (or negative).
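One update of Eq. (9) for a single attacked entity can be sketched as follows; realizing the projection $\Pi_{\triangle}$ by keeping only the $\epsilon$ largest-gradient flips is an assumption for illustration, since the paper does not spell the projection out.

```python
import numpy as np

def pgd_relation_step(A_row, grad_row, eps):
    """One sgn[ReLU(grad)] update of Eq. (9) with a budget projection (sketch).

    A_row: binary relation vector of an attacked entity, shape (N,)
    grad_row: gradient of the attack loss w.r.t. A_row
    eps: maximum number of flipped relations (perturbation budget)
    """
    # ReLU + sign keeps only directions with positive derivative (add a relation)
    step = np.sign(np.maximum(grad_row, 0.0))
    flipped = np.clip(A_row + step, 0, 1).astype(int)
    # projection onto the budget set: keep only the eps largest-gradient flips
    delta = flipped != A_row
    if delta.sum() > eps:
        keep = np.argsort(grad_row)[::-1][:eps]
        flipped = A_row.copy()
        flipped[keep] = np.clip(A_row[keep] + step[keep], 0, 1).astype(int)
    return flipped

A_row = np.array([1, 0, 0, 1, 0])                 # toy relation vector
grad_row = np.array([-0.2, 0.9, 0.4, 0.1, -0.5])  # toy gradient
perturbed = pgd_relation_step(A_row, grad_row, eps=1)
```

With a budget of 1, only the relation with the largest positive gradient (index 1) is flipped, keeping the row binary and within the constraint set.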
The outputs $(\mathbf{A}_i^1)^T$ and $(\mathbf{A}_j^2)^T$ at the final step $T$ are used as the perturbed adjacency matrices $\hat{\mathbf{A}}_i^1$ and $\hat{\mathbf{A}}_j^2$ .

# 4 Effective Adversarial Attacks

Unfortunately, the above PGD-based unnoticeable attack method needs to iteratively calculate the gradient $\nabla_{(\mathbf{A}_i^1)}\mathcal{L}_A$ , which mainly depends on $\frac{\partial\left(\log\sigma((\mathbf{e}_i^1)^T\cdot\mathbf{e}_j^2)\right)}{\partial\mathbf{A}_i^1}$ in the GCN-based entity alignment models.

Given an alignment signal $\phi\big((\mathbf{e}_i^1)^T, \mathbf{e}_j^2\big) = \frac{\partial\big(\log\sigma((\mathbf{e}_i^1)^T \cdot \mathbf{e}_j^2)\big)}{\partial(\mathbf{e}_i^1)^T}$ and a Jacobian matrix $\mathbf{J}_i = \frac{\partial(\mathbf{e}_i^1)^T}{\partial\mathbf{A}_i^1}$ , the gradient of $\log \sigma((\mathbf{e}_i^1)^T \cdot \mathbf{e}_j^2)$ is calculated as follows.

$$
\begin{aligned}
\frac{\partial \big(\log \sigma ((\mathbf{e}_{i}^{1})^{T} \cdot \mathbf{e}_{j}^{2}) \big)}{\partial \mathbf{A}_{i}^{1}} &= \frac{\partial \left(\log \sigma \left(\left(\mathbf{e}_{i}^{1}\right)^{T} \cdot \mathbf{e}_{j}^{2}\right)\right)}{\partial \left(\mathbf{e}_{i}^{1}\right)^{T}} \frac{\partial \left(\mathbf{e}_{i}^{1}\right)^{T}}{\partial \mathbf{A}_{i}^{1}} \\
&= \phi \left(\left(\mathbf{e}_{i}^{1}\right)^{T}, \mathbf{e}_{j}^{2}\right) \mathbf{J}_{i}
\end{aligned} \tag{10}
$$

The gradient is thus determined jointly by the signal and the Jacobian. If either the signal saturates or the Jacobian is insignificant, the gradient $\frac{\partial\left(\log\sigma((\mathbf{e}_i^1)^T\cdot\mathbf{e}_j^2)\right)}{\partial\mathbf{A}_i^1}$ vanishes and the attack fails.
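The factorization of Eq. (10) can be checked numerically. Here a one-layer linear encoder stands in for the GCN (an assumption for illustration): the product of the alignment signal $\phi$ and the Jacobian $\mathbf{J}_i$ matches a finite-difference gradient.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n, d = 6, 4
W = rng.normal(size=(n, d))   # stand-in for a frozen GCN encoder (assumption)
a = rng.normal(size=n)        # relaxed structure features A_i^1
e2 = rng.normal(size=d)       # embedding of the aligned entity e_j^2

e1 = a @ W                    # e_i^1 = encoder(A_i^1)
s = sigmoid(e1 @ e2)

# alignment signal phi and Jacobian J of Eq. (10)
phi = (1.0 - s) * e2          # d(log sigma) / d(e_i^1)
J = W.T                       # d(e_i^1) / d(A_i^1) for the linear encoder
grad_chain = phi @ J          # phi * J_i

# numerical gradient of log sigma(e1 . e2) w.r.t. a, for comparison
def loss(a_vec):
    return np.log(sigmoid((a_vec @ W) @ e2))

h = 1e-6
grad_num = np.array([
    (loss(a + h * np.eye(n)[k]) - loss(a - h * np.eye(n)[k])) / (2 * h)
    for k in range(n)
])
```

The chain-rule product and the central-difference gradient agree to numerical precision, confirming that both factors must stay healthy for the attack gradient to survive.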
Dynamical isometry is the property that all singular values of a neural network's input-output Jacobian matrix concentrate near 1 (Pennington et al., 2017). Keeping the mean squared singular value of a network's input-output Jacobian at $O(1)$ is essential for avoiding the exponential vanishing or explosion of gradients. We leverage the dynamical isometry theory to improve the effectiveness of the PGD adversarial attacks. Concretely, a neural network satisfies dynamical isometry if all singular values $\lambda_{ir}$ of the Jacobian $\mathbf{J}_i$ are close to 1, i.e., $1 - \lambda_{ir} \leq \xi$ for $\forall r, r \in \{1, \dots, \min\{N^1, N^2\}\}$ and a small positive number $\xi \approx 0$ . In our problem, when the Jacobian matrix $\mathbf{J}_i$ satisfies dynamical isometry, the signal $\phi((\mathbf{e}_i^1)^T, \mathbf{e}_j^2)$ backpropagates isometrically through the neural network, preserving the norms of, and the angles between, vectors.

Intuitively, if we select a good attack signal amplification factor $\alpha$ and amplify $\mathbf{e}_i^1$ and $\mathbf{e}_j^2$ as follows, we can improve the diffusion of attack signals. In addition, a good $\alpha$ should keep the relative order of the network's output logits invariant, so that the decision boundary of entity alignment remains unchanged.

$$
\tilde{\mathbf{e}}_{i}^{1} = \alpha \mathbf{e}_{i}^{1}, \quad \tilde{\mathbf{e}}_{j}^{2} = \alpha \mathbf{e}_{j}^{2} \tag{11}
$$

We rewrite the gradients with $\alpha$ as follows.
$$
\begin{aligned}
\frac{\partial \left(\log \sigma \left(\left(\tilde{\mathbf{e}}_{i}^{1}\right)^{T} \cdot \tilde{\mathbf{e}}_{j}^{2}\right)\right)}{\partial \mathbf{A}_{i}^{1}} &= \frac{\partial \left(\log \sigma \left(\left(\tilde{\mathbf{e}}_{i}^{1}\right)^{T} \cdot \tilde{\mathbf{e}}_{j}^{2}\right)\right)}{\partial \left(\tilde{\mathbf{e}}_{i}^{1}\right)^{T}} \frac{\partial \left(\tilde{\mathbf{e}}_{i}^{1}\right)^{T}}{\partial \left(\mathbf{e}_{i}^{1}\right)^{T}} \frac{\partial \left(\mathbf{e}_{i}^{1}\right)^{T}}{\partial \mathbf{A}_{i}^{1}} \\
&= \phi \big((\tilde{\mathbf{e}}_{i}^{1})^{T}, \tilde{\mathbf{e}}_{j}^{2} \big)\, \alpha \mathbf{J}_{i}
\end{aligned} \tag{12}
$$

Notice that

$$
\phi\big((\tilde{\mathbf{e}}_i^1)^T, \tilde{\mathbf{e}}_j^2\big) = \frac{\sigma\big((\tilde{\mathbf{e}}_i^1)^T \cdot \tilde{\mathbf{e}}_j^2\big)\big(1 - \sigma\big((\tilde{\mathbf{e}}_i^1)^T \cdot \tilde{\mathbf{e}}_j^2\big)\big)\, \tilde{\mathbf{e}}_j^2}{\sigma\big((\tilde{\mathbf{e}}_i^1)^T \cdot \tilde{\mathbf{e}}_j^2\big)} = \big(1 - \sigma\big((\tilde{\mathbf{e}}_i^1)^T \cdot \tilde{\mathbf{e}}_j^2\big)\big)\, \tilde{\mathbf{e}}_j^2
$$

When $\alpha$ approaches $\infty$ , the alignment signal $\phi \bigl ((\tilde{\mathbf{e}}_i^1)^T,\tilde{\mathbf{e}}_j^2\bigr)$ approaches zero, and the vanishing gradient problem is encountered in the adversarial attacks. Conversely, if $\alpha = 0$ , all singular values of $\alpha \mathbf{J}_i$ are equal to zero and $\frac{\partial\left(\log\sigma((\tilde{\mathbf{e}}_i^1)^T\cdot\tilde{\mathbf{e}}_j^2)\right)}{\partial\mathbf{A}_i^1}$ is equal to zero, which leads to the vanishing gradient problem as well.
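A small numeric sketch of this observation: as $\alpha$ grows the sigmoid saturates and $\|\phi\|_2$ collapses, and as $\alpha \to 0$ the signal collapses as well. The toy vectors are assumptions for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def signal_norm(e1, e2, alpha):
    """||phi((alpha e1)^T, alpha e2)||_2 = (1 - sigma(alpha^2 e1.e2)) * ||alpha e2||_2."""
    s = sigmoid((alpha * e1) @ (alpha * e2))
    return (1.0 - s) * np.linalg.norm(alpha * e2)

e1 = np.array([0.6, 0.8])   # toy embeddings with e1 . e2 = 0.7
e2 = np.array([0.5, 0.5])

small = signal_norm(e1, e2, 1e-3)   # alpha -> 0: the signal collapses
mid = signal_norm(e1, e2, 1.0)
large = signal_norm(e1, e2, 100.0)  # alpha -> inf: the sigmoid saturates
```

The signal norm is maximized somewhere between the two extremes, which is exactly why a well-chosen finite $\alpha$ is needed.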
Therefore, a desired $\alpha$ for avoiding the exponential vanishing of gradients should lie strictly between 0 and $\infty$ , so as to keep the signal $\phi \left((\tilde{\mathbf{e}}_i^1)^T,\tilde{\mathbf{e}}_j^2\right)$ large enough, i.e., $\| \phi \bigl ((\tilde{\mathbf{e}}_i^1)^T,\tilde{\mathbf{e}}_j^2\bigr)\| _2 > \eta$ for a positive threshold $\eta$ , and to make all singular values of $\alpha \mathbf{J}_i$ close to 1, such that the signal $\phi \bigl ((\tilde{\mathbf{e}}_i^1)^T,\tilde{\mathbf{e}}_j^2\bigr)$ can be well backpropagated from the output layer to the input layer.

In order to make the mean of the singular values of $\alpha \mathbf{J}_i$ close to 1, the first option for $\alpha$ is the inverse of the mean of the singular values of $\mathbf{J}_i$ .

$$
\alpha = \frac{\left| D \right| N}{\sum_{i=1}^{|D|} \sum_{r=1}^{N} \lambda_{ir}} \tag{13}
$$

where $\lambda_{ir}$ is the $r^{th}$ singular value of $\mathbf{J}_i$ , $|D|$ is the size of the set $D$ of pre-aligned entity pairs, and $N = \min \{N^1, N^2\}$ .

To ensure $\| \phi ((\tilde{\mathbf{e}}_i^1)^T,\tilde{\mathbf{e}}_j^2)\| _2 > \eta$ , the second option for $\alpha$ should satisfy $1 - \sigma ((\tilde{\mathbf{e}}_i^1)^T\cdot \tilde{\mathbf{e}}_j^2) > \eta /\| \tilde{\mathbf{e}}_j^2\| _2$ . A feasible $\alpha$ can be obtained through the following theorem.

Theorem 1.
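Eq. (13) can be sketched with numpy's SVD; the random Jacobians below are stand-ins for the $\mathbf{J}_i$ of real pre-aligned pairs.

```python
import numpy as np

def amplification_factor(jacobians):
    """First option for alpha (Eq. 13): inverse of the mean singular value
    over the Jacobians of all pre-aligned entity pairs."""
    sv = np.concatenate([np.linalg.svd(J, compute_uv=False) for J in jacobians])
    return 1.0 / sv.mean()

rng = np.random.default_rng(0)
jacobians = [rng.normal(size=(4, 6)) for _ in range(5)]  # one J_i per pair in D
alpha1 = amplification_factor(jacobians)

# after scaling by alpha1, the mean singular value of alpha1 * J_i is exactly 1
scaled = np.concatenate(
    [np.linalg.svd(alpha1 * J, compute_uv=False) for J in jacobians])
```

Since singular values scale linearly with a positive scalar, this choice pins the mean singular value of $\alpha \mathbf{J}_i$ at 1, as the text requires.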
Let entity embedding vectors $\tilde{\mathbf{e}}_k^2$ and $\tilde{\mathbf{e}}_l^2$ be the most similar and the least similar to $(\tilde{\mathbf{e}}_i^1)^T$ $(1\leq k,l\leq N^{2})$ , i.e., $\tilde{\mathbf{e}}_k^2 = \mathrm{argmax}_{\tilde{\mathbf{e}}_k^2}(\tilde{\mathbf{e}}_i^1)^T\cdot \tilde{\mathbf{e}}_k^2$ and $\tilde{\mathbf{e}}_l^2 = \mathrm{argmin}_{\tilde{\mathbf{e}}_l^2}(\tilde{\mathbf{e}}_i^1)^T\cdot \tilde{\mathbf{e}}_l^2$ , and let $c = (\tilde{\mathbf{e}}_i^1)^T\cdot \tilde{\mathbf{e}}_k^2$ . Also, suppose that $d$ is the minimal norm of the entity embedding vectors in $G^{2}$ , i.e., $d = \min_{\tilde{\mathbf{e}}_m^2}\| \tilde{\mathbf{e}}_m^2\| _2$ for $\forall e_m^2\in E^2$ . For a given $0 < \eta < d / 2$ , if $\alpha < \sqrt{\frac{1}{c}\log\frac{d-\eta}{\eta}}$ , then $1 - \sigma\big((\tilde{\mathbf{e}}_i^1)^T \cdot \tilde{\mathbf{e}}_j^2\big) > \eta / \|\tilde{\mathbf{e}}_j^2\|_2$ for $\forall e_j^2 \in E^2$ .

Proof. Please see Appendix A for the proof.

# Algorithm 2 Effective Adversarial Attacks

Input: KG $G^{k} = (E^{k}, R^{k}, T^{k})$ , set of pre-aligned entity pairs $D = \{(e_{i}^{1}, e_{j}^{2}) \mid e_{i}^{1} \leftrightarrow e_{j}^{2}\}$ , trained entity embedding model $h$ , noise budget $\epsilon$ , and signal threshold $\eta$ .

Output: Perturbed adjacency matrices $\{\hat{\mathbf{A}}_i^1, \hat{\mathbf{A}}_j^2 \mid (e_i^1, e_j^2) \in D\}$ .

1: for each pair $(e_i^1, e_j^2)$ in $D$ do
2: Set $\hat{\mathbf{e}}_i^1 = \mathbf{e}_i^1 = h(e_i^1)$ , $\hat{\mathbf{e}}_j^2 = \mathbf{e}_j^2 = h(e_j^2)$ ;
3: Compute $\alpha_{1} = \frac{|D|N}{\sum_{i = 1}^{|D|}\sum_{r = 1}^{N}\lambda_{ir}}$ by Eq. (13);
4: for $t = 1,\dots ,T$ do
5: Initialize $\alpha_{2} = 1.0$ ;
6: if $1 - \sigma((\tilde{\mathbf{e}}_i^1)^T \cdot \tilde{\mathbf{e}}_j^2) \leq \eta / \| \tilde{\mathbf{e}}_j^2\|_2$ then
7: Update $\alpha_{2} = \sqrt{\frac{1}{c}\log\frac{d - \eta}{\eta}}$ by Theorem 1;
8: Amplify $\tilde{\mathbf{e}}_i^1 = \alpha_1\alpha_2\mathbf{e}_i^1$ , $\tilde{\mathbf{e}}_j^2 = \alpha_1\alpha_2\mathbf{e}_j^2$ ;
9: Calculate $\frac{\partial\left(\log\sigma((\tilde{\mathbf{e}}_i^1)^T\cdot\tilde{\mathbf{e}}_j^2)\right)}{\partial\hat{\mathbf{A}}_i^1}$ and $\frac{\partial\left(\log\sigma((\tilde{\mathbf{e}}_i^1)^T\cdot\tilde{\mathbf{e}}_j^2)\right)}{\partial\hat{\mathbf{A}}_j^2}$ ;
10: Use PGD to update $\hat{\mathbf{A}}_i^1$ , $\hat{\mathbf{A}}_j^2$ by Eq. (9);
11: Return $\{\hat{\mathbf{A}}_i^1, \hat{\mathbf{A}}_j^2 \mid (e_i^1, e_j^2) \in D\}$ .

Algorithm 2 combines the above two kinds of $\alpha$ to produce effective adversarial attacks with attack signal amplification. The perturbed entity embeddings $\hat{\mathbf{e}}_i^1$ and $\hat{\mathbf{e}}_j^2$ are initialized with the clean ones $\mathbf{e}_i^1$ and $\mathbf{e}_j^2$ in step 2. The first amplification factor $\alpha_{1}$ is calculated in step 3. The second factor $\alpha_{2}$ is computed in steps 5-7. $\alpha_{1}$ and $\alpha_{2}$ are combined to enhance the attack signal propagation of the neural networks in steps 8-9.
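Steps 5-7 of Algorithm 2 (the second factor $\alpha_2$ from Theorem 1) can be sketched as follows; the toy embeddings are assumptions, and the final check only verifies the theorem's conclusion numerically for this one instance.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def second_factor(e1, E2, eta):
    """alpha_2 = sqrt((1/c) log((d - eta) / eta)) from Theorem 1.

    c: largest inner product between e1 and candidates in G^2;
    d: minimal candidate norm; requires 0 < eta < d / 2.
    """
    c = max(e1 @ e2 for e2 in E2)
    d = min(np.linalg.norm(e2) for e2 in E2)
    assert 0.0 < eta < d / 2.0
    return np.sqrt(np.log((d - eta) / eta) / c)

e1 = np.array([0.6, 0.8])                              # toy embedding in G^1
E2 = [np.array([0.5, 0.5]), np.array([0.9, 0.1])]      # toy candidates in G^2
eta = 0.1
alpha2 = second_factor(e1, E2, eta)

# the signal margin 1 - sigma for the best-matching candidate, after scaling
e2_best = max(E2, key=lambda e2: e1 @ e2)
margin = 1.0 - sigmoid((alpha2 * e1) @ (alpha2 * e2_best))
```

At this $\alpha_2$ the margin $1 - \sigma$ stays above $\eta / \|\tilde{\mathbf{e}}_j^2\|_2$ for the hardest (most similar) candidate, so the alignment signal does not saturate.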
The PGD attack method with attack signal amplification is utilized to perturb the KGs. The algorithm repeats the above iterative procedure until convergence. + +# 5 Experimental Evaluation + +Table 1 presents the statistics of the DBP15K datasets (Sun et al., 2017). They consist of three different cross-lingual datasets which are $DBP15K_{ZH-EN}$ , $DBP15K_{JA-EN}$ , and $DBP15K_{FR-EN}$ . Each cross-lingual dataset contains two monolingual KGs in different languages and 15,000 pre-aligned entity pairs between two KGs. In the experiment, $30\%$ pre-aligned entity + +
| Dataset | KG | #Entities | #Relations | #Triples | #Alignments |
| --- | --- | --- | --- | --- | --- |
| ZH-EN | ZH | 66,469 | 2,830 | 153,929 | 15,000 |
| | EN | 98,125 | 2,317 | 237,674 | |
| JA-EN | JA | 65,744 | 2,043 | 164,373 | 15,000 |
| | EN | 95,680 | 2,096 | 233,319 | |
| FR-EN | FR | 66,858 | 1,379 | 192,191 | 15,000 |
| | EN | 105,889 | 2,209 | 278,590 | |
Table 1: Statistics of Datasets

pairs are used as training data and the remaining as test data.

We compare the EAA model with seven state-of-the-art attack models. Sememe-based Word Substitution (SWS) incorporates sememe-based word substitution and swarm optimization-based search to conduct word-level attacks (Zang et al., 2020). Inflection Word Swap (IWS) perturbs the inflectional morphology of words to craft plausible and semantically similar adversarial examples (Tan et al., 2020; Morris et al., 2020). We utilize these two word-level attack models to replace the associated entities of a relation based on semantics. GF-Attack attacks graph embedding methods by devising a new loss and approximating the spectrum (Chang et al., 2020). LowBlow is a general low-rank adversarial attack model that can affect the performance of various graph learning tasks (Entezari et al., 2020). We use these two graph attack models to directly add/remove relations in terms of graph topology. CRIAGE adds/removes facts to/from the KG to degrade the performance of link prediction (Pezeshkpour et al., 2019). DPA contains a collection of data poisoning attack strategies against knowledge graph embeddings (Zhang et al., 2019). RL-RR uses a reinforcement learning policy to produce deceptively perturbed KGs while keeping the downstream quality of the original KG (Raman et al., 2021). To the best of our knowledge, this work is the first to study adversarial attacks on cross-lingual entity alignment.

We evaluate four versions of EAA to show the strengths of its different components. EAA-P uses the basic PGD (Madry et al., 2018) to produce adversarial attacks. EAA-D only utilizes the KDE and density maximization to generate effective and unnoticeable attacks. EAA-A employs only our attack signal amplification strategy to improve the performance of the basic PGD attack. EAA operates with the full support of both the KDE and signal amplification components.
+ +We validate the effectiveness of the above attack models with three representative cross-lingual entity alignment algorithms. AttrGNN integrates + +
| Attacks | AttrGNN Hits@1 | AttrGNN MRR | RNM Hits@1 | RNM MRR | REA Hits@1 | REA MRR |
| --- | --- | --- | --- | --- | --- | --- |
| Clean | 0.796 | 0.845 | 0.841 | 0.875 | 0.792 | 0.818 |
| SWS | 0.726 | 0.839 | 0.745 | 0.862 | 0.764 | 0.848 |
| IWS | 0.708 | 0.761 | 0.729 | 0.823 | 0.759 | 0.804 |
| GF-Attack | 0.709 | 0.815 | 0.724 | 0.833 | 0.733 | 0.844 |
| LowBlow | 0.677 | 0.773 | 0.678 | 0.776 | 0.697 | 0.797 |
| CRIAGE | 0.646 | 0.704 | 0.655 | 0.719 | 0.662 | 0.715 |
| DPA | 0.603 | 0.712 | 0.636 | 0.751 | 0.635 | 0.733 |
| RL-RR | 0.562 | 0.684 | 0.628 | 0.713 | 0.637 | 0.722 |
| EAA | 0.497 | 0.538 | 0.525 | 0.636 | 0.538 | 0.641 |
+ +Table 2: Results on $DBP15K_{ZH-EN}$ with $5\%$ perturbed relations + +
| Attacks | AttrGNN Hits@1 | AttrGNN MRR | RNM Hits@1 | RNM MRR | REA Hits@1 | REA MRR |
| --- | --- | --- | --- | --- | --- | --- |
| Clean | 0.783 | 0.834 | 0.872 | 0.899 | 0.799 | 0.823 |
| SWS | 0.724 | 0.839 | 0.774 | 0.854 | 0.788 | 0.843 |
| IWS | 0.718 | 0.787 | 0.755 | 0.804 | 0.745 | 0.796 |
| GF-Attack | 0.715 | 0.824 | 0.747 | 0.826 | 0.767 | 0.845 |
| LowBlow | 0.737 | 0.783 | 0.728 | 0.802 | 0.723 | 0.821 |
| CRIAGE | 0.705 | 0.756 | 0.699 | 0.769 | 0.707 | 0.769 |
| DPA | 0.643 | 0.725 | 0.723 | 0.753 | 0.669 | 0.766 |
| RL-RR | 0.689 | 0.716 | 0.691 | 0.765 | 0.706 | 0.768 |
| EAA | 0.579 | 0.612 | 0.618 | 0.642 | 0.621 | 0.652 |
Table 3: Results on $DBP15K_{JA-EN}$ with $5\%$ perturbed relations

both attribute and relation triples for better performance of cross-lingual entity alignment (Liu et al., 2020). RNM is a novel relation-aware neighborhood matching model for entity alignment (Zhu et al., 2021). To the best of our knowledge, REA is the only robust cross-lingual entity alignment solution against adversarial attacks; it detects noise in the perturbed inter-KG entity links (Pei et al., 2020).

We use two popular metrics in entity alignment to verify attack effectiveness: Hits@k (i.e., the ratio of correctly aligned entities ranked in the top $k$ candidates) and MRR (i.e., mean reciprocal rank). A smaller Hits@k or MRR indicates worse entity alignment but a better attack. $k$ is fixed to 1 in all tests.

Attack performance on various datasets with different entity alignment algorithms. Tables 2-4 exhibit the Hits@1 and MRR scores of the three GCN-based entity alignment algorithms on the test data under nine attack models over the three groups of cross-lingual datasets. Clean denotes experiments run on the original KGs without any perturbations. For all other attack models, the ratio of perturbed relations is fixed to $5\%$ in these experiments. Among the nine attack methods, EAA achieves the lowest Hits@1 and MRR scores on the perturbed KGs in most experiments, showing its effectiveness for adversarial attacks. Compared to the entity alignment results under the other attack models, EAA, on average, achieves $17.7\%$ , $12.8\%$ , and $12.8\%$ improvement in Hits@1 and $17.6\%$ , $16.9\%$ , and $13.7\%$ improvement in MRR on $DBP15K_{ZH-EN}$ , $DBP15K_{JA-EN}$ , and $DBP15K_{FR-EN}$ , respectively. In addition, the promising performance of EAA with all three entity alignment models implies that EAA has great potential as a general attack solution for other entity alignment methods, which is desirable in practice.

| Attacks | AttrGNN Hits@1 | AttrGNN MRR | RNM Hits@1 | RNM MRR | REA Hits@1 | REA MRR |
| --- | --- | --- | --- | --- | --- | --- |
| Clean | 0.919 | 0.91 | 0.938 | 0.954 | 0.812 | 0.855 |
| SWS | 0.782 | 0.873 | 0.814 | 0.886 | 0.807 | 0.846 |
| IWS | 0.755 | 0.801 | 0.803 | 0.836 | 0.802 | 0.806 |
| GF-Attack | 0.715 | 0.828 | 0.779 | 0.848 | 0.792 | 0.848 |
| LowBlow | 0.792 | 0.841 | 0.799 | 0.826 | 0.793 | 0.852 |
| CRIAGE | 0.733 | 0.864 | 0.744 | 0.873 | 0.781 | 0.831 |
| DPA | 0.704 | 0.757 | 0.796 | 0.817 | 0.695 | 0.791 |
| RL-RR | 0.754 | 0.792 | 0.745 | 0.823 | 0.754 | 0.784 |
| EAA | 0.643 | 0.697 | 0.644 | 0.709 | 0.681 | 0.696 |

Table 4: Results on $DBP15K_{FR-EN}$ with $5\%$ perturbed relations

Ablation study. Figures 2 and 3 present the Hits@1 and MRR scores achieved by the three entity alignment methods under adversarial attacks with the four variants of our EAA attack model. We observe that the complete EAA achieves the lowest Hits@1 ($< 0.681$) and MRR ($< 0.709$) scores, which are clearly better than those of the other versions. Notice that EAA-A achieves better attack performance than EAA-P in most tests. A reasonable explanation is that our attack signal amplification technique alleviates the vanishing gradient issue, which effectively helps maintain the utility of adversarial attacks on GCN-based entity alignment models. In addition, EAA-D also performs well in most experiments compared with EAA-P. A plausible explanation is that it is difficult to correctly match entities across two KGs when they lie in dense regions with many similar entities. These results illustrate that both the KDE and signal amplification methods are important for producing effective and unnoticeable attacks on entity alignment.

Attack performance with varying perturbed relations. Figure 4 presents the performance of entity alignment under the nine attack models by varying the ratio of perturbed edges from $5\%$ to $30\%$ .
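The Hits@k and MRR metrics used above can be sketched as follows, assuming a similarity matrix whose diagonal marks the ground-truth counterparts (an illustrative convention, not the paper's data format).

```python
import numpy as np

def hits_at_k_and_mrr(sim, k=1):
    """Hits@k and MRR for entity alignment (sketch).

    sim[i, j]: similarity between entity i in G^1 and candidate j in G^2;
    the ground-truth counterpart of entity i is candidate i (diagonal).
    """
    n = sim.shape[0]
    # rank of the true counterpart among all candidates (1 = best)
    order = np.argsort(-sim, axis=1)
    ranks = np.array([int(np.where(order[i] == i)[0][0]) + 1 for i in range(n)])
    hits = float((ranks <= k).mean())
    mrr = float((1.0 / ranks).mean())
    return hits, mrr

sim = np.array([
    [0.9, 0.2, 0.1],   # true match ranked 1st
    [0.6, 0.3, 0.2],   # true match ranked 2nd
    [0.1, 0.2, 0.8],   # true match ranked 1st
])
hits1, mrr = hits_at_k_and_mrr(sim, k=1)
```

For this toy matrix, two of three entities are ranked first (Hits@1 = 2/3) and the reciprocal ranks average to 5/6, so lower values under attack indicate more successful perturbations.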
It is obvious that the attack performance of each attacker improves as the number of perturbed edges increases. This phenomenon indicates that current GCN-based entity alignment methods are very sensitive to adversarial attacks. EAA achieves the lowest Hits@1 values ($< 0.538$), which are still better than those of the other eight methods in most tests. In particular, when the perturbation ratio is larger than $10\%$ , the Hits@1 values drop quickly.

![](images/25b21f9000a40f07d938f9f7c34f11b8a60a1757b71b5046fdafb2b845a496bc.jpg)
(a) AttrGNN
![](images/18274cd104d07133030cdf62818bb2c95539a3f87a47b51ce9f77907adf86864.jpg)
(b) RNM
![](images/1f2e32532d79748c3d861c94b1ed9fc42cb5d0cef689865c9fe183b515f87a35.jpg)
(c) REA
Figure 2: Hits@1 of EAA variants

![](images/d93c0f0ebaec6580d780c80054fa048d429a96162f0132b4f0ccf824122908b1.jpg)
(a) AttrGNN
![](images/97f5b3c75f1b34e5ebb25977955a2de41f3235972c68dfcb1d96bced09dcc712.jpg)
(b) RNM
![](images/4e2fab3097fab3f7a9d47fc18ac7793e929e0023d8359a5db3e6457372c29f90.jpg)
(c) REA
Figure 3: MRR of EAA variants

![](images/18b554ba1ee1b9e9b89b28f316fce16b1f07af19b8823a4f08a68e3fe119974f.jpg)
(a) AttrGNN on ZH-EN
![](images/80a433848dfc80e5ff462481452a82b8d1dcde21463f7eba6584164b1fa22dcd.jpg)
(b) RNM on ZH-EN
![](images/282d5bf51b5624210176d1b5403c43a9a2bfbfa94216a74080db8fc864b0f071.jpg)
(c) REA on ZH-EN
Figure 4: Hits@1 with varying perturbed relations

![](images/388a2667998883aa54061f4e1e152ee6c28bf6598116888fbfd6b2269260def8.jpg)
(a) Perturbation budget $\epsilon$
![](images/45fe61316cf75cb61191376232389a5cb3664283675be58baf15eea8aaeebd9c.jpg)
(b) Signal threshold $\eta$
Figure 5: Results with varying parameters

Impact of perturbation budget $\epsilon$ . Figure 5 (a) measures the effect of $\epsilon$ in the EAA model on entity alignment by varying $\epsilon$ from 1 to 6.
It is observed that when $\epsilon$ increases, both the Hits@1 and MRR scores of the EAA model decrease substantially. This demonstrates that it is difficult to train a robust entity alignment model under large $\epsilon$ constraints. However, a large $\epsilon$ can be easily detected by humans or by defender programs. Notice that the average number of associated relations of each entity in the three datasets is between 2.3 and 2.9. Thus, we suggest generating both effective and unnoticeable attacks for the entity alignment task with $\epsilon$ between 2 and 3, such that $\epsilon$ is smaller than the average number of associated relations.

Impact of signal threshold $\eta$ . Figure 5 (b) shows the impact of $\eta$ in our EAA model over the three groups of datasets. The performance curves initially drop when $\eta$ increases. Intuitively, this helps alleviate the vanishing gradient issue in the PGD adversarial attacks. Later on, the performance curves remain relatively stable or even increase as $\eta$ continues to grow. A reasonable explanation is that an overly large $\eta$ makes the upper bound of $\alpha$ too small, which results in a poorly conditioned Jacobian and thus leads to the vanishing gradient issue again. Thus, it is important to determine the optimal $\eta$ for the EAA model.

# 6 Related Work

Knowledge graph alignment. Knowledge graph alignment techniques have attracted active research in the last decade (Xu et al., 2020b; Sun et al., 2020a; Berrendorf et al., 2021b,a) and can be broadly classified into two categories: (1) Translation-based techniques, which represent entities by computing the plausibility of relational facts, measured by a specific fact plausibility scoring function, including MtransE (Chen et al., 2017a), IPTransE (Zhu et al., 2017), JAPE (Sun et al., 2017), BootEA (Sun et al., 2018), RSNs (Guo et al., 2019), NAEA (Zhu et al., 2019), OTEA (Pei et al., 2019b), TransEdge (Sun et al., 2019), and HyperKA (Sun et al., 2020c).
The idea behind this kind of method originates from cross-lingual word embedding techniques. Thus, these methods are able to capture fine-grained fact semantics. However, they fail to preserve the global topological structure of knowledge graphs; (2) GCN-based methods, which utilize GCN models to capture the global structure information of knowledge graphs by recursively aggregating the features of the neighbors of each entity, such as GCN-Align (Wang et al., 2018), SEA (Pei et al., 2019a), MuGNN (Cao et al., 2019), HopGCN (Xu et al., 2019c), NAEA (Zhu et al., 2019), AVR-GCN (Ye et al., 2019), RDGCN (Wu et al., 2019a), HGCN-JE (Wu et al., 2019b), KECG (Li et al., 2019), MRAEA (Mao et al., 2020a), AliNet (Sun et al., 2020d), CG-MuAlign (Zhu et al., 2020), NMN (Wu et al., 2020b), DAT (Zeng et al., 2020), SSP (Nie et al., 2020), RREA (Mao et al., 2020b), DINGAL (Yan et al., 2021), RNM (Zhu et al., 2021), JEANS (Chen et al., 2021), Dual-AMN (Mao et al., 2021), and KE-GCN (Yu et al., 2021). These methods can fully utilize the topological and neighborhood information to learn better representations of entities. However, it is difficult for them to model fine-grained fact semantics.

Adversarial attacks on text and graph data. Recent studies have shown that NLP and graph models, especially DNN models, are highly sensitive to adversarial attacks, i.e., carefully designed small deliberate perturbations of the input intended to cause analysis failures (Song et al., 2018; Chen et al., 2020; Xu et al., 2019a; Wang et al., 2019; Zhang et al., 2020a; Huq and Pervin, 2020).
In the NLP area, the majority of research efforts focus on attacking the textual inputs of different models, including dialogue generation (Niu and Bansal, 2018), machine translation (Belinkov and Bisk, 2018; Tan et al., 2020; Niu et al., 2020), model-agnostic attacks (Wallace et al., 2019; Zang et al., 2020; Morris et al., 2020), natural language inference (Abdou et al., 2020; Chan et al., 2020; Li et al., 2020b), reading comprehension (Jia and Liang, 2017; Blohm et al., 2018; Tan et al., 2020), and sentiment classification (Wu et al., 2020c; Kurita et al., 2020; Wang et al., 2020).

Graph data analysis has attracted active research in the last decade (Cheng et al., 2009; Zhou et al., 2009, 2010; Cheng et al., 2011; Zhou and Liu, 2012; Cheng et al., 2012; Lee et al., 2013; Su et al., 2013; Zhou et al., 2013; Zhou and Liu, 2013; Palanisamy et al., 2014; Zhou et al., 2014; Zhou and Liu, 2014; Su et al., 2015; Zhou et al., 2015b; Bao et al., 2015; Zhou et al., 2015d; Zhou and Liu, 2015; Zhou et al., 2015a,c; Lee et al., 2015; Zhou et al., 2016; Zhou, 2017; Palanisamy et al., 2018; Zhou et al., 2018b,a; Ren et al., 2019; Zhou et al., 2019c,b,d; Zhou and Liu, 2019; Wu et al., 2020a, 2021a; Zhou et al., 2020b; Zhang et al., 2020b; Zhou et al., 2020c,a; Goswami et al., 2020; Zhou et al., 2021b; Zhao et al., 2021; Ren et al., 2021; Jin et al., 2021; Wu et al., 2021b; Zhou et al., 2021a; Zhang et al., 2021; Liu et al., 2021). Various adversarial attack models have been developed to show the vulnerability of graph learning models in node classification (Dai et al., 2018; Zügner et al., 2018; Wang and Gong, 2019; Xu et al., 2019b; Zügner and Günnemann, 2019; Takahashi, 2019; Entezari et al., 2020; Sun et al., 2020b; Ma et al., 2020; Zügner et al., 2020; Xi et al., 2021; He et al., 2021), community detection (Chen et al., 2017b; Waniek et al., 2018; Chen et al., 2019; Li et al., 2020a), network embedding (Chen et al., 2018; Bojchevski and Günnemann, 2019; Chang et al., 2020), graph classification (Dai et al., 2018; Xi et al., 2021), link prediction (Zhou et al., 2019a), similarity search (Dey and Medya, 2020), malware detection (Hou et al., 2019), and graph matching (Zhang et al., 2020b).

Only recently have researchers started to develop adversarial attack techniques that maximally degrade the performance of knowledge graph learning, in knowledge graph embedding (Minervini et al., 2017; Pujara et al., 2017; Pezeshkpour et al., 2019; Zhang et al., 2019; Banerjee et al., 2021) and knowledge graph-based dialogue generation (Xu et al., 2020a). REA detects noise in the perturbed inter-graph links for robust cross-lingual entity alignment (Pei et al., 2020). RL-RR aims to produce deceptively perturbed knowledge graphs, which maintain the downstream performance of the original knowledge graph while significantly deviating from its semantics and structure (Raman et al., 2021).

# 7 Conclusions

We have studied the problem of adversarial attacks against cross-lingual entity alignment. First, we proposed to utilize the kernel density estimation technique to estimate and maximize the densities of attacked entities and to generate effective and unnoticeable perturbations by pushing attacked entities into dense regions of the two KGs. Second, we analyzed how gradient vanishing causes failures of gradient-based adversarial attacks.
We designed an attack signal amplification method to ensure informative signal propagation. The EAA model achieves superior performance compared with representative attack models.

# 8 Ethical Considerations

In this work, all three knowledge graph datasets were openly released by previous work for research purposes (Sun et al., 2017). All three datasets are widely used for training and evaluating cross-lingual entity alignment, for example, in (Liu et al., 2020; Zhu et al., 2021; Pei et al., 2020; Yan et al., 2021; Mao et al., 2021). All three datasets are openly accessible resources, and no privacy-related data (such as gender, nickname, or birthday) are included. All three knowledge graph datasets were originally collected and filtered from Wikipedia (under the license CC BY-SA 3.0). Reusing them in research is permitted, but commercial use may require additional permission from the original authors/copyright owners (Wik; Sun et al., 2017). In summary, as a research work, this study raises no concerns regarding the datasets or other aspects; however, anyone who wants to use the same or similar data commercially must further check the licenses.

# References

Answers.com. https://www.answers.com/.

Wikipedia. http://www.wikipedia.org/.

Mostafa Abdou, Vinit Ravishankar, Maria Barrett, Yonatan Belinkov, Desmond Elliott, and Anders Søgaard. 2020. The sensitivity of language models and humans to winograd schema perturbations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 7590-7604.

Anish Athalye, Nicholas Carlini, and David A. Wagner. 2018. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholm, Sweden, July 10-15, 2018, pages 274-283.
+ +Sören Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary G. Ives. 2007. DBpedia: A nucleus for a web of open data. In The Semantic Web, 6th International Semantic Web Conference, 2nd Asian Semantic Web Conference, ISWC 2007 + ASWC 2007, Busan, Korea, November 11-15, 2007, pages 722-735. + +Prithu Banerjee, Lingyang Chu, Yong Zhang, Laks V.S. Lakshmanan, and Lanjun Wang. 2021. Stealthy targeted data poisoning attack on knowledge graphs. In Proceedings of the 37th IEEE International Conference on Data Engineering, ICDE 2021. + +Xianqiang Bao, Ling Liu, Nong Xiao, Yang Zhou, and Qi Zhang. 2015. Policy-driven autonomic configuration management for nosql. In Proceedings of the 2015 IEEE International Conference on Cloud Computing (CLOUD'15), pages 245-252, New York, NY. + +Yonatan Belinkov and Yonatan Bisk. 2018. Synthetic and natural noise both break neural machine translation. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. + +Max Berrendorf, Evgeniy Faerman, and Volker Tresp. 2021a. Active learning for entity alignment. In Advances in Information Retrieval - 43rd European Conference on IR Research, ECIR 2021, Virtual Event, March 28 - April 1, 2021, Proceedings, Part I, pages 48-62. + +Max Berrendorf, Ludwig Wacker, and Evgeniy Faerman. 2021b. A critical assessment of state-of-the-art in entity alignment. In Advances in Information Retrieval - 43rd European Conference on IR Research, ECIR 2021, Virtual Event, March 28 - April 1, 2021, Proceedings, Part II, pages 18-32. + +Matthias Blohm, Glorianna Jagfeld, Ekta Sood, Xiang Yu, and Ngoc Thang Vu. 2018. Comparing attention-based convolutional and recurrent neural networks: Success and limitations in machine reading comprehension.
In Proceedings of the 22nd Conference on Computational Natural Language Learning, CoNLL 2018, Brussels, Belgium, October 31 - November 1, 2018, pages 108-118. + +Aleksandar Bojchevski and Stephan Günnemann. 2019. Adversarial attacks on node embeddings via graph poisoning. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, pages 695-704. + +Yixin Cao, Zhiyuan Liu, Chengjiang Li, Zhiyuan Liu, Juanzi Li, and Tat-Seng Chua. 2019. Multi-channel graph neural network for entity alignment. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28 - August 2, 2019, Volume 1: Long Papers, pages 1452-1461. + +Alvin Chan, Yi Tay, Yew-Soon Ong, and Aston Zhang. 2020. Poison attacks against text datasets with conditional adversarially regularized autoencoder. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, EMNLP 2020, Online Event, 16-20 November 2020, pages 4175-4189. + +Heng Chang, Yu Rong, Tingyang Xu, Wenbing Huang, Honglei Zhang, Peng Cui, Wenwu Zhu, and Junzhou Huang. 2020. A restricted black-box adversarial framework towards attacking graph embedding models. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, New York, NY, USA, February 7-12, 2020. + +Jinyin Chen, Lihong Chen, Yixian Chen, Minghao Zhao, Shanqing Yu, Qi Xuan, and Xiaoniu Yang. 2019. Ga-based q-attack on community detection. IEEE Trans. Comput. Social Systems, 6(3):491-503. + +Jinyin Chen, Yangyang Wu, Xuanheng Xu, Yixian Chen, Haibin Zheng, and Qi Xuan. 2018. Fast gradient attack on network embedding. CoRR, abs/1809.02797. +Liang Chen, Jintang Li, Jiaying Peng, Tao Xie, Zengxu Cao, Kun Xu, Xiangnan He, and Zibin Zheng. 2020. A survey of adversarial learning on graphs. CoRR, abs/2003.05730. +Muhao Chen, Weijia Shi, Ben Zhou, and Dan Roth. 2021.
Cross-lingual entity alignment with incidental supervision. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, EACL 2021, Online, April 19 - 23, 2021, pages 645-658. +Muhao Chen, Yingtao Tian, Mohan Yang, and Carlo Zaniolo. 2017a. Multilingual knowledge graph embeddings for cross-lingual knowledge alignment. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017, Melbourne, Australia, August 19-25, 2017, pages 1511-1517. +Yizheng Chen, Yacin Nadji, Athanasios Kountouras, Fabian Monrose, Roberto Perdisci, Manos Antonakakis, and Nikolaos Vasiloglou. 2017b. Practical attacks against graph-based clustering. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, CCS 2017, Dallas, TX, USA, October 30-November 03, 2017, pages 1125-1142. +Hong Cheng, David Lo, Yang Zhou, Xiaoyin Wang, and Xifeng Yan. 2009. Identifying bug signatures using discriminative graph mining. In Proceedings of the 18th International Symposium on Software Testing and Analysis (ISSTA'09), pages 141-152, Chicago, IL. +Hong Cheng, Yang Zhou, Xin Huang, and Jeffrey Xu Yu. 2012. Clustering large attributed information networks: An efficient incremental computing approach. Data Mining and Knowledge Discovery (DMKD), 25(3):450-477. +Hong Cheng, Yang Zhou, and Jeffrey Xu Yu. 2011. Clustering large attributed graphs: A balance between structural and attribute similarities. ACM Transactions on Knowledge Discovery from Data (TKDD), 5(2):1-33. +Hanjun Dai, Hui Li, Tian Tian, Xin Huang, Lin Wang, Jun Zhu, and Le Song. 2018. Adversarial attack on graph structured data. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholm, Sweden, July 10-15, 2018, pages 1123-1132. +Caitlin Dewey. 2016. You probably haven't even noticed google's sketchy quest to control the world's knowledge. 
https://www.washingtonpost.com/news/the-intersect/wp/2016/05/11/you-probably-havent-even-noticed-goggles-sketchy-quest-to-control-the-worlds-knowledge/. + +Palash Dey and Sourav Medya. 2020. Manipulating node similarity measures in networks. In Proceedings of the 19th International Conference on Autonomous Agents and MultiAgent Systems, AAMAS '20, Auckland, New Zealand, May 9-13, 2020. +Negin Entezari, Saba Al-Sayouri, Amirali Darvishzadeh, and Evangelos Papalexakis. 2020. All you need is low (rank): Defending against adversarial attacks on graphs. In Proceedings of the 13th ACM International Conference on Web Search and Data Mining, WSDM 2020, Houston, TX, February 3-7, 2020. +Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversarial examples. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. +Sayan Goswami, Ayam Pokhrel, Kisung Lee, Ling Liu, Qi Zhang, and Yang Zhou. 2020. Graphmap: Scalable iterative graph processing using nosql. The Journal of Supercomputing (TJSC), 76(9):6619-6647. +Lingbing Guo, Zequn Sun, and Wei Hu. 2019. Learning to exploit long-term relational dependencies in knowledge graphs. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, pages 2505-2514. +Xinlei He, Jinyuan Jia, Michael Backes, Neil Zhenqiang Gong, and Yang Zhang. 2021. Stealing links from graph neural networks. In 30th USENIX Security Symposium, USENIX Security 2021, August 11-13, 2021. +Johannes Hoffart, Fabian M. Suchanek, Klaus Berberich, Edwin Lewis-Kelham, Gerard de Melo, and Gerhard Weikum. 2011. YAGO2: exploring and querying world knowledge in time, space, context, and many languages. In Proceedings of the 20th International Conference on World Wide Web, WWW 2011, Hyderabad, India, March 28 - April 1, 2011 (Companion Volume), pages 229-232. 
+Shifu Hou, Yujie Fan, Yiming Zhang, Yanfang Ye, Jingwei Lei, Wenqiang Wan, Jiabin Wang, Qi Xiong, and Fudong Shao. 2019. $\alpha$Cyber: Enhancing robustness of android malware detection system against adversarial attacks on heterogeneous graph based model. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, CIKM 2019, Beijing, China, November 3-7, 2019, pages 609-618. +Aminul Huq and Mst. Tasnim Pervin. 2020. Adversarial attacks and defense on texts: A survey. CoRR, abs/2005.14108. +Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 2021-2031. +Ruoming Jin, Dong Li, Jing Gao, Zhi Liu, Li Chen, and Yang Zhou. 2021. Towards a better understanding of linear models for recommendation. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD'21), Virtual Event. +Keita Kurita, Paul Michel, and Graham Neubig. 2020. Weight poisoning attacks on pretrained models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 2793-2806. +Kisung Lee, Ling Liu, Karsten Schwan, Calton Pu, Qi Zhang, Yang Zhou, Emre Yigitoglu, and Pingpeng Yuan. 2015. Scaling iterative graph computations with graphmap. In Proceedings of the 27th IEEE international conference for High Performance Computing, Networking, Storage and Analysis (SC'15), pages 57:1-57:12, Austin, TX. +Kisung Lee, Ling Liu, Yuzhe Tang, Qi Zhang, and Yang Zhou. 2013. Efficient and customizable data partitioning framework for distributed big RDF data processing in the cloud. In Proceedings of the 2013 IEEE International Conference on Cloud Computing (CLOUD'13), pages 327-334, Santa Clara, CA. +Chengjiang Li, Yixin Cao, Lei Hou, Jiaxin Shi, Juanzi Li, and Tat-Seng Chua.
2019. Semi-supervised entity alignment via joint knowledge embedding model and cross-graph model. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 2723-2732. +Jia Li, Honglei Zhang, Zhichao Han, Yu Rong, Hong Cheng, and Junzhou Huang. 2020a. Adversarial attack on community detection by hiding individuals. In WWW '20: The Web Conference 2020, Taipei, Taiwan, April 20-24, 2020, pages 917-927. +Linyang Li, Ruotian Ma, Qipeng Guo, Xiangyang Xue, and Xipeng Qiu. 2020b. BERT-ATTACK: adversarial attack against BERT using BERT. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 6193-6202. +Ji Liu, Jizhou Huang, Yang Zhou, Xuhong Li, Shilei Ji, Haoyi Xiong, and Dejing Dou. 2021. From distributed machine learning to federated learning: A survey. CoRR, abs/2104.14362. +Zhiyuan Liu, Yixin Cao, Liangming Pan, Juanzi Li, and Tat-Seng Chua. 2020. Exploring and evaluating attributes, values, and structures for entity alignment. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 6355-6364. +Jiaqi Ma, Shuangrui Ding, and Qiaozhu Mei. 2020. Towards more practical adversarial attacks on graph neural networks. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, Online. +Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2018. Towards deep learning models resistant to adversarial attacks. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. +Xin Mao, Wenting Wang, Yuanbin Wu, and Man Lan. 2021.
Boosting the speed of entity alignment $10\times$: Dual attention matching network with normalized hard sample mining. In WWW '21: The Web Conference 2021, April 19-23, 2021. +Xin Mao, Wenting Wang, Huimin Xu, Man Lan, and Yuanbin Wu. 2020a. MRAEA: an efficient and robust entity alignment approach for cross-lingual knowledge graph. In WSDM '20: The Thirteenth ACM International Conference on Web Search and Data Mining, Houston, TX, USA, February 3-7, 2020, pages 420-428. +Xin Mao, Wenting Wang, Huimin Xu, Yuanbin Wu, and Man Lan. 2020b. Relational reflection entity alignment. In CIKM '20: The 29th ACM International Conference on Information and Knowledge Management, Virtual Event, Ireland, October 19-23, 2020, pages 1095-1104. +George A. Miller. 1992. WORDNET: a lexical database for English. In Speech and Natural Language: Proceedings of a Workshop Held at Harriman, New York, USA, February 23-26, 1992. +Pasquale Minervini, Thomas Demeester, Tim Rocktäschel, and Sebastian Riedel. 2017. Adversarial sets for regularising neural link predictors. In Proceedings of the Thirty-Third Conference on Uncertainty in Artificial Intelligence, UAI 2017, Sydney, Australia, August 11-15, 2017. +John X. Morris, Eli Lifland, Jin Yong Yoo, Jake Grigsby, Di Jin, and Yanjun Qi. 2020. Textattack: A framework for adversarial attacks, data augmentation, and adversarial training in NLP. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, EMNLP 2020 - Demos, Online, November 16-20, 2020, pages 119-126. +Hao Nie, Xianpei Han, Le Sun, Chi Man Wong, Qiang Chen, Suhui Wu, and Wei Zhang. 2020. Global structure and local semantics-preserved embeddings for entity alignment. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI 2020, pages 3658-3664. + +Tong Niu and Mohit Bansal. 2018. Adversarial over-sensitivity and over-stability strategies for dialogue models.
In Proceedings of the 22nd Conference on Computational Natural Language Learning, CoNLL 2018, Brussels, Belgium, October 31 - November 1, 2018, pages 486-496. +Xing Niu, Prashant Mathur, Georgiana Dinu, and Yaser Al-Onaizan. 2020. Evaluating robustness to input perturbations for neural machine translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 8538-8544. +Balaji Palanisamy, Ling Liu, Kisung Lee, Shicong Meng, Yuzhe Tang, and Yang Zhou. 2014. Anonymizing continuous queries with delay-tolerant mix-zones over road networks. Distributed and Parallel Databases (DAPD), 32(1):91-118. +Balaji Palanisamy, Ling Liu, Yang Zhou, and Qingyang Wang. 2018. Privacy-preserving publishing of multilevel utility-controlled graph datasets. ACM Transactions on Internet Technology (TOIT), 18(2):24:1-24:21. +Emanuel Parzen. 1962. On estimation of a probability density function and mode. The Annals of Mathematical Statistics, 33(3):1065-1076. +Shichao Pei, Lu Yu, Robert Hoehndorf, and Xiangliang Zhang. 2019a. Semi-supervised entity alignment via knowledge graph embedding with awareness of degree difference. In The World Wide Web Conference, WWW 2019, San Francisco, CA, USA, May 13-17, 2019, pages 3130-3136. +Shichao Pei, Lu Yu, Guoxian Yu, and Xiangliang Zhang. 2020. REA: robust cross-lingual entity alignment between knowledge graphs. In KDD '20: The 26th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Virtual Event, CA, USA, August 23-27, 2020, pages 2175-2184. +Shichao Pei, Lu Yu, and Xiangliang Zhang. 2019b. Improving cross-lingual entity alignment via optimal transport. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 10-16, 2019, pages 3231-3237. +Jeffrey Pennington, Samuel S. Schoenholz, and Surya Ganguli. 2017. 
Resurrecting the sigmoid in deep learning through dynamical isometry: theory and practice. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems (NIPS) 2017, 4-9 December 2017, Long Beach, CA, USA, pages 4785-4795. +Pouya Pezeshkpour, Yifan Tian, and Sameer Singh. 2019. Investigating robustness and interpretability of link prediction via adversarial modifications. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 3336-3347. +Jay Pujara, Eriq Augustine, and Lise Getoor. 2017. Sparsity and noise: Where knowledge graph embeddings fall short. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 1751-1756. +Mrigank Raman, Siddhant Agarwal, Peifeng Wang, Aaron Chan, Hansen Wang, Sungchul Kim, Ryan Rossi, Handong Zhao, Nedim Lipka, and Xiang Ren. 2021. Learning to deceive knowledge graph augmented models via targeted perturbation. In 9th International Conference on Learning Representations, ICLR 2021, Online, May 4-7, 2021, Conference Track Proceedings. +Jiaxiang Ren, Zijie Zhang, Jiayin Jin, Xin Zhao, Sixing Wu, Yang Zhou, Yelong Shen, Tianshi Che, Ruoming Jin, and Dejing Dou. 2021. Integrated defense for resilient graph matching. In Proceedings of the 38th International Conference on Machine Learning, (ICML'21), pages 8982-8997, Virtual Event. +Jiaxiang Ren, Yang Zhou, Ruoming Jin, Zijie Zhang, Dejing Dou, and Pengwei Wang. 2019. Dual adversarial learning based network alignment. In Proceedings of the 19th IEEE International Conference on Data Mining (ICDM'19), pages 1288-1293, Beijing, China. +Wenzhuo Song, Shengsheng Wang, Bo Yang, You Lu, Xuehua Zhao, and Xueyan Liu. 2018.
Learning node and edge embeddings for signed networks. Neurocomputing, 319:42-54. +Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA, pages 4444-4451. +Zhiyuan Su, Ling Liu, Mingchu Li, Xinxin Fan, and Yang Zhou. 2013. Servicetrust: Trust management in service provision networks. In Proceedings of the 10th IEEE International Conference on Services Computing (SCC'13), pages 272-279, Santa Clara, CA. +Zhiyuan Su, Ling Liu, Mingchu Li, Xinxin Fan, and Yang Zhou. 2015. Reliable and resilient trust management in distributed service provision networks. ACM Transactions on the Web (TWEB), 9(3):1-37. +Jian Sun, Yu Zhou, and Chengqing Zong. 2020a. Dual attention network for cross-lingual entity alignment. In Proceedings of the 28th International Conference on Computational Linguistics, COLING 2020, Barcelona, Spain (Online), December 8-13, 2020, pages 3190-3201. + +Yiwei Sun, Suhang Wang, Xianfeng Tang, Tsung-Yu Hsieh, and Vasant G. Honavar. 2020b. Adversarial attacks on graph neural networks via node injections: A hierarchical reinforcement learning approach. In WWW '20: The Web Conference 2020, Taipei, Taiwan, April 20-24, 2020, pages 673-683. +Zequn Sun, Muhao Chen, Wei Hu, Chengming Wang, Jian Dai, and Wei Zhang. 2020c. Knowledge association with hyperbolic knowledge graph embeddings. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 5704-5716. +Zequn Sun, Wei Hu, and Chengkai Li. 2017. Cross-lingual entity alignment via joint attribute-preserving embedding. In The Semantic Web - ISWC 2017 - 16th International Semantic Web Conference, Vienna, Austria, October 21-25, 2017, Proceedings, Part I, pages 628-644. +Zequn Sun, Wei Hu, Qingheng Zhang, and Yuzhong Qu. 2018.
Bootstrapping entity alignment with knowledge graph embedding. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden, pages 4396-4402. +Zequn Sun, JiaCheng Huang, Wei Hu, Muhao Chen, Lingbing Guo, and Yuzhong Qu. 2019. Transedge: Translating relation-contextualized embeddings for knowledge graphs. In The Semantic Web - ISWC 2019 - 18th International Semantic Web Conference, Auckland, New Zealand, October 26-30, 2019, Proceedings, Part I, pages 612-629. +Zequn Sun, Chengming Wang, Wei Hu, Muhao Chen, Jian Dai, Wei Zhang, and Yuzhong Qu. 2020d. Knowledge graph alignment network with gated multi-hop neighborhood aggregation. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 222-229. +Tsubasa Takahashi. 2019. Indirect adversarial attacks via poisoning neighbors for graph convolutional networks. In 2019 IEEE International Conference on Big Data (Big Data), Los Angeles, CA, USA, December 9-12, 2019, pages 1395-1400. +Samson Tan, Shafiq R. Joty, Min-Yen Kan, and Richard Socher. 2020. It's morphin' time! combating linguistic discrimination with inflectional perturbations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 2920-2935. +Xiaobin Tang, Jing Zhang, Bo Chen, Yang Yang, Hong Chen, and Cuiping Li. 2020. BERT-INT: A bert-based interaction model for knowledge graph alignment. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI 2020, pages 3174-3180. +Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. 2019. Universal adversarial triggers for attacking and analyzing NLP.
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 2153-2162. +Binghui Wang and Neil Zhenqiang Gong. 2019. Attacking graph-based classification via manipulating the graph structure. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, CCS 2019, London, UK, November 11-15, 2019, pages 2023-2040. +Boxin Wang, Hengzhi Pei, Boyuan Pan, Qian Chen, Shuohang Wang, and Bo Li. 2020. T3: tree-autoencoder constrained adversarial text generation for targeted attack. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 6134-6150. +Wenqi Wang, Lina Wang, Run Wang, Zhibo Wang, and Aoshuang Ye. 2019. Towards a robust deep neural network in texts: A survey. CoRR, abs/1902.07285. +Zhichun Wang, Qingsong Lv, Xiaohan Lan, and Yu Zhang. 2018. Cross-lingual knowledge graph alignment via graph convolutional networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 349-357. +Marcin Waniek, Tomasz P. Michalak, Michael J. Wooldridge, and Talal Rahwan. 2018. Hiding individuals and communities in a social network. Nature Human Behaviour, 2:139-147. +Sixing Wu, Ying Li, Dawei Zhang, Yang Zhou, and Zhonghai Wu. 2020a. Diverse and informative dialogue generation with context-specific commonsense knowledge awareness. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, (ACL'20), pages 5811-5820, Online. +Sixing Wu, Ying Li, Dawei Zhang, Yang Zhou, and Zhonghai Wu. 2021a. Topicka: Generating commonsense knowledge-aware dialogue responses towards the recommended topic fact. 
In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, (IJCAI'20), pages 3766-3772, Online. +Sixing Wu, Minghui Wang, Dawei Zhang, Yang Zhou, Ying Li, and Zhonghai Wu. 2021b. Knowledge-aware dialogue generation via hierarchical infobox accessing and infobox-dialogue interaction graph network. In Proceedings of the 30th International Joint Conference on Artificial Intelligence, (IJCAI'21), Online. +Yuting Wu, Xiao Liu, Yansong Feng, Zheng Wang, Rui Yan, and Dongyan Zhao. 2019a. Relation-aware entity alignment for heterogeneous knowledge graphs. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 10-16, 2019, pages 5278-5284. +Yuting Wu, Xiao Liu, Yansong Feng, Zheng Wang, and Dongyan Zhao. 2019b. Jointly learning entity and relation representations for entity alignment. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 240-249. +Yuting Wu, Xiao Liu, Yansong Feng, Zheng Wang, and Dongyan Zhao. 2020b. Neighborhood matching network for entity alignment. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 6477-6487. +Zhiyong Wu, Yun Chen, Ben Kao, and Qun Liu. 2020c. Perturbed masking: Parameter-free probing for analyzing and interpreting BERT. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 4166-4176. +Zhaohan Xi, Ren Pang, Shouling Ji, and Ting Wang. 2021. Graph backdoor. In 30th USENIX Security Symposium, USENIX Security 2021, August 11-13, 2021. +Han Xu, Yao Ma, Haochen Liu, Debayan Deb, Hui Liu, Jiliang Tang, and Anil K. Jain. 2019a. Adversarial attacks and defenses in images, graphs and text: A review.
CoRR, abs/1909.08072. +Hongcai Xu, Junpeng Bao, and Gaojie Zhang. 2020a. Dynamic knowledge graph-based dialogue generation with improved adversarial meta-learning. CoRR, abs/2004.08833. +Kaidi Xu, Hongge Chen, Sijia Liu, Pin-Yu Chen, Tsui-Wei Weng, Mingyi Hong, and Xue Lin. 2019b. Topology attack and defense for graph neural networks: An optimization perspective. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 10-16, 2019, pages 3961-3967. +Kun Xu, Linfeng Song, Yansong Feng, Yan Song, and Dong Yu. 2020b. Coordinated reasoning for cross-lingual knowledge graph alignment. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 9354-9361. +Kun Xu, Liwei Wang, Mo Yu, Yansong Feng, Yan Song, Zhiguo Wang, and Dong Yu. 2019c. Cross-lingual knowledge graph alignment via graph matching neural network. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28 - August 2, 2019, Volume 1: Long Papers, pages 3156-3161. +Yuchen Yan, Lihui Liu, Yikun Ban, Baoyu Jing, and Hanghang Tong. 2021. Dynamic knowledge graph alignment. In Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021. +Rui Ye, Xin Li, Yujie Fang, Hongyu Zang, and Mingzhong Wang. 2019. A vectorized relational graph convolutional network for multi-relational network alignment. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 10-16, 2019, pages 4135-4141. +Donghan Yu, Yiming Yang, Ruohong Zhang, and Yuexin Wu. 2021. Knowledge embedding based graph convolutional network. In WWW '21: The Web Conference 2021, April 19-23, 2021.
+Yuan Zang, Fanchao Qi, Chenghao Yang, Zhiyuan Liu, Meng Zhang, Qun Liu, and Maosong Sun. 2020. Word-level textual adversarial attacking as combinatorial optimization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 6066-6080. +Weixin Zeng, Xiang Zhao, Wei Wang, Jiuyang Tang, and Zhen Tan. 2020. Degree-aware alignment for entities in tail. In Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval, SIGIR 2020, Virtual Event, China, July 25-30, 2020, pages 811-820. +Gong Zhang, Yang Zhou, Sixing Wu, Zeru Zhang, and Dejing Dou. 2021. Cross-lingual entity alignment with adversarial kernel embedding and adversarial knowledge translation. CoRR, abs/2104.07837. +Hengtong Zhang, Tianhang Zheng, Jing Gao, Chenglin Miao, Lu Su, Yaliang Li, and Kui Ren. 2019. Data poisoning attack against knowledge graph embedding. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 10-16, 2019, pages 4853-4859. +Wei Emma Zhang, Quan Z. Sheng, Ahoud Abdulrahmn F. Alhazmi, and Chenliang Li. 2020a. Adversarial attacks on deep-learning models in natural language processing: A survey. ACM Trans. Intell. Syst. Technol., 11(3):24:1-24:41. + +Zijie Zhang, Zeru Zhang, Yang Zhou, Yelong Shen, Ruoming Jin, and Dejing Dou. 2020b. Adversarial attacks on deep graph matching. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020 (NeurIPS'20), Virtual. +Xin Zhao, Zeru Zhang, Zijie Zhang, Lingfei Wu, Jiayin Jin, Yang Zhou, Ruoming Jin, Dejing Dou, and Da Yan. 2021. Expressive 1-lipschitz neural networks for robust multiple graph learning against adversarial attacks. In Proceedings of the 38th International Conference on Machine Learning, (ICML'21), pages 12719-12735, Virtual Event. +Kai Zhou, Tomasz P. 
Michalak, Marcin Waniek, Talal Rahwan, and Yevgeniy Vorobeychik. 2019a. Attacking similarity-based link prediction in social networks. In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, AAMAS '19, Montreal, QC, Canada, May 13-17, 2019, pages 305-313. +Yang Zhou. 2017. Innovative Mining, Processing, and Application of Big Graphs. Ph.D. thesis, Georgia Institute of Technology, Atlanta, GA, USA. +Yang Zhou, Amnay Amimeur, Chao Jiang, Dejing Dou, Ruoming Jin, and Pengwei Wang. 2018a. Density-aware local siamese autoencoder network embedding with autoencoder graph clustering. In Proceedings of the 2018 IEEE International Conference on Big Data (BigData'18), pages 1162-1167, Seattle, WA. +Yang Zhou, Hong Cheng, and Jeffrey Xu Yu. 2009. Graph clustering based on structural/attribute similarities. Proceedings of the VLDB Endowment (PVLDB), 2(1):718-729. +Yang Zhou, Hong Cheng, and Jeffrey Xu Yu. 2010. Clustering large attributed graphs: An efficient incremental approach. In Proceedings of the 10th IEEE International Conference on Data Mining (ICDM'10), pages 689-698, Sydney, Australia. +Yang Zhou, Chao Jiang, Zijie Zhang, Dejing Dou, Ruoming Jin, and Pengwei Wang. 2019b. Integrating local vertex/edge embedding via deep matrix fusion and siamese multi-label classification. In Proceedings of the 2019 IEEE International Conference on Big Data (BigData'19), pages 1018-1027, Los Angeles, CA. +Yang Zhou, Kisung Lee, Ling Liu, Qi Zhang, and Balaji Palanisamy. 2019c. Enhancing collaborative filtering with multi-label classification. In Proceedings of the 2019 International Conference on Computational Data and Social Networks (CSoNet'19), pages 323-338, Ho Chi Minh City, Vietnam. +Yang Zhou and Ling Liu. 2012. Clustering analysis in large graphs with rich attributes. In Dawn E. Holmes and Lakhmi C. Jain, editors, Data Mining: Foundations and Intelligent Paradigms: Volume 1: Clustering, Association and Classification. Springer.
+ +Yang Zhou and Ling Liu. 2013. Social influence based clustering of heterogeneous information networks. In Proceedings of the 19th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD'13), pages 338-346, Chicago, IL. +Yang Zhou and Ling Liu. 2014. Activity-edge centric multi-label classification for mining heterogeneous information networks. In Proceedings of the 20th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD'14), pages 1276-1285, New York, NY. +Yang Zhou and Ling Liu. 2015. Social influence based clustering and optimization over heterogeneous information networks. ACM Transactions on Knowledge Discovery from Data (TKDD), 10(1):1-53. +Yang Zhou and Ling Liu. 2019. Approximate deep network embedding for mining large-scale graphs. In Proceedings of the 2019 IEEE International Conference on Cognitive Machine Intelligence (CogMI'19), pages 53-60, Los Angeles, CA. +Yang Zhou, Ling Liu, and David Buttler. 2015a. Integrating vertex-centric clustering with edge-centric clustering for meta path graph analysis. In Proceedings of the 21st ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD'15), pages 1563-1572, Sydney, Australia. +Yang Zhou, Ling Liu, Kisung Lee, Balaji Palanisamy, and Qi Zhang. 2020a. Improving collaborative filtering with social influence over heterogeneous information networks. ACM Transactions on Internet Technology (TOIT), 20(4):36:1-36:29. +Yang Zhou, Ling Liu, Kisung Lee, Calton Pu, and Qi Zhang. 2015b. Fast iterative graph computation with resource aware graph parallel abstractions. In Proceedings of the 24th ACM Symposium on High-Performance Parallel and Distributed Computing (HPDC'15), pages 179-190, Portland, OR. +Yang Zhou, Ling Liu, Kisung Lee, and Qi Zhang. 2015c. Graphtwist: Fast iterative graph computation with two-tier optimizations. Proceedings of the VLDB Endowment (PVLDB), 8(11):1262-1273. +Yang Zhou, Ling Liu, Chang-Shing Perng, Anca Sailer, Ignacio Silva-Lepe, and Zhiyuan Su. 2013. 
Ranking services by service network structure and service attributes. In Proceedings of the 20th International Conference on Web Service (ICWS'13), pages 26-33, Santa Clara, CA. +Yang Zhou, Ling Liu, Calton Pu, Xianqiang Bao, Kisung Lee, Balaji Palanisamy, Emre Yigitoglu, and Qi Zhang. 2015d. Clustering service networks with entity, attribute and link heterogeneity. In Proceedings of the 22nd International Conference on Web Service (ICWS'15), pages 257-264, New York, NY. +Yang Zhou, Ling Liu, Sangeetha Seshadri, and Lawrence Chiu. 2016. Analyzing enterprise storage workloads with graph modeling and clustering. + +IEEE Journal on Selected Areas in Communications (JSAC), 34(3):551-574. +Yang Zhou, Jiaxiang Ren, Dejing Dou, Ruoming Jin, Jingyi Zheng, and Kisung Lee. 2020b. Robust meta network embedding against adversarial attacks. In Proceedings of the 20th IEEE International Conference on Data Mining (ICDM'20), pages 1448-1453, Sorrento, Italy. +Yang Zhou, Jiaxiang Ren, Ruoming Jin, Zijie Zhang, Dejing Dou, and Da Yan. 2020c. Unsupervised multiple network alignment with multinominal gan and variational inference. In Proceedings of the 2020 IEEE International Conference on Big Data (BigData'20), pages 868-877, Atlanta, GA. +Yang Zhou, Jiaxiang Ren, Ruoming Jin, Zijie Zhang, Jingyi Zheng, Zhe Jiang, Da Yan, and Dejing Dou. 2021a. Unsupervised adversarial network alignment with reinforcement learning. To appear in ACM Transactions on Knowledge Discovery from Data (TKDD). +Yang Zhou, Jiaxiang Ren, Sixing Wu, Dejing Dou, Ruoming Jin, Zijie Zhang, and Pengwei Wang. 2019d. Semi-supervised classification-based local vertex ranking via dual generative adversarial nets. In Proceedings of the 2019 IEEE International Conference on Big Data (BigData'19), pages 1267-1273, Los Angeles, CA. +Yang Zhou, Sangeetha Seshadri, Lawrence Chiu, and Ling Liu. 2014. Graphlens: Mining enterprise storage workloads using graph analytics. 
In Proceedings of the 2014 IEEE International Congress on Big Data (BigData'14), pages 1-8, Anchorage, AK. +Yang Zhou, Sixing Wu, Chao Jiang, Zijie Zhang, De- jing Dou, Ruoming Jin, and Pengwei Wang. 2018b. Density-adaptive local edge representation learning with generative adversarial network multi-label edge classification. In Proceedings of the 18th IEEE International Conference on Data Mining (ICDM'18), pages 1464-1469, Singapore. +Yang Zhou, Zeru Zhang, Sixing Wu, Victor Sheng, Xiaoying Han, Zijie Zhang, and Ruoming Jin. 2021b. Robust network alignment via attack signal scaling and adversarial perturbation elimination. In Proceedings of the 30th Web Conference (WWW'21), pages 3884-3895, Virtual Event / Ljubljana, Slovenia. +Hao Zhu, Ruobing Xie, Zhiyuan Liu, and Maosong Sun. 2017. Iterative entity alignment via joint knowledge embeddings. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017, Melbourne, Australia, August 19-25, 2017, pages 4258-4264. +Qi Zhu, Hao Wei, Bunyamin Sisman, Da Zheng, Christos Faloutsos, Xin Luna Dong, and Jiawei Han. 2020. Collective multi-type entity alignment between knowledge graphs. In WWW '20: The Web + +Conference 2020, Taipei, Taiwan, April 20-24, 2020, pages 2241-2252. +Qiannan Zhu, Xiaofei Zhou, Jia Wu, Jianlong Tan, and Li Guo. 2019. Neighborhood-aware attentional representation for multilingual knowledge graphs. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 10-16, 2019, pages 1943-1949. +Yao Zhu, Hongzhi Liu, Zhonghai Wu, and Yingpeng Du. 2021. Relation-aware neighborhood matching model for entity alignment. In Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021. +Daniel Zügner, Amir Akbarnejad, and Stephan Günneumann. 2018. Adversarial attacks on neural networks for graph data. 
In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD 2018, London, UK, August 19-23, 2018, pages 2847-2856. +Daniel Zügner, Oliver Borchert, Amir Akbarnejad, and Stephan Gunnemann. 2020. Adversarial attacks on graph neural networks: Perturbations and their patterns. ACM Trans. Knowl. Discov. Data, 14(5):57:1-57:31. +Daniel Zügner and Stephan Gunnemann. 2019. Adversarial attacks on graph neural networks via meta learning. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. + +# Appendix + +# A Theoretical Analysis + +Theorem 1. Let entity embedding vectors $\tilde{\mathbf{e}}_k^2$ and $\tilde{\mathbf{e}}_l^2$ be the most similar and least similar to $(\tilde{\mathbf{e}}_i^1)^T$ $(1\leq k,l\leq N^{2})$ , i.e., $\tilde{\mathbf{e}}_k^2 = \mathrm{argmax}_{\tilde{\mathbf{e}}_k^2}(\tilde{\mathbf{e}}_i^1)^T\cdot \tilde{\mathbf{e}}_k^2$ and $\tilde{\mathbf{e}}_l^2 = \mathrm{argmin}_{\tilde{\mathbf{e}}_l^2}(\tilde{\mathbf{e}}_i^1)^T\cdot \tilde{\mathbf{e}}_l^2$ , and $c = (\tilde{\mathbf{e}}_i^1)^T\cdot \tilde{\mathbf{e}}_k^2$ Also, suppose that $d$ is the minimal norm of entity embedding vectors in $G^{2}$ , i.e., $d = \min_{\tilde{\mathbf{e}}_m^2}\| \tilde{\mathbf{e}}_m^2\| _2$ for $\forall e_m^2\in E^2$ . For a given $0 < \eta < d / 2$ , if $\alpha < \sqrt{\frac{1}{c}\log\frac{d - \eta}{\eta}}$ then $1 - \sigma ((\tilde{\mathbf{e}}_i^1)^T\cdot \tilde{\mathbf{e}}_j^2) > \eta /\| \tilde{\mathbf{e}}_j^2\| _2$ for $\forall e_j^2\in E^2$ + +Proof. $1 - \sigma ((\tilde{\mathbf{e}}_i^1)^T\cdot \tilde{\mathbf{e}}_j^2) > \eta /\| \tilde{\mathbf{e}}_j^2\| _2$ is equivalent to $\sigma ((\tilde{\mathbf{e}}_i^1)^T\cdot \tilde{\mathbf{e}}_j^2) < 1 - \eta /\| \tilde{\mathbf{e}}_j^2\| _2$ . 
We convert it to $\frac{1}{1 + \exp(-(\tilde{\mathbf{e}}_i^1)^T\cdot\tilde{\mathbf{e}}_j^2)} < 1 - \eta /\| \tilde{\mathbf{e}}_j^2\| _2$ . As $(\tilde{\mathbf{e}}_i^1)^T\cdot \tilde{\mathbf{e}}_j^2\leq c$ , we have $\frac{1}{1 + \exp(-\alpha^2(\mathbf{e}_i^1)^T\cdot\mathbf{e}_j^2)}\leq \frac{1}{1 + \exp(-\alpha^2c)}$ . If we can prove $\frac{1}{1 + \exp(-\alpha^2c)} < 1 - \eta /\| \tilde{\mathbf{e}}_j^2\| _2$ , then it follows that $\frac{1}{1 + \exp(-\alpha^2(\mathbf{e}_i^1)^T\cdot\mathbf{e}_j^2)} < 1 - \eta /\| \tilde{\mathbf{e}}_j^2\| _2$ . Thus, we need to solve $\exp (\alpha^{2}c) < \frac{\|\tilde{\mathbf{e}}_{j}^{2}\|_{2} - \eta}{\eta}$ .

As $\| \tilde{\mathbf{e}}_j^2\| _2\geq d$ , any $\alpha$ feasible for $\exp (\alpha^2 c) < \frac{d - \eta}{\eta}$ is also feasible for $\exp (\alpha^{2}c) < \frac{\|\tilde{\mathbf{e}}_j^2\|_2 - \eta}{\eta}$ . Since $\exp$ is a monotonically increasing function, solving the former inequality gives the feasible range $\alpha < \sqrt{\frac{1}{c}\log\frac{d - \eta}{\eta}}$ .

Notice that $0 < \eta < d / 2$ . This makes $\frac{d - \eta}{\eta} > 1$ , so the upper bound on $\alpha$ is positive. Therefore, for any $\alpha < \sqrt{\frac{1}{c}\log \frac{d - \eta}{\eta}}$ , $1 - \sigma ((\tilde{\mathbf{e}}_i^1)^T\cdot \tilde{\mathbf{e}}_j^2) > \eta /\| \tilde{\mathbf{e}}_j^2\| _2$ is satisfied.
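As a quick numeric sanity check of this bound, the sketch below uses toy random embeddings (all values illustrative, not from the paper), reading the scaled score as $\alpha^2(\mathbf{e}_i^1)^T\cdot\mathbf{e}_j^2$ and the norms over the stored embedding vectors, as in the proof:

```python
import numpy as np

# Toy embeddings: one entity from G^1, fifty entities from G^2 (illustrative).
rng = np.random.default_rng(0)
e_i = rng.normal(size=8)
E2 = rng.normal(size=(50, 8))

c = float(np.max(E2 @ e_i))                    # largest inner product with e_i
d = float(np.min(np.linalg.norm(E2, axis=1)))  # smallest embedding norm in G^2
eta = d / 4                                    # any value in (0, d/2)
assert c > 0 and 0 < eta < d / 2

# Any alpha strictly below the Theorem 1 bound should satisfy the inequality.
alpha = 0.95 * np.sqrt(np.log((d - eta) / eta) / c)

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
lhs = 1.0 - sigmoid(alpha**2 * (E2 @ e_i))
rhs = eta / np.linalg.norm(E2, axis=1)
assert np.all(lhs > rhs)  # claimed inequality holds for every e_j in G^2
```

The check mirrors the proof step by step: since $\exp(\alpha^2 c) < (d-\eta)/\eta$, the inequality holds even for the entity with the smallest norm.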
\ No newline at end of file diff --git a/adversarialattackagainstcrosslingualknowledgegraphalignment/images.zip b/adversarialattackagainstcrosslingualknowledgegraphalignment/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..29dcc08012146b13b27caf8e7ebedb26daa4688d --- /dev/null +++ b/adversarialattackagainstcrosslingualknowledgegraphalignment/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:48388e5ff8c9a8dfe14588f6d0d78a443d95c3a3df147ca6da4815ad87b09022 +size 562334 diff --git a/adversarialattackagainstcrosslingualknowledgegraphalignment/layout.json b/adversarialattackagainstcrosslingualknowledgegraphalignment/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..4e93de1c8d06f45d14b56cef4983e0861a513551 --- /dev/null +++ b/adversarialattackagainstcrosslingualknowledgegraphalignment/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:798e63181185ce7638368ff511073832b54484809bb10a9fa9089c978866d5f8 +size 850367 diff --git a/adversarialattacksonknowledgegraphembeddingsviainstanceattributionmethods/3aa335a8-268c-48b4-8d69-b7ac206e0f13_content_list.json b/adversarialattacksonknowledgegraphembeddingsviainstanceattributionmethods/3aa335a8-268c-48b4-8d69-b7ac206e0f13_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..4e462fbcba0660b1742298663e64847dfb32fe85 --- /dev/null +++ b/adversarialattacksonknowledgegraphembeddingsviainstanceattributionmethods/3aa335a8-268c-48b4-8d69-b7ac206e0f13_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9e178aad794a697a3201620a852045830ccc62885458d86e73029bbe0e9733c7 +size 109766 diff --git a/adversarialattacksonknowledgegraphembeddingsviainstanceattributionmethods/3aa335a8-268c-48b4-8d69-b7ac206e0f13_model.json b/adversarialattacksonknowledgegraphembeddingsviainstanceattributionmethods/3aa335a8-268c-48b4-8d69-b7ac206e0f13_model.json new file 
mode 100644 index 0000000000000000000000000000000000000000..1e53c640733bc42714d784af9c8722350ff49ff3 --- /dev/null +++ b/adversarialattacksonknowledgegraphembeddingsviainstanceattributionmethods/3aa335a8-268c-48b4-8d69-b7ac206e0f13_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6c5ea76e1af65cf2ac7d8d888ce0b6477aad37df2cc84673053a6be485d5d4ef +size 128088 diff --git a/adversarialattacksonknowledgegraphembeddingsviainstanceattributionmethods/3aa335a8-268c-48b4-8d69-b7ac206e0f13_origin.pdf b/adversarialattacksonknowledgegraphembeddingsviainstanceattributionmethods/3aa335a8-268c-48b4-8d69-b7ac206e0f13_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..83e8ad0ce9f511132db6c3be17b2fbf99b6de5e8 --- /dev/null +++ b/adversarialattacksonknowledgegraphembeddingsviainstanceattributionmethods/3aa335a8-268c-48b4-8d69-b7ac206e0f13_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d1c684bc8966050b57f8ecd04f7ba998024ca2432e5f0d10369dec15d3183cfc +size 975222 diff --git a/adversarialattacksonknowledgegraphembeddingsviainstanceattributionmethods/full.md b/adversarialattacksonknowledgegraphembeddingsviainstanceattributionmethods/full.md new file mode 100644 index 0000000000000000000000000000000000000000..baf4af07cbf597117b5b49d50f75379e14eb064c --- /dev/null +++ b/adversarialattacksonknowledgegraphembeddingsviainstanceattributionmethods/full.md @@ -0,0 +1,371 @@ +# Adversarial Attacks on Knowledge Graph Embeddings via Instance Attribution Methods + +Peru Bhardwaj1 John Kelleher2* Luca Costabello3* Declan O'Sullivan1* + +$^{1}$ ADAPT Centre, Trinity College Dublin, Ireland + +$^{2}$ ADAPT Centre, TU Dublin, Ireland + +3 Accenture Labs, Ireland + +peru.bhardwaj@adaptcentre.ie + +# Abstract + +Despite the widespread use of Knowledge Graph Embeddings (KGE), little is known about the security vulnerabilities that might disrupt their intended behaviour. 
We study data poisoning attacks against KGE models for link prediction. These attacks craft adversarial additions or deletions at training time to cause model failure at test time. To select adversarial deletions, we propose to use the model-agnostic instance attribution methods from Interpretable Machine Learning, which identify the training instances that are most influential to a neural model's predictions on test instances. We use these influential triples as adversarial deletions. We further propose a heuristic method to replace one of the two entities in each influential triple to generate adversarial additions. Our experiments show that the proposed strategies outperform the state-of-art data poisoning attacks on KGE models and improve the MRR degradation due to the attacks by up to $62\%$ over the baselines. + +# 1 Introduction + +Knowledge Graph Embeddings (KGE) are the state-of-art models for relational learning on large scale Knowledge Graphs (KG). They drive enterprise products ranging from search engines to social networks to e-commerce (Noy et al., 2019). However, the analysis of their security vulnerabilities has received little attention. Identifying these vulnerabilities is especially important for high-stake domains like healthcare and finance that employ KGE models to make critical decisions (Hogan et al., 2020; Bendtsen and Petrovski, 2019). We study the security vulnerabilities of KGE models through data poisoning attacks (Biggio and Roli, 2018; Joseph et al., 2019) that aim to degrade the predictive performance of learned KGE models by adding or removing triples to the input training graph. 
![](images/d0495167c7699d546f68835063a104af955821a295588ab6449b409ba3cf8470.jpg)
(a)

![](images/bef61e9eb98f99f7848a9690cb737099588b24797e316f63362158e216623cf4.jpg)
(b)

Target triple becomes False by adversarial (a) deletion and (b) addition.

![](images/d1c03ccbf1d0edca321fbbe633275e9c8ec5dac91334401dd214fb42d3d5edbf.jpg)

Figure 1: Adversarial attacks against KGE models for fraud detection. The knowledge graph consists of two types of entities - Person and BankAccount. The missing target triple to predict is (Sam, allied_with, Joe). The original KGE model predicts this triple as True. But a malicious attacker uses the instance attribution methods to either (a) delete an adversarial triple or (b) add an adversarial triple. Now, the KGE model predicts the missing target triple as False.

Designing data poisoning attacks against KGE models poses two main challenges. First, to select adversarial deletions or additions, we need to measure the impact of a candidate perturbation on the model's predictions. But the naive approach of re-training a new KGE model for each candidate perturbation is computationally prohibitive. Second, while the search space for adversarial deletions is limited to existing triples in the KG, it is computationally intractable to enumerate through all candidate adversarial additions. Furthermore, attack strategies proposed against models for other graph modalities (Xu et al., 2020) do not scale to KGE models, as they would require gradients with respect to a dense adjacency tensor of the KG.

In this work, we propose to use the model-agnostic instance attribution methods from Interpretable Machine Learning (Molnar, 2019) to select adversarial deletions and additions against KGE models. Instance attribution methods identify the training instances that are influential to model predictions, that is, deleting the instances from the training data would considerably change the model parameters or predictions.
These methods are widely used to generate post-hoc example-based explanations for deep neural networks on images (Koh and Liang, 2017; Hanawa et al., 2021; Charpiat et al., 2019) and text (Han et al., 2020; Han and Tsvetkov, 2020; Pezeshkpour et al., 2021). Since the KGE models have relatively shallow neural architectures and the instance attribution metrics are independent of the black-box models and the input domain, they are a promising approach to estimate the influence of training triples on the KGE model predictions. Yet, despite their promise, they have not been used on KGE models so far. We use the instance attribution methods to address the challenge of measuring the impact of a candidate adversarial deletion on the model predictions. + +We focus on the adversarial goal of degrading the KGE model prediction on a given target triple. To achieve this goal, we use three types of instance attribution methods - Instance Similarity that compares the feature representations of target and training triples (Hanawa et al., 2021; Charpiat et al., 2019); Gradient Similarity that compares the gradients of model's loss function due to target and training triples (Hanawa et al., 2021; Charpiat et al., 2019); and Influence Function (Koh and Liang, 2017) which is a principled approach from the robust statistics to estimate the effect of removing a training triple on the KGE model's predictions. + +Using these metrics, we select the most influential training triple for adversarial deletion. Using the influential triple, we further select adversarial addition by replacing one of the two entities of the influential triple with the most dissimilar entity in the embedding space. The intuition behind this step is to add a triple that would reduce the influence of the influential triple. This solution also overcomes the scalability challenge for adversarial additions by comparing only the entity embeddings to select the replacement. 
Figure 1 shows an example of the proposed adversarial deletions and additions against KGE models for fraud detection.

We evaluate the proposed attacks for four KGE models - DistMult, ComplEx, ConvE and TransE on two benchmark datasets - WN18RR and FB15k-237. Our results show that instance attribution metrics achieve significantly better performance than all state-of-art attacks for both adversarial additions and deletions on three out of four models; and better or equivalent performance on one model. We find that even simple metrics based on instance similarity outperform the state-of-the-art poisoning attacks and are as effective as the computationally expensive Influence Function.

Thus, the main contribution of our research is a collection of effective adversarial deletion and addition strategies based on instance attribution methods against KGE models.

# 2 Knowledge Graph Embeddings

A Knowledge Graph (KG) is a set of triples $\mathcal{T} \subseteq \mathcal{E} \times \mathcal{R} \times \mathcal{E}$ where each triple encodes the relationship $\mathbf{r}$ as a typed link between the subject entity $s$ and the object entity $o$ , i.e. $\mathcal{T} := \{t : (s, \mathbf{r}, o) \mid s, o \in \mathcal{E} \text{ and } \mathbf{r} \in \mathcal{R}\}$ . Here, $\mathcal{E}$ is the set of entities and $\mathcal{R}$ is the set of relations in the knowledge graph. Large scale KGs are often curated automatically from user content or from the Web and are thus incomplete in practice. To predict the missing links in a KG, the state-of-art method is to learn low dimensional feature vectors for entities and relations in the graph and use them to score the links. These feature vectors are called Knowledge Graph Embeddings (KGE) and denoted as $\pmb{\theta} := \{\pmb{E}, \pmb{R}\}$ where $\pmb{E} \in \mathbb{R}^{|\mathcal{E}| \times k}$ is the embedding matrix for entities, $\pmb{R} \in \mathbb{R}^{|\mathcal{R}| \times k}$ is the embedding matrix for relations and $k$ is the embedding dimension.
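For concreteness, the formal definition $\mathcal{T} \subseteq \mathcal{E} \times \mathcal{R} \times \mathcal{E}$ can be instantiated on the toy fraud-detection graph of Figure 1 (the triples below are illustrative assumptions, not data from the paper):

```python
# A toy KG as a set of (subject, relation, object) triples,
# using the Person/BankAccount entities from Figure 1.
kg = {
    ("Sam", "allied_with", "Joe"),
    ("Sam", "owns", "account_1"),   # account names are illustrative
    ("Joe", "owns", "account_2"),
}

# Derive the entity set E and relation set R from the triples.
entities = {s for s, _, _ in kg} | {o for _, _, o in kg}
relations = {r for _, r, _ in kg}
```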
+ +Scoring Functions: KGE models differ from each other by their scoring functions $f:\mathcal{T}\to \mathbb{R}$ which combine the subject, relation and object embeddings to assign a score to the triple, i.e. $f_{t}\coloneqq f(\pmb{e}_{s},\pmb{e}_{r},\pmb{e}_{o})$ where $\pmb {e}_s,\pmb {e}_o\in \pmb{E}$ and $\pmb {e_r}\in \pmb{R}$ . Table 1 shows the different scoring functions of KGE models used in this research. + +These scoring functions are used to categorize the models as additive or multiplicative (Chandrahas et al., 2018). Additive models apply relation-specific translation from the subject embedding to the object embedding. The scoring function for such models is expressed as $f_{t} = -\left\| M_{\mathrm{r}}^{1}(e_{s}) + e_{\mathrm{r}} - M_{\mathrm{r}}^{2}(e_{o}) \right\|$ where $M_{\mathrm{r}} \in \mathbb{R}^{k \times k}$ is a projection matrix from entity space to relation space. An example of additive models is TransE where $M_{\mathrm{r}}^{1} = M_{\mathrm{r}}^{2} = I$ . + +On the other hand, multiplicative models score triples through multiplicative interactions between the subject, relation and object embeddings. The scoring function for these models is expressed as $f_{t} = e_{\mathrm{r}}^{\top}\mathcal{F}(e_{s},e_{o})$ where the function $\mathcal{F}$ measures the compatibility between the subject and + +
| Model | Scoring Function | Feature Vector |
| --- | --- | --- |
| DistMult | $\langle \pmb{e}_s, \pmb{e}_r, \pmb{e}_o \rangle$ | $\pmb{e}_s \circ \pmb{e}_r \circ \pmb{e}_o$ |
| ComplEx | $\mathrm{Re}(\langle \pmb{e}_s, \pmb{e}_r, \overline{\pmb{e}_o} \rangle)$ | $\mathrm{Re}(\pmb{e}_s \circ \pmb{e}_r \circ \overline{\pmb{e}_o})$ |
| ConvE | $\langle \pmb{e}_s * \pmb{e}_r, \pmb{e}_o \rangle$ | $(\pmb{e}_s * \pmb{e}_r) \circ \pmb{e}_o$ |
| TransE | $-\lVert \pmb{e}_s + \pmb{e}_r - \pmb{e}_o \rVert_p$ | $-(\pmb{e}_s + \pmb{e}_r - \pmb{e}_o)$ |
Table 1: Scoring functions $f_{sro}$ and the proposed Triple Feature Vectors $\pmb{f}_{sro}$ of the KGE models used in this research. For ComplEx, $\pmb{e}_s,\pmb{e}_r,\pmb{e}_o\in \mathbb{C}^k$ ; for the remaining models $\pmb{e}_s,\pmb{e}_r,\pmb{e}_o\in \mathbb{R}^k$ . Here, $\langle \cdot \rangle$ denotes the tri-linear dot product; $\circ$ denotes the element-wise Hadamard product; $\overline{\cdot}$ denotes the conjugate for complex vectors and 2D reshaping for real vectors; $\| \cdot \| _p$ denotes the $\ell_p$ norm; and $*$ is the neural architecture in ConvE, i.e. $e_s * e_r \coloneqq \sigma (\mathrm{vec}(\sigma ([\overline{e_s}; \overline{e_r}] \ast \Omega))W)$ where $\sigma$ denotes the sigmoid activation and $\ast$ denotes the 2D convolution.

object embeddings and varies across different models within this family. DistMult, ComplEx and ConvE are examples of multiplicative models.

Training: Since KGs only contain positive triples, to train the KGE model, synthetic negative samples $t' \in \mathcal{T}'$ are generated by replacing the subject or object in the positive triples with other entities in $\mathcal{E}$ . That is, for each positive triple $t \coloneqq (s, \mathbf{r}, o)$ , the set of negative samples is $\mathcal{T}' \coloneqq \{(s', \mathbf{r}, o)\} \cup \{(s, \mathbf{r}, o')\}$ . The training objective is to learn the embeddings that score positive triples existing in the KG higher than the negative triples generated synthetically. To achieve this, a triple-wise loss function $\mathcal{L}(t, \boldsymbol{\theta}) \coloneqq \ell(t, \boldsymbol{\theta}) + \sum_{t' \in \mathcal{T}'} \ell(t', \boldsymbol{\theta})$ is minimized. Thus, the optimal parameters $\widehat{\boldsymbol{\theta}}$ learned by the model are defined by $\widehat{\boldsymbol{\theta}} \coloneqq \arg \min_{\boldsymbol{\theta}} \sum_{t \in \mathcal{T}} \mathcal{L}(t, \boldsymbol{\theta})$ . Further details on KGE loss functions and negative sampling strategies are available in Ruffinelli et al. (2020).
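As an illustration of the scoring functions and triple feature vectors in Table 1, here is a minimal sketch for DistMult and TransE with randomly initialized embeddings (dimensions and values are illustrative):

```python
import numpy as np

k = 4                                    # embedding dimension (illustrative)
rng = np.random.default_rng(1)
e_s, e_r, e_o = rng.normal(size=(3, k))  # subject, relation, object embeddings

# DistMult: the triple feature vector keeps the per-dimension terms; the
# scalar score reduces it by summing over the embedding dimension.
f_distmult = e_s * e_r * e_o
score_distmult = f_distmult.sum()        # <e_s, e_r, e_o>

# TransE: negative l2 distance of the relation-translated subject to the object;
# the feature vector is the un-reduced translation residual.
f_transe = -(e_s + e_r - e_o)
score_transe = -np.linalg.norm(e_s + e_r - e_o)
```

The same pattern (dropping only the final reduction) yields the feature vectors for ComplEx and ConvE in Table 1.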
+ +Missing Link Prediction: Given the learned embeddings $\theta$ , missing triples in the knowledge graph are predicted by an entity ranking evaluation protocol. Similar to the training process, subject-side negatives $t_s' = (s', \mathbf{r}, o)$ and object-side negatives $t_o' = (s, \mathbf{r}, o')$ are sampled for each test triple $t = (s, \mathbf{r}, o)$ to be predicted. Of these negatives, the triples already existing in the training, validation or test set are filtered out (Bordes et al., 2013). The test triple is then ranked against the remaining negatives based on the scores predicted by the KGE model. The state-of-art evaluation metrics reported over the entire set are (i) MR: mean of the ranks, (ii) MRR: mean of the reciprocals of ranks and (iii) Hits@n: number of triples ranked in top-n. + +# 3 Poisoning Knowledge Graph Embeddings via Instance Attribution + +We consider an adversarial attacker that aims to degrade the KGE model's predictive performance on a set of missing triples that have been ranked highly plausible by the model. We denote these target triples as $\mathcal{Z} \coloneqq \{z \coloneqq (z_s, z_r, z_o)\}$ . Since the predicted ranks are based on the predicted scores; to reduce the predicted rank of a target triple, we craft perturbations to the training data that aim to reduce the predicted score of the target triple. + +Threat Model: We use the same threat model as the state-of-art poisoning attacks on KGE models (Pezeshkpour et al., 2019; Zhang et al., 2019a). We focus on the white-box attack setting where the attacker has full knowledge of the victim model architecture and access to the learned embeddings. However, they cannot perturb the architecture or the embeddings directly; but only through perturbations in the training data. We study both adversarial additions and adversarial deletions. In both settings, the attacker is restricted to making only one edit in the neighbourhood of the target triple. 
The neighbourhood of the target triple $z \coloneqq (z_{s}, z_{\mathbf{r}}, z_{o})$ is the set of triples that have the same subject or the same object as the target triple, i.e. $\mathcal{X} \coloneqq \{x \coloneqq (x_{s}, x_{\mathbf{r}}, x_{o}) \mid x_{s} \in \{z_{s}, z_{o}\} \vee x_{o} \in \{z_{s}, z_{o}\} \}$ .

# 3.1 Instance Attribution Methods

For adversarial deletions, we want to identify the training triples that have influenced the KGE model's prediction on the target triple. Deleting these influential triples from the training set will likely degrade the prediction on the target triple. Thus, we define an influence score $\phi(z,x): \mathcal{T} \times \mathcal{T} \to \mathbb{R}$ for the pairs of triples $(z,x) \in \mathcal{T} \times \mathcal{T}$ which indicates the influence of training triple $x$ on the prediction of target triple $z$ . Larger values of the influence score $\phi(z,x)$ indicate that removing $x$ from the training data would cause a larger reduction in the predicted score on $z$ .

Trivially, we can compute the influence score for a training triple by removing the triple and retraining the KGE model. However, this is a prohibitively expensive step that requires re-training a new KGE model for every candidate influential triple. Thus, we use the following instance attribution methods from Interpretable Machine Learning (Molnar, 2019) to estimate the influence score $\phi(z, x)$ without re-training the model.

# 3.1.1 Instance Similarity

We estimate the influence of training triple $x$ on the prediction of target triple $z$ based on the similarity of their feature representations. The intuition behind these metrics is to identify the training triples that a KGE model has learnt to be similar to the target triple and thus (might) have influenced the model's prediction on the target triple.

Computing this similarity between triples requires feature vector representations for the triples.
We note that while the standard KGE scoring functions assign a scalar score to the triples, this scalar value is obtained by reducing over the embedding dimension. For example, in the tri-linear dot product for DistMult, the embeddings of subject, relation and object are multiplied element-wise and then the scalar score for the triple is obtained by summing over the embedding dimension, i.e. $f_{t} \coloneqq \langle e_{s}, e_{r}, e_{o} \rangle \coloneqq \sum_{i=1}^{k} e_{s_{i}} e_{r_{i}} e_{o_{i}}$ where $k$ is the embedding dimension.

Thus, to obtain feature vector representations for the triples $\pmb{f}_t: \mathcal{E} \times \mathcal{R} \times \mathcal{E} \rightarrow \mathbb{R}^k$ , we use the state-of-art KGE scoring functions without reduction over the embedding dimension. For the DistMult model, the triple feature vector is $\pmb{f} := e_s \circ e_r \circ e_o$ where $\circ$ is the Hadamard (element-wise) product. Table 1 shows the feature vectors for the different KGE models used in this research.

Given the feature vectors for target triples $\pmb{f}(z)$ and training triples $\pmb{f}(x)$ , we follow Hanawa et al. (2021) and define the following metrics.

Dot Metric: This metric computes the similarity between target and training instances as the dot product of their feature vectors. That is, $\phi_{\text{dot}}(z,x) \coloneqq \langle \pmb{f}(z), \pmb{f}(x) \rangle$

$\ell_{2}$ Metric: This metric computes similarity as the negative Euclidean distance between the feature vectors of the target instance and the training instance. That is, $\phi_{\ell_2}(z,x)\coloneqq -\| f(z) - f(x)\| _2$

Cosine Metric: This metric computes similarity as the dot product between the $\ell_{2}$ normalized feature vectors of the target and training instance, i.e. it ignores the magnitude of the vectors and only relies on the angle between them.
That is, $\phi_{\cos}(z,x)\coloneqq \cos (f(z),f(x))$ + +Here, we denote the dot product for two vectors $\mathbf{a}$ and $\mathbf{b}$ as $\langle \mathbf{a}, \mathbf{b} \rangle \coloneqq \sum_{i=1}^{p} a_i b_i$ ; the $\ell_2$ norm of a vector as $\| \mathbf{a} \|_2 \coloneqq \sqrt{\langle \mathbf{a}, \mathbf{a} \rangle}$ ; and the cos similarity between vectors $\mathbf{a}$ and $\mathbf{b}$ as $\cos(\mathbf{a}, \mathbf{b}) \coloneqq \frac{\langle \mathbf{a}, \mathbf{b} \rangle}{\| \mathbf{a} \|_2 \| \mathbf{b} \|_2}$ . + +# 3.1.2 Gradient Similarity + +We represent the gradient of the loss for triple $z$ w.r.t. model parameters as $\pmb {g}(z,\widehat{\pmb{\theta}})\coloneqq \nabla_{\pmb{\theta}}\mathcal{L}(z,\widehat{\pmb{\theta}})$ . Gradient similarity metrics compute similarity between the gradients due to target triple $z$ and the gradients due to training triple $x$ . The intuition is to assign higher influence to training triples that have similar effect on the model's parameters as the target triple; and are therefore likely to impact the prediction on target triple (Charpiat et al., 2019). Thus, using the same similarity functions as Instance Similarity metrics, we define the following three metrics for gradient similarity - Gradient Dot (GD), Gradient $\ell_2$ (GL) and Gradient Cosine (GC). 
+ +$$ +\mathbf {G D} (\mathbf {d o t}): \phi_ {\mathrm {G D}} (z, x) := \langle \boldsymbol {g} (z, \widehat {\boldsymbol {\theta}}), \boldsymbol {g} (x, \widehat {\boldsymbol {\theta}}) \rangle +$$ + +$$ +\mathbf {G L} (\ell_ {\mathbf {2}}): \phi_ {\mathrm {G L}} (z, x) := - \left\| \boldsymbol {g} (z, \widehat {\boldsymbol {\theta}}) - \boldsymbol {g} (x, \widehat {\boldsymbol {\theta}}) \right\| _ {2} +$$ + +$$ +\mathbf {G C} (\boldsymbol {\cos}): \phi_ {\mathrm {G C}} (z, x) := \cos \left(\boldsymbol {g} (z, \widehat {\boldsymbol {\theta}}), \boldsymbol {g} (x, \widehat {\boldsymbol {\theta}})\right) +$$ + +# 3.1.3 Influence Functions + +Influence Functions (IF) is a classic technique from robust statistics and was introduced to explain the predictions of black-box models in Koh and Liang (2017). To estimate the effect of a training point on a model's predictions, it first approximates the effect of removing the training point on the learned model parameters. To do this, it performs a first order Taylor expansion around the learned parameters $\widehat{\pmb{\theta}}$ at the optimality conditions. + +Following the derivation in Koh and Liang (2017), the effect of removing the training triple $x$ on $\widehat{\pmb{\theta}}$ is given by $d\widehat{\theta} / d\epsilon_{i} = H_{\widehat{\theta}}^{-1} g(x, \widehat{\pmb{\theta}})$ . Here, $H_{\widehat{\pmb{\theta}}}$ denotes the Hessian of the loss function $H_{\widehat{\pmb{\theta}}} := 1/n \sum_{t \in \mathcal{T}} \nabla_{\pmb{\theta}}^2 \mathcal{L}(t, \widehat{\pmb{\theta}})$ . Using the chain rule then, we approximate the influence of removing $x$ on the model's prediction at $z$ as $\langle g(z, \widehat{\pmb{\theta}}), d\widehat{\pmb{\theta}} / d\epsilon_{i} \rangle$ . 
Thus, the influence score using IF is defined as: + +$$ +\mathbf {I F}: \phi_ {\mathrm {I F}} (z, x) := \langle \boldsymbol {g} (z, \widehat {\boldsymbol {\theta}}), \boldsymbol {H} _ {\widehat {\boldsymbol {\theta}}} ^ {- 1} \boldsymbol {g} (x, \widehat {\boldsymbol {\theta}}) \rangle +$$ + +Computing the IF for KGE models poses two challenges - (i) storing and inverting the Hessian matrix is computationally too expensive for a large number of parameters; (ii) the Hessian is not guaranteed to be positive definite and thus, invertible because KGE models are non-convex models. To address both these challenges, we follow the guidelines in Koh and Liang (2017). Instead of computing the exact Hessian matrix, we estimate the Hessian-vector product (HVP) with target triple's gradient. That is, for every target triple $z$ , we precompute the value $\pmb{H}_{\widehat{\theta}}^{-1}\pmb{g}(z,\widehat{\theta})$ . Then, for each + +neighbourhood triple $x$ in the training set, we compute $\phi_{\mathrm{IF}}(z,x)$ using the pre-computed HVP. Furthermore, we use the stochastic estimator LiSSA (Agarwal et al., 2017) that computes the HVP in linear time using samples from training data. For the second issue of non-convexity, we add a "damping" term to the Hessian so that it is positive definite and invertible. This term is a hyperparameter that is tuned to ensure that all eigenvalues of the Hessian matrix are positive, i.e. the Hessian matrix is positive definite. Further discussion on the validity of Influence Functions for non-convex settings is available in Koh and Liang (2017). + +# 3.2 Adversarial Additions + +In this attack setting, the adversarial attacker can only add triples to the neighbourhood of target triple. Using the Instance Attribution metrics above, we select the training triple $x \coloneqq (x_{s}, x_{\mathbf{r}}, x_{o})$ in the neighbourhood of the target triple $z \coloneqq (z_{s}, z_{\mathbf{r}}, z_{o})$ that is most influential to the prediction of $z$ . 
For brevity, assume $x_{s} = z_{s}$, i.e. the influential and target triples have the same subject. To generate an adversarial addition from the influential triple, we propose to replace $x_{o}$ with the most dissimilar entity $x_{o'}$. Since the adversarial triple $x' \coloneqq (x_{s}, x_{\mathbf{r}}, x_{o'})$ has the same subject and relation as the influential triple but a different object, it should reduce the influence of the influential triple on the target triple's prediction. This in turn should degrade the model's prediction on the target triple. For multiplicative models, we select the dissimilar entity $x_{o'}$ using the cosine similarity between $x_{o}$ and the entities $\mathcal{E}$; for additive models, we use the $\ell_{2}$ similarity between $x_{o}$ and the entities $\mathcal{E}$.

# 4 Evaluation

We evaluate the effectiveness of the proposed attack strategies in degrading the KGE model's predictions on target triples at test time. We follow the state-of-art protocol for evaluating poisoning attacks (Xu et al., 2020): we train a victim KGE model on the original dataset; generate adversarial deletions or additions using one of the attacks; perturb the original dataset; and train a new KGE model on the perturbed dataset. The hyperparameters for the victim and poisoned KGE models are the same.

We evaluate our attacks on four state-of-art KGE models - DistMult, ComplEx, ConvE and TransE - on two publicly available benchmark datasets - WN18RR and FB15k-237. To evaluate the effectiveness of the attacks in degrading predictive performance, we select a subset of the benchmark test triples that have been ranked highest (ranks = 1) by the victim KGE model. From this subset, we randomly sample 100 triples as the target triples. This avoids the expensive Hessian inverse estimation in the IF metric for a large number of target triples (for each target triple, this estimation requires one training epoch).
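To make the metric definitions concrete, here is a minimal NumPy sketch of the Instance and Gradient Similarity scores for a DistMult victim model, together with the deletion and addition selection steps above. The toy embeddings, the logistic loss and all variable names are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_ent, n_rel = 8, 5, 3
E = rng.normal(size=(n_ent, dim))   # entity embeddings of a (stand-in) trained model
R = rng.normal(size=(n_rel, dim))   # relation embeddings

def score(t):                        # DistMult score: <e_s, e_r, e_o>
    s, r, o = t
    return np.sum(E[s] * R[r] * E[o])

def feature(t):                      # triple feature vector f(x) = [e_s; e_r; e_o]
    s, r, o = t
    return np.concatenate([E[s], R[r], E[o]])

def grad(t):                         # gradient of L(t) = -log sigmoid(score(t)) w.r.t. f(t)
    s, r, o = t
    c = -1.0 / (1.0 + np.exp(score(t)))   # dL/dscore
    return c * np.concatenate([R[r] * E[o], E[s] * E[o], E[s] * R[r]])

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Instance Similarity metrics on feature vectors
phi_dot = lambda z, x: feature(z) @ feature(x)
phi_l2  = lambda z, x: -np.linalg.norm(feature(z) - feature(x))
phi_cos = lambda z, x: cos(feature(z), feature(x))
# Gradient Similarity metrics on loss gradients
phi_gd  = lambda z, x: grad(z) @ grad(x)
phi_gl  = lambda z, x: -np.linalg.norm(grad(z) - grad(x))
phi_gc  = lambda z, x: cos(grad(z), grad(x))

target = (0, 1, 2)
neighbourhood = [(0, 1, 3), (4, 2, 0), (2, 0, 1)]   # triples sharing an entity with target

# adversarial deletion: remove the most influential neighbour (scored here with GD)
influential = max(neighbourhood, key=lambda x: phi_gd(target, x))
# adversarial addition: keep (x_s, x_r) but swap in the most dissimilar object entity
x_s, x_r, x_o = influential
x_o_dis = int(np.argmin([cos(E[x_o], E[e]) for e in range(n_ent)]))
adv_addition = (x_s, x_r, x_o_dis)
```

Swapping `phi_gd` for any other `phi_*` gives the corresponding attack variant; for additive models the dissimilar object would be chosen by $\ell_2$ distance instead of cosine similarity.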

The source code implementation of our experiments is available at https://github.com/PeruBhardwaj/AttributionAttack.

Baselines: We evaluate our attacks against baseline methods based on random edits and against the state-of-art poisoning attacks. Random_n adds or removes a random triple from the neighbourhood of the target triple. Random_g adds or removes a random triple globally and is not restricted to the target's neighbourhood. Direct-Del and Direct-Add are the adversarial deletion and addition attacks proposed in Zhang et al. (2019a). CRIAGE is the poisoning attack from Pezeshkpour et al. (2019) and is a baseline for both deletions and additions. GR (Gradient Rollback) (Lawrence et al., 2021) uses influence estimation to provide post-hoc explanations for KGE models and can also be used to generate adversarial deletions; we therefore include this method as a baseline for adversarial deletions.

The attack evaluations in Zhang et al. (2019a), Pezeshkpour et al. (2019) and Lawrence et al. (2021) differ with respect to the definition of the neighbourhood. Thus, to ensure fair evaluation, we implement all methods with the same neighbourhood: the triples that are linked to the subject or object of the target triple (Section 3). We use the publicly available implementations for CRIAGE and Gradient Rollback, and implement Direct-Del and Direct-Add ourselves. Further details on the datasets, the implementation of the KGE models, the baselines and the computing resources are available in Appendices A and B.

Results: For WN18RR and FB15k-237 respectively, Tables 2 and 3 show the degradation in MRR and Hits@1 due to adversarial deletions, and Tables 4 and 5 due to adversarial additions, for state-of-art KGE models. Below we discuss different patterns in these results. We also discuss the runtime efficiency of the attack methods in Appendix C.1.
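The reductions reported in Tables 2 to 5 follow from standard rank metrics over the target triples. A minimal sketch of how MRR, Hits@1 and the relative reduction are computed; the rank lists here are made-up toy values, not results from the paper:

```python
# Mean Reciprocal Rank and Hits@1 over a list of test-time ranks.
def mrr(ranks):
    return sum(1.0 / r for r in ranks) / len(ranks)

def hits_at_1(ranks):
    return sum(1 for r in ranks if r == 1) / len(ranks)

original = [1, 1, 1, 1]    # target triples are chosen with rank 1 under the victim model
poisoned = [1, 4, 2, 10]   # illustrative ranks after re-training on the perturbed data

# relative reduction reported in the tables: (poisoned - original)/original * 100
reduction = (mrr(poisoned) - mrr(original)) / mrr(original) * 100
```

With these toy ranks, `mrr(poisoned)` is 0.4625 and the relative reduction is -53.75%.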

| | Attack | DistMult MRR | DistMult Hits@1 | ComplEx MRR | ComplEx Hits@1 | ConvE MRR | ConvE Hits@1 | TransE MRR | TransE Hits@1 |
|---|---|---|---|---|---|---|---|---|---|
| | Original | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |
| Baseline Attacks | Random_n | 0.87 (-13%) | 0.82 | 0.85 (-15%) | 0.80 | 0.82 (-18%) | 0.79 | 0.82 (-18%) | 0.70 |
| | Random_g | 0.97 | 0.95 | 0.96 | 0.93 | 0.99 | 0.98 | 0.93 | 0.87 |
| | Direct-Del | 0.88 | 0.77 | 0.86 (-14%) | 0.77 | 0.71 (-29%) | 0.64 | 0.54 (-46%) | 0.37 |
| | CRIAGE | 0.73 (-27%) | 0.66 | - | - | Er | Er | - | - |
| | GR | 0.95 | 0.90 | 0.93 | 0.86 | 0.95 | 0.91 | 0.84 | 0.77 |
| Proposed Attacks | Dot Metric | 0.89 | 0.82 | 0.85 | 0.79 | 0.84 (-16%) | 0.80 | 0.77 | 0.60 |
| | $\ell_2$ Metric | 0.25 (-75%) | 0.16 | 0.29 (-71%) | 0.20 | 0.88 | 0.78 | 0.62 | 0.50 |
| | Cos Metric | 0.25 (-75%) | 0.16 | 0.29 (-71%) | 0.20 | 0.87 | 0.76 | 0.56 (-44%) | 0.40 |
| | GD (dot) | 0.28 (-72%) | 0.19 | 0.29 | 0.21 | 0.25 | 0.21 | 0.71 (-29%) | 0.57 |
| | GL ($\ell_2$) | 0.30 | 0.20 | 0.28 (-72%) | 0.19 | 0.17 (-83%) | 0.12 | 0.72 | 0.60 |
| | GC (cos) | 0.29 | 0.19 | 0.29 | 0.21 | 0.20 | 0.16 | 0.71 (-29%) | 0.57 |
| | IF | 0.28 (-72%) | 0.19 | 0.29 (-71%) | 0.20 | 0.22 (-78%) | 0.17 | 0.71 (-29%) | 0.57 |

Table 2: Reduction in MRR and Hits@1 due to adversarial deletions on target triples in WN18RR. Lower values indicate better results; best results for each model are in bold. The first block of rows contains the baseline attacks with random edits; the second block contains the state-of-art attacks; the remaining rows are the proposed attacks. For each block, we report the best reduction in percentage relative to the original MRR, computed as (poisoned - original)/original * 100.

# 4.1 Comparison with Baselines

We observe that the proposed strategies for adversarial deletions and adversarial additions successfully degrade the predictive performance of KGE models. On the other hand, the state-of-art attacks are ineffective or only partially effective. Adversarial deletions from Gradient Rollback perform similarly to the random baselines, likely because this method estimates the influence of a training triple as the sum of its gradients over the training process. In this way, it does not account for the target triple in the influence estimation. The method is also likely to be effective only for a KGE model trained with a batch size of 1, because it needs to track the gradient updates for each triple.

The CRIAGE baseline is only applicable to DistMult and ConvE, but we found that the method ran into a numpy.linalg.LinAlgError: Singular matrix error for ConvE, because the Hessian matrix computed from the victim model embeddings was non-invertible. For adversarial deletions on DistMult, the baseline works better than random edits but not better than the proposed attacks. It is also ineffective against adversarial additions.

We see that Direct-Del is effective on TransE, but not on multiplicative models. This is likely because it estimates the influence of a candidate triple as the difference in the triple's score when the neighbour entity embedding is perturbed. The additive nature of this influence score might make it more suitable for additive models.
We also see that Direct-Add works similarly to random additions, likely because it uses random down-sampling.

The proposed attacks based on instance attribution methods consistently outperform the random baselines for adversarial additions and deletions. One exception to this pattern is adversarial additions against TransE on WN18RR. In this case, no influence metric performs better than random neighbourhood edits, though they are all effective for adversarial deletions. One possible reason is that the TransE model is designed to learn hierarchical relations like has_part. We found that the target triples ranked highest by the model have such hierarchical relations, and that the influential triple for each of them has the same relation. That is, the triple $(s_1, \mathit{has\_part}, s)$ is the influential triple for $(s, \mathit{has\_part}, o)$. Removing this influential triple breaks the hierarchical link between $s_1$ and $s$, and degrades TransE's predictions on the target. But adding the triple $(s_2, \mathit{has\_part}, s)$ still preserves the hierarchical structure, which TransE can use to score the target correctly. We provide more examples of such relations in Appendix C.3.

# 4.2 Comparison across Influence Metrics

We see that the IF and the Gradient Similarity metrics show similar degradation in predictive performance. This indicates that the computationally expensive Hessian inverse in the IF can be avoided: simpler metrics can identify influential triples with comparable effectiveness. Furthermore, the cos and $\ell_2$ based Instance Similarity metrics outperform all other methods for adversarial deletions on DistMult, ComplEx and TransE. This effectiveness of naive metrics indicates the high vulnerability of shallow KGE architectures to data poisoning attacks in practice. In contrast, the Instance Similarity metrics are less effective in poisoning ConvE, significantly so on WN18RR.
This is likely because the triple feature vectors for ConvE are based on the output of a deeper neural architecture than the embedding layer alone. Within the Instance Similarity metrics, we see that the dot metric is not as effective as the others. This could be because the dot product does not normalize the triple feature vectors; thus, training triples with large norms are prioritized over relevant influential triples (Hanawa et al., 2021).

# 4.3 Comparison of Datasets

We note that the degradation in predictive performance is more significant on WN18RR than on FB15k-237. This is likely due to the sparser graph structure of WN18RR, i.e. there are fewer neighbours per target triple in WN18RR than in FB15k-237 (Appendix C.4). Thus, the model learns its predictions from a few influential triples in WN18RR, and removing only one neighbour significantly degrades the model's predictions on the target triple.

On the other hand, because of the larger number of neighbours in FB15k-237, the model predictions are likely influenced by a group of training triples. Such group effects of training instances on model parameters have been studied in Koh et al. (2019) and Basu et al. (2020). We will investigate these methods for KGE models on FB15k-237 in the future.

# 5 Related Work

Cai et al. (2018) and Nickel et al. (2015) provide comprehensive surveys of KGE models. We use the most popular models: DistMult (Yang et al., 2015), ComplEx (Trouillon et al., 2016), ConvE (Dettmers et al., 2018) and TransE (Bordes et al., 2013).

Our work is most closely related to CRIAGE (Pezeshkpour et al., 2019) and Direct Attack (Zhang et al., 2019a), which study both adversarial additions and deletions against KGE models. But CRIAGE is only applicable to multiplicative models, and our experiments (Section 4) show that Direct Attack is effective (with respect to random baselines) on additive models only. On the other hand, our instance attribution methods work for all KGE models. Recently, Lawrence et al.
(2021) propose Gradient Rollback to estimate the influence of training triples on KGE model predictions. The original study uses the influential triples for post-hoc explanations, but they can also be used for adversarial deletions. However, the attack stores the model parameter updates for all training triples, which are on the order of millions for the benchmark datasets; and our experiments (Section 4) show that it performs similarly to random deletions. In contrast, our influence estimation methods do not require additional storage and are consistently better than the random baselines on all KGE models.

We also study data poisoning attacks against KGE models in Bhardwaj et al. (2021). In that work, we exploit the inductive abilities of KGE models to select adversarial additions that improve the predictive performance of the model on a set of decoy triples, which in turn degrades the performance on the target triples. These inference-pattern-based attacks cannot be used for adversarial deletions, but we will perform a detailed comparison for adversarial additions in future work. In parallel work, Banerjee et al. (2021) study risk-aware adversarial attacks, with the aim of reducing the exposure risk of an adversarial attack instead of improving the attack's effectiveness. Also, previous studies by Minervini et al. (2017) and Cai and Wang (2018) use adversarial regularization on the training loss of KGE models to improve predictive performance. But these adversarial samples are not in the input domain, and they aim to improve rather than degrade model performance. Poisoning attacks have also been studied against models for undirected and single-relational graph data (Zügner et al., 2018; Dai et al., 2018; Xu et al., 2020), but they cannot be applied directly to KGE models because they require gradients of a dense adjacency matrix.

Other related work towards understanding KGE models includes Zhang et al. (2019b) and Nandwani et al. (2020), which generate post-hoc explanations in the form of sub-graphs.
Also, Trouillon et al. (2019) study the inductive abilities of KGE models as binary relation properties for controlled inference tasks on synthetic datasets. Recently, Allen et al. (2021) interpret the structure of KGE models by drawing comparisons with word embeddings.

| | Attack | DistMult MRR | DistMult Hits@1 | ComplEx MRR | ComplEx Hits@1 | ConvE MRR | ConvE Hits@1 | TransE MRR | TransE Hits@1 |
|---|---|---|---|---|---|---|---|---|---|
| | Original | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |
| Baseline Attacks | Random_n | 0.66 (-34%) | 0.52 | 0.65 (-35%) | 0.51 | 0.62 (-38%) | 0.46 | 0.71 (-29%) | 0.56 |
| | Random_g | 0.68 | 0.53 | 0.65 (-35%) | 0.51 | 0.63 | 0.50 | 0.75 | 0.61 |
| | Direct-Del | 0.59 (-41%) | 0.42 | 0.62 (-38%) | 0.47 | 0.57 (-43%) | 0.41 | 0.62 (-38%) | 0.45 |
| | CRIAGE | 0.62 | 0.47 | - | - | Er | Er | - | - |
| | GR | 0.68 | 0.55 | 0.66 | 0.51 | 0.62 | 0.45 | 0.68 | 0.53 |
| Proposed Attacks | Dot Metric | 0.63 | 0.47 | 0.64 | 0.49 | 0.60 | 0.44 | 0.74 | 0.62 |
| | $\ell_2$ Metric | 0.58 | 0.41 | 0.56 (-44%) | 0.40 | 0.53 (-47%) | 0.35 | 0.63 (-37%) | 0.46 |
| | Cos Metric | 0.56 (-44%) | 0.39 | 0.57 | 0.40 | 0.55 | 0.38 | 0.63 (-37%) | 0.45 |
| | GD (dot) | 0.60 | 0.44 | 0.60 | 0.45 | 0.55 (-45%) | 0.37 | 0.65 | 0.49 |
| | GL ($\ell_2$) | 0.62 | 0.45 | 0.60 | 0.45 | 0.56 | 0.41 | 0.70 | 0.58 |
| | GC (cos) | 0.58 (-42%) | 0.42 | 0.57 (-43%) | 0.39 | 0.57 | 0.40 | 0.64 (-36%) | 0.48 |
| | IF | 0.60 (-40%) | 0.44 | 0.60 (-40%) | 0.45 | 0.58 (-42%) | 0.43 | 0.66 (-34%) | 0.52 |

Table 3: Reduction in MRR and Hits@1 due to adversarial deletions on target triples in FB15k-237. Lower values indicate better results. The first block of rows contains the baseline attacks with random edits; the second block contains the state-of-art attacks; the remaining rows are the proposed attacks. For each block, we report the best reduction in percentage relative to the original MRR.

| | Attack | DistMult MRR | DistMult Hits@1 | ComplEx MRR | ComplEx Hits@1 | ConvE MRR | ConvE Hits@1 | TransE MRR | TransE Hits@1 |
|---|---|---|---|---|---|---|---|---|---|
| | Original | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |
| Baseline Attacks | Random_n | 0.99 (-1%) | 0.98 | 0.97 (-3%) | 0.94 | 0.99 (-1%) | 0.98 | 0.76 (-24%) | 0.57 |
| | Random_g | 0.99 (-1%) | 0.97 | 0.97 (-3%) | 0.95 | 0.99 (-1%) | 0.98 | 0.93 | 0.87 |
| | Direct-Add | 0.98 (-2%) | 0.96 | 0.95 (-5%) | 0.92 | 0.99 (-1%) | 0.98 | 0.81 (-19%) | 0.67 |
| | CRIAGE | 0.98 (-2%) | 0.97 | - | - | Er | Er | - | - |
| Proposed Attacks | Dot Metric | 0.97 | 0.93 | 0.95 | 0.90 | 0.95 (-5%) | 0.91 | 0.95 | 0.90 |
| | $\ell_2$ Metric | 0.89 (-11%) | 0.78 | 0.88 | 0.77 | 0.98 | 0.96 | 0.87 (-13%) | 0.83 |
| | Cos Metric | 0.89 (-11%) | 0.78 | 0.87 (-13%) | 0.77 | 0.99 | 0.98 | 0.87 (-13%) | 0.83 |
| | GD (dot) | 0.90 | 0.79 | 0.89 | 0.79 | 0.92 | 0.85 | 0.80 (-20%) | 0.73 |
| | GL ($\ell_2$) | 0.89 (-11%) | 0.79 | 0.86 (-14%) | 0.73 | 0.88 (-12%) | 0.77 | 0.89 | 0.83 |
| | GC (cos) | 0.90 | 0.80 | 0.87 | 0.76 | 0.91 | 0.82 | 0.80 (-20%) | 0.73 |
| | IF | 0.90 (-10%) | 0.79 | 0.89 (-11%) | 0.79 | 0.91 (-8.9%) | 0.82 | 0.77 (-23%) | 0.67 |

Table 4: Reduction in MRR and Hits@1 due to adversarial additions on target triples in WN18RR. Lower values indicate better results. The first block of rows contains the baseline attacks with random edits; the second block contains the state-of-art attacks; the remaining rows are the proposed attacks. For each block, we report the best reduction in percentage relative to the original MRR.

| | Attack | DistMult MRR | DistMult Hits@1 | ComplEx MRR | ComplEx Hits@1 | ConvE MRR | ConvE Hits@1 | TransE MRR | TransE Hits@1 |
|---|---|---|---|---|---|---|---|---|---|
| | Original | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |
| Baseline Attacks | Random_n | 0.65 (-34%) | 0.50 | 0.69 | 0.57 | 0.61 (-39%) | 0.46 | 0.74 | 0.62 |
| | Random_g | 0.66 | 0.52 | 0.66 (-34%) | 0.52 | 0.63 | 0.50 | 0.73 (-27%) | 0.61 |
| | Direct-Add | 0.64 (-36%) | 0.48 | 0.66 (-34%) | 0.52 | 0.60 (-40%) | 0.45 | 0.72 (-28%) | 0.59 |
| | CRIAGE | 0.66 | 0.50 | - | - | Er | Er | - | - |
| Proposed Attacks | Dot Metric | 0.67 | 0.54 | 0.65 | 0.50 | 0.61 | 0.46 | 0.74 (-26%) | 0.62 |
| | $\ell_2$ Metric | 0.64 | 0.50 | 0.66 | 0.52 | 0.59 (-41%) | 0.43 | 0.74 (-26%) | 0.62 |
| | Cos Metric | 0.63 (-37%) | 0.49 | 0.63 (-37%) | 0.47 | 0.60 | 0.43 | 0.74 (-26%) | 0.61 |
| | GD (dot) | 0.61 (-39%) | 0.45 | 0.65 | 0.50 | 0.62 | 0.46 | 0.71 (-29%) | 0.58 |
| | GL ($\ell_2$) | 0.63 | 0.48 | 0.67 | 0.53 | 0.61 (-39%) | 0.45 | 0.74 | 0.60 |
| | GC (cos) | 0.62 | 0.46 | 0.64 (-36%) | 0.49 | 0.61 (-39%) | 0.45 | 0.71 (-29%) | 0.56 |
| | IF | 0.61 (-39%) | 0.45 | 0.65 (-35%) | 0.50 | 0.58 (-42%) | 0.42 | 0.71 (-29%) | 0.58 |

Table 5: Reduction in MRR and Hits@1 due to adversarial additions on target triples in FB15k-237. Lower values indicate better results; best results for each model are in bold. The first block of rows contains the baseline attacks with random edits; the second block contains the state-of-art attacks; the remaining rows are the proposed attacks. For each block, we report the best reduction in percentage relative to the original MRR, computed as (poisoned - original)/original * 100.

The instance attribution methods we use are also used for post-hoc example-based explanations of black-box models (Molnar, 2019). Hanawa et al. (2021), Charpiat et al. (2019) and Pruthi et al. (2020) use Instance or Gradient Similarity on image data. Similar to us, Han et al. (2020), Han and Tsvetkov (2020) and Pezeshkpour et al. (2021) use different instance attribution methods, but to provide post-hoc explanations on natural language data.

# 6 Conclusion

We propose data poisoning attacks against KGE models using instance attribution methods and demonstrate that the proposed attacks outperform the state-of-art attacks. We observe that the attacks are particularly effective when the KGE model relies on few training instances to make its predictions, i.e. when the input graph is sparse.

We also observe that shallow neural architectures like DistMult, ComplEx and TransE are vulnerable to naive attacks based on Instance Similarity. These models have shown competitive predictive performance through proper hyperparameter tuning (Ruffinelli et al., 2020; Kadlec et al., 2017), making them promising candidates for use in production pipelines. But our research shows that these performance gains can be brittle. This calls for improved KGE model evaluation that accounts for adversarial robustness in addition to predictive performance.

Additionally, as in Bhardwaj (2020) and Bhardwaj et al. (2021), we call for future proposals to defend against the security vulnerabilities of KGE models.
Some promising directions might be to use adversarial training techniques, or to train ensembles of models over subsets of the training data so that the model's predictions cannot be influenced by only a few triples. Specification of the model failure modes through adversarial robustness certificates will also improve the usability of KGE models in high-stakes domains like healthcare and finance.

# Acknowledgements

This research was conducted with the financial support of Accenture Labs and Science Foundation Ireland (SFI) at the ADAPT SFI Research Centre at Trinity College Dublin. The ADAPT SFI Centre for Digital Content Technology is funded by Science Foundation Ireland through the SFI Research Centres Programme and is co-funded under the European Regional Development Fund (ERDF) through Grant No. 13/RC/2106_P2.

# Broader Impact

We study the problem of generating data poisoning attacks against KGE models. These models drive many enterprise products ranging from search engines (Google, Microsoft) to social networks (Facebook) to e-commerce (eBay) (Noy et al., 2019), and are increasingly used in domains with high stakes like healthcare and finance (Hogan et al., 2020; Bendtsen and Petrovski, 2019). Thus, it is important to identify the security vulnerabilities of these models that might be exploited by malicious actors to manipulate the predictions of the model and cause system failure. By highlighting these security vulnerabilities of KGE models, we provide an opportunity to fix them and protect stakeholders from harm. This honours the ACM Code of Ethics to contribute to societal well-being and avoid harm due to computing systems.

Furthermore, to study data poisoning attacks against KGE models, we use the Instance Attribution Methods from Interpretable Machine Learning. These methods can also be used to provide post-hoc explanations for KGE models and thus improve our understanding of the predictions made by the models.
In addition to understanding model predictions, instance based attribution methods can help guide design decisions during KGE model training. There are a vast number of KGE model architectures, training strategies and loss functions, and empirically quantifying the impact of the design choices is often challenging (Ruffinelli et al., 2020). Thus, we would encourage further research on exploring the use of instance attribution methods to understand the impact of these choices on the KGE model predictions. By tracing back the model predictions to the input knowledge graph, we can gain a better understanding of the success or failure of different design choices. + +# References + +Naman Agarwal, Brian Bullins, and Elad Hazan. 2017. Second-order stochastic optimization for machine learning in linear time. Journal of Machine Learning Research, 18(116):1-40. +Carl Allen, Ivana Balazevic, and Timothy Hospedales. 2021. Interpreting knowledge graph relation representation from word embeddings. In International Conference on Learning Representations. +Prithu Banerjee, Lingyang Chu, Yong Zhang, Laks V.S. Lakshmanan, and Lanjun Wang. 2021. Stealthy targeted data poisoning attack on knowledge graphs. In + +2021 IEEE 37th International Conference on Data Engineering (ICDE), pages 2069-2074. IEEE. +Samyadeep Basu, Xuchen You, and Soheil Feizi. 2020. On second-order group influence functions for black-box predictions. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 715-724. PMLR. +Claus Bendtsen and Slavé Petrovski. 2019. How data and AI are helping unlock the secrets of disease. In AstraZeneca Blog. +Peru Bhardwaj. 2020. Towards adversarially robust knowledge graph embeddings. Proceedings of the AAAI Conference on Artificial Intelligence, 34(10):13712-13713. +Peru Bhardwaj, John Kelleher, Luca Costabello, and Declan O'Sullivan. 2021. 
Poisoning knowledge graph embeddings via relation inference patterns. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1875-1888, Online. Association for Computational Linguistics. +Battista Biggio and Fabio Roli. 2018. Wild patterns: Ten years after the rise of adversarial machine learning. Pattern Recognition, 84:317-331. +Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 2787-2795. Curran Associates, Inc. +Hongyun Cai, Vincent W Zheng, and Kevin Chen-Chuan Chang. 2018. A comprehensive survey of graph embedding: Problems, techniques, and applications. IEEE Transactions on Knowledge and Data Engineering. +Liwei Cai and William Yang Wang. 2018. KBGAN: Adversarial learning for knowledge graph embeddings. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1470-1480, New Orleans, Louisiana. Association for Computational Linguistics. +Chandrahas, Aditya Sharma, and Partha Talukdar. 2018. Towards understanding the geometry of knowledge graph embeddings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 122-131, Melbourne, Australia. Association for Computational Linguistics. + +Guillaume Charpiat, Nicolas Girard, Loris Felardos, and Yuliya Tarabalka. 2019. Input similarity from the neural network perspective. In NeurIPS 2019-33th Annual Conference on Neural Information Processing Systems. +Luca Costabello, Sumit Pai, Chan Le Van, Rory McGrath, Nicholas McCarthy, and Pedro Tabacof. 
2019. AmpliGraph: a Library for Representation Learning on Knowledge Graphs. +Hanjun Dai, Hui Li, Tian Tian, Xin Huang, Lin Wang, Jun Zhu, and Le Song. 2018. Adversarial attack on graph structured data. In International conference on machine learning, pages 1115-1124. PMLR. +Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2018. Convolutional 2d knowledge graph embeddings. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1):1811-1818. +Xiaochuang Han and Yulia Tsvetkov. 2020. Fortifying toxic speech detectors against veiled toxicity. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7732-7739, Online. Association for Computational Linguistics. +Xiaochuang Han, Byron C. Wallace, and Yulia Tsvetkov. 2020. Explaining black box predictions and unveiling data artifacts through influence functions. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5553-5563, Online. Association for Computational Linguistics. +Kazuaki Hanawa, Sho Yokoi, Satoshi Hara, and Kentaro Inui. 2021. Evaluation of similarity-based explanations. In International Conference on Learning Representations. +Aidan Hogan, Eva Blomqvist, Michael Cochez, Claudia d'Amato, Gerard de Melo, Claudio Gutierrez, Jose Emilio Labra Gayo, Sabrina Kirrane, Sebastian Neumaier, Axel Polleres, Roberto Navigli, Axel Cyrille Ngonga Ngomo, Sabbir M. Rashid, Anisa Rula, Lukas Schmelzeisen, Juan F. Sequeda, Steffen Staab, and Antoine Zimmermann. 2020. Knowledge graphs. CoRR, abs/2003.02320. +Anthony D. Joseph, Blaine Nelson, Benjamin I. P. Rubinstein, and J. D. Tygar. 2019. Adversarial Machine Learning. Cambridge University Press. +Rudolf Kadlec, Ondrej Bajgar, and Jan Kleindienst. 2017. Knowledge base completion: Baselines strike back. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 69-74, Vancouver, Canada. Association for Computational Linguistics. 
+Pang Wei Koh and Percy Liang. 2017. Understanding black-box predictions via influence functions. In International Conference on Machine Learning, pages 1885-1894. PMLR. + +Pang Wei W Koh, Kai-Siang Ang, Hubert Teo, and Percy S Liang. 2019. On the accuracy of influence functions for measuring group effects. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc. +Timothée Lacroix, Nicolas Usunier, and Guillaume Obozinski. 2018. Canonical tensor decomposition for knowledge base completion. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholm, Sweden, July 10-15, 2018, volume 80 of Proceedings of Machine Learning Research, pages 2869-2878. PMLR. +Carolin Lawrence, Timo Sztyler, and Mathias Niepert. 2021. Explaining neural matrix factorization with gradient rollback. Proceedings of the AAAI Conference on Artificial Intelligence, 35(6):4987-4995. +Pasquale Minervini, Thomas Demeester, Tim Rocktäschel, and Sebastian Riedel. 2017. Adversarial sets for regularising neural link predictors. In Proceedings of the Thirty-Third Conference on Uncertainty in Artificial Intelligence, UAI 2017, Sydney, Australia, August 11-15, 2017. AUAI Press. +Christoph Molnar. 2019. Interpretable Machine Learning. https://christophm.github.io/interpretable-ml-book/. +Yatin Nandwani, Ankesh Gupta, Aman Agrawal, Mayank Singh Chauhan, Parag Singla, and Mausam. 2020. OxKBC: Outcome explanation for factorization based knowledge base completion. In *Automated Knowledge Base Construction*. +Maximilian Nickel, Kevin Murphy, Volker Tresp, and Evgeniy Gabrilovich. 2015. A review of relational machine learning for knowledge graphs. Proceedings of the IEEE, 104(1):11-33. +Natasha Noy, Yuqing Gao, Anshu Jain, Anant Narayanan, Alan Patterson, and Jamie Taylor. 2019. Industry-scale knowledge graphs: Lessons and challenges. Commun. ACM, 62(8):36-43. +Pouya Pezeshkpour, Sarthak Jain, Byron Wallace, and Sameer Singh. 2021. 
An empirical comparison of instance attribution methods for NLP. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 967-975, Online. Association for Computational Linguistics. +Pouya Pezeshkpour, Yifan Tian, and Sameer Singh. 2019. Investigating robustness and interpretability of link prediction via adversarial modifications. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3336-3347, Minneapolis, Minnesota. Association for Computational Linguistics. + +Garima Pruthi, Frederick Liu, Satyen Kale, and Mukund Sundararajan. 2020. Estimating training data influence by tracing gradient descent. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. +Daniel Ruffinelli, Samuel Broscheit, and Rainer Gemulla. 2020. You can teach an old dog new tricks! on training knowledge graph embeddings. In International Conference on Learning Representations. +Théo Trouillon, Éric Gaussier, Christopher R. Dance, and Guillaume Bouchard. 2019. On inductive abilities of latent factor models for relational learning. J. Artif. Int. Res., 64(1):21-53. +Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. In International Conference on Machine Learning, pages 2071-2080. +Han Xu, Yao Ma, Hao-Chen Liu, Debayan Deb, Hui Liu, Ji-Liang Tang, and Anil K Jain. 2020. Adversarial attacks and defenses in images, graphs and text: A review. International Journal of Automation and Computing, 17(2):151-178. +Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2015. Embedding entities and relations for learning and inference in knowledge bases. 
In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. +Hengtong Zhang, Tianhang Zheng, Jing Gao, Chenglin Miao, Lu Su, Yaliang Li, and Kui Ren. 2019a. Data poisoning attack against knowledge graph embedding. In International Joint Conference on Artificial Intelligence. +Wen Zhang, Bibek Paudel, Wei Zhang, Abraham Bernstein, and Huajun Chen. 2019b. Interaction embeddings for prediction and explanation in knowledge graphs. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, pages 96-104. +Daniel Zügner, Amir Akbarnejad, and Stephan Gunnemann. 2018. Adversarial attacks on neural networks for graph data. In International Conference on Knowledge Discovery & Data Mining, pages 2847-2856. + +# Appendix + +# A Dataset Details + +We evaluate the proposed attacks on four state-of-art KGE models - DistMult, ComplEx, ConvE and TransE; on two publicly available benchmark datasets for link prediction $^{6}$ - WN18RR and FB15k-237. For the KGE model evaluation protocol, we filter out triples from the validation and test set that contain unseen entities. + +To assess the attack effectiveness in degrading performance on triples predicted as True, we need to select a set of triples that are predicted as True by the victim model. Thus, we select a subset of the benchmark test set that has been ranked the best (i.e. ranks $= 1$ ) by the victim KGE model. If this subset has more than 100 triples, we randomly sample 100 triples as the target triples; otherwise we use all triples as target triples. We do this pre-processing step to avoid the expensive Hessian inverse computation in the Influence Functions (IF) for a large number of target triples - for each target triple, estimating the Hessian inverse (as an HVP) using the LissA algorithm requires one training epoch. + +
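The target-triple selection described above reduces to filtering the victim model's test ranks and sampling. A minimal sketch, in which the `ranks` mapping is a fabricated stand-in for the victim KGE model's filtered test ranks:

```python
import random

# rank of each test triple under the victim model (fabricated stand-in values)
ranks = {("s%d" % i, "r", "o%d" % i): (1 if i % 3 == 0 else i + 2) for i in range(400)}

# keep only triples the victim model ranks best (rank = 1) ...
best_ranked = [t for t, rank in ranks.items() if rank == 1]

# ... and cap at 100 random target triples, to keep the per-triple
# Hessian-inverse estimation in the IF metric tractable
random.seed(0)
targets = best_ranked if len(best_ranked) <= 100 else random.sample(best_ranked, 100)
```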

| | | WN18RR | FB15k-237 |
|---|---|---|---|
| Entities | | 40,559 | 14,505 |
| Relations | | 11 | 237 |
| Training | | 86,835 | 272,115 |
| Validation | | 2,824 | 17,526 |
| Test | | 2,924 | 20,438 |
| Subset with Best Ranks | DistMult | 1,109 | 1,183 |
| | ComplEx | 1,198 | 1,238 |
| | ConvE | 1,106 | 901 |
| | TransE | 151 | 223 |
+ +Table 6: Statistics for WN18RR and FB15k-237. We removed triples from the validation and test set that contained unseen entities to ensure that we do not add new entities as adversarial edits. The numbers above (including the number of entities) reflect this filtering. + +Table 6 shows the dataset statistics and the number of triples which are ranked best by the different KGE models. + +# B Training Details + +# B.1 Training KGE models + +We implement four KGE models - DistMult, ComplEx, ConvE and TransE. We use the 1-N training strategy proposed in Lacroix et al. (2018) but we do not add the reciprocal relations. Thus, for + +
| | WN18RR MRR | WN18RR Hits@1 | FB15k-237 MRR | FB15k-237 Hits@1 |
| --- | --- | --- | --- | --- |
| DistMult | 0.48 | 0.44 | 0.34 | 0.24 |
| ComplEx | 0.51 | 0.47 | 0.34 | 0.25 |
| ConvE | 0.44 | 0.41 | 0.32 | 0.23 |
| TransE | 0.21 | 0.02 | 0.33 | 0.24 |
+ +Table 7: MRR and Hits@1 results for the original KGE models on WN18RR and FB15k-237. + +each triple, we generate scores for $(s,r)\rightarrow o$ and $(o,r)\rightarrow s$. + +For the TransE scoring function, we use the L2 norm. The loss function used for all models is PyTorch's CrossEntropyLoss. For regularization, we use N3 regularization and input dropout on DistMult and ComplEx; input dropout, hidden dropout and feature dropout on ConvE; and L2 regularization (Bordes et al., 2013) and input dropout for TransE. + +We do not use early stopping, to ensure the same hyperparameters for the original and poisoned KGE models. We use an embedding size of 200 for all models on both datasets. An exception is the TransE model for WN18RR, where we used embedding $\dim = 100$ due to the expensive time and space complexity of 1-N training for TransE. We manually tuned the hyperparameters for the KGE models based on suggestions from state-of-the-art implementations (Ruffinelli et al., 2020; Dettmers et al., 2018; Lacroix et al., 2018; Costabello et al., 2019). + +Table 7 shows the MRR and Hits@1 for the original KGE models on WN18RR and FB15k-237. To re-train a KGE model on a poisoned dataset, we use the same hyperparameters as for the original model. We run all model training, adversarial attacks and evaluation on a shared HPC cluster with Nvidia RTX 2080ti, Tesla K40 and V100 GPUs. + +To ensure reproducibility, our source code is publicly available on GitHub at https://github.com/PeruBhardwaj/AttributionAttack. The results in Section 4 can be reproduced by passing the argument reproduce-results to the attack scripts. Example commands for this are available in the bash scripts in our codebase. The hyperparameters used to generate the results can be inspected in the set_hyperparams() function in the file utils.py, or in the log files. + +For the LissA algorithm used to estimate the Hessian inverse in Influence Functions, we select the hyperparameter values using suggestions from Koh and Liang (2017).
The values are selected to ensure that the Taylor expansion in the estimator converges. These hyperparameter values for our experiments are available in the function set_if_parameters() in the file utils.py of the accompanying codebase. + +# B.2 Baseline Implementation Details + +One of the baselines in Section 4 of the main paper is the Direct-Del and Direct-Add attack from Zhang et al. (2019a). The original study evaluated the method only for the neighbourhood of the subject of the target triple; we extend it to both the subject and the object to ensure a fair comparison with the other attacks. Since no public implementation is available, we implement our own. + +
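As a concrete illustration of the 1-N scoring strategy described in B.1, below is a minimal sketch of DistMult's trilinear score computed against every candidate entity at once; the toy embeddings and dimensions are assumptions, not the trained models:

```python
def distmult_score(e_s, w_r, e_o):
    """DistMult score: <e_s, w_r, e_o> = sum_k e_s[k] * w_r[k] * e_o[k]."""
    return sum(a * b * c for a, b, c in zip(e_s, w_r, e_o))

def score_1_to_n(e_s, w_r, entity_embs):
    """1-N scoring: score (s, r) against every candidate object entity.
    The same routine applied to (o, r) gives the subject-direction scores."""
    return [distmult_score(e_s, w_r, e_o) for e_o in entity_embs]

# Toy embeddings (dimension 3) for four entities and one relation.
entities = [[1.0, 0.0, 2.0], [0.5, 1.0, 0.0], [2.0, 1.0, 1.0], [0.0, 0.0, 1.0]]
relation = [1.0, 2.0, 0.5]
scores = score_1_to_n(entities[0], relation, entities)
```

Feeding the resulting score vector to a cross-entropy loss over all entities is what makes the strategy "1-N": each training triple supervises one row of scores per direction.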
| WN18RR | Original | High | Low |
| --- | --- | --- | --- |
| DistMult | 1.00 | 0.98 | 0.98 |
| ComplEx | 1.00 | 0.96 | 0.95 |
| ConvE | 1.00 | 0.99 | 0.99 |
| TransE | 1.00 | 0.81 | 0.86 |

| FB15k-237 | Original | High | Low |
| --- | --- | --- | --- |
| DistMult | 1.00 | 0.64 | 0.64 |
| ComplEx | 1.00 | 0.67 | 0.66 |
| ConvE | 1.00 | 0.62 | 0.60 |
| TransE | 1.00 | 0.72 | 0.73 |
+ +The Direct-Add attack is based on computing a perturbation score for all possible candidate additions. Since the search space of candidate additions is of the order $\mathcal{E} \times \mathcal{R}$ (where $\mathcal{E}$ and $\mathcal{R}$ are the sets of entities and relations), it uses random down-sampling to filter out candidates. The percentage of triples down-sampled is not reported in the original paper, and a public implementation is not available. So, in this paper, we pick a high and a low value for the percentage of triples to be down-sampled and generate adversarial additions for both fractions. We arbitrarily choose $20\%$ of all candidate additions as the high value and $5\%$ as the low value. + +Thus, we generate two poisoned datasets from the attack: one that used a high number of candidates and another that used a low number of candidates. We train two separate KGE models on these datasets to assess the baseline performance. Table 8 shows the MRR of the original model and of the poisoned KGE models from the attack with high and low down-sampling percentages. The results reported for Direct-Add in Section 4 of the main paper are the better of the two results (i.e. those showing more degradation in performance) for each combination. + +# C Further Analysis of Proposed Attacks + +# C.1 Runtime Analysis + +We analyze the runtime efficiency of the baseline and proposed attack methods for adversarial deletions. For brevity, we consider the attacks on the DistMult model, but the results on other models show similar time scales. Table 9 shows the time taken in seconds to select the influential triples for the DistMult model on WN18RR and FB15k-237. + +Table 8: MRR of KGE models trained on original datasets and poisoned datasets from the Direct-Add baseline attack in Zhang et al. (2019a). High, Low indicate the high $(20\%)$ and low $(5\%)$ percentage of candidates selected by random down-sampling. + +
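The random down-sampling behind the High and Low settings can be sketched as follows; the candidate representation is an assumption, and the perturbation scoring itself (defined in Zhang et al. (2019a)) is omitted:

```python
import random

def sample_candidate_additions(entities, relations, fraction, seed=0):
    """Down-sample the E x R candidate space of adversarial additions
    around one target entity, keeping `fraction` of all candidates."""
    candidates = [(e, r) for e in entities for r in relations]
    k = max(1, int(len(candidates) * fraction))
    rng = random.Random(seed)
    return rng.sample(candidates, k)

entities = list(range(50))
relations = list(range(10))  # 50 * 10 = 500 candidates in total
high = sample_candidate_additions(entities, relations, 0.20)  # "high": 20%
low = sample_candidate_additions(entities, relations, 0.05)   # "low": 5%
```

Each sampled candidate would then be scored by the attack's perturbation score, and only the surviving fraction competes for selection, which is why the two settings can yield different poisoned datasets.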
| | | WN18RR | FB15k-237 |
| --- | --- | --- | --- |
| Baseline Attacks | Random_n | 0.024 | 0.057 |
| | Random_g | 0.002 | 0.002 |
| | Direct-Del | 0.407 | 0.272 |
| | CRIAGE | 2.235 | 75.117 |
| | GR | 29.919 | 174.191 |
| Proposed Attacks | Dot Metric | 0.288 | 0.342 |
| | $\ell_2$ Metric | 0.057 | 0.067 |
| | Cos Metric | 0.067 | 0.148 |
| | GD (dot) | 7.354 | 109.015 |
| | GL ($\ell_2$) | 8.100 | 120.659 |
| | GC (cos) | 9.478 | 141.276 |
| | IF | 4751.987 | 4750.404 |
+ +Table 9: Time taken in seconds for baseline and proposed attacks to generate influential triples for DistMult on WN18RR and FB15k-237. + +We see that the Instance Similarity metrics (dot metric, $\ell_2$ metric, cos metric) are more efficient than the state-of-the-art attacks (Direct-Del, CRIAGE and GR). Furthermore, the $\ell_2$ metric is almost as quick as random triple selection. The efficiency of the Gradient Similarity metrics is also better than or comparable to that of CRIAGE and GR. + +Only the attack method based on IF is much slower than any other method. This is because estimating the Hessian inverse in IF requires one training epoch for every target triple; that is, we run 100 training epochs to get the influential triples for 100 target triples. However, our results in Section 4.2 of the main paper show that this expensive computation does not provide improved adversarial deletions, and thus might be unnecessary for selecting influential triples for KGE models. + +
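For illustration, the three Instance Similarity metrics compared in Table 9 differ only in how two feature vectors are compared; a minimal sketch, where the toy vectors stand in for the triple feature representations defined in the main paper:

```python
import math

def dot_metric(u, v):
    return sum(a * b for a, b in zip(u, v))

def l2_metric(u, v):
    # Higher similarity = smaller distance, so negate the l2 distance.
    return -math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def cos_metric(u, v):
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot_metric(u, v) / (norm_u * norm_v)

def most_influential(target_feat, neighbour_feats, metric):
    """Pick the neighbouring triple whose features are most similar
    to the target triple's features under the given metric."""
    return max(range(len(neighbour_feats)),
               key=lambda i: metric(target_feat, neighbour_feats[i]))

target = [1.0, 0.0, 1.0]
neighbours = [[0.9, 0.1, 1.1], [-1.0, 0.0, -1.0], [0.0, 1.0, 0.0]]
idx = most_influential(target, neighbours, cos_metric)
```

The runtime differences in Table 9 follow directly from this structure: each metric is a single pass over the neighbourhood, whereas the gradient-based and IF attacks need back-propagation or training epochs per target.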
| Target Relation | Influential Relation |
| --- | --- |
| _has_part | _has_part |
| _synset_domain_topic_of | _synset_domain_topic_of |
| _has_part | _has_part |
| _synset_domain_topic_of | _synset_domain_topic_of |
| _synset_domain_topic_of | _synset_domain_topic_of |
| _synset_domain_topic_of | _synset_domain_topic_of |
| _instance_hypernym | _instance_hypernym |
| _synset_domain_topic_of | _synset_domain_topic_of |
| _instance_hypernym | _synset_domain_topic_of |
| _synset_domain_topic_of | _synset_domain_topic_of |
| _member_meronym | _derivationally_related_form |
| _synset_domain_topic_of | _synset_domain_topic_of |
| _has_part | _has_part |
| _member_meronym | _member_meronym |
| _synset_domain_topic_of | _synset_domain_topic_of |
+ +Table 10: Relations from the target triples and influential triples (adversarial deletions) for the cos metric on WN18RR-TransE. This combination has 15 target triples, and the table shows the relations for all of them. + +# C.2 Additional Comparison with CRIAGE + +The baseline attack method CRIAGE estimates the influence of a training triple using the BCE loss, and is thus likely to be effective only for KGE models that are trained with BCE loss. In Section 4.1, we found that the proposed attacks are more effective than this baseline. Since our original models are trained with cross-entropy loss, however, we perform an additional analysis of the Instance Similarity attacks against CRIAGE for the DistMult model trained with BCE loss. Table 11 shows the reduction in MRR and Hits@1 due to adversarial deletions in this training setting. We find that the Instance Similarity attacks outperform the baseline for this setting as well. + +
| | WN18RR MRR | WN18RR Hits@1 | FB15k-237 MRR | FB15k-237 Hits@1 |
| --- | --- | --- | --- | --- |
| Original | 1.00 | 1.00 | 1.00 | 1.00 |
| CRIAGE | 0.67 | 0.63 | 0.63 | 0.46 |
| Dot Metric | 0.86 | 0.81 | 0.61 | 0.44 |
| $\ell_2$ Metric | 0.12 | 0.06 | 0.60 | 0.43 |
| Cos Metric | 0.12 | 0.06 | 0.58 | 0.38 |
+ +Table 11: Reduction in MRR and Hits@1 due to adversarial deletions for DistMult (trained with BCE loss) on WN18RR and FB15k-237. + +# C.3 Analysis of Instance Attribution Methods on WN18RR-TransE + +For the TransE model on WN18RR, we found that the instance attribution methods lead to effective adversarial deletions with respect to the random baselines, but not adversarial additions (Section 4.1 of the main paper). A possible reason lies in the ability of the TransE model to represent hierarchical relations, i.e. relations that encode a hierarchy between the subject and object entities. For example, $(s, \_\text{has\_part}, o)$ indicates that $s$ is the parent node of $o$ in a hierarchy. + +For further analysis, we select the cos metric from the Instance Similarity methods. It performs the best of all instance attribution methods for adversarial deletions, but performs worse than random neighbourhood edits for adversarial additions. Table 10 shows the relations in the target triples and the influential triples (i.e. adversarial deletions) selected by the cos metric. + +We see that the target triples contain mostly hierarchical relations like _synset_domain_topic_of and _has_part, and that the cos metric identifies influential triples with the same relations. Since our adversarial additions are only based on modifying an entity in the influential triple, these edits reinforce the hierarchy structure of the graph instead of breaking it. Thus, these edits perform well as adversarial deletions, but not as additions. + +# C.4 Neighbourhood Sparsity Comparison on WN18RR and FB15k-237 + +In Section 4.3 of the main paper, we found that the proposed attacks are significantly more effective for WN18RR than for FB15k-237. This is likely because there are fewer triples in the neighbourhood of the target triples for WN18RR than for FB15k-237. Figure 2 shows the median number of neighbours of the target triples for WN18RR and FB15k-237.
We report median (instead of mean) because of large standard deviation in the number of target triple neighbours for FB15k-237. + +We see that the target triple's neighbourhood for WN18RR is significantly sparser than the neighbourhood for FB15k-237. Thus, since the KGE model predictions are learned from fewer triples for WN18RR, it is also easier to perturb these results with fewer adversarial edits. + +![](images/55c7119328eb9608896d8e5ce480b96d7497357e529be8903fbf41736edc15af.jpg) +Figure 2: Comparison of the median number of neighbouring triples of target triples from WN18RR and FB15k-237 for DistMult, ComplEx, ConvE and TransE. \ No newline at end of file diff --git a/adversarialattacksonknowledgegraphembeddingsviainstanceattributionmethods/images.zip b/adversarialattacksonknowledgegraphembeddingsviainstanceattributionmethods/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..9f1ff46117d011eabb7196f8fab51c17e844c1d1 --- /dev/null +++ b/adversarialattacksonknowledgegraphembeddingsviainstanceattributionmethods/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8887a58990278b807234711a420a5cbb78bb10a01eb6bd75d2341e7a81fd1cf2 +size 674193 diff --git a/adversarialattacksonknowledgegraphembeddingsviainstanceattributionmethods/layout.json b/adversarialattacksonknowledgegraphembeddingsviainstanceattributionmethods/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..aacec6118256aa974ec2914c0a7335522c25994a --- /dev/null +++ b/adversarialattacksonknowledgegraphembeddingsviainstanceattributionmethods/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1b907de027ca4406f6cafc8640400f596578baab572bd44887449ae078b8eac8 +size 477329 diff --git a/adversarialmixingpolicyforrelaxinglocallylinearconstraintsinmixup/8483c4c2-12c7-40ca-984a-441e48f0e1f1_content_list.json 
b/adversarialmixingpolicyforrelaxinglocallylinearconstraintsinmixup/8483c4c2-12c7-40ca-984a-441e48f0e1f1_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..08b5671edc9f82f551ba241176318de866f35366 --- /dev/null +++ b/adversarialmixingpolicyforrelaxinglocallylinearconstraintsinmixup/8483c4c2-12c7-40ca-984a-441e48f0e1f1_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:253765ef9cebab2c6ec3686fd5fd2c76b2550d16ee0bd32fc3c93a55971e1fea +size 80897 diff --git a/adversarialmixingpolicyforrelaxinglocallylinearconstraintsinmixup/8483c4c2-12c7-40ca-984a-441e48f0e1f1_model.json b/adversarialmixingpolicyforrelaxinglocallylinearconstraintsinmixup/8483c4c2-12c7-40ca-984a-441e48f0e1f1_model.json new file mode 100644 index 0000000000000000000000000000000000000000..2f61afa69e3690e0cf1fd8adf53dfaaaa828b462 --- /dev/null +++ b/adversarialmixingpolicyforrelaxinglocallylinearconstraintsinmixup/8483c4c2-12c7-40ca-984a-441e48f0e1f1_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:90d747bdcd248a6e070dfc0435274bef065407a3946e75c93b7f8db70d06ba57 +size 98204 diff --git a/adversarialmixingpolicyforrelaxinglocallylinearconstraintsinmixup/8483c4c2-12c7-40ca-984a-441e48f0e1f1_origin.pdf b/adversarialmixingpolicyforrelaxinglocallylinearconstraintsinmixup/8483c4c2-12c7-40ca-984a-441e48f0e1f1_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..0a05a2b609b8532f5d78fcb48481cb5210197766 --- /dev/null +++ b/adversarialmixingpolicyforrelaxinglocallylinearconstraintsinmixup/8483c4c2-12c7-40ca-984a-441e48f0e1f1_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a01579785d60ece88c7f04ec892efe00cc637da4e9b93ee85bf3f00c91608f3c +size 514205 diff --git a/adversarialmixingpolicyforrelaxinglocallylinearconstraintsinmixup/full.md b/adversarialmixingpolicyforrelaxinglocallylinearconstraintsinmixup/full.md new file mode 100644 index 
0000000000000000000000000000000000000000..96dd3f285be87c7209aacc1412b42f0e91316061 --- /dev/null +++ b/adversarialmixingpolicyforrelaxinglocallylinearconstraintsinmixup/full.md @@ -0,0 +1,348 @@ +# Adversarial Mixing Policy for Relaxing Locally Linear Constraints in Mixup + +Guang Liu, Yuzhao Mao, Hailong Huang, Weiguo Gao, and Xuan Li + +PingAn Life Insurance of China + +https://github.com/PAI-SmallIsAllYourNeed/Mixup-AMP + +# Abstract + +Mixup is a recent regularizer for current deep classification networks. Through training a neural network on convex combinations of pairs of examples and their labels, it imposes locally linear constraints on the model's input space. However, such strict linear constraints often lead to under-fitting, which degrades the effects of regularization. Notably, this issue becomes more serious when resources are extremely limited. To address these issues, we propose the Adversarial Mixing Policy (AMP), organized in a "min-max-rand" formulation, to relax the Locally Linear Constraints in Mixup. Specifically, AMP adds a small adversarial perturbation to the mixing coefficients rather than the examples. Thus, slight non-linearity is injected in-between the synthetic examples and synthetic labels. By training on these data, the deep networks are further regularized, and thus achieve a lower predictive error rate. Experiments on five text classification benchmarks and five backbone models have empirically shown that our methods reduce the error rate of Mixup variants by a significant margin (up to $31.3\%$), especially in low-resource conditions (up to $17.5\%$). + +# 1 Introduction + +Deep classification models have achieved impressive results in both image (He et al., 2016; Dosovitskiy et al., 2020) and language processing (Devlin et al., 2019; Kim, 2014; Wang et al., 2016). One of the most significant challenges in training a deep model is the great effort and cost of collecting large-scale labels.
Without sufficient labels, deep networks tend to generalize poorly, leading to unsatisfactory performance. Thus, regularization techniques under the augmentation schema, which generate labeled data to regularize models (Hernandez-Garcia and König, 2018), are widely explored (Wei and Zou, 2019; Liu et al., 2021). + +Mixup (Zhang et al., 2018) is an effective regularizer under the augmentation schema. In recent years, topics related to Mixup have warranted serious attention (Lee et al., 2020; Xu et al., 2020; Verma et al., 2019; Archambault et al., 2019; Berthelot et al., 2019b,a; Beckham et al., 2019; Mao et al., 2019; Zhu et al., 2020). The core idea of Mixup is to generate synthetic training data via a mixing policy, which convexly combines a pair of examples and their labels. Through training on these data, the classification networks are regularized to reach higher performance. Unlike conventional regularizers (Srivastava et al., 2014; Hanson and Pratt, 1988; Ioffe and Szegedy, 2015), Mixup imposes a kind of locally linear constraint (Zhang et al., 2018; Guo et al., 2019b) on the model's input space. + +However, vanilla Mixup often suffers from under-fitting due to the ambiguous data (Guo et al., 2019b; Guo, 2020; Mai et al., 2021) generated under the strict locally linear constraints. To alleviate the under-fitting, Guo (2020) uses extra parameters to project the inputs and labels into a high-dimensional space to properly separate the data, while Guo et al. (2019b) and Mai et al. (2021) use auxiliary networks to learn the mixing policy in a data-driven way and avoid the generation of ambiguous data. Although these works effectively reduce the under-fitting, they are limited in their ability to properly regularize networks: current networks are prone to over-fitting when the extra parameters are added. Eventually, these methods degrade the effects of regularization.
The conflict between over-fitting and under-fitting gets more serious when labeled resources are rare or hard to obtain. Besides, the methods with auxiliary networks usually have difficulty integrating with other Mixup variants. More importantly, Mixup works well in most cases (Guo et al., 2019b), and adding too much non-linearity into Mixup would sacrifice the majority of the synthetic data that can regularize the networks under locally linear constraints. So, the locally linear constraints in Mixup only need to be slightly relaxed. + +In this paper, we propose the Adversarial Mixing Policy (AMP) to overcome these limitations. We adapt adversarial training (Goodfellow et al., 2015), which relaxes the linear nature of the network without any extra parameters or auxiliary networks, to relax the Locally Linear Constraints in Mixup. Inspired by the "min-max" formulation of adversarial training, we formulate our method as a form of "min-max-rand" regularization. Specifically, the "rand" operation randomly samples a mixing coefficient, as in vanilla Mixup, to generate a synthetic example and label. Then, the "max" operation calculates the perturbation of the mixing coefficient and applies it. Note that the updated mixing coefficient is only used to re-synthesize the example, keeping the synthetic label unchanged. Thus, slight non-linearity is injected in-between the synthetic example and label. Finally, the "min" operation minimizes the training loss over the non-linearly generated example-label pairs. In summary, we highlight the following contributions: + +- We propose an Adversarial Mixing Policy (AMP) to relax the Locally Linear Constraints (LLC) in Mixup without any auxiliary networks. It can be seamlessly integrated into other Mixup variants owing to its simplicity. +- To the best of our knowledge, this is the first exploration of the application of adversarial perturbation to the mixing coefficient in Mixup.
+- We analyze our proposed method with extensive experiments and show that our AMP improves the performance of two Mixup variants in various settings and outperforms the non-linear Mixup in terms of error rate. + +# 2 Background + +# 2.1 Linear nature of the networks + +Let $(x; y)$ be a sample in the training data, where $x$ denotes the input and $y$ the corresponding label. Deep networks learn a mapping function from $x$ to $y$, which is: + +$$ +f(x) = y^{\prime} \rightarrow y. \tag{1} +$$ + +Here, $y'$ is the output of the network and $\rightarrow$ represents the learning process. The linear nature of the networks can be interpreted as follows: a small change in the input leads to a change in the model output, + +$$ +f(x + \nabla x) = y^{\prime} + \nabla y. \tag{2} +$$ + +Here, $\nabla x$ is a small perturbation of $x$, and $\nabla y$ is the change in the output caused by the injection of $\nabla x$. This linearity makes the networks vulnerable to adversarial attacks (Goodfellow et al., 2015). + +# 2.2 Relax the linear nature + +To relax the linear nature of the networks, adversarial training (Goodfellow et al., 2015) forces the networks to learn the following mapping function, + +$$ +f(x + \nabla x) = y^{\prime} \rightarrow y, \tag{3} +$$ + +where $\nabla x$ is a small adversarial perturbation. This kind of training can effectively relax the linearity of the networks and improve their robustness. However, there exists a trade-off between model robustness (Eq. 3) and generalization (Eq. 1) (Tsipras et al., 2019). + +# 2.3 Locally linear constraints in Mixup + +Mixup can be formulated as follows, + +$$ +f\left(m_{x}(\lambda)\right) = y^{\prime} \rightarrow m_{y}(\lambda), \tag{4} +$$ + +$$ +m_{x}(\lambda) = x_{1} \cdot \lambda + x_{2} \cdot (1 - \lambda), \tag{5} +$$ + +$$ +m_{y}(\lambda) = y_{1} \cdot \lambda + y_{2} \cdot (1 - \lambda), \tag{6} +$$ + +where $\lambda \in [0,1]$ is the mixing coefficient.
$m$ is the mixing policy, and $(x_{1};y_{1})$ and $(x_{2};y_{2})$ are a pair of examples from the original training data. By training on the synthetic data $m_x(\lambda)$ and $m_y(\lambda)$, Mixup (Zhang et al., 2018; Verma et al., 2019) imposes the Locally Linear Constraints on the input space of the networks. Different from Eq. 2, this linearity can be formulated as follows, + +$$ +f\left(m_{x}(\lambda + \nabla \lambda)\right) = y^{\prime} + \nabla y \rightarrow m_{y}(\lambda + \nabla \lambda). \tag{7} +$$ + +Here, $\nabla \lambda$ is a small change in $\lambda$, and we can observe that the output of the networks changes accordingly, mirroring the linear nature of the networks. Under these settings, a small change in $\lambda$ often leads to an undesirable change of the output. Eventually, these strict linear constraints lead to under-fitting that degrades the regularization effects (Guo et al., 2019b; Guo, 2020). + +# 2.4 Why relax the locally linear constraints + +Relaxing the strict linear constraints in Mixup can alleviate the under-fitting and therefore improve the regularization effects (Guo, 2020). The under-fitting happens when the synthetic data is corrupted or ambiguous for the network. So, if we can make the networks compatible with such data, as with the soft margin (Suykens and Vandewalle, 1999), the under-fitting will be eased. Furthermore, such relaxation is best realized without extra parameters. Inspired by adversarial training (Eq. 3), we hypothesize that injecting slight non-linearity into Mixup can relax its constraints without extra parameters, as follows, + +$$ +f(m_{x}(\lambda + \nabla \lambda)) = y^{\prime} \rightarrow m_{y}(\lambda), \tag{8} +$$ + +where $\nabla \lambda$ is an adversarial perturbation injected into the original mixing coefficient $\lambda$. + +# 3 Methodology + +As shown in Figure 1, the Adversarial Mixing Policy (AMP) consists of three operations: Rand, Max and Min.
Rand Operation (RandOp) generates the synthetic data by interpolating pairs of training examples and their labels with a random mixing coefficient $\lambda$. Max Operation (MaxOp) injects a small adversarial perturbation into $\lambda$ to re-synthesize the example while keeping the synthetic label unchanged; this operation injects slight non-linearity into the synthetic data. Min Operation (MinOp) minimizes the losses of these data. Additionally, we use a simple comparison to eliminate the influence caused by the scaling of gradients. + +# 3.1 Method formulation + +Let $D = \{x_{i},y_{i}\}$ be a training set of texts, in which each sample consists of a sequence of words $x_{i}$ and a label $y_{i}$. A classification model encodes the text into a hidden state and predicts the category of the text. Mixup's objective is to generate an interpolated sample $\hat{g}_k$ and label $\hat{y}$ by random linear interpolation with ratio $\lambda$ applied to a data pair $(x_{i};y_{i})$ and $(x_{j};y_{j})$. Our method aims to inject a perturbation $\nabla \lambda$ into $\lambda$ to maximize the loss on the interpolated data, and then minimizes the maximized loss. Inspired by adversarial training, we formulate this as a min-max-rand optimization problem, + +$$ +\min_{\theta} \mathbb{E}_{\hat{D}} \max_{|\nabla \lambda| \leq \varepsilon} \ell_{mix}(f_{rand}(\lambda + \nabla \lambda, i, j, k); \theta). \tag{9} +$$ + +Here, $\hat{D} = \{\hat{g}_{ki},\hat{y}_i\}$ is the synthetic data set generated by $f_{rand}(\lambda, i, j, k)$, $\nabla \lambda$ is the adversarial perturbation of $\lambda$, $\varepsilon$ is the maximum step size, $\ell_{mix}(*)$ is the Mixup loss function, $f_{rand}(*)$ represents the random interpolation of data and labels, $\lambda$ is the random mixing coefficient sampled from a Beta distribution with parameter $\alpha$, $i$ and $j$ are the randomly sampled data indices in $D$, and $k$ is the mixed layer.
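To make Eq. 9 concrete, one "min-max-rand" step can be sketched on a toy one-parameter logistic model; the model, data, step size, and the finite-difference stand-in for automatic differentiation are all illustrative assumptions rather than the paper's implementation:

```python
import math
import random

rng = random.Random(0)

def predict(x, w=1.5, b=-0.2):
    # Toy one-parameter classifier: probability of the positive class.
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

def mix_loss(lam, x1, y1, x2, y2):
    # Mix the inputs (Eq. 5), then interpolate the two cross-entropy losses.
    p = predict(lam * x1 + (1.0 - lam) * x2)
    ce = lambda y: -(y * math.log(p) + (1 - y) * math.log(1.0 - p))
    return lam * ce(y1) + (1.0 - lam) * ce(y2)

# Rand: sample the mixing coefficient and pick a toy data pair.
lam = rng.betavariate(2.0, 2.0)
x1, y1, x2, y2 = 2.0, 1, -1.0, 0

# Max: estimate d(loss)/d(lambda) (finite differences stand in for
# autograd here), clip it, and step in the loss-ascent direction.
eps, h = 0.05, 1e-5
grad = (mix_loss(lam + h, x1, y1, x2, y2)
        - mix_loss(lam - h, x1, y1, x2, y2)) / (2.0 * h)
grad = max(-1.0, min(1.0, grad))            # clipped gradient (<= 1)
lam_adv = min(1.0, max(0.0, lam + eps * grad))

# Min: re-mix only the example with lam_adv; the label mix keeps the
# original lam (Eq. 8), and we train on the larger of the two losses.
p = predict(lam_adv * x1 + (1.0 - lam_adv) * x2)
loss_adv = lam * -math.log(p) + (1.0 - lam) * -math.log(1.0 - p)
loss_orig = mix_loss(lam, x1, y1, x2, y2)
final_loss = max(loss_orig, loss_adv)
```

The `max` over the two losses plays the role of the mask-based selection of Section 3.4: whichever loss is larger drives the parameter update.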
+ +# 3.2 Rand operation + +Rand Operation (RandOp) is identical to Mixup (Zhang et al., 2018). It aims to generate randomly interpolated data between two categories. Specifically, it generates synthetic labeled data by linearly interpolating pairs of training examples as well as their corresponding labels. For a data pair $(x_{i};y_{i})$ and $(x_{j};y_{j})$, $x$ denotes the examples and $y$ the one-hot encoding of the corresponding labels. Consider a model $f(x) = f_{k}(g_{k}(x))$, where $g_{k}$ denotes the part of the model mapping the input data to the hidden state at layer $k$, and $f_{k}$ denotes the part mapping that hidden state to the output of $f(x)$. The synthetic data is generated as follows, + +$$ +\lambda \sim \operatorname{Beta}(\alpha, \alpha), \tag{10} +$$ + +$$ +\hat{g}_{k} = g_{k}\left(x_{i}\right) \cdot \lambda + g_{k}\left(x_{j}\right) \cdot (1 - \lambda), \tag{11} +$$ + +$$ +\hat{y} = y_{i} \cdot \lambda + y_{j} \cdot (1 - \lambda), \tag{12} +$$ + +where $\lambda$ is the mixing coefficient for the data pair, $\alpha$ is the hyper-parameter of the Beta distribution, and $\hat{g}_k$ is the synthetic hidden state. For efficient computation, the mixing happens by randomly picking one sample and pairing it up with another sample drawn from the same mini-batch (Zhang et al., 2018). To simplify notation, we reformulate the random interpolation $f_{rand}(*)$ as follows, + +$$ +\left(f_{k}\left(\hat{g}_{k}\right), \hat{y}\right) := \underset{\lambda \sim Beta(\alpha, \alpha)}{f_{rand}}(\lambda, i, j, k). \tag{13} +$$ + +Here, $f_{rand}(*)$ takes the results of Equations 10-12 as input and outputs the model prediction $f_{k}(\hat{g}_{k})$ and the label $\hat{y}$. A model trained on the generated data tends to produce less volatile predictions on such data and thus generalizes better to unseen data.
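RandOp (Equations 10-12) is plain Mixup interpolation; a minimal sketch, where the toy vectors stand in for the hidden states $g_k(x)$ and one-hot labels:

```python
import random

rng = random.Random(1)

def rand_op(h1, y1, h2, y2, alpha=1.0):
    """Sample lambda ~ Beta(alpha, alpha) and linearly interpolate a pair
    of hidden-state vectors and their one-hot labels (Eq. 10-12)."""
    lam = rng.betavariate(alpha, alpha)
    h_mix = [lam * a + (1.0 - lam) * b for a, b in zip(h1, h2)]
    y_mix = [lam * a + (1.0 - lam) * b for a, b in zip(y1, y2)]
    return h_mix, y_mix, lam

# Two toy "hidden states" with their one-hot labels.
h_mix, y_mix, lam = rand_op([1.0, 0.0], [1, 0], [0.0, 1.0], [0, 1])
```

Because both ingredients are mixed with the same coefficient, the mixed label stays a valid probability distribution, which is exactly the locally linear constraint that MaxOp later perturbs.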
+ +# 3.3 Max operation + +Max Operation (MaxOp) injects a small adversarial perturbation into $\lambda$ to introduce slight non-linearity between the synthetic example and the synthetic label. This means that the generated synthetic data will not strictly follow the Locally Linear Constraints in Mixup. To achieve this, we propose an algorithm, similar to the Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2015), that injects an adversarial perturbation into $\lambda$ by perturbing it in the gradient ascent direction, + +![](images/055a53f04f0ff5196785dfa7705f301c0f44b90b447dedd11fc2f562d5ee8d10.jpg) +Figure 1: The major operations of Adversarial Mixing Policy (AMP). + +$$ +\max_{|\nabla \lambda| \leq \varepsilon} \ell_{mix}\left(f_{rand}(\lambda + \nabla \lambda, i, j, k); \theta\right), \tag{14} +$$ + +where $\nabla \lambda$ is the gradient of $\lambda$ in the ascent direction and $\varepsilon$ is the step size. Different from FGSM (Goodfellow et al., 2015), we add a small perturbation to $\lambda$ instead of the input. Besides, since $\lambda$ is a scalar, we can obtain the adversarial direction and strength directly, so there is no need to normalize $\nabla \lambda$: + +$$ +\lambda^{\prime} = \lambda + \varepsilon \cdot \nabla \lambda, \tag{15} +$$ + +where $\lambda^{\prime}$ is the slightly perturbed mixing coefficient, $\varepsilon$ is the step size, and $\nabla \lambda$ is the clipped $(\leq 1)$ gradient of $\lambda$. The perturbation is the gradient in the adversarial direction. We calculate the gradient of $\lambda$ as follows, + +$$ +\nabla \lambda = \frac{\partial \mathcal{L}}{\partial \lambda}. \tag{16} +$$ + +Here, the Mixup loss $\mathcal{L}$ is calculated by interpolating the losses on the pair of labels (Zhang et al., 2018; Verma et al., 2019) as follows, + +$$ +\begin{aligned} \mathcal{L} &= \ell_{mix}(f_{rand}(\lambda, i, j, k); \theta) \\ &= \ell_{ce}\left(f_{k}\left(\hat{g}_{k}\right), y_{i}; \theta\right) \cdot \lambda + \ell_{ce}\left(f_{k}\left(\hat{g}_{k}\right), y_{j}; \theta\right) \cdot (1 - \lambda). \end{aligned} \tag{17} +$$ + +Here, $\mathcal{L}$ represents the loss of the synthetic data generated under the mixing coefficient $\lambda$, $\theta$ denotes the parameters of the model, $\ell_{mix}(*)$ is the Mixup loss, and $\ell_{ce}(*)$ represents the cross-entropy function. Note that the step size $\varepsilon$ may lead to undesirable results that instead minimize the losses, so we need to eliminate the influence caused by $\varepsilon$. + +# 3.4 Min operation + +Min Operation (MinOp) minimizes the loss of the constraint-relaxed synthetic data as follows, + +$$ +\underset{\theta}{\arg \min} \mathcal{L}_{\text{final}}, \tag{18} +$$ + +where $\mathcal{L}_{\text{final}}$ is the final loss. In addition, MinOp prefers to minimize the larger of the losses from the previous two steps, which eliminates the influence of the step size $\varepsilon$. Besides, this preference helps the model learn from the sample with the larger loss, reducing the risk of under-fitting. We use a mask-based mechanism to realize this operation as follows, + +$$ +\mathcal{L}_{\text{final}} = \mathcal{L} \cdot (1 - \operatorname{mask}) + \mathcal{L}^{\prime} \cdot \operatorname{mask}. \tag{19} +$$ + +Here, the mask is used as a selector of losses. The comparison is carried out between the losses before and after updating $\lambda$ in the synthetic example.
The latter, $\mathcal{L}^{\prime}$, is calculated as follows, + +$$ +\mathcal{L}^{\prime} = \ell_{mix}\left(f_{rand}\left(\lambda^{\prime}, i, j, k\right); \theta\right). \tag{20} +$$ + +Here, $\lambda^{\prime}$ is the mixing coefficient after injecting the perturbation (we only inject the perturbation into the mixing coefficient of the input, as in Eq. 8), and $\mathcal{L}^{\prime}$ is the Mixup loss on the synthetic example generated under $\lambda^{\prime}$. Note that the $\lambda$ for the synthetic label is unchanged. The mask is calculated as follows, + +$$ +\operatorname{mask} = \left\{ \begin{array}{ll} 1 & \delta_{\mathcal{L}} > 0 \\ 0 & \delta_{\mathcal{L}} \leq 0. \end{array} \right. \tag{21} +$$ + +Here, the mask is a batch-sized vector and $\delta_{\mathcal{L}}$ is the direct comparison $\mathcal{L}^{\prime} - \mathcal{L}$. By doing this, the proposed method achieves steady improvement under different settings of the step size. + +# 4 Experiments + +# 4.1 Data + +We evaluate the proposed AMP on five sentence classification benchmark datasets, as used in Guo et al. (2019a). TREC is a question dataset which aims to categorize a question into six types (Li and Roth, 2002). MR is a movie review dataset aiming at classifying positive/negative reviews (Pang and Lee, 2005). SST-1 is the Stanford Sentiment Treebank dataset with five sentiment categories: very positive, positive, neutral, negative, and very negative (Socher et al., 2013). SST-2 is a binary-label version of SST-1. SUBJ is a dataset aiming to judge whether a sentence is subjective or objective (Pang and Lee, 2004). Table 1 summarizes the statistical characteristics of the five datasets after preprocessing. + +Table 1: The statistics of the datasets. $c$ is the number of categories. $l$ is the average length. $V$ is the vocabulary size. $N$ is the size of the training set. $T$ is the size of the testing set. ${CV}$ denotes 10-fold cross-validation. + +
| Data | $c$ | $l$ | $V$ | $N$ | $T$ |
| --- | --- | --- | --- | --- | --- |
| TREC | 6 | 10 | 9592 | 5952 | 500 |
| SST-1 | 5 | 18 | 17836 | 11855 | 2210 |
| SST-2 | 2 | 19 | 16185 | 9613 | 1821 |
| SUBJ | 2 | 23 | 21323 | 10000 | CV |
| MR | 2 | 20 | 18765 | 10662 | CV |
# 4.2 Baselines and Settings

Our AMP is evaluated by integrating it into two recently proposed Mixup variants. We choose five popular sentence classification models as backbones to test the performance of all Mixups on the five benchmark datasets.

Classification backbone. We test Mixups on five classification backbones. $LSTM_{rand}$ and $LSTM_{glove}$ (Wang et al., 2016) are two versions of bi-directional Long Short-Term Memory (LSTM) with attention, where the former uses randomly initialized word embeddings and the latter uses GloVe-initialized (Pennington et al., 2014) word embeddings. $CNN_{rand}$ and $CNN_{glove}$ (Kim, 2014) are two versions of convolutional neural networks, fed with randomly and GloVe-initialized word embeddings, respectively. These four methods are popular sentence classification models without pre-training techniques. We employ $BERT_{base}$ (Devlin et al., 2019) as the pre-trained classification backbone.

Mixup. We choose three popular Mixup variants for sentence classification as baselines. WordMixup (Guo et al., 2019a) is the straightforward application of Mixup to NLP tasks, where linear interpolation is applied at the word embedding level (first layer). SentMixup (Verma et al., 2019; Sun et al., 2020) applies Mixup to NLP tasks by conducting linear interpolation on the last layer of hidden states. Non-linear Mixup is the non-linear version of SentMixup.

AMP. WordAMP is applied at the word embedding level, the same as WordMixup. SentAMP is applied on the last layer of hidden states, the same as SentMixup.

We obtained the source codes of the backbone models from the publicly available implementations1. In our experiments, we follow the exact implementations and settings in (Kim, 2014; Wang et al., 2016; Devlin et al., 2019; Guo et al., 2019a; Verma et al., 2019). Specifically, we use filter sizes of 3, 4, and 5, each with 100 feature maps, a dropout rate of 0.5, and L2 regularization of 1e-8 for the CNN baselines.
We use a single-layer hidden size of 1024, a dropout rate of 0.5, and L2 regularization of 1e-8 for the LSTM baselines. For datasets without a standard development set, we randomly select $10\%$ of the training data as a development set. Training is done through Adam (Kingma and Ba, 2015) over mini-batches of size 50 (CNN, LSTM) and 24 $(BERT_{base})$, respectively. The learning rate is 2e-4 for CNN and LSTM, and 1e-5 for $BERT_{base}$. The word embeddings are 300-dimensional for CNN and LSTM. The step size is $\varepsilon = 0.002$ for all experiments. The $\alpha$ for all Mixups is set to one. For each dataset, we train each model 10 times with different random seeds, each for 8k steps, and report the mean error rates and standard deviations.

# 4.3 Main results

To evaluate the predictive performance of $AMP$, we conduct five sets of experiments. For each setting, we compare the performance without Mixup (w/o), with WordMixup (Word), SentMixup (Sent), and Non-linear Mixup (Non-linear)². As presented in Table 2, $AMP$ outperforms the Mixup comparison baselines. For example, compared with the Sent base

Table 2: The results of our AMP method compared with two recent Mixup methods on five different datasets under five different classification models. For a fair comparison, we re-implement the Mixup baselines based on the backbone models. The results may not be the same as the results in (Guo et al., 2019a; Sun et al., 2020). $RP$ indicates the relative improvement. † indicates results cited from (Guo, 2020).
| Model | Mixup | TREC(%) | SST-1(%) | SST-2(%) | SUBJ(%) | MR(%) |
| --- | --- | --- | --- | --- | --- | --- |
| $RNN_{rand}$ | w/o | 11.3±1.48 | 63.7±3.00 | 18.0±0.85 | 10.7±0.57 | 24.9±1.11 |
| | Sent | 10.5±1.16 | 55.8±0.75 | 16.6±0.38 | 10.3±0.55 | 24.2±0.72 |
| | Sent(our) | 9.8±0.73 | 55.0±0.37 | 15.9±0.43 | 10.0±0.78 | 23.6±0.65 |
| | RP(%) | 6.7↑ | 1.4↑ | 4.2↑ | 2.9↑ | 2.5↑ |
| | Word | 9.8±0.86 | 55.9±0.62 | 16.1±0.62 | 9.4±0.77 | 23.6±0.75 |
| | Word(our) | 9.5±0.84 | 55.6±0.67 | 15.3±0.43 | 8.8±0.48 | 22.7±0.96 |
| | RP(%) | 3.1↑ | 0.5↑ | 5.0↑ | 6.4↑ | 3.8↑ |
| $RNN_{glove}$ | w/o | 8.3±0.47 | 56.6±0.30 | 13.0±0.51 | 6.1±0.76 | 18.5±0.97 |
| | Sent | 6.9±0.55 | 48.1±0.37 | 12.1±0.61 | 6.0±0.69 | 18.1±0.95 |
| | Sent(our) | 6.7±0.27 | 48.0±0.45 | 11.5±0.31 | 5.8±0.79 | 17.8±0.98 |
| | RP(%) | 2.9↑ | 0.2↑ | 5.0↑ | 3.3↑ | 1.7↑ |
| | Word | 6.5±0.45 | 48.6±0.33 | 11.8±0.34 | 5.5±0.73 | 17.8±0.87 |
| | Word(our) | 6.6±0.52 | 48.0±0.66 | 11.1±0.42 | 5.2±0.72 | 17.5±0.91 |
| | RP(%) | 1.5↓ | 1.2↑ | 5.9↑ | 5.5↑ | 1.7↑ |
| $CNN_{rand}$ | w/o | 8.8±0.86 | 63.2±0.54 | 17.6±0.52 | 9.5±0.64 | 24.2±1.39 |
| | Sent | 8.3±0.63 | 58.1±0.48 | 19.9±0.32 | 9.5±0.52 | 25.1±0.91 |
| | Sent(our) | 8.1±0.71 | 57.9±0.51 | 19.9±0.51 | 9.4±0.45 | 25.1±0.93 |
| | RP(%) | 2.4↑ | 0.5↑ | — | 1.1↑ | — |
| | Word | 8.3±0.71 | 58.0±0.55 | 19.4±0.22 | 9.7±0.57 | 24.6±0.78 |
| | Word(our) | 8.4±0.92 | 57.5±0.50 | 19.2±0.53 | 9.2±0.68 | 24.1±0.98 |
| | RP(%) | 1.2↓ | 1.0↑ | 1.0↑ | 5.2↑ | 2.0↑ |
| $CNN_{glove}$ | w/o | 7.9±0.12 | 57.5±0.50 | 13.1±0.49 | 5.6±0.36 | 20.2±0.60 |
| | Non-linear | 5.3±0.29† | 50.7±0.42† | 11.4±0.29† | 6.1±0.19† | 16.6±0.36† |
| | Sent | 6.7±0.23 | 51.4±0.23 | 12.8±0.35 | 5.1±0.34 | 19.4±0.56 |
| | Sent(our) | 4.6±0.33 | 50.6±0.40 | 11.7±0.25 | 5.1±0.62 | 17.4±0.69 |
| | RP(%) | 31.3↑ | 1.6↑ | 8.6↑ | — | 10.3↑ |
| | Word | 6.3±0.80 | 51.8±0.91 | 12.9±0.26 | 5.3±0.45 | 18.7±0.28 |
| | Word(our) | 4.8±0.26 | 50.4±0.60 | 11.7±0.24 | 5.1±0.58 | 17.4±0.66 |
| | RP(%) | 23.8↑ | 2.7↑ | 9.3↑ | 3.8↑ | 7.0↑ |
| $BERT_{base}$ | w/o | 2.6±0.18 | 47.3±0.47 | 6.9±0.21 | 2.4±0.47 | 11.5±1.19 |
| | Sent | 2.2±0.24 | 44.5±0.37 | 6.3±0.29 | 2.4±0.56 | 11.3±1.44 |
| | Sent(our) | 2.1±0.20 | 44.3±0.54 | 5.9±0.30 | 2.3±0.49 | 11.2±1.31 |
| | RP(%) | 4.5↑ | 0.4↑ | 9.5↑ | 4.2↑ | 0.9↑ |
| | Word | 2.1±0.20 | 45.6±0.37 | 6.5±0.25 | 2.3±0.54 | 11.1±1.44 |
| | Word(our) | 1.9±0.13 | 45.5±0.37 | 6.4±0.23 | 2.2±0.56 | 10.8±1.29 |
| | RP(%) | 9.5↑ | 0.2↑ | 1.5↑ | 4.3↑ | 2.7↑ |
line over $CNN_{glove}$, Sent(our) achieves a significant improvement on all five datasets. For instance, Sent(our) outperforms Sent on the TREC, SST2 and MR datasets over $CNN_{glove}$, with relative improvements of $31.3\%$, $8.6\%$ and $10.3\%$, respectively3. Compared with Word over $RNN_{glove}$, Word(our) reduces the error rate by over $1.2\%$ (up to $5.9\%$) on all five testing datasets. Interestingly, one can see that Word(our) outperforms Non-linear Mixup on three out of five datasets. This shows that slightly relaxing the LLC achieves similar, and sometimes even better, results than changing the LLC into a non-linear version.

We use different initial embeddings to evaluate the effectiveness of augmentation, as in (Guo et al., 2019a). From the embedding perspective, we have three kinds of embeddings: the randomly initialized embeddings $(RNN_{rand}$ and $CNN_{rand})$, the pre-trained fixed embeddings $(RNN_{glove}$ and $CNN_{glove})$, and the pre-trained context-aware embeddings $(BERT_{base})$. For each kind of embedding, AMP outperforms the Mixup baselines. For instance, when compared with Sent under randomly initialized embeddings, the proposed method Sent(our) obtains a lower predictive error rate in eight out of ten experiments, while Word(our) outperforms Word in nine out of ten. Similar results can be observed in the pre-trained embedding settings. Even under the context-aware embedding setting $(BERT_{base})$, our AMP can further improve the performance against Mixup with an advanced backbone model. Significantly, on SST1, our method helps $BERT_{base}$ outperform the SOTA model ($BERT_{large}$, 44.5) (Munikar et al., 2019), which is twice as large as $BERT_{base}$. The results show the effectiveness of our method.

Table 3: The results of $BERT_{base}$ with SentAMP in low-resource settings. The experiments are run ten times on each scaled TREC dataset. The average error rate and standard deviation are reported.
| % | labels | Sent | Sent(our) | RP(%) |
| --- | --- | --- | --- | --- |
| 3 | 160 | 51.0±7.34 | 42.1±7.34 | +17.5 |
| 4 | 215 | 29.8±4.05 | 25.6±4.01 | +14.1 |
| 5 | 270 | 10.2±1.00 | 9.2±0.80 | +9.8 |
| 10 | 543 | 5.1±0.64 | 4.6±0.37 | +9.8 |
| 15 | 815 | 4.1±0.64 | 4.0±0.67 | +2.4 |
| 20 | 1089 | 3.6±0.62 | 3.5±0.48 | +2.8 |
| 40 | 2179 | 2.9±0.35 | 2.7±0.38 | +6.7 |
| 80 | 4359 | 2.2±0.17 | 2.1±0.10 | +4.5 |
| 100 | 5452 | 2.2±0.24 | 2.1±0.20 | +4.5 |
# 4.4 Low-resource conditions

With low resources, the under-fitting caused by the strict LLC has a serious impact on model generalization. To evaluate the performance of our AMP with different amounts of data, particularly in low-resource settings, we scale the size of the dataset by a certain ratio of data for each category. If a scaled category would contain fewer than one sample, we retain at least one sample. We randomly generate ten different datasets for each scale ratio and then run the experiment on each dataset. The mean error rate and standard deviation are reported. As shown in Table 3, our method reduces the mean error rate against Mixup by a significant margin. For instance, Sent(our) reduces the error rate over Sent by $17.5\%$ and $14.1\%$ on $3\%$ and $4\%$ of the training data, respectively. AMP works well in low-resource conditions, as expected, because of its effectiveness in relaxing the LLC in Mixup.

# 4.5 Ablation study

To further understand the effects of the Max Operation (MaxOp) and Min Operation (MinOp) in $AMP$, we test several variations of our model under $CNN_{glove}$ and $BERT_{base}$ on TREC. As presented in Table 4, the model trained without augmentation is denoted as Baseline, $+RandOp$ is identical to the model trained with Mixup, $+MaxOp$ indicates Mixup

Table 4: Ablation study.
| Method | Model | Operation | TREC |
| --- | --- | --- | --- |
| Word | $CNN_{glove}$ | Baseline | 7.9±0.12 |
| | | +RandOp | 6.3±0.80 |
| | | +MaxOp | 4.7±0.35 |
| | | AMP | 4.8±0.26 |
| | $BERT_{base}$ | Baseline | 2.6±0.18 |
| | | +RandOp | 2.1±0.24 |
| | | +MaxOp | 2.0±0.23 |
| | | AMP | 1.9±0.13 |
| Sent | $CNN_{glove}$ | Baseline | 7.9±0.12 |
| | | +RandOp | 6.7±0.23 |
| | | +MaxOp | 4.8±0.22 |
| | | AMP | 4.6±0.33 |
| | $BERT_{base}$ | Baseline | 2.6±0.18 |
| | | +RandOp | 2.2±0.24 |
| | | +MaxOp | 2.1±0.13 |
| | | AMP | 2.1±0.15 |
Table 5: The results under different settings of $\alpha$.

| $\alpha$ | Methods | TREC | SST2 | MR |
| --- | --- | --- | --- | --- |
| 0.2 | Word | 1.9±0.13 | 6.3±0.23 | 11.0±1.25 |
| | Word(our) | 1.8±0.13 | 6.0±0.20 | 10.9±1.22 |
| | RP(%) | +5.3 | +4.8 | +0.9 |
| 0.5 | Word | 1.9±0.13 | 6.7±0.24 | 11.1±1.25 |
| | Word(our) | 1.9±0.16 | 6.1±0.18 | 10.8±1.25 |
| | RP(%) | +0.0 | +8.9 | +2.7 |
| 1.0 | Word | 2.1±0.20 | 6.5±0.25 | 11.1±1.44 |
| | Word(our) | 2.0±0.12 | 6.4±0.23 | 10.8±1.29 |
| | RP(%) | +4.8 | +1.5 | +2.7 |
| 1.5 | Word | 2.1±0.18 | 6.8±0.13 | 11.2±1.44 |
| | Word(our) | 2.0±0.12 | 6.5±0.28 | 11.0±1.34 |
| | RP(%) | +4.8 | +4.4 | +1.8 |
with MaxOp is used for model training, and $AMP$ is the fully functional version of our proposed method. As presented in Table 4, MaxOp contributes the majority of the error-rate reduction. For instance, for $CNN_{glove}$ under the SentMixup setting, MaxOp reduces the error rate from 6.7 to 4.8. This suggests the effectiveness of adversarial perturbation in relaxing the LLC in Mixup. The comparison in MinOp can further reduce the error rate in most cases (three out of four). Specifically, it brings the mean error rate down from 4.8 to 4.6 on $CNN_{glove}$. This indicates the effectiveness of MinOp in eliminating the influence of the step size.

![](images/1d4e3fd8998762daf2f049f1fa765c142bd429ba60c35585637f65c6c90a85de.jpg)
(a) Random pair 1

![](images/9f8f5ffe663f8ca696d159402bbe5938b9657c8003c0724cb8125213ed56f28d.jpg)
(b) Random pair 2

![](images/df8a51cb6b1fa6e652c4e4e25c206714ec105ddf22f9cc3a45d6617fef1ae939.jpg)
(c) Full-size testing set

Figure 2: The visualization of loss on unseen synthetic data. The results are produced by $BERT_{base}$ on the $3\%$ TREC dataset, as listed in Table 3.

# 4.6 Mix ratio distribution

To analyze the effects of different shapes of the mixing coefficient distribution, we compare Word(our) with Word on $BERT_{base}$ under four $\alpha$ settings (from 0.2 to 1.5) and three datasets: TREC, SST2, and MR. The $\alpha$ is the parameter of the Beta distribution; it controls the shape of the distribution of the mixing coefficient $\lambda$. As presented in Table 5, our method achieves lower mean error rates than Word under all $\alpha$ settings. For instance, Word(our) achieves an $8.9\%$ lower mean error rate than Word on SST2 with $\alpha = 0.5$. The improvements come mainly from training the models on the slightly non-linear data generated by AMP.

# 4.7 Visualization

To intuitively demonstrate the effects of relaxing the LLC, we visualize the losses of networks trained with our $AMP$ and with Mixup.
The synthetic data is generated strictly following the LLC, based on the testing data. A smaller loss for the network trained with the relaxed LLC shows the effectiveness of our method in alleviating under-fitting. As shown in Figures 2(a), 2(b) and 2(c), we draw the losses on synthetic data generated with mixing coefficient $\lambda \in [0,1]$. Figures 2(a) and 2(b) each use one random pair of data from the testing set for generation. For two random pairs $(x_{1},y_{1})(x_{4},y_{4})$ and $(x_{2},y_{2})(x_{3},y_{3})$, we calculate the Mixup loss of each pair under different $\lambda$ to obtain Figures 2(a) and 2(b). The loss curves on random pairs are not symmetric, because the losses of the two examples in a pair are different. The loss curves are encouraged (by the LLC) to be a line in-between the two examples, starting at the loss of one example and ending at the loss of the other. The Mixup loss (interpolation of cross-entropy losses) and the differing examples result in the different shapes of the loss curves in Figures 2(a) and 2(b). As illustrated in Figures 2(a) and 2(b), one can observe that AMP has a smaller loss than Mixup, which indicates the effectiveness of training on the slightly non-linear synthetic data in the micro view.

Figure 2(c) uses the full-size testing set for generation and shows the average loss over all synthetic data generated from it. We freeze the random seeds, so we can freeze the data pairs. Let the testing dataset be $X = [(x_{1},y_{1}),(x_{2},y_{2}),(x_{3},y_{3}),(x_{4},y_{4})]$. The synthetic data is generated by $\lambda X + (1 - \lambda)X'$, where $X' = [(x_4,y_4),(x_3,y_3),(x_2,y_2),(x_1,y_1)]$ is the shuffled $X$. Hence the losses at $\lambda = 0$ and $\lambda = 1$ are identical, and we obtain a symmetric curve, as in Figure 2(c). One can observe that our method achieves a significantly smaller average loss than Mixup in the macro view.
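This symmetric full-set construction can be sketched as follows (NumPy; `per_example_loss` is a hypothetical stand-in for the trained network's per-example loss, and the quadratic toy loss is illustrative only):

```python
import numpy as np

def symmetric_loss_curve(X, Y, per_example_loss, lams):
    """Average Mixup loss over synthetic data built from the whole test set.

    X is paired with its reversed copy X', so the sweep over lambda is
    symmetric: the average losses at lambda = 0 and lambda = 1 coincide."""
    Xp, Yp = X[::-1], Y[::-1]  # shuffled (here: reversed) copy of the data
    curve = []
    for lam in lams:
        x_mix = lam * X + (1.0 - lam) * Xp
        # Interpolate the per-example losses over the two label sets (Eq. 17).
        loss = lam * per_example_loss(x_mix, Y) \
            + (1.0 - lam) * per_example_loss(x_mix, Yp)
        curve.append(loss.mean())
    return curve

# Toy usage with a quadratic stand-in for the network loss.
X = np.array([0.1, 0.4, 0.8, 0.9])
Y = np.array([0.0, 1.0, 1.0, 0.0])
curve = symmetric_loss_curve(X, Y, lambda x, y: (x - y) ** 2,
                             lams=np.linspace(0.0, 1.0, 11))
```

Because each example is paired with its mirror-image partner, the resulting average-loss curve is symmetric about $\lambda = 0.5$, matching the shape of Figure 2(c).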
The visualizations verify our assumption that relaxing the LLC can further regularize models.

# 5 Related work

Mixup on text classification. Text classification has achieved remarkable improvements under several effective paradigms, e.g., CNNs (Kim, 2014), attention-based LSTMs (Wang et al., 2016), GloVe (Pennington et al., 2014) and BERT (Devlin et al., 2019). However, large-scale models tend to generalize poorly in low-resource conditions. To overcome this limitation, Mixup (Zhang et al., 2018) was proposed as a data-augmentation-based regularizer. A few studies have explored Mixup (Guo et al., 2019b; Zhang et al., 2020; Guo, 2020) on NLP tasks. For classification, Guo et al. (2019a) suggest applying Mixup at particular levels of the network, i.e., the word or sentence level. Although these works make promising progress, the mechanism of Mixup still needs to be explored.

Adversarial Training. The min-max formulation of adversarial training has been theoretically and empirically verified (Beckham et al., 2019; Xu et al., 2020; Pang et al., 2020; Archambault et al., 2019; Lee et al., 2020; Miyato et al., 2015, 2018, 2017). Such a training procedure first generates adversarial examples that might maximize the training loss and then minimizes the training loss after adding the adversarial examples into the training set (Madry et al., 2018). The Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2015) is an efficient one-step method. Inspired by the min-max formulation of adversarial learning, we organize our method into a min-max-rand formulation.

# 6 Conclusion

To relax the Locally Linear Constraints (LLC) in Mixup and alleviate under-fitting, this paper proposes an Adversarial Mixing Policy (AMP). Inspired by adversarial training, we organize our method into a min-max-rand formulation. The proposed method injects a slight non-linearity between the synthetic examples and the synthetic labels without extra parameters.
By training on such data, the networks become compatible with some ambiguous data, which reduces under-fitting; the networks are thus further regularized and reach better performance. We evaluate our method with five popular classification models on five publicly available text datasets. Extensive experimental results show that our AMP can achieve a significantly lower error rate than vanilla Mixup (up to $31.3\%$), especially in low-resource conditions (up to $17.5\%$).

# 7 Acknowledgments

We thank Prof. Xiaojie Wang and Prof. Fangxiang Feng from BUPT for their valuable feedback on an earlier draft of this paper, and Yang Du from XDF for her suggestions on the English writing of the final revision. We also thank the anonymous reviewers for their helpful comments.

# References

Guillaume P Archambault, Yongyi Mao, Hongyu Guo, and Richong Zhang. 2019. Mixup as directional adversarial training. arXiv preprint arXiv:1906.06875.
Christopher Beckham, Sina Honari, Vikas Verma, Alex Lamb, Farnoosh Ghadiri, R. Devon Hjelm, Yoshua Bengio, and Chris Pal. 2019. On adversarial mixup resynthesis. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 4348-4359.
David Berthelot, Nicholas Carlini, Ekin D Cubuk, Alex Kurakin, Kihyuk Sohn, Han Zhang, and Colin Raffel. 2019a. Remixmatch: Semi-supervised learning with distribution alignment and augmentation anchoring. arXiv preprint arXiv:1911.09785.
David Berthelot, Nicholas Carlini, Ian J. Goodfellow, Nicolas Papernot, Avital Oliver, and Colin Raffel. 2019b. Mixmatch: A holistic approach to semisupervised learning. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 5050-5060.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019.
BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics. +Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929. +Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversarial examples. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. +Hongyu Guo. 2020. Nonlinear mixup: Out-of-manifold data augmentation for text classification. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 4044-4051. AAAI Press. +Hongyu Guo, Yongyi Mao, and Richong Zhang. 2019a. Augmenting data with mixup for sentence classification: An empirical study. arXiv preprint arXiv:1905.08941. +Hongyu Guo, Yongyi Mao, and Richong Zhang. 2019b. Mixup as locally linear out-of-manifold regularization. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 3714-3722. AAAI Press. +Stephen Hanson and Lorien Pratt. 1988. 
Comparing biases for minimal network construction with backpropagation. Advances in neural information processing systems, 1:177-185. + +Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pages 770-778. IEEE Computer Society. +Alex Hernández-García and Peter König. 2018. Data augmentation instead of explicit regularization. arXiv preprint arXiv:1806.03852. +Sergey Ioffe and Christian Szegedy. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, volume 37 of JMLR Workshop and Conference Proceedings, pages 448-456. JMLR.org. +Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746-1751, Doha, Qatar. Association for Computational Linguistics. +Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. +Saehyung Lee, Hyungyu Lee, and Sungroh Yoon. 2020. Adversarial vertex mixup: Toward better adversarially robust generalization. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020, pages 269-278. IEEE. +Xin Li and Dan Roth. 2002. Learning question classifiers. In COLING 2002: The 19th International Conference on Computational Linguistics. +Guang Liu, Hailong Huang, Yuzhao Mao, Weiguo Gao, Xuan Li, and Jianping Shen. 2021. A diversity-enhanced and constraints-relaxed augmentation for low-resource classification. 
In Database Systems for Advanced Applications - 26th International Conference, DASFAA 2021, Taipei, Taiwan, April 11-14, 2021, Proceedings, Part II, volume 12682 of Lecture Notes in Computer Science, pages 262-270. Springer. +Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2018. Towards deep learning models resistant to adversarial attacks. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net. +Zhijun Mai, Guosheng Hu, Dexiong Chen, Fumin Shen, and Heng Tao Shen. 2021. Metamixup: Learning adaptive interpolation policy of mixup with metalearning. IEEE Transactions on Neural Networks and Learning Systems. + +Xudong Mao, Yun Ma, Zhenguo Yang, Yangbin Chen, and Qing Li. 2019. Virtual mixup training for unsupervised domain adaptation. arXiv preprint arXiv:1905.04215. +Takeru Miyato, Andrew M. Dai, and Ian J. Goodfellow. 2017. Adversarial training methods for semi-supervised text classification. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net. +Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. 2018. Virtual adversarial training: a regularization method for supervised and semi-supervised learning. IEEE transactions on pattern analysis and machine intelligence, 41(8):1979-1993. +Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Ken Nakae, and Shin Ishii. 2015. Distributional smoothing with virtual adversarial training. arXiv preprint arXiv:1507.00677. +Manish Munikar, Sushil Shakya, and Aakash Shrestha. 2019. Fine-grained sentiment classification using bert. In 2019 Artificial Intelligence for Transforming Business and Society (AITB), volume 1, pages 1-5. IEEE. +Bo Pang and Lillian Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. 
In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04), pages 271-278, Barcelona, Spain. +Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05), pages 115-124, Ann Arbor, Michigan. Association for Computational Linguistics. +Tianyu Pang, Kun Xu, and Jun Zhu. 2020. Mixup inference: Better exploiting mixup to defend adversarial attacks. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. +Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics. +Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642, Seattle, Washington, USA. Association for Computational Linguistics. + +Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, 15(1):1929-1958. +Lichao Sun, Congying Xia, Wenpeng Yin, Tingting Liang, Philip S Yu, and Lifang He. 2020. Mixup-transformer: Dynamic data augmentation for nlp tasks. arXiv preprint arXiv:2010.02394. +Johan AK Suykens and Joos Vandewalle. 1999. Least squares support vector machine classifiers. Neural processing letters, 9(3):293-300. +Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, and Aleksander Madry. 
2019. Robustness may be at odds with accuracy. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.
Vikas Verma, Alex Lamb, Christopher Beckham, Amir Najafi, Ioannis Mitliagkas, David Lopez-Paz, and Yoshua Bengio. 2019. Manifold mixup: Better representations by interpolating hidden states. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Proceedings of Machine Learning Research, pages 6438-6447. PMLR.
Yequan Wang, Minlie Huang, Xiaoyan Zhu, and Li Zhao. 2016. Attention-based LSTM for aspect-level sentiment classification. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 606-615, Austin, Texas. Association for Computational Linguistics.
Jason Wei and Kai Zou. 2019. EDA: Easy data augmentation techniques for boosting performance on text classification tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6382-6388, Hong Kong, China. Association for Computational Linguistics.
Minghao Xu, Jian Zhang, Bingbing Ni, Teng Li, Chengjie Wang, Qi Tian, and Wenjun Zhang. 2020. Adversarial domain adaptation with domain mixup. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 6502-6509.
Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, and David Lopez-Paz. 2018. mixup: Beyond empirical risk minimization. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net.
Rongzhi Zhang, Yue Yu, and Chao Zhang. 2020. SeqMix: Augmenting active sequence labeling via sequence mixup.
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language + +Processing (EMNLP), pages 8566-8579, Online. Association for Computational Linguistics. +Jianchao Zhu, Liangliang Shi, Junchi Yan, and Hongyuan Zha. 2020. Automix: Mixup networks for sample interpolation via cooperative barycenter learning. In European Conference on Computer Vision, pages 633-649. Springer. \ No newline at end of file diff --git a/adversarialmixingpolicyforrelaxinglocallylinearconstraintsinmixup/images.zip b/adversarialmixingpolicyforrelaxinglocallylinearconstraintsinmixup/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..c207e662a7c911842b57c57bda3a7c5da45e4e91 --- /dev/null +++ b/adversarialmixingpolicyforrelaxinglocallylinearconstraintsinmixup/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b79a80d803fef00e5af327d615333cb1cbffb37d6dbdc1b9808861b595e41fc2 +size 587534 diff --git a/adversarialmixingpolicyforrelaxinglocallylinearconstraintsinmixup/layout.json b/adversarialmixingpolicyforrelaxinglocallylinearconstraintsinmixup/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..7214d2957444891bb59ce3c7b218dce41abd366b --- /dev/null +++ b/adversarialmixingpolicyforrelaxinglocallylinearconstraintsinmixup/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b8373684fbb67ab7874bc37be39b50187526e1d1021230eba7e7ea4858675718 +size 450232 diff --git a/adversarialregularizationasstackelberggameanunrolledoptimizationapproach/f2ccb633-1538-4926-9168-efd9af7a37c0_content_list.json b/adversarialregularizationasstackelberggameanunrolledoptimizationapproach/f2ccb633-1538-4926-9168-efd9af7a37c0_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..e1b181861956fe4b8a813a9cad49199c962a344c --- /dev/null +++ 
b/adversarialregularizationasstackelberggameanunrolledoptimizationapproach/f2ccb633-1538-4926-9168-efd9af7a37c0_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5ce4fada32f91f8a007fd20283c65ca7c6cabbae3ca85cec7c6608f24c07a0b1 +size 113978 diff --git a/adversarialregularizationasstackelberggameanunrolledoptimizationapproach/f2ccb633-1538-4926-9168-efd9af7a37c0_model.json b/adversarialregularizationasstackelberggameanunrolledoptimizationapproach/f2ccb633-1538-4926-9168-efd9af7a37c0_model.json new file mode 100644 index 0000000000000000000000000000000000000000..e47a16e970cb7b4454147f7d412b40c0da920706 --- /dev/null +++ b/adversarialregularizationasstackelberggameanunrolledoptimizationapproach/f2ccb633-1538-4926-9168-efd9af7a37c0_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0d2221b63c27ac0e76e338c2a0f1f54aeeede91d4f530c5ca3d9dd57299f3b3c +size 141895 diff --git a/adversarialregularizationasstackelberggameanunrolledoptimizationapproach/f2ccb633-1538-4926-9168-efd9af7a37c0_origin.pdf b/adversarialregularizationasstackelberggameanunrolledoptimizationapproach/f2ccb633-1538-4926-9168-efd9af7a37c0_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..523656015792a794398adf5d7607779f4eca0af4 --- /dev/null +++ b/adversarialregularizationasstackelberggameanunrolledoptimizationapproach/f2ccb633-1538-4926-9168-efd9af7a37c0_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:53b1b7729600ea85df4990f24471ffa1f936e0a79c6dbebb0a4c137fae501b2b +size 1066655 diff --git a/adversarialregularizationasstackelberggameanunrolledoptimizationapproach/full.md b/adversarialregularizationasstackelberggameanunrolledoptimizationapproach/full.md new file mode 100644 index 0000000000000000000000000000000000000000..bf3286eac51ec73384690f40171f504e1e8cf97e --- /dev/null +++ b/adversarialregularizationasstackelberggameanunrolledoptimizationapproach/full.md @@ -0,0 +1,513 
@@ +# Adversarial Regularization as Stackelberg Game: An Unrolled Optimization Approach + +Simiao Zuo†, Chen Liang†, Haoming Jiang‡, Xiaodong Liu‡, Pengcheng He‡, Jianfeng Gao‡, Weizhu Chen and Tuo Zhao† + +$^{\dagger}$ Georgia Institute of Technology $\square$ Amazon $\circ$ Microsoft + +{simiaozuo,ciang73}@gatech.edu,jhaoming@amazon.com + +{xiaodl,Pengcheng.H,jfgao,wzchen}@microsoft.com, + +tourzhao@gatech.edu + +# Abstract + +Adversarial regularization has been shown to improve the generalization performance of deep learning models in various natural language processing tasks. Existing works usually formulate the method as a zero-sum game, which is solved by alternating gradient descent/ascent algorithms. Such a formulation treats the adversarial and the defending players equally, which is undesirable because only the defending player contributes to the generalization performance. To address this issue, we propose Stackelberg Adversarial Regularization (SALT), which formulates adversarial regularization as a Stackelberg game. This formulation induces a competition between a leader and a follower, where the follower generates perturbations, and the leader trains the model subject to the perturbations. Different from conventional approaches, in SALT, the leader is in an advantageous position. When the leader moves, it recognizes the strategy of the follower and takes the anticipated follower's outcomes into consideration. Such a leader's advantage enables us to improve the model fitting to the unperturbed data. The leader's strategic information is captured by the Stackelberg gradient, which is obtained using an unrolling algorithm. Our experimental results on a set of machine translation and natural language understanding tasks show that SALT outperforms existing adversarial regularization baselines across all tasks. Our code is publicly available. 
# 1 Introduction

Adversarial regularization (Miyato et al., 2017) has been shown to improve the generalization performance of deep learning models in various natural language processing (NLP) tasks, such as language modeling (Wang et al., 2019b), machine translation (Sato et al., 2019), natural language understanding (Jiang et al., 2020), and reading comprehension (Zhang et al., 2020; Jia and Liang, 2017). However, even though significant progress has been made, the power of adversarial regularization is not fully harnessed.

Conventional adversarial regularization is formulated as a zero-sum game (a min-max optimization problem), where two players seek to minimize/maximize their utility functions. In this formulation, an adversarial player composes perturbations, and a defending player solves for the model parameters subject to the perturbed inputs. Existing algorithms find the equilibrium of this zero-sum game using alternating gradient descent/ascent (Madry et al., 2018). For example, in a classification problem, the adversarial player first generates the input perturbations by running projected gradient ascent to maximize a loss function, and then the defending player updates the model using gradient descent, trying to decrease the classification error. Notice that in this case, neither of the players knows the strategy of its competitor, i.e., the model does not know how the perturbations are generated, and vice versa. In other words, the two players have the same priority, and either one of them can be advantageous in the game. It is possible that the adversarial player generates over-strong perturbations that hinder the generalization of the model.

To resolve this issue, we grant the defending player (i.e., the model) a higher priority than the adversarial player by letting the defender recognize its competitor's strategy, such that it is advantageous in the game.
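For concreteness, the conventional alternating scheme described above can be sketched on a toy one-parameter problem. This is only an illustration: the scalar model $f(x, \theta) = \theta x$, the squared-difference regularizer, and all constants are hypothetical choices, not the setup used in our experiments.

```python
# Toy sketch of conventional adversarial regularization solved by
# alternating gradient ascent (on the perturbation) and gradient descent
# (on the model parameter). All names and constants are hypothetical.

def adv_loss(theta, x, delta):
    # squared-difference adversarial regularizer (regression form)
    return (theta * x - theta * (x + delta)) ** 2  # = (theta * delta) ** 2

eps, eta, lr = 0.05, 0.1, 0.05   # perturbation bound, ascent and descent steps
theta, x, y = 0.0, 1.0, 1.0      # model parameter and one training pair

for _ in range(200):
    # adversarial player: a few ascent steps on delta, projected to [-eps, eps]
    delta = 0.01
    for _ in range(3):
        grad_delta = 2.0 * theta ** 2 * delta      # d adv_loss / d delta
        delta = max(-eps, min(eps, delta + eta * grad_delta))
    # defending player: one descent step on theta, delta held fixed
    grad_theta = 2.0 * x * (theta * x - y) + 2.0 * theta * delta ** 2
    theta -= lr * grad_theta
```

Note that `grad_theta` treats `delta` as a constant: the defender never differentiates through the adversary's update steps, so the interaction between the two players is ignored.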
Consequently, we propose Stackelberg Adversarial Regularization (SALT), where we formulate adversarial regularization as a Stackelberg game (Von Stackelberg, 2010). The concept arises from economics, where two firms compete in a market, and one of them is in the leading position by acknowledging the opponent's strategy. In Stackelberg adversarial regularization, a leader solves for the model parameters, and a follower generates input perturbations. The leader procures its advantage by considering what the best response of the follower is, i.e., how the follower will respond after observing the leader's decision. Then, the leader minimizes its loss, anticipating the predicted response of the follower.

The SALT framework identifies the interaction between the leader and the follower by treating the follower's strategy (i.e., the input perturbations) as an operator of the leader's decision (i.e., the model parameters). Then we can solve for the model parameters using gradient descent. One caveat is that computing the gradient term, which we call the Stackelberg gradient, requires differentiating the interaction operator. To rigorously define this operator, recall that the follower can be approximately solved using gradient ascent. We can treat the perturbations in each iteration as an operator of the model parameters, and the interaction operator is then the composition of such update-induced operators. Correspondingly, the Stackelberg gradient is obtained by differentiating through these updates. This procedure is referred to as unrolling (Pearlmutter and Siskind, 2008), and the only computational overhead it causes is computing Hessian-vector products. As a result, when applying the finite difference method, computing the Stackelberg gradient requires two backpropagations and an extra $O(d)$-complexity operation, where $d$ is the embedding dimension.
Therefore, the unrolling algorithm computes the Stackelberg gradient without causing much computational overhead.

We conduct experiments on neural machine translation (NMT) and natural language understanding (NLU) tasks. For the NMT tasks, we experiment on four low-resource datasets and one rich-resource dataset. SALT improves upon existing adversarial regularization algorithms by notable margins, especially on the low-resource datasets, where it achieves up to 2 BLEU score improvements. To test performance on NLU tasks, we evaluate SALT on the GLUE (Wang et al., 2019a) benchmark. SALT outperforms state-of-the-art models, such as BERT (Devlin et al., 2019), FreeAT (Shafahi et al., 2019), FreeLB (Zhu et al., 2019), and SMART (Jiang et al., 2020). We build SALT on the BERT-base architecture, and we achieve an average score of 84.5 on the GLUE development set, which is at least 0.7 higher than existing methods. Moreover, even though SALT uses BERT-base, its performance is noticeably higher than that of the vanilla BERT-large model (84.5 vs. 84.0).

The unrolling procedure was first proposed for auto-differentiation (Pearlmutter and Siskind, 2008), and later applied in various contexts, such as hyper-parameter optimization (Maclaurin et al., 2015; Finn et al., 2017), meta-learning (Andrychowicz et al., 2016), and Generative Adversarial Networks (Metz et al., 2017). To the best of our knowledge, we are the first to apply the unrolling technique to adversarial regularization to improve generalization performance.

We summarize our contributions as follows: (1) We propose SALT, which employs a Stackelberg game formulation of adversarial regularization. (2) We use an unrolling algorithm to find the equilibrium of the Stackelberg game. (3) Extensive experiments on NMT and NLU tasks verify the efficacy of our method.

Notation. We use $\mathrm{d}f(x)/\mathrm{d}x$ to denote the gradient of $f$ with respect to $x$.
We use $\partial f(x,y)/\partial x$ to denote the partial derivative of $f$ with respect to $x$. For a $d$-dimensional vector $v$, its $\ell_2$ norm is defined as $\| v \|_2 = (\sum_{i=1}^d v_i^2)^{1/2}$, and its $\ell_\infty$ norm is defined as $\| v \|_\infty = \max_{1 \leq i \leq d} |v_i|$.

# 2 Background and Related Works

$\diamond$ Neural machine translation has achieved superior empirical performance (Bahdanau et al., 2015; Gehring et al., 2017; Vaswani et al., 2017). We focus on the Transformer architecture (Vaswani et al., 2017), which integrates the attention mechanism into an encoder-decoder structure. The encoder in a Transformer model first maps a source sentence into an embedding space; the embeddings are then fed into several encoding layers to generate hidden representations, where each encoding layer contains a self-attention mechanism and a feed-forward neural network (FFN). The Transformer decoder layers, each containing a self-attention, an encoder-decoder attention, and an FFN, then decode the hidden representations.

$\diamond$ Adversarial training was originally proposed for training adversarially robust classifiers in image classification (Szegedy et al., 2014; Goodfellow et al., 2015; Madry et al., 2018). The idea is to synthesize strong adversarial samples, and the classifier is trained to be robust to them. Theoretical understandings of adversarial training (Li et al., 2019) and various algorithms to generate the adversarial samples, such as learning-to-learn (Jiang et al., 2021), have been proposed. Besides computer vision, adversarial training can also benefit reinforcement learning (Shen et al., 2020). Different from the above fields, in NLP, the goal of adversarial training is to build models that generalize well on the unperturbed test data. Note that robustness and generalization are different concepts.
Recent works (Raghunathan et al., 2020; Min et al., 2020) showed that adversarial training can hurt generalization performance, i.e., accuracy on clean data. As such, adversarial training needs to be treated with great caution. Therefore, in NLP, this technique requires refined tuning of, for example, the training algorithm and the perturbation strength. + +$\diamond$ Fine-tuning pre-trained language models (Peters et al., 2018; Devlin et al., 2019; Radford et al., 2019; Liu et al., 2019b; He et al., 2020) is state-of-the-art for natural language understanding tasks such as the GLUE (Wang et al., 2019a) benchmark. Recently, there are works that use adversarial pre-training (Liu et al., 2020a) and adversarial-regularized fine-tuning methods such as SMART (Jiang et al., 2020), FreeLB (Zhu et al., 2019), and FreeAT (Shafahi et al., 2019) to improve model generalization and robustness (Cheng et al., 2021). + +# 3 Method + +Natural language inputs are discrete symbols (e.g., words), instead of continuous ones. Therefore, a common approach to generate perturbations is to learn continuous embeddings of the inputs and operate on the embedding space (Miyato et al., 2017; Clark et al., 2018; Sato et al., 2018, 2019; Stutz et al., 2019). Let $f(x, \theta)$ be our model, where $x$ is the input embedding, and $\theta$ is the model parameter. Further let $y$ be the ground-truth output corresponding to $x$ . For example, in NMT, $f$ is a sequence-to-sequence model, $x$ is the embedding of the source sentence, and $y$ is the target sentence. In classification tasks, $f$ is a classifier, $x$ is the input sentence/document embedding, and $y$ is the label. In both of these cases, the model is trained by minimizing the empirical risk over the training data, i.e., + +$$ +\min _ {\theta} \mathcal {L} (\theta) = \frac {1}{n} \sum_ {i = 1} ^ {n} \ell (f (x _ {i}, \theta), y _ {i}). 
$$

Here $\{(x_i, y_i)\}_{i=1}^n$ is our dataset, and $\ell$ is a task-specific loss function, e.g., the cross-entropy loss.

# 3.1 Adversarial Regularization

Adversarial Regularization (Miyato et al., 2017) is a regularization technique that encourages smoothness of the model outputs around each input data point. Concretely, we define an adversarial regularizer for non-regression tasks as

$$
\ell_{v}(x, \delta, \theta) = \mathrm{KL}\left(f(x, \theta) \,\|\, f(x + \delta, \theta)\right),
$$

where $\mathrm{KL}(P \,\|\, Q) = \sum_{k} p_{k} \log \frac{p_{k}}{q_{k}}$. Here $\mathrm{KL}(\cdot \| \cdot)$ is the Kullback-Leibler (KL) divergence, $\delta$ is the perturbation corresponding to $x$, and $f(\cdot, \theta)$ is the prediction probability simplex given model parameters $\theta$. In regression tasks, the model output $f(\cdot, \theta)$ is a scalar, and the adversarial regularizer is defined as

$$
\ell_{v}(x, \delta, \theta) = (f(x, \theta) - f(x + \delta, \theta))^{2}.
$$

Then the training objective is

$$
\min_{\theta} \mathcal{L}(\theta) + \frac{\alpha}{n} \sum_{i=1}^{n} \max_{\|\delta_i\| \leq \epsilon} \ell_{v}\left(x_i, \delta_i, \theta\right), \tag{1}
$$

where $\alpha$ is a tuning parameter, $\epsilon$ is a pre-defined perturbation strength, and $\|\cdot\|$ is either the $\ell_2$ norm or the $\ell_\infty$ norm.

The min and max problems are solved using alternating gradient descent/ascent. We first generate the perturbations $\delta$ by solving the maximization problem using several steps of projected gradient ascent, and then we update the model parameters $\theta$ with gradient descent, subject to the perturbed inputs. More details are deferred to Appendix A.

One major drawback of the zero-sum game formulation (Eq. 1) is that it fails to consider the interaction between the perturbations $\delta$ and the model parameters $\theta$.
This is problematic because a small change in $\delta$ may lead to a significant change in $\theta$, which renders the optimization ill-conditioned. Thus, the model is susceptible to underfitting and may generalize poorly on unperturbed test data.

# 3.2 Adversarial Regularization as Stackelberg Game

We formulate adversarial regularization as a Stackelberg game (Von Stackelberg, 2010):

$$
\min_{\theta} \mathcal{F}(\theta) = \mathcal{L}(\theta) + \frac{\alpha}{n} \sum_{i=1}^{n} \ell_{v}\left(x_i, \delta_i^{K}(\theta), \theta\right),
$$

$$
\mathrm{s.t.}\ \delta_i^{K}(\theta) = U^{K} \circ U^{K-1} \circ \dots \circ U^{1}\left(\delta_i^{0}\right). \tag{2}
$$

Here "$\circ$" denotes operator composition, i.e., $f \circ g(\cdot) = f(g(\cdot))$. Following conventions, in this Stackelberg game, we call the optimization problem in Eq. 2 the leader. Further, the follower in Eq. 2 is described using an equality constraint. Note that the composition $U^{K} \circ \dots \circ U^{1}$ is the follower's $K$-step composite strategy, built from the $K$ one-step strategies $\{U^{k}\}_{k=1}^{K}$. In practice, $K$ is usually small. This is because in NLP, we target generalization instead of robustness, and choosing a small $K$ prevents over-strong adversaries.

In Eq. 2, the $U^{k}$'s are the follower's one-step strategies, and we call them update operators, e.g., $U^{1}$ updates $\delta^{0}$ to $\delta^{1}$ using pre-selected algorithms.
For example, projected gradient ascent can be applied as the update procedure, that is,

$$
\delta^{k}(\theta) = U^{k}(\delta^{k-1}(\theta)) = \Pi_{\|\cdot\| \leq \epsilon}\left(\delta^{k-1}(\theta) + \eta \frac{\partial \ell_{v}(x, \delta^{k-1}(\theta), \theta)}{\partial \delta^{k-1}(\theta)}\right) \quad \text{for } k = 1, \dots, K, \tag{3}
$$

where $\delta^0 \sim \mathcal{N}(0, \sigma^2 \mathrm{I})$ is an initial random perturbation drawn from a normal distribution with covariance matrix $\sigma^2 \mathrm{I}$, $\eta$ is a pre-defined step size, and $\Pi$ denotes projection onto the $\ell_2$-ball or the $\ell_\infty$-ball.

To model how the follower will react to a leader's decision $\theta$, we consider the function $\delta^{K}(\theta)$. Then, adversarial training can be viewed solely in terms of the leader's decision $\theta$.

We highlight that in our formulation, the leader knows the strategy, instead of only the outcome, of the follower. This information is captured by the Stackelberg gradient $\mathrm{d}\mathcal{F}(\theta)/\mathrm{d}\theta$, defined as the following:

$$
\begin{aligned}
\frac{\mathrm{d}\mathcal{F}(\theta)}{\mathrm{d}\theta} &= \frac{\mathrm{d}\ell(f(x,\theta), y)}{\mathrm{d}\theta} + \alpha \frac{\mathrm{d}\ell_{v}(x, \delta^{K}(\theta), \theta)}{\mathrm{d}\theta} \\
&= \underbrace{\frac{\mathrm{d}\ell(f(x,\theta), y)}{\mathrm{d}\theta} + \alpha \frac{\partial \ell_{v}(x, \delta^{K}, \theta)}{\partial \theta}}_{\text{leader}} + \underbrace{\alpha \frac{\partial \ell_{v}(x, \delta^{K}(\theta), \theta)}{\partial \delta^{K}(\theta)} \frac{\mathrm{d}\delta^{K}(\theta)}{\mathrm{d}\theta}}_{\text{leader-follower interaction}}. \tag{4}
\end{aligned}
$$

The underlying idea behind Eq.
$4^{1}$ is that given a leader's decision $\theta$, we take the follower's strategy into account (i.e., the "leader-follower interaction" term) and find a direction along which the leader's loss decreases the most. Then we update $\theta$ in that direction. Note that the gradient used in standard adversarial training (Eq. 1) only contains the "leader" term, such that the "leader-follower interaction" is not taken into account.

Algorithm 1: Stackelberg Adversarial Regularization with Unrolled Optimization.

Input: $\mathcal{D}$: dataset; $T$: total number of training epochs; $\sigma^2$: variance of initial perturbations; $K$: number of unrolling steps; Optimizer: optimizer to update $\theta$.

Initialize: model parameters $\theta$.

for $t = 1, \dots, T$ do
  for $(x, y) \in \mathcal{D}$ do
    Initialize $\delta^0 \sim \mathcal{N}(0, \sigma^2 \mathrm{I})$;
    for $k = 1, \dots, K$ do
      Compute $\delta^k$ using Eq. 3;
      Compute $\mathrm{d}\delta^k(\theta)/\mathrm{d}\theta$ using Eq. 6;
    end
    Compute $\mathrm{d}\mathcal{F}(\theta)/\mathrm{d}\theta$ based on $\mathrm{d}\delta^{K}(\theta)/\mathrm{d}\theta$ using Eq. 4;
    $\theta \gets \mathrm{Optimizer}(\mathrm{d}\mathcal{F}(\theta)/\mathrm{d}\theta)$;
  end
end

Output: $\theta$

# 3.3 SALT: Stackelberg Adversarial Regularization

We propose to use an unrolling method (Pearlmutter and Siskind, 2008) to compute the Stackelberg gradient (Eq. 4). The general idea is that since the interaction operator is defined as the composition of the $\{U^k\}$ operators, all of which are known, we can directly compute the derivative of $\delta^K(\theta)$ with respect to $\theta$. Concretely, we first run a forward iteration to update $\delta$, and then we differentiate through this update to acquire the Stackelberg gradient.

Note that the updates of $\delta$ can take any form, such as projected gradient ascent in Eq. 3, or more complicated alternatives like Adam (Kingma and Ba, 2015).
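To make Eq. 4 concrete, the following self-contained sketch unrolls a single gradient-ascent step ($K = 1$) and checks the resulting Stackelberg gradient against a finite difference of $\mathcal{F}(\theta)$. The scalar model $f(x, \theta) = \theta x$, the data, and all step sizes are hypothetical, chosen so that every derivative can be written by hand.

```python
# Toy check of the Stackelberg gradient (Eq. 4) with K = 1 unrolled step.
# All names and constants are hypothetical, for illustration only.

def f(x, theta):
    return theta * x

def l_v(x, delta, theta):
    # adversarial regularizer, regression form
    return (f(x, theta) - f(x + delta, theta)) ** 2

def unroll(theta, x, delta0, eta):
    # one gradient-ascent step on delta: d l_v / d delta = 2 * theta**2 * delta
    return delta0 + eta * 2.0 * theta ** 2 * delta0

def F(theta, x, y, delta0, eta, alpha):
    delta1 = unroll(theta, x, delta0, eta)   # follower's anticipated response
    return (f(x, theta) - y) ** 2 + alpha * l_v(x, delta1, theta)

x, y, delta0, eta, alpha, theta = 1.5, 2.0, 0.1, 0.5, 1.0, 0.8

delta1 = unroll(theta, x, delta0, eta)
# "leader" terms: gradient with the perturbation held fixed
leader = 2.0 * x * (f(x, theta) - y) + alpha * 2.0 * theta * delta1 ** 2
# "leader-follower interaction": differentiate through the unrolled update
d_delta1_d_theta = eta * 4.0 * theta * delta0
interaction = alpha * (2.0 * theta ** 2 * delta1) * d_delta1_d_theta
stackelberg_grad = leader + interaction

# sanity check via a central finite difference of F (delta re-unrolled inside F)
h = 1e-6
fd = (F(theta + h, x, y, delta0, eta, alpha)
      - F(theta - h, x, y, delta0, eta, alpha)) / (2.0 * h)
assert abs(stackelberg_grad - fd) < 1e-4
```

The `interaction` term is exactly what the alternating scheme for Eq. 1 drops; in a real model the same check replaces the hand-written derivatives with automatic differentiation through the update.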
For notational simplicity, we denote $\Delta(x, \delta^{k-1}(\theta), \theta) = \delta^k(\theta) - \delta^{k-1}(\theta)$. Accordingly, Eq. 3 can be rewritten as

$$
\delta^{k}(\theta) = \delta^{k-1}(\theta) + \Delta(x, \delta^{k-1}(\theta), \theta). \tag{5}
$$

The most expensive part in computing the Stackelberg gradient (Eq. 4) is to calculate $\mathrm{d}\delta^{K}(\theta)/\mathrm{d}\theta$, which involves differentiating through the composition form of the follower's strategy:

$$
\frac{\mathrm{d}\delta^{k}(\theta)}{\mathrm{d}\theta} = \frac{\mathrm{d}\delta^{k-1}(\theta)}{\mathrm{d}\theta} + \frac{\partial \Delta(x, \delta^{k-1}, \theta)}{\partial \theta} + \frac{\partial \Delta(x, \delta^{k-1}(\theta), \theta)}{\partial \delta^{k-1}(\theta)} \frac{\mathrm{d}\delta^{k-1}(\theta)}{\mathrm{d}\theta} \quad \text{for } k = 1, \dots, K. \tag{6}
$$

We can compute Eq. 6 efficiently using deep learning libraries, such as PyTorch (Paszke et al., 2019). Notice that $\Delta(x, \delta^{k-1}(\theta), \theta)$ already contains the first-order derivative with respect to the perturbations. Therefore, the term $\partial \Delta(x, \delta^{k-1}(\theta), \theta)/\partial \delta^{k-1}(\theta)$ contains the Hessian of $\ell_v$ with respect to $\delta^{k-1}(\theta)$. As a result, in Eq. 4, the most expensive operation is the Hessian-vector product (Hvp). Using the finite difference method, computing an Hvp only requires two backpropagations and an extra $O(d)$-complexity operation. This indicates that in comparison with conventional adversarial training, SALT does not introduce significant computational overhead. The training algorithm is summarized in Algorithm 1.

# 4 Experiments

In all the experiments, we use PyTorch² (Paszke et al., 2019) as the backend. All the experiments are conducted on NVIDIA V100 32GB GPUs.
We use the Higher package³ (Grefenstette et al., 2019) to implement the proposed algorithm.

# 4.1 Baselines

We adopt several baselines in the experiments.

$\diamond$ Transformer (Vaswani et al., 2017) achieves superior performance in neural machine translation.

$\diamond$ BERT (Devlin et al., 2019) is a pre-trained language model that exhibits outstanding performance after being fine-tuned on downstream NLU tasks.

$\diamond$ Adversarial training (Adv, Sato et al. 2019) in NMT can improve models' generalization by training the model to defend against adversarial attacks.

$\diamond$ FreeAT (Shafahi et al., 2019) enables "free" adversarial training by recycling the gradient information generated when updating the model parameters. This method was proposed for computer vision tasks, but was later modified for NLU. We further adjust the algorithm for NMT tasks.
| Data | Source | Train | Valid | Test |
| --- | --- | --- | --- | --- |
| En-Vi | IWSLT'15 | 133k | 768 | 1268 |
| De-En | IWSLT'14 | 161k | 7.2k | 6.7k |
| Fr-En | IWSLT'16 | 224k | 1080 | 1133 |
| En-De | WMT'16 | 4.5m | 3.0k | 3.0k |

Table 1: Dataset source and statistics. Here "k" stands for thousand, and "m" stands for million.
| Model | En-Vi | De-En | Fr-En |
| --- | --- | --- | --- |
| Transformer | 30.3 | 34.7 | 38.2 |
| Adv | 31.0 | 34.8 | 38.8 |
| FreeAT | 31.0 | 35.2 | 38.6 |
| FreeLB | 31.6 | 35.3 | 38.7 |
| SMART | 31.5 | 35.5 | 38.9 |
| SALT | 32.8 | 36.8 | 39.7 |
Table 2: BLEU score on three low-resource datasets. All the baseline results are from our re-implementation. We report the mean of three runs.

$\diamond$ FreeLB (Zhu et al., 2019) is a "free" large-batch adversarial training method. We modify FreeLB into an adversarial regularization method that better fits our needs. This algorithm was originally proposed for NLU. We modify the algorithm so that it is also suitable for NMT tasks.

$\diamond$ SMART (Jiang et al., 2020) is a state-of-the-art fine-tuning method that utilizes smoothness-inducing regularization and Bregman proximal point optimization.

We highlight that we focus on model generalization on clean data, instead of adversarial robustness (a model's ability to defend against adversarial attacks). As we will see in the experiments, adversarial training methods (e.g., Adv, FreeAT) suffer from label leakage, and do not generalize as well as adversarial regularization methods.

# 4.2 Neural Machine Translation

Datasets. We adopt three low-resource datasets and a rich-resource dataset. Dataset statistics are summarized in Table 1. For the low-resource experiments, we use: English-Vietnamese from IWSLT'15, German-English from IWSLT'14, and French-English from IWSLT'16. For the rich-resource experiments, we use the English-German dataset from WMT'16, which contains about 4.5 million training samples.
| Model | RTE Acc | MRPC Acc/F1 | CoLA Mcc | SST-2 Acc | STS-B P/S Corr | QNLI Acc | QQP Acc/F1 | MNLI-m/mm Acc | Average Score |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| $\mathrm{BERT}_{\mathrm{LARGE}}$ | 71.1 | 86.0/89.6 | 61.8 | 93.5 | 89.6/89.3 | 92.4 | 91.3/88.4 | 86.3/86.2 | 84.0 |
| $\mathrm{BERT}_{\mathrm{BASE}}$ | 63.5 | 84.1/89.0 | 54.7 | 92.9 | 89.2/88.8 | 91.1 | 90.9/88.3 | 84.5/84.4 | 81.5 |
| FreeAT | 68.0 | 85.0/89.2 | 57.5 | 93.2 | 89.5/89.0 | 91.3 | 91.2/88.5 | 84.9/85.0 | 82.6 |
| FreeLB | 70.0 | 86.0/90.0 | 58.9 | 93.4 | 89.7/89.2 | 91.5 | 91.4/88.4 | 85.4/85.5 | 83.3 |
| SMART | 71.2 | 87.7/91.3 | 59.1 | 93.0 | 90.0/89.4 | 91.7 | 91.5/88.5 | 85.6/**86.0** | 83.8 |
| SALT | **72.9** | **88.4/91.8** | **61.0** | **93.6** | **90.4/90.0** | **92.0** | **91.7/88.6** | **86.1**/85.8 | **84.5** |
+ +Table 3: Evaluation results on the GLUE development set. All the rows use $BERT_{BASE}$ , except the top one, which is included to demonstrate the effectiveness of our model. Best results on each dataset, excluding $BERT_{LARGE}$ , are shown in **bold**. Results of $BERT_{BASE}$ (Devlin et al., 2019), $BERT_{LARGE}$ (Devlin et al., 2019), FreeAT (Shafahi et al., 2019), and FreeLB (Zhu et al., 2019) are from our re-implementation. SMART results are from Jiang et al. (2020). + +
| Model | RTE Acc | MRPC Acc/F1 | CoLA Mcc | SST-2 Acc | STS-B P/S Corr | QNLI Acc | QQP Acc/F1 | MNLI-m/mm Acc | Average Score |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| $\mathrm{BERT}_{\mathrm{BASE}}$ | 66.4 | 84.8/88.9 | 52.1 | 93.5 | 87.1/85.8 | 90.5 | 71.2/89.2 | 84.6/83.4 | 80.0 |
| FreeLB | 70.1 | 83.5/88.1 | 54.5 | 93.6 | 87.7/86.7 | 91.8 | 72.7/89.6 | 85.7/84.6 | 81.2 |
| SALT | 72.2 | 85.8/89.7 | 55.6 | 94.2 | 88.0/87.1 | 92.1 | 72.8/89.8 | 85.8/84.8 | 82.0 |
+ +Table 4: GLUE test set results on the GLUE evaluation server. All the methods fine-tune a pre-trained BERTBASE model. FreeAT and SMART did not report BERTBASE results in their paper or on the GLUE evaluation server. Model references: BERTBASE (Devlin et al., 2019), FreeLB (Zhu et al., 2019). + +
| Model | BLEU |
| --- | --- |
| Transformer (Vaswani et al., 2017) | 28.4 |
| FreeAT (Shafahi et al., 2019) | 29.0 |
| FreeLB (Zhu et al., 2019) | 29.0 |
| SMART (Jiang et al., 2020) | 29.1 |
| SALT | 29.6 |
Table 5: sacreBLEU score on WMT'16 En-De. All the baseline results are from our re-implementation.

Implementation. Recall that to generate adversarial examples, we perturb the word embeddings. In the NMT experiments, we perturb both the source-side and the target-side embeddings. This strategy has been empirically demonstrated (Sato et al., 2019) to be more effective than perturbing only one side of the inputs. We use $\text{Fairseq}^5$ (Ott et al., 2019) to implement our algorithms. We adopt the Transformer-base (Vaswani et al., 2017) architecture in all the low-resource experiments, except IWSLT'14 De-En. On this dataset, we use a model smaller than Transformer-base by decreasing the hidden dimension size from 2048 to 1024, and decreasing the number of heads from 8 to 4 (while the dimension of each head doubles). For the rich-resource experiments, we use the Transformer-big (Vaswani et al., 2017) architecture. Training details are presented in Appendix B.1.

Results. Experimental results for the low-resource experiments are summarized in Table 2. Notice that SMART, which utilizes conventional adversarial regularization, consistently outperforms standard adversarial training (Adv). Similar observations were also reported in Miyato et al. (2017); Sato et al. (2019). This is because Adv generates perturbations using the correct examples; thus, the label information is "leaked" (Kurakin et al., 2017). Additionally, we can see that SALT is particularly effective in this low-resource setting, where it outperforms all the baselines by large margins. In comparison with the vanilla Transformer model, SALT achieves up to 2 BLEU score improvements on all three datasets.

Table 5 summarizes experimental results on the WMT'16 En-De dataset. We report the sacreBLEU (Post, 2018) score, which is a detokenized version of the BLEU score that better reflects translation quality.
We can see that SALT outperforms all the baseline methods by notable margins, and it improves upon the vanilla Transformer model by 1.2 BLEU score. + +![](images/1004102b44323f6dee875ebd144f2528ea3eb7ceaedb403a6e37f5d7454d6b56.jpg) +(a) Number of unrolling steps. + +![](images/e04e9111aaa2462d7d0148b95073cfd53d76724e19609313fb3b7334059eecb3.jpg) +(b) Perturbation strength $\epsilon, \ell_2$ case. + +![](images/0f2a673206bf109e2b6b7d64d86baedb03eeea84fb6ac43d67d4f516923aef0c.jpg) +(c) Perturbation strength $\epsilon$ , $\ell_{\infty}$ case. +Figure 1: Relation between BLEU score and different factors on the IWSLT'14 De-En dataset. + +# 4.3 Natural Language Understanding + +Datasets. We demonstrate the effectiveness of SALT on the General Language Understanding Evaluation (GLUE) benchmark (Wang et al., 2019a), which is a collection of nine NLU tasks. The benchmark includes question answering (Rajpurkar et al., 2016), linguistic acceptability (CoLA, Warstadt et al. 2019), sentiment analysis (SST, Socher et al. 2013), text similarity (STS-B, Cer et al. 2017), paraphrase detection (MRPC, Dolan and Brockett 2005), and natural language inference (RTE & MNLI, Dagan et al. 2006; Bar-Haim et al. 2006; Giampiccolo et al. 2007; Bentivogli et al. 2009; Williams et al. 2018) tasks. Dataset details can be found in Table 7 (Appendix B.2). + +Implementation. We evaluate our algorithm by fine-tuning a pre-trained BERT-base (Devlin et al., 2019) model. Our implementation is based on the MT-DNN code-base (Liu et al., 2019a, 2020b). Training details are presented in Appendix B.2. + +Results. Table 3 summarizes experiment results on the GLUE development set. We can see that SALT outperforms BERTBASE in all the tasks. Further, our method is particularly effective for small datasets, such as RTE, MRPC, and CoLA, where we achieve 9.4, 4.3, and 6.3 absolute improvements, respectively. 
Compared with other adversarial training baselines, i.e., FreeAT, FreeLB, and SMART, our method achieves notable improvements in all the tasks.

We highlight that SALT achieves an 84.5 average score, which is significantly higher than that of the vanilla $\mathrm{BERT}_{\mathrm{BASE}}$ (+3.0) fine-tuning approach. Also, our average score is higher than the scores of the baseline adversarial training methods (+1.9, +1.2, +0.7 for FreeAT, FreeLB, and SMART, respectively). Moreover, the 84.5 average score is even higher than that of fine-tuning $\mathrm{BERT}_{\mathrm{LARGE}}$ (+0.5), which contains three times more parameters than the backbone of SALT.

Table 4 summarizes results on the GLUE test set. We can see that SALT consistently outperforms $\mathrm{BERT}_{\mathrm{BASE}}$ and FreeLB across all the tasks.

# 4.4 Parameter Study

$\diamond$ Robustness to the number of unrolling steps. From Figure 1a, we can see that SALT is robust to the number of unrolling steps. As such, setting the number of unrolling steps to $K = 1$ or 2 suffices to build models that generalize well.

$\diamond$ Robustness to the perturbation strength. Unrolling is robust to the perturbation strength within a wide range, as indicated in Figure 1b. Meanwhile, the performance of SMART consistently drops when we increase $\epsilon$ from 0.01 to 0.5. This indicates that the unrolling algorithm can withstand stronger perturbations than conventional approaches.

$\diamond$ $\ell_{2}$ constraints vs. $\ell_{\infty}$ constraints. Figure 1c illustrates model performance with respect to different perturbation strengths in the $\ell_{\infty}$ case. Notice that in comparison with the $\ell_{2}$ case (Figure 1b), SALT achieves the same level of performance, but the behavior of SMART is unstable. Additionally, SALT is stable within a wider range of perturbation strengths in the $\ell_{2}$ case than in the $\ell_{\infty}$ case, which is why we adopt $\ell_{2}$ constraints in the experiments.

We highlight that SALT does not introduce additional tuning parameters compared with conventional adversarial regularization approaches.

# 4.5 Analysis

Unrolling reduces bias. In Figure 3, we visualize the training and the validation error on the STS-B and the SST datasets from the GLUE benchmark. As mentioned, conventional adversarial regularization suffers from over-strong perturbations, such that the model cannot fit the unperturbed data well. This is supported by the fact that the training loss of SALT is smaller than that of SMART, which means SALT fits the data better. SALT also yields a smaller loss than SMART on the validation data, indicating that the Stackelberg game-formulated model exhibits better generalization performance.

![](images/b9656aa23405a5b886fa711e2aee7a1a49a7100dace840f1ce8047b9769aa44b.jpg)
(a) $\mathrm{BERT}_{\mathrm{BASE}}$ (ECE: $6.09\%$).

![](images/aaa4fc0559a19dce2722978d5e2b80700c8511ae3d40359af86b20befa07dd3f.jpg)
(b) SMART (ECE: $5.08\%$).

![](images/ee8e3cb83d5ff09d50b7821969b088d2a69b30bb8562a940bdbb3ce304291410.jpg)
(c) SALT (ECE: $4.06\%$).

![](images/53a08d8d0a68eee674291f88428edfa1d3347acc108dfbbac6d8c1dd764ac1f5.jpg)
Figure 2: Reliability diagrams on SST. Perfect calibration: confidence = accuracy; ECE: the lower the better.

![](images/cd42e9ee7803b57df061f7c6f6fb8e4f77c0e3ac5c41ca59038dc80067ebd7c1.jpg)
Figure 3: Training and validation loss of SMART and SALT on the STS-B (upper) and SST-2 (lower) datasets.

Adversarial robustness. Even though the primary focus of SALT is model generalization, we still test its robustness on the Adversarial-NLI (ANLI, Nie et al. 2020) dataset. The dataset contains 163k samples, which are collected via a human-and-model-in-the-loop approach. From Table 6, we can see that SALT improves model robustness upon conventional methods (i.e., SMART).

$\diamond$ Probing experiments. For each method, we first fine-tune a $\mathrm{BERT}_{\mathrm{BASE}}$ model on the SST-2 dataset.
Then, we only tune a prediction head on the other datasets while keeping the representations fixed. Such a method directly measures the quality of the representations generated by different models. As illustrated in Fig. 4, SALT outperforms the baseline methods by large margins.

| Dev | R1 | R2 | R3 | All |
| --- | --- | --- | --- | --- |
| $\mathrm{BERT}_{\mathrm{BASE}}$ | 53.3 | 43.0 | 44.7 | 46.8 |
| SMART | 54.1 | 44.4 | 45.3 | 47.8 |
| SALT | 56.6 | 46.2 | 45.9 | 49.3 |

| Test | R1 | R2 | R3 | All |
| --- | --- | --- | --- | --- |
| $\mathrm{BERT}_{\mathrm{BASE}}$ | 54.1 | 44.9 | 46.6 | 48.4 |
| SMART | 54.3 | 46.4 | 46.5 | 48.9 |
| SALT | 55.4 | 47.7 | 46.7 | 49.7 |

Table 6: Experimental results on the ANLI dataset. Model references: $\mathrm{BERT}_{\mathrm{BASE}}$ (Devlin et al., 2019), SMART (Jiang et al., 2020).

![](images/d34c02d53b15b40fd4b5a706a8835466733dd31e433f6ed2e65397002f872c45.jpg)
Figure 4: Probing experiments. Each violin plot is based on 10 runs with different random seeds.

$\diamond$ Classification model calibration. Adversarial regularization also helps model calibration (Stutz et al., 2020). A well-calibrated model produces reliable confidence estimates (i.e., confidence $\simeq$ actual accuracy), where the confidence is defined as the maximum output probability calculated by the model. We evaluate the calibration performance of $\mathrm{BERT}_{\mathrm{BASE}}$, SMART, and SALT by the Expected Calibration Error (ECE, Niculescu-Mizil and Caruana 2005). We plot the reliability diagram (confidence vs. accuracy) on the SST task in Fig. 2 (see
In SALT, the unrolling space is the sample embedding space, whose dimension is much smaller than the unrolling space of GANs. Therefore, unrolling is more effective for NLP tasks.

# 5 Conclusion

We propose SALT, an adversarial regularization method that employs a Stackelberg game formulation. Such a formulation induces a competition between a leader (the model) and a follower (the adversary). In SALT, the leader is in an advantageous position by recognizing the follower's strategy, and this strategic information is captured by the Stackelberg gradient. We compute the Stackelberg gradient, and hence find the equilibrium of the Stackelberg game, using an unrolled optimization approach. Empirical results on NMT and NLU tasks suggest the superiority of SALT over existing adversarial regularization methods.

# Broader Impact

This paper proposes Stackelberg Adversarial Regularization (SALT), an adversarially regularized training framework for NLP tasks. Different from adversarial attacks, whose target is to fool existing neural network models, and adversarial defenses, which aim to improve models' robustness to such attacks, we seek to improve the generalization performance of deep learning models. We demonstrate that the SALT framework can be used for neural machine translation and natural language understanding tasks. In all the experiments, we use publicly available data, and we build our algorithms using public code bases. We do not find any ethical concerns.

# References

Marcin Andrychowicz, Misha Denil, Sergio Gomez Colmenarejo, Matthew W. Hoffman, David Pfau, Tom Schaul, and Nando de Freitas. 2016. Learning to learn by gradient descent by gradient descent. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pages 3981-3989.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015.
Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
Roy Bar-Haim, Ido Dagan, Bill Dolan, Lisa Ferro, and Danilo Giampiccolo. 2006. The second PASCAL recognising textual entailment challenge. In Proceedings of the Second PASCAL Challenges Workshop on Recognising Textual Entailment.
Luisa Bentivogli, Ido Dagan, Hoa Trang Dang, Danilo Giampiccolo, and Bernardo Magnini. 2009. The fifth PASCAL recognizing textual entailment challenge. In Proceedings of the Text Analysis Conference (TAC'09).
Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez-Gazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1-14, Vancouver, Canada. Association for Computational Linguistics.
Hao Cheng, Xiaodong Liu, Lis Pereira, Yaoliang Yu, and Jianfeng Gao. 2021. Posterior differential regularization with f-divergence for improving model robustness. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1078-1089, Online. Association for Computational Linguistics.
Kevin Clark, Minh-Thang Luong, Christopher D. Manning, and Quoc Le. 2018. Semi-supervised sequence modeling with cross-view training. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1914-1925, Brussels, Belgium. Association for Computational Linguistics.
Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The PASCAL recognising textual entailment challenge.
In Proceedings of the First International Conference on Machine Learning Challenges: Evaluating Predictive Uncertainty, Visual Object Classification, and Recognizing Textual Entailment, MLCW'05, pages 177-190, Berlin, Heidelberg. Springer-Verlag.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Thang Doan, João Monteiro, Isabela Albuquerque, Bogdan Mazoure, Audrey Durand, Joelle Pineau, and R. Devon Hjelm. 2019. On-line adaptative curriculum learning for GANs. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 3470-3477. AAAI Press.
William B. Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005).
Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, pages 1126-1135. PMLR.
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. 2017. Convolutional sequence to sequence learning.
In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, pages 1243-1252. PMLR. +Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and Bill Dolan. 2007. The third PASCAL recognizing textual entailment challenge. In Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, pages 1-9, Prague. Association for Computational Linguistics. +Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversarial examples. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. +Edward Grefenstette, Brandon Amos, Denis Yarats, Phu Mon Htut, Artem Molchanov, Franziska Meier, Douwe Kiela, Kyunghyun Cho, and Soumith Chintala. 2019. Generalized inner loop meta-learning. arXiv preprint arXiv:1910.01727. +Paulina Grnarova, Kfir Y. Levy, Aurélien Lucchi, Thomas Hofmann, and Andreas Krause. 2018. An online learning approach to generative adversarial networks. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, + +Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net. +Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. 2017. On calibration of modern neural networks. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, pages 1321-1330. PMLR. +Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2020. Deberta: Decoding-enhanced bert with disentangled attention. arXiv preprint arXiv:2006.03654. +Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2021-2031, Copenhagen, Denmark. 
Association for Computational Linguistics. +Haoming Jiang, Zhehui Chen, Yuyang Shi, Bo Dai, and Tuo Zhao. 2021. Learning to defend by learning to attack. In The 24th International Conference on Artificial Intelligence and Statistics, AISTATS 2021, April 13-15, 2021, Virtual Event, volume 130 of Proceedings of Machine Learning Research, pages 577-585. PMLR. +Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Tuo Zhao. 2020. SMART: Robust and efficient fine-tuning for pretrained natural language models through principled regularized optimization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2177-2190, Online. Association for Computational Linguistics. +Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. +Lingkai Kong, Haoming Jiang, Yuchen Zhuang, Jie Lyu, Tuo Zhao, and Chao Zhang. 2020. Calibrated language model fine-tuning for in- and out-of-distribution data. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1326–1340, Online. Association for Computational Linguistics. +Alexey Kurakin, Ian J. Goodfellow, and Samy Bengio. 2017. Adversarial machine learning at scale. In 5th International Conference on Learning Representations, ICLR 2017, Toulouse, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net. +Yan Li, Ethan X Fang, Huan Xu, and Tuo Zhao. 2019. Inductive bias of gradient descent based adversarial training on separable data. arXiv preprint arXiv:1906.02931. + +Xiaodong Liu, Hao Cheng, Pengcheng He, Weizhu Chen, Yu Wang, Hoifung Poon, and Jianfeng Gao. 2020a. Adversarial training for large neural language models. arXiv preprint arXiv:2004.08994. +Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019a. 
Multi-task deep neural networks for natural language understanding. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4487-4496, Florence, Italy. Association for Computational Linguistics. +Xiaodong Liu, Yu Wang, Jianshu Ji, Hao Cheng, Xueyun Zhu, Emmanuel Awa, Pengcheng He, Weizhu Chen, Hoifung Poon, Guihong Cao, and Jianfeng Gao. 2020b. The Microsoft toolkit of multitask deep neural networks for natural language understanding. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 118-126, Online. Association for Computational Linguistics. +Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. +Dougal Maclaurin, David Duvenaud, and Ryan P. Adams. 2015. Gradient-based hyperparameter optimization through reversible learning. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, volume 37 of JMLR Workshop and Conference Proceedings, pages 2113-2122. JMLR.org. +Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2018. Towards deep learning models resistant to adversarial attacks. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net. +Luke Metz, Ben Poole, David Pfau, and Jascha Sohl-Dickstein. 2017. Unrolled generative adversarial networks. In 5th International Conference on Learning Representations, ICLR 2017, Toulouse, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net. +Yifei Min, Lin Chen, and Amin Karbasi. 2020. The curious case of adversarially robust models: More data can help, double descend, or hurt generalization. 
arXiv preprint arXiv:2002.11080.
Takeru Miyato, Andrew M. Dai, and Ian J. Goodfellow. 2017. Adversarial training methods for semi-supervised text classification. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.
Mahdi Pakdaman Naeini, Gregory F. Cooper, and Milos Hauskrecht. 2015. Obtaining well calibrated probabilities using bayesian binning. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, January 25-30, 2015, Austin, Texas, USA, pages 2901-2907. AAAI Press.
Alexandru Niculescu-Mizil and Rich Caruana. 2005. Predicting good probabilities with supervised learning. In Machine Learning, Proceedings of the Twenty-Second International Conference (ICML 2005), Bonn, Germany, August 7-11, 2005, volume 119 of ACM International Conference Proceeding Series, pages 625-632. ACM.
Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2020. Adversarial NLI: A new benchmark for natural language understanding. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4885-4901, Online. Association for Computational Linguistics.
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 48-53, Minneapolis, Minnesota. Association for Computational Linguistics.
Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018. Scaling neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 1-9, Brussels, Belgium. Association for Computational Linguistics.
+Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 8024-8035. +Barak A Pearlmutter and Jeffrey Mark Siskind. 2008. Reverse-mode ad in a functional framework: Lambda the ultimate backpropagator. ACM Transactions on Programming Languages and Systems (TOPLAS), 30(2):1-36. +Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages + +2227-2237, New Orleans, Louisiana. Association for Computational Linguistics. +Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186-191, Brussels, Belgium. Association for Computational Linguistics. +Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9. +Aditi Raghunathan, Sang Michael Xie, Fanny Yang, John C. Duchi, and Percy Liang. 2020. Understanding and mitigating the tradeoff between robustness and accuracy. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pages 7909-7919. PMLR. 
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383-2392, Austin, Texas. Association for Computational Linguistics.
Motoki Sato, Jun Suzuki, and Shun Kiyono. 2019. Effective adversarial regularization for neural machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 204-210, Florence, Italy. Association for Computational Linguistics.
Motoki Sato, Jun Suzuki, Hiroyuki Shindo, and Yuji Matsumoto. 2018. Interpretable adversarial perturbation in input embedding space for text. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden, pages 4323-4330. ijcai.org.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Linguistics.
Ali Shafahi, Mahyar Najibi, Amin Ghiasi, Zheng Xu, John P. Dickerson, Christoph Studer, Larry S. Davis, Gavin Taylor, and Tom Goldstein. 2019. Adversarial training for free! In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 3353-3364.
Qianli Shen, Yan Li, Haoming Jiang, Zhaoran Wang, and Tuo Zhao. 2020. Deep reinforcement learning with robust and smooth policy. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pages 8707-8718. PMLR.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D.
Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642, Seattle, Washington, USA. Association for Computational Linguistics. +David Stutz, Matthias Hein, and Bernt Schiele. 2019. Disentangling adversarial robustness and generalization. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, pages 6976-6987. Computer Vision Foundation / IEEE. +David Stutz, Matthias Hein, and Bernt Schiele. 2020. Confidence-calibrated adversarial training: Generalizing to unseen attacks. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pages 9155-9166. PMLR. +Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus. 2014. Intriguing properties of neural networks. In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings. +Ngoc-Trung Tran, Viet-Hung Tran, Ngoc-Bao Nguyen, Linxiao Yang, and Ngai-Man Cheung. 2019. Self-supervised GAN: analysis and improvement with multi-class minimax game. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 13232-13243. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998-6008. +Heinrich Von Stackelberg. 2010. Market structure and equilibrium. 
Springer Science & Business Media. +Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019a. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net. + +Dilin Wang, ChengYue Gong, and Qiang Liu. 2019b. Improving neural language modeling via adversarial training. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Proceedings of Machine Learning Research, pages 6555-6565. PMLR. +Alex Warstadt, Amanpreet Singh, and Samuel R. Bowman. 2019. Neural network acceptability judgments. Transactions of the Association for Computational Linguistics, 7:625-641. +Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122, New Orleans, Louisiana. Association for Computational Linguistics. +Chen Zhu, Yu Cheng, Zhe Gan, Siqi Sun, Thomas Goldstein, and Jingjing Liu. 2019. Freelb: Enhanced adversarial training for language understanding. arXiv preprint arXiv:1909.11764. + +# A Virtual Adversarial Training + +Virtual adversarial training (VAT, Miyato et al. 
2017) solves the following min-max optimization problem:

$$
\begin{array}{l} \min_{\theta} \mathcal{F}(\theta, \delta^{*}) = \mathcal{L}(\theta) + \frac{\alpha}{n} \sum_{i=1}^{n} \ell_{v}(x_{i}, \delta_{i}^{*}, \theta), \\ \delta_{i}^{*} = \operatorname*{argmax}_{\|\delta_{i}\| \leq \epsilon} \ell_{v}(x_{i}, \delta_{i}, \theta), \end{array}
$$

where

$$
\ell_{v}(x_{i}, \delta_{i}, \theta) = \mathrm{KL}\big(f(x_{i}, \theta) \,\|\, f(x_{i} + \delta_{i}, \theta)\big).
$$

Note that the objective of the minimization problem is a function of both the model parameters and the perturbations.

Because the min problem and the max problem operate on the same loss function, i.e., the min problem seeks to minimize $\ell_v$ while the max problem tries to maximize $\ell_v$, this min-max optimization is essentially a zero-sum game, and we can find the game's equilibrium using gradient descent/ascent algorithms.

Specifically, the adversarial player first generates an initial perturbation $\delta^0$, and then refines it using $K$ steps of projected gradient ascent, i.e.,

$$
\delta^{k} = \Pi_{\|\cdot\| \leq \epsilon} \left( \delta^{k-1} + \eta \frac{\partial \ell_{v}(x, \delta^{k-1}, \theta)}{\partial \delta^{k-1}} \right),
$$

for $k = 1, \dots, K$.

Here $\Pi$ denotes projection onto the $\ell_2$-ball or the $\ell_{\infty}$-ball. Empirically, we find that these two choices yield very similar performance, although adversarially trained models are robust to $\epsilon$ within a wider range when applying the $\ell_2$ constraint.

After obtaining the $K$-step refined perturbation $\delta^K$, we use gradient descent to update the model parameters $\theta$.
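The projection $\Pi$ and the $K$-step ascent can be sketched numerically. Below is a minimal NumPy stand-in (not the paper's implementation): `grad_fn` is a hypothetical placeholder for the backpropagated gradient $\partial \ell_v / \partial \delta$, and the $\ell_2$ projection is assumed.

```python
import numpy as np

def project_l2(delta, eps):
    """Pi: project the perturbation onto the l2-ball of radius eps."""
    norm = np.linalg.norm(delta)
    return delta if norm <= eps else delta * (eps / norm)

def refine_perturbation(grad_fn, delta0, eps, eta, K):
    """K steps of projected gradient ascent on the adversarial loss.

    grad_fn(delta) stands in for d(ell_v)/d(delta), which in practice comes
    from backpropagation through the KL term.
    """
    delta = delta0
    for _ in range(K):
        delta = project_l2(delta + eta * grad_fn(delta), eps)
    return delta
```

As a sanity check, with a linear surrogate loss $\ell_v(\delta) = c^{\top}\delta$ (constant gradient $c$) the iterates move to the boundary of the $\epsilon$-ball in the direction of $c$, i.e., $\delta^K \to \epsilon \, c / \|c\|_2$.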
Concretely, the gradient of the model parameters is computed as

$$
\frac{\partial \mathcal{F}(\theta, \delta^{K})}{\partial \theta} = \frac{\mathrm{d} \mathcal{L}(\theta)}{\mathrm{d} \theta} + \frac{\alpha}{n} \sum_{i=1}^{n} \frac{\partial \ell_{v}(x_{i}, \delta_{i}^{K}, \theta)}{\partial \theta}. \tag{7}
$$

The training algorithm is demonstrated in Algorithm 2.

Note that in this paper, we target models' generalization performance on the unperturbed test data; therefore, we do not want a strong adversary that "traps" the model parameters in a bad local optimum. Most of the existing algorithms achieve this goal by carefully tuning the hyper-parameters $\epsilon$ and $K$: a small $\epsilon$ usually generates weaker adversaries, and so does a small $K$. However, these heuristics do not always work well, and at times $\delta^K$ is too strong. Consequently, conventional adversarial training results in undesirable underfitting on the clean data.

# Algorithm 2: Virtual Adversarial Training.

Input: $\mathcal{D}$: dataset; $T$: total number of training iterations; $\sigma^2$: variance of initial perturbations; $K$: number of inner training iterations; $\eta$: step size to update $\delta$; Optimizer: optimizer to update $\theta$.

Initialize: model parameters $\theta$

for $t = 1, \dots, T$ do

&nbsp;&nbsp;for $(x, y) \in \mathcal{D}$ do

&nbsp;&nbsp;&nbsp;&nbsp;Initialize $\delta^{0} \sim \mathcal{N}(0, \sigma^{2} I)$

&nbsp;&nbsp;&nbsp;&nbsp;for $k = 1, \dots, K$ do

&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$g^{k} \gets \partial \ell_{v}(x, \delta^{k-1}, \theta) / \partial \delta^{k-1}$;

&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$\delta^{k} \gets \Pi(\delta^{k-1} + \eta g^{k})$;

&nbsp;&nbsp;&nbsp;&nbsp;Compute the gradient $g_{\theta}$ using Eq. 7;

&nbsp;&nbsp;&nbsp;&nbsp;$\theta \gets \mathrm{Optimizer}(g_{\theta})$

Output: $\theta$

# B Training Details

# B.1 Neural Machine Translation

For the rich-resource WMT'16 En-De dataset, we use the pre-processed data from Ott et al. (2018)7.
For the low-resource datasets, we use byte-pair encoding (Sennrich et al., 2016) with 10,000 merge operations to build the vocabulary for the IWSLT ('14, '15, '16) datasets. We follow the scripts in Ott et al. (2019)8 for other pre-processing steps.

We use Adam (Kingma and Ba, 2015) as the optimizer for the leader (i.e., the upper-level problem that solves for model parameters), and we set $\beta = (0.9, 0.98)$. The optimizer for the follower (i.e., the lower-level problem that solves for perturbations) is chosen from Adam and SGD; we observe only marginal empirical differences between these two choices. For low-resource translation, we set the batch size to be equivalent to 64k tokens. For example, when running the experiments on 4 GPUs,
| Corpus | Task | #Train | #Dev | #Test | #Label | Metrics |
| --- | --- | --- | --- | --- | --- | --- |
| *Single-Sentence Classification (GLUE)* |  |  |  |  |  |  |
| CoLA | Acceptability | 8.5k | 1k | 1k | 2 | Matthews corr |
| SST | Sentiment | 67k | 872 | 1.8k | 2 | Accuracy |
| *Pairwise Text Classification (GLUE)* |  |  |  |  |  |  |
| MNLI | NLI | 393k | 20k | 20k | 3 | Accuracy |
| RTE | NLI | 2.5k | 276 | 3k | 2 | Accuracy |
| QQP | Paraphrase | 364k | 40k | 391k | 2 | Accuracy/F1 |
| MRPC | Paraphrase | 3.7k | 408 | 1.7k | 2 | Accuracy/F1 |
| QNLI | QA/NLI | 108k | 5.7k | 5.7k | 2 | Accuracy |
| *Text Similarity (GLUE)* |  |  |  |  |  |  |
| STS-B | Similarity | 7k | 1.5k | 1.4k | 1 | Pearson/Spearman corr |
Table 7: Summary of the GLUE benchmark.
| Dataset | Batch | $\mathrm{lr}_{\mathrm{leader}}$ | $\mathrm{lr}_{\mathrm{follower}}$ | $\sigma$ | $\epsilon$ | $K$ | Beam | Len-Pen |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| En-Vi (IWSLT'15) | 64k | $1 \times 10^{-3}$ | $1 \times 10^{-5}$ | $1 \times 10^{-4}$ | 0.1 | 1 | 10 | 1.0 |
| De-En (IWSLT'14) | 64k | $1 \times 10^{-3}$ | $1 \times 10^{-4}$ | $1 \times 10^{-4}$ | 0.3 | 1 | 9 | 1.5 |
| Fr-En (IWSLT'16) | 64k | $1 \times 10^{-3}$ | $1 \times 10^{-5}$ | $1 \times 10^{-5}$ | 0.3 | 1 | 10 | 2.0 |
| En-De (WMT'16) | 450k | $1 \times 10^{-3}$ | $1 \times 10^{-4}$ | $1 \times 10^{-4}$ | 0.3 | 1 | 4 | 0.6 |
Table 8: Hyper-parameters for machine translation. Here, $\sigma$ is the standard deviation of the initial perturbations, $\epsilon$ is the perturbation strength, $K$ is the number of unrolling steps, Beam is the size of beam search, and Len-Pen is the length penalty parameter during beam search.

we set the tokens-per-GPU to be 8,000, and we accumulate gradients for 2 steps. For rich-resource translation, we set the batch size to be equivalent to $450\mathrm{k}$ tokens. In all the experiments, we constrain each perturbation according to its sentence-level $\ell_2$ norm, i.e., $\|\delta\|_2 \leq \epsilon$. Other hyper-parameters are specified in Table 8.

# B.2 Natural Language Understanding

Details of the GLUE benchmark, including tasks, statistics, and evaluation metrics, are summarized in Table 7.

We use Adam as both the leader's and the follower's optimizer, and we set $\beta = (0.9, 0.98)$. The learning rate of the leader $\mathrm{lr}_{\mathrm{leader}}$ is chosen from $\{5 \times 10^{-5}, 1 \times 10^{-4}, 5 \times 10^{-4}\}$, and the follower's learning rate is chosen from $\{1 \times 10^{-5}, \mathrm{lr}_{\mathrm{leader}}\}$. We choose the batch size from $\{4, 8, 16, 32\}$, and we train for a maximum of 6 epochs with early stopping based on the results on the development set. We apply gradient norm clipping of 1.0. We set the dropout rate in task-specific layers to 0.1. We choose the standard deviation of initial perturbations $\sigma$ from $\{1 \times 10^{-5}, 1 \times 10^{-4}\}$, and $\ell_2$ constraints with perturbation strength $\epsilon = 1.0$ are applied. We set the unrolling steps $K = 2$. We report the best performance on each dataset individually.

# C Model Calibration

Many applications require trustworthy predictions that need to be not only accurate but also well calibrated (Kong et al., 2020). A well-calibrated model is expected to output prediction confidence comparable to its classification accuracy.
For example, given 100 data points with prediction confidence 0.6, we expect 60 of them to be correctly classified. More precisely, for a data point $X$, we denote by $Y(X)$ the ground truth label, $\widehat{Y}(X)$ the label predicted by the model, and $\widehat{P}(X)$ the output probability associated with the predicted label. The calibration error of the predictive model for a given confidence $p \in (0,1)$ is defined as:

$$
\mathcal{E}_{p} = \left| \mathbb{P}\left[ \widehat{Y}(X) = Y(X) \mid \widehat{P}(X) = p \right] - p \right|. \tag{8}
$$

Since Eq. 8 involves population quantities, we usually adopt empirical approximations (Guo et al., 2017) to estimate the calibration error. Specifically, we partition all data points into 10 bins of equal size according to their prediction confidence. Let $\mathcal{B}_m$ denote the bin with prediction confidence bounded between $\ell_m$ and $u_{m}$. Then, for any $p \in [\ell_m, u_m)$, we define the empirical calibration error as:

$$
\widehat{\mathcal{E}}_{p} = \widehat{\mathcal{E}}_{m} = \frac{1}{|\mathcal{B}_{m}|} \Big| \sum_{i \in \mathcal{B}_{m}} \left[ \mathbf{1}(\widehat{y}_{i} = y_{i}) - \widehat{p}_{i} \right] \Big|, \tag{9}
$$

where $y_{i}$, $\widehat{y}_{i}$ and $\widehat{p}_{i}$ are the true label, predicted label and confidence for sample $i$.

Reliability Diagram is a bar plot that shows the per-bin accuracy against each bin's confidence $p$. A perfectly calibrated model would have per-bin accuracy equal to $(\ell_m + u_m)/2$, the midpoint of each bin.

Expected Calibration Error (ECE) is the weighted average of the calibration errors of all bins (Naeini et al., 2015), defined as:

$$
\mathrm{ECE} = \sum_{m=1}^{M} \frac{|\mathcal{B}_{m}|}{n} \widehat{\mathcal{E}}_{m}, \tag{10}
$$

where $n$ is the sample size.
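Eqs. 9 and 10 translate directly into code. The sketch below is a minimal NumPy version using equal-width confidence bins (a common convention, e.g. in Guo et al. 2017; the paper's exact binning may differ):

```python
import numpy as np

def expected_calibration_error(conf, pred, label, n_bins=10):
    """ECE (Eq. 10): weighted average of per-bin calibration errors (Eq. 9)."""
    conf = np.asarray(conf, dtype=float)
    correct = (np.asarray(pred) == np.asarray(label)).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    n, ece = len(conf), 0.0
    for m in range(n_bins):
        lo, hi = edges[m], edges[m + 1]
        # Last bin is closed on the right so confidence 1.0 is not dropped.
        in_bin = (conf >= lo) & ((conf < hi) if m < n_bins - 1 else (conf <= hi))
        if in_bin.any():
            # Eq. 9: | mean over the bin of (1(y_hat = y) - p_hat) |
            err = abs(np.mean(correct[in_bin] - conf[in_bin]))
            ece += (in_bin.sum() / n) * err
    return ece
```

A bin whose average confidence matches its accuracy contributes zero; overconfident bins contribute their (weighted) accuracy-confidence gap.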
We remark that the goal of calibration is to minimize the calibration error without significantly sacrificing prediction accuracy. Otherwise, a random guess classifier can achieve zero calibration error.
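The degenerate case in this remark can be checked numerically: a binary classifier that guesses uniformly at random with constant confidence 0.5 is useless (chance-level accuracy) yet essentially perfectly calibrated. A small self-contained check (plain Python, not from the paper):

```python
import random

random.seed(0)
n = 10000
# Random guesser: predicts label 0 or 1 uniformly, always with confidence 0.5.
labels = [random.randint(0, 1) for _ in range(n)]
preds = [random.randint(0, 1) for _ in range(n)]

# All points fall into the single confidence bin around 0.5, so the
# calibration error reduces to |accuracy - confidence| = |acc - 0.5|.
acc = sum(p == y for p, y in zip(preds, labels)) / n
ece = abs(acc - 0.5)
print(acc, ece)  # acc near 0.5, so ece is near 0
```

Chance-level accuracy keeps `acc` close to 0.5, driving the calibration error toward zero even though the classifier carries no information.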
# Adversarial Scrubbing of Demographic Information for Text Classification

Somnath Basu Roy Chowdhury

Sayan Ghosh

Yiyuan Li

Junier B.
Oliva

{somnath, sayghosh, yiyuanli, joliva}@cs.unc.edu

Shashank Srivastava, Snigdha Chaturvedi

{ssrivastava, snigdha}@cs.unc.edu

UNC Chapel Hill

# Abstract

Contextual representations learned by language models can often encode undesirable attributes, like demographic associations of the users, while being trained for an unrelated target task. We aim to scrub such undesirable attributes and learn fair representations while maintaining performance on the target task. In this paper, we present an adversarial learning framework, "Adversarial Scrubber" (ADS), to debias contextual representations. We perform theoretical analysis to show that our framework converges without leaking demographic information under certain conditions. We extend previous evaluation techniques by evaluating debiasing performance using Minimum Description Length (MDL) probing. Experimental evaluations on 8 datasets show that ADS generates representations with minimal information about demographic attributes while being maximally informative about the target task.

# 1 Introduction

Automated systems are increasingly being used for real-world applications like filtering college applications (Basu et al., 2019), determining credit eligibility (Ghailan et al., 2016), and making hiring decisions (Chalfin et al., 2016). For such tasks, predictive models are trained on data coming from human decisions, which are often biased against certain demographic groups (Mehrabi et al., 2019; Blodgett et al., 2020; Shah et al., 2020). Biased decisions based on demographic attributes can have lasting economic, social and cultural consequences.

Natural language text is highly indicative of demographic attributes of the author (Koppel et al., 2002; Burger et al., 2011; Nguyen et al., 2013; Verhoeven and Daelemans, 2014; Weren et al., 2014; Rangel et al., 2016; Verhoeven et al., 2016; Blodgett et al., 2016).
Language models can often encode such demographic associations even without having direct access to them. Prior works have shown that intermediate representations in a deep learning model encode demographic associations of the author or of the person being spoken about (Blodgett et al., 2016; Elazar and Goldberg, 2018; Elazar et al., 2021). Therefore, it is important to ensure that decision functions do not make predictions based on such representations.

In this work, we focus on removing demographic attributes encoded in data representations while training text classification systems. To this end, we present "Adversarial Scrubber" (ADS) to remove information pertaining to protected attributes (like gender or race) from intermediate representations during training for a target task (like hate speech detection). Removal of such features ensures that any prediction model built on top of those representations will be agnostic to demographic information during decision-making.

ADS can be used as a plug-and-play module while training any text classification model to learn fair intermediate representations. The framework consists of 4 modules: Encoder, Scrubber, Bias discriminator and Target classifier. The Encoder generates a contextual representation of an input text. Taking these encoded contextual representations as input, the Scrubber tries to produce fair representations for the target task. The Bias discriminator and Target classifier predict the protected attribute and the target label, respectively, from the Scrubber's output. The framework is trained end-to-end in an adversarial manner (Goodfellow et al., 2014).

We provide theoretical analysis to show that, under certain conditions, the Encoder and Scrubber converge without leaking information about the protected attribute. We evaluate our framework on 5 dialogue datasets, 2 Twitter-based datasets and a Biographies dataset with different target task and protected attribute settings. We extend previous evaluation methodology for debiasing by measuring the Minimum Description Length (MDL) (Voita and Titov, 2020) of labels given representations,
We extend previous evaluation methodology for debiasing by measuring Minimum Description Length (MDL) (Voita and Titov, 2020) of labels given representations, + +instead of probing accuracy. MDL provides a finer-grained evaluation benchmark for measuring debiasing performance. We compute MDL using off-the-shelf classifiers1 making it easier to reproduce. Upon training using ADS framework, we observe a significant gain in MDL for protected attribute prediction as compared to fine-tuning for the target task. Our contributions are: + +- We present Adversarial Scrubber (ADS), an adversarial framework to learn fair representations for text classification. +- We provide theoretical guarantees to show that Scrubber and Encoder converge without leaking demographic information. +- We extend previous evaluation methodology for adversarial debiasing by framing performance in terms of MDL. +- Experimental evaluations on 8 datasets show that models trained using ADS generate representations where probing networks achieve near random performance on protected attribute inference while performing similar to the baselines on target task. +- We show that ADS is scalable and can be used to remove multiple protected attributes simultaneously. + +# 2 Related Work + +Contextual representations learned during training for a target task can be indicative of features unrelated to the task. Such representations can often encode undesirable demographic attributes, as observed in unsupervised word embeddings (Bolukbasi et al., 2016) and sentence embeddings (May et al., 2019). Prior work has analysed bias in different NLP systems like machine translation (Park et al., 2018; Stanovsky et al., 2019; Font and Costa-Jussa, 2019; Saunders and Byrne, 2020), NLI (Rudinger et al., 2017), text classification (Dixon et al., 2018; Kiritchenko and Mohammad, 2018; Sap et al., 2019; Liu et al., 2021), language generation (Sheng et al., 2019) among others. 
Debiasing sensitive attributes for fair classification was introduced as an optimization problem by Zemel et al. (2013). Since then, adversarial training (Goodfellow et al., 2014) frameworks have been explored for protecting sensitive attributes in NLP tasks (Zhang et al., 2018; Li et al., 2018; Elazar and Goldberg, 2018; Liu et al., 2020).

![](images/d4f0bce819ec62d3074b1866c7a67ef1e696a49517782330b3adb6c2a2497a53.jpg)
Figure 1: Architecture of the Adversarial Scrubber (ADS). The Encoder receives an input $x$ to produce $e$. The Scrubber uses $e$ to produce $u$. The Bias discriminator $d$ and Target classifier $c$ infer the protected attribute $z$ and the target task label $y$ from $u$.

Our work is most similar to Elazar and Goldberg (2018), which achieves fairness by blindness by learning intermediate representations that are oblivious to a protected attribute. We compare the performance of ADS with Elazar and Goldberg (2018) in our experiments.

# 3 Adversarial Scrubber

ADS takes text documents $\{x_{1}, x_{2}, \ldots, x_{n}\}$ as input from a dataset $\mathcal{D}$ with corresponding target labels $\{y_{1}, y_{2}, \ldots, y_{n}\}$. Every input $x_{i}$ is also associated with a protected attribute $z_{i} \in \{1, 2, \ldots, K\}$. Our goal is to construct a model $f(x)$ such that it does not rely on $z_{i}$ while making the prediction $y_{i} = f(x_{i})$. The framework consists of 4 modules: (i) Encoder $h(\cdot)$ with weights $\theta_{h}$, (ii) Scrubber $s(\cdot)$ with weights $\theta_{s}$, (iii) Bias discriminator $d(\cdot)$ with weights $\theta_{d}$, and (iv) Target classifier $c(\cdot)$ with weights $\theta_{c}$, as shown in Figure 1. The Encoder receives a text input $x_{i}$ and produces an embedding $e_{i} = h(x_{i})$, which is forwarded to the Scrubber.
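As a toy sketch of this data flow (hypothetical dimensions and random stand-in weights, not the paper's trained modules), the four components simply compose:

```python
import random

random.seed(0)

def toy_module(dim_in, dim_out):
    """A random linear map standing in for a trained module (illustration only)."""
    W = [[random.uniform(-1, 1) for _ in range(dim_in)] for _ in range(dim_out)]
    return lambda v: [sum(w * x for w, x in zip(row, v)) for row in W]

h = toy_module(8, 4)  # Encoder:            x -> e = h(x)
s = toy_module(4, 4)  # Scrubber:           e -> u = s(h(x))
c = toy_module(4, 3)  # Target classifier:  u -> logits over y
d = toy_module(4, 2)  # Bias discriminator: u -> logits over z

x = [random.uniform(-1, 1) for _ in range(8)]  # stand-in for an encoded text input
u = s(h(x))
y_logits, z_logits = c(u), d(u)
```

Both heads read only the Scrubber output $u$, which is why scrubbing $u$ suffices to make any downstream predictor blind to $z$.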
The goal of the Scrubber is to produce a representation $u_{i} = s(h(x_{i}))$ such that $y_{i}$ can be easily inferred from $u_{i}$ by the Target classifier $c$, but $u_{i}$ does not carry the information required for the Bias discriminator $d$ to predict the protected attribute $z_{i}$. Our setup also includes a Probing network $q$, which helps in evaluating the fairness of the learned representations.

# Algorithm 1 ADS Training algorithm

1: for number of training iterations do
2: Sample a minibatch $\{x_i, y_i, z_i\}_{i=1}^m \sim \mathcal{D}$
3: Update the Bias discriminator $d$ using the gradients:

$$
\nabla_{\theta_d} \frac{1}{m} \sum_{i=1}^{m} \mathcal{L}_d\left(d(u_i), z_i\right) \tag{1}
$$

4: Update the Encoder $h$, Scrubber $s$, and Target classifier $c$ using the gradients:

$$
\nabla_{\theta_c, \theta_s, \theta_h} \frac{1}{m} \sum_{i=1}^{m} \left[ \mathcal{L}_c\left(c(u_i), y_i\right) - \lambda_1 H\left(d(u_i)\right) + \lambda_2 \delta\left(d(u_i)\right) \right] \tag{2}
$$

In the rest of this section, we describe ADS assuming a single Bias discriminator. However, ADS can easily be extended to incorporate multiple discriminators for removing several protected attributes (discussed in Section 6.1).

Scrubber: The Scrubber receives the input representation $h(x_{i})$ from the Encoder and generates the representation $u_{i} = s(h(x_{i}))$. The goal of the Scrubber is to produce representations such that the Bias discriminator finds it difficult to predict the protected attribute $z_{i}$. To this end, we consider two loss functions:

Entropy loss: In the Entropy loss, the Encoder and Scrubber parameters are jointly optimized to increase the entropy of the prediction probability distribution, $H(d(u_i))$.
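To make the entropy term concrete: $H(d(u_i))$ is maximized when the discriminator's predictive distribution is uniform over the $K$ classes, i.e., when the discriminator is maximally confused. A minimal stdlib sketch:

```python
import math

def entropy(p):
    """Shannon entropy H(p) of a discrete distribution, in nats."""
    return -sum(q * math.log(q) for q in p if q > 0)

confident = [0.97, 0.02, 0.01]    # discriminator is sure about z: low entropy
confused = [1 / 3, 1 / 3, 1 / 3]  # uniform over K = 3 classes: entropy log K
```

Maximizing $H(d(u_i))$ therefore pushes the discriminator output toward the uniform distribution, whose entropy $\log K$ is the largest attainable.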
$\delta$ loss: The $\delta$-loss function penalizes the model if the discriminator assigns a high probability to the correct protected-attribute class. For every input instance, we form an output mask $m_{i}\in \mathbb{R}^{1\times K}$, where $K$ is the number of protected attribute classes: $m_i^{(k)} = 1$ if $z_{i} = k$ and 0 otherwise. The Encoder and Scrubber minimize the $\delta$-loss defined as:

$$
\delta(d(u_i)) = m_i^T \operatorname{softmax}_{\mathrm{gumbel}}(d(u_i)) \tag{3}
$$

where $\operatorname{softmax}_{\mathrm{gumbel}}(\cdot)$ is the Gumbel softmax function (Jang et al., 2017). In our experiments, we use a combination of the entropy and $\delta$ losses.

Target classifier: The Target classifier predicts the target label $y_{i}$ from $u_{i}$ by optimizing the cross-entropy loss $\mathcal{L}_c(c(u_i),y_i)$.

The Scrubber, Target classifier, and Encoder parameters are updated simultaneously to minimize the following loss:

$$
\mathcal{L}_s(e_i, y_i) = \mathcal{L}_c(c(u_i), y_i) - \lambda_1 H(d(u_i)) + \lambda_2 \delta(d(u_i)) \tag{4}
$$

where $\lambda_{1}$ and $\lambda_{2}$ are positive hyperparameters.

Bias discriminator: The Bias discriminator, which predicts the protected attribute $z_{i}$, is trained to reduce the cross-entropy loss for predicting $z_{i}$, denoted $\mathcal{L}_d(d(u_i),z_i)$. The discriminator output is $d(u_{i})\in \mathbb{R}^{K}$, where $K$ is the number of protected attribute classes.

Training: The Bias discriminator and the Scrubber (along with the Target classifier and Encoder) are trained in an iterative manner, as shown in Algorithm 1. First, the Bias discriminator is updated using gradients from the loss in Equation 1. Then, the Encoder, Scrubber and Target classifier are updated simultaneously using the gradients shown in Equation 2.
Probing Network: Elazar and Goldberg (2018) showed that, in an adversarial setup, even when the discriminator achieves random performance at predicting $z$, it is still possible to retrieve $z$ using a separately trained classifier. Therefore, to evaluate the amount of information related to $y$ and $z$ present in the representations $u$, we use a probing network $q$. After ADS is trained, we train $q$ on the representations $h(x)$ and $s(h(x))$ to predict $y$ and $z$ ($q$ is trained to predict $y$ and $z$ separately). We consider a representation to leak information if $z$ can be predicted from it with above-random performance. If the prediction performance of $q$ for $z$ is significantly above the random baseline, it means that information about the protected attribute leaks through and the attribute is not successfully guarded.

# 4 Theoretical Analysis

Proposition 1. Minimizing $\mathcal{L}_s$ is equivalent to increasing the Bias discriminator loss $\mathcal{L}_d$.

Proof: The entropy and $\delta$-loss components of $\mathcal{L}_s$ both act to increase the Bias discriminator loss. The discriminator cross-entropy loss $\mathcal{L}_d$ can be written as:

$$
\mathcal{L}_d(v_i, o_i) = H(v_i, o_i) = D_{KL}(v_i \,\|\, o_i) + H(v_i) \tag{5}
$$

where $o_i = d(u_i)$ is the Bias discriminator output probability distribution and $v_i$ is the one-hot target distribution $\{v_i \in \mathbb{R}^K, v_i^k = 1 \mid z_i = k\}$. As $H(o_i)$ increases (Equation 4), $D_{KL}(v_i \,\|\, o_i)$ also increases (since $v_i$ is a one-hot vector), thereby increasing $\mathcal{L}_d(v_i, o_i)$ (Equation 5). Therefore, $\mathcal{L}_d$ increases as we minimize the Scrubber loss component $-H(o_i)$.

The same holds true for the $\delta$-loss component.
$\delta(o_i)$ reduces the probability assigned to the true output class, which increases the cross-entropy loss $\mathcal{L}_d$ (a detailed proof is provided in Appendix A.2 due to space constraints). Minimizing the entropy and $\delta$-loss components of the Scrubber loss $\mathcal{L}_s$ thus increases $\mathcal{L}_d$ for a fixed Bias discriminator. Therefore, assuming our framework converges to $(\theta_s^*, \theta_h^*, \theta_d^*)$ using gradient updates from $\mathcal{L}_s$, we have:

$$
\mathcal{L}_d\left(\theta_s^*, \theta_h^*, \theta_d^*\right) \geq \mathcal{L}_d\left(\theta_s, \theta_h, \theta_d^*\right) \tag{6}
$$

where $(\theta_s,\theta_h)$ can be any Scrubber and Encoder parameter setting.

Proposition 2. Let the discriminator loss $\mathcal{L}_d$ be convex in $\theta_d$, and continuously differentiable for all $\theta_d$. Let us assume the following:

(a) $\theta_h^{(0)}$ and $\theta_s^{(0)}$ are Encoder and Scrubber parameters for which the Scrubber output representation $s(h(x))$ does not have any information about $z$ (one trivial case would be $s(h(x)) = \vec{0}$, if $\theta_s = \vec{0} \vee \theta_h = \vec{0}$).
(b) $\theta_d^{(0)}$ minimizes $\mathcal{L}_d$ when $s(h(x))$ does not have any information about $z$ (this is achieved when $d(\cdot)$ always predicts the majority baseline for $z$). $\forall (\theta_s, \theta_h)$, the following holds true:

$$
\mathcal{L}_d(\theta_s, \theta_h, \theta_d^{(0)}) = \mathcal{L}_d(\theta_s^{(0)}, \theta_h^{(0)}, \theta_d^{(0)})
$$

(c) the adversarial framework converges with parameters $\theta_s^*$, $\theta_h^*$ and $\theta_d^*$.

Then, $\mathcal{L}_d(\theta_s^*,\theta_h^*,\theta_d^*) = \mathcal{L}_d(\theta_s^*,\theta_h^*,\theta_d^{(0)})$, which implies that the Bias discriminator loss does not
+ +Proof: As the Bias discriminator converges to $\theta_d^*$ , we have: + +$$ +\mathcal {L} _ {d} \left(\theta_ {s} ^ {*}, \theta_ {h} ^ {*}, \theta_ {d} ^ {*}\right) \leq \mathcal {L} _ {d} \left(\theta_ {s} ^ {*}, \theta_ {h} ^ {*}, \theta_ {d} ^ {(0)}\right) \tag {7} +$$ + +$\theta_h$ and $\theta_s$ are updated using gradients from $\mathcal{L}_s$ (Equation 4). Since the Encoder and the Scrubber parameters converge to $\theta_h^*$ and $\theta_s^*$ respectively, from Proposition 1 (Equation 6) we have: + +$$ +\mathcal {L} _ {d} \left(\theta_ {s} ^ {*}, \theta_ {h} ^ {*}, \theta_ {d} ^ {*}\right) \geq \mathcal {L} _ {d} \left(\theta_ {s} ^ {(0)}, \theta_ {h} ^ {(0)}, \theta_ {d} ^ {*}\right) \tag {8} +$$ + +We can show that: + +$$ +\begin{array}{l} \mathcal {L} _ {d} (\theta_ {s} ^ {*}, \theta_ {h} ^ {*}, \theta_ {d} ^ {(0)}) \\ \geq \mathcal {L} _ {d} \left(\theta_ {s} ^ {*}, \theta_ {h} ^ {*}, \theta_ {d} ^ {*}\right) \quad (Equation7) \\ \geq \mathcal {L} _ {d} \left(\theta_ {s} ^ {(0)}, \theta_ {h} ^ {(0)}, \theta_ {d} ^ {*}\right) \quad (Equation8) \\ \geq \mathcal {L} _ {d} \left(\theta_ {s} ^ {(0)}, \theta_ {h} ^ {(0)}, \theta_ {d} ^ {(0)}\right) \quad (\text {A s s u m p t i o n 2 b}) \\ = \mathcal {L} _ {d} \left(\theta_ {s} ^ {*}, \theta_ {h} ^ {*}, \theta_ {d} ^ {(0)}\right) \quad \text {(A s s u m p t i o n 2 b)} (9) \\ \end{array} +$$ + +Therefore, $\mathcal{L}_d(\theta_s^*,\theta_h^*,\theta_d^*) = \mathcal{L}_d(\theta_s^*,\theta_h^*,\theta_d^{(0)})$ + +Proposition 3. Let us assume that the Bias discriminator $d(\cdot)$ is strong enough to achieve optimal accuracy of predicting $z$ from $s(h(x))$ and assumptions in Proposition 2 hold true. Then, Encoder and Scrubber converge to $(\theta_h^*, \theta_s^*)$ without leaking information about the protected attribute $z$ . + +Proof: An optimal Bias discriminator $d(\cdot)$ minimizes the prediction entropy, thereby increasing the entropy and $\delta$ -loss. 
Given $(\theta_h^{(0)},\theta_s^{(0)})$, the Scrubber loss $\mathcal{L}_s$ is maximized for an optimal $\theta_d^{(0)}$ (from Proposition 1, $\mathcal{L}_s(\theta_s^{(0)},\theta_h^{(0)},\theta_d^{(0)}) \geq \mathcal{L}_s(\theta_s^{(0)},\theta_h^{(0)},\theta_d)$, since $\mathcal{L}_d$ decreases as $\delta(o_i)$ and $-H(o_i)$ decrease). Then, for any other discriminator $\theta_d^*$ we have:

$$
\mathcal{L}_s\left(\theta_s^{(0)}, \theta_h^{(0)}, \theta_d^*\right) \leq \mathcal{L}_s\left(\theta_s^{(0)}, \theta_h^{(0)}, \theta_d^{(0)}\right) \tag{10}
$$

Following assumption 2b, where $\theta_d^{(0)}$ is the optimal Bias discriminator, we can show that:

$$
\begin{aligned}
\mathcal{L}_s(\theta_s^{(0)}, \theta_h^{(0)}, \theta_d^{(0)})
&\geq \mathcal{L}_s\left(\theta_s^{(0)}, \theta_h^{(0)}, \theta_d^*\right) && \text{(Equation 10)} \\
&\geq \mathcal{L}_s\left(\theta_s^*, \theta_h^*, \theta_d^*\right) && (\mathcal{L}_s \text{ converges})
\end{aligned} \tag{11}
$$

Therefore, $\mathcal{L}_s(\theta_s^*,\theta_h^*,\theta_d^*)\leq \mathcal{L}_s(\theta_s^{(0)},\theta_h^{(0)},\theta_d^{(0)})$.

From $(\theta_s^{(0)},\theta_h^{(0)},\theta_d^{(0)})$, our framework converges to $(\theta_s^*,\theta_h^*,\theta_d^*)$ as the Scrubber loss $\mathcal{L}_s$ decreases
(Equation 11). Then, from Proposition 2 we have:

$$
\mathcal{L}_d(\theta_s^*, \theta_h^*, \theta_d^*) \geq \mathcal{L}_d(\theta_s^{(0)}, \theta_h^{(0)}, \theta_d^{(0)})
$$

As $\mathcal{L}_d$ does not decrease and $d(\cdot)$ is optimal, no additional information about $z$ is revealed that the Bias discriminator could leverage to reduce $\mathcal{L}_d$. This shows that, starting from $(\theta_s^{(0)},\theta_h^{(0)},\theta_d^{(0)})$ where the assumptions in Proposition 2 hold, our framework converges to $(\theta_s^*,\theta_h^*,\theta_d^*)$ without revealing information about $z$.

# 5 Experiments

In this section, we describe our experimental setup and evaluate ADS on several benchmark datasets.

# 5.1 Dataset

We evaluate ADS on 5 dialogue datasets, 2 Twitter-based datasets and a Biographies dataset.

| DATASET | Train | Dev | Test |
| --- | --- | --- | --- |
| Funpedia | 24K | 2.9K | 2.9K |
| Wizard | 3.5K | 0.1K | 0.1K |
| ConvAI2 | 69K | 4.5K | 4.5K |
| LIGHT | 38K | 2.2K | 4.5K |
| OpenSub | 210K | 25K | 29K |
| DIAL | 166K | - | 151K |
| PAN16 (gender) | 160K | - | 9K |
| PAN16 (age) | 160K | - | 10K |
| Biographies | 257K | 40K | 99K |

Table 1: Dataset statistics.

(a) Multi-dimensional bias in dialogue systems: We evaluate ADS on 5 dialogue datasets: Funpedia, ConvAI2, Wizard, LIGHT and OpenSub, introduced by Dinan et al. (2020). These datasets are annotated with multi-dimensional gender labels: the gender of the person being spoken about, the gender of the person being spoken to, and the gender of the speaker. We consider the gender of the person being spoken about as our protected attribute. The target task in our setup is sentiment classification. For obtaining the target label, we label all instances using the rule-based sentiment classifier VADER (Hutto and Gilbert, 2014) into three classes: positive, negative and neutral. The dialogue datasets Funpedia, Wizard, ConvAI2, LIGHT and OpenSub were downloaded from the "md_gender" dataset in the huggingface library. We use the same data split provided in huggingface for these datasets.
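For reference, VADER reports a compound polarity score in $[-1, 1]$; its customary cutoffs of $\pm 0.05$ give a three-way labeling. The sketch below covers only the thresholding step (the scores themselves would come from the VADER package, and the 0.05 cutoff is that tool's conventional default, not a detail stated in this paper):

```python
def vader_label(compound, cutoff=0.05):
    """Map a VADER compound polarity score in [-1, 1] to a sentiment class."""
    if compound >= cutoff:
        return "positive"
    if compound <= -cutoff:
        return "negative"
    return "neutral"
```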
| DATASET | $z$ | $y$ | Epoch | $\lambda_1$ | $\lambda_2$ |
| --- | --- | --- | --- | --- | --- |
| Funpedia | Gender (3) | Sentiment (3) | 2 | 1 | 1 |
| Wizard | Gender (2) | Sentiment (3) | 3 | 1 | 0 |
| ConvAI2 | Gender (2) | Sentiment (3) | 1 | 1 | 0 |
| LIGHT | Gender (2) | Sentiment (3) | 2 | 1 | 0 |
| OpenSub | Gender (2) | Sentiment (3) | 2 | 1 | 0 |
| DIAL | Race (2) | Sentiment (2) | 8 | 10 | 0 |
| PAN16 | Gender (2) | Mention (2) | 5 | 10 | 0 |
| PAN16 | Age (2) | Mention (2) | 3 | 10 | 0 |
| Biographies | Gender (2) | Occupation (28) | 2 | 10 | 0 |
Table 2: Hyperparameter settings. Each entry for $z$ and $y$ is shown in the format "Attribute Name (c)", where $c$ is the number of classes for that attribute.

(b) Tweet classification: We experiment on two Twitter datasets. First, we consider the DIAL dataset (Blodgett et al., 2016), where each tweet is annotated with the "race" information of the author, which is our protected attribute, and the target task is sentiment classification. We consider two race categories: non-Hispanic blacks and whites. Second, we consider the PAN16 dataset (Rangel et al., 2016), where each tweet is annotated with the author's age and gender information, both of which are protected attributes. The target task is mention detection. We use the implementation of Elazar and Goldberg (2018) to annotate both datasets.

(c) Biography classification: We evaluate ADS on the Biographies dataset (De-Arteaga et al., 2019). The target task involves classification of biographies into 28 different profession categories, and the protected attribute is the gender of the person. The dataset has been downloaded and processed from an open-sourced project. We use the same train-dev-test split of 65:10:25 as the authors.

All datasets used in our experiments are balanced. The dataset statistics are reported in Table 1.

# 5.2 Implementation details

We use a 2-layer feed-forward neural network with ReLU non-linearity as our Scrubber network $s$. We use BERT-base (Devlin et al., 2019) as our Encoder $h$. The Bias discriminator $d$ and Target classifier $c$ take the pooled BERT [CLS] representation, followed by a single-layer neural network. All models were trained using the AdamW optimizer with a learning rate of $2 \times 10^{-5}$. Hyperparameter details for the different datasets are given in Table 2. The $z$ and $y$ columns in the table report the protected attribute and the target task for each dataset.
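The Gumbel softmax inside the $\delta$-loss (Equation 3) can be sketched with the standard library; the logits and temperature below are illustrative values, not ones taken from the paper:

```python
import math
import random

random.seed(0)

def gumbel_softmax(logits, tau=1.0):
    """Softmax over logits perturbed with Gumbel(0, 1) noise (Jang et al., 2017)."""
    noisy = [(l - math.log(-math.log(random.random()))) / tau for l in logits]
    m = max(noisy)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in noisy]
    total = sum(exps)
    return [e / total for e in exps]

def delta_loss(logits, z):
    """delta(d(u)): probability mass the relaxed sample places on the true class z."""
    return gumbel_softmax(logits)[z]

probs = gumbel_softmax([2.0, 0.5, -1.0])
loss = delta_loss([2.0, 0.5, -1.0], z=0)
```

Driving `loss` down means the discriminator's (stochastically relaxed) output puts less mass on the true protected-attribute class, which is exactly what the Scrubber update rewards.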
For each task, we also report the number of output classes in parentheses (e.g., Sentiment (3)). The implementation of this project is publicly available at: https://github.com/brcsomnath/AdS.

![](images/0b446e1f1c84704d713039622c6a951302d411a54bbe06812602a4c3bdcfd8e6.jpg)
(a) Pre-trained $h(x)$

![](images/925a5e52cd4baa72328781c0fce82b8ed7cc99f262c8ec379cff7b4b802ea3f7.jpg)
(b) w/o adversary $h(x)$

![](images/346a8f4795e86c2a8a60efa7c00782c3e3462c41808ed48249a516272dd8c235.jpg)
(c) ADS - $h(x)$

![](images/4515191624388c26cad48ff9cab630b6025ce481f485a195b14b2560d900ef61.jpg)
(d) ADS - $s(h(x))$

Figure 2: Evaluation setup. We evaluate the performance of the probing network on 4 different representations: (a) Pre-trained $h(x)$, obtained using the pre-trained Encoder; (b) w/o adversary $h(x)$, where the Encoder $h$ was fine-tuned on the target task; (c) ADS - $h(x)$, Encoder embeddings from ADS; and (d) ADS - $s(h(x))$, embeddings from the Scrubber of ADS.

# 5.3 Evaluation Framework

In our experiments, we compare representations obtained from 4 different settings, as shown in Figure 2. Figures 2(a), (b) and (c) are our baselines. In Figure 2(a), we retrieve $h(x)$ from the pre-trained BERT model. In Figure 2(b), we retrieve $h(x)$ from BERT fine-tuned on the target task. In Figure 2(c), the Encoder output $h(x)$ from ADS is evaluated. In Figure 2(d), the Scrubber output $s(h(x))$ is evaluated. This represents our final setup, ADS - $s(h(x))$.

# 5.4 Metrics

We report the F1-score (F1) of the probing network for each evaluation. However, previous work has shown that probing accuracy is not a reliable metric for evaluating the degree of information related to an attribute encoded in representations (Hewitt and Liang, 2019). Therefore, we also report the Minimum Description Length (MDL) (Voita and Titov, 2020) of labels given representations. MDL captures the amount of effort required by a probing network to achieve a certain accuracy. Therefore, it provides a finer-grained evaluation benchmark that can even differentiate between probing models with comparable accuracies. We compute the online code (Rissanen, 1984) for MDL. In the online setting, blocks
Therefore, it provides a finer-grained evaluation benchmark which can even differentiate between probing models with comparable accuracies. We compute the online code (Rissanen, 1984) for MDL. In the online setting, blocks + +of labels are encoded by a probabilistic model iteratively trained on incremental blocks of data (further details about MDL is provided in Appendix A.1). We compute MDL using sklearn's MLClassifier at timesteps corresponding to $0.1\%$ , $0.2\%$ , $0.4\%$ , $0.8\%$ , $1.6\%$ , $3.2\%$ , $6.25\%$ , $12.5\%$ , $25\%$ , $50\%$ and $100\%$ of each dataset as suggested by Voita and Titov (2020). A higher MDL signifies that more effort is required to achieve the probing performance. Hence, we expect the debiased representations to have higher MDL for predicting $z$ and a lower MDL for predicting $y$ . + +# 6 Results + +The evaluation results for all datasets are reported in Table 3. For all datasets, we report performances in 4 settings described in Section 5.3. + +Dialogue and Biographies dataset: First, we focus on the results on the dialogue and biographies datasets reported in Table 3 (first two rows). We observe the following: (i) for pre-trained $h(x)$ , MDL of predicting $z$ is lower than $y$ for these datasets. This means that information regarding $z$ is better encoded in the pre-trained $h(x)$ , than the target label $y$ . (ii) In "w/o adversary $h(x)$ " setup, the Encoder is fine-tuned on the target task (without debiasing), upon which MDL for $y$ reduces significantly (lowest MDL achieved in this setting for all datasets) accompanied by a rise in MDL for $z$ . However, it is still possible to predict $z$ with a + +
Funpedia, Wizard and ConvAI2 ($z$: Gender, $y$: Sentiment):

| Setup | Funpedia F1 (z) ↓ | MDL (z) ↑ | F1 (y) ↑ | MDL (y) ↓ | Wizard F1 (z) ↓ | MDL (z) ↑ | F1 (y) ↑ | MDL (y) ↓ | ConvAI2 F1 (z) ↓ | MDL (z) ↑ | F1 (y) ↑ | MDL (y) ↓ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Random | 33.3 | - | 33.3 | - | 50.0 | - | 33.3 | - | 50.0 | - | 33.3 | - |
| Pre-trained h(x) | 56.8 | 24.7 | 62.3 | 46.3 | 78.6 | 3.8 | 46.5 | 7.6 | 80.3 | 100.6 | 62.7 | 133.7 |
| w/o adversary h(x) | 51.0 | 30.9 | 92.8 | 2.8 | 67.4 | 5.2 | 85.1 | 0.2 | 72.8 | 109.0 | 95.6 | 6.5 |
| ADS - h(x) | 44.1 | 35.4 | 90.3 | 10.3 | 63.4 | 6.5 | 88.1 | 0.3 | 58.3 | 134.0 | 95.3 | 10.9 |
| ADS - s(h(x)) | 29.8 | 41.4 | 90.2 | 10.8 | 54.7 | 6.9 | 93.2 | 0.2 | 56.0 | 133.5 | 95.3 | 11.0 |

LIGHT, OpenSub and Biographies ($z$: Gender; $y$: Sentiment for LIGHT and OpenSub, Occupation for Biographies):

| Setup | LIGHT F1 (z) ↓ | MDL (z) ↑ | F1 (y) ↑ | MDL (y) ↓ | OpenSub F1 (z) ↓ | MDL (z) ↑ | F1 (y) ↑ | MDL (y) ↓ | Biographies F1 (z) ↓ | MDL (z) ↑ | F1 (y) ↑ | MDL (y) ↓ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Random | 50.0 | - | 33.3 | - | 50.0 | - | 33.3 | - | 50.0 | - | 3.6 | - |
| Pre-trained h(x) | 78.6 | 47.1 | 60.5 | 88.7 | 72.3 | 192.4 | 63.9 | 426.2 | 99.2 | 27.6 | 74.3 | 499.9 |
| w/o adversary h(x) | 75.3 | 55.9 | 91.4 | 8.2 | 70.2 | 311.9 | 97.5 | 25.1 | 62.3 | 448.9 | 99.9 | 2.2 |
| ADS - h(x) | 60.4 | 73.8 | 92.2 | 16.7 | 40.7 | 371.9 | 96.9 | 37.4 | 62.1 | 444.7 | 99.9 | 3.0 |
| ADS - s(h(x)) | 52.8 | 74.7 | 92.3 | 16.4 | 40.7 | 373.7 | 96.9 | 37.1 | 57.1 | 449.5 | 99.9 | 3.3 |

DIAL and PAN16 (DIAL: $z$ = Race, $y$ = Sentiment; PAN16: $z$ = Gender or Age, $y$ = Mention):

| Setup | DIAL F1 (z) ↓ | MDL (z) ↑ | F1 (y) ↑ | MDL (y) ↓ | PAN16 (Gender) F1 (z) ↓ | MDL (z) ↑ | F1 (y) ↑ | MDL (y) ↓ | PAN16 (Age) F1 (z) ↓ | MDL (z) ↑ | F1 (y) ↑ | MDL (y) ↓ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Random | 50.0 | - | 50.0 | - | 50.0 | - | 50.0 | - | 50.0 | - | 50.0 | - |
| Pre-trained h(x) | 74.3 | 242.6 | 63.9 | 300.7 | 60.9 | 300.5 | 72.3 | 259.7 | 57.7 | 302.0 | 72.8 | 262.6 |
| w/o adversary h(x) | 81.7 | 176.2 | 76.9 | 99.0 | 68.6 | 267.6 | 89.7 | 4.0 | 59.0 | 295.4 | 89.3 | 4.8 |
| ADS - h(x) | 69.7 | 273.0 | 72.4 | 51.0 | 62.3 | 304.2 | 89.7 | 7.1 | 62.4 | 302.8 | 89.3 | 5.3 |
| ADS - s(h(x)) | 58.2 | 290.6 | 72.9 | 56.9 | 48.6 | 313.9 | 89.7 | 7.6 | 50.5 | 315.1 | 89.2 | 6.0 |
Table 3: Evaluation results for all datasets. Expected trends for a metric are indicated by ↑ (higher is better) and ↓ (lower is better). Statistically significant best probing performances for $z$ (lowest F1 / highest MDL) and $y$ (highest F1 / lowest MDL) are in bold. ADS - $s(h(x))$ performs the best in guarding against information leakage of $z$ for all datasets.
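The online (prequential) code behind the MDL numbers above can be illustrated with a simple smoothed frequency model standing in for the MLP probe: each label is encoded under a model fit only on the labels seen so far. This is a sketch of the metric itself, not of the paper's setup:

```python
import math
from collections import Counter

def online_codelength(labels, num_classes):
    """Prequential codelength in bits under add-one-smoothed class frequencies."""
    counts = Counter()
    bits = 0.0
    for seen, y in enumerate(labels):
        p = (counts[y] + 1) / (seen + num_classes)  # Laplace-smoothed estimate
        bits += -math.log2(p)  # cost of encoding y under the current model
        counts[y] += 1
    return bits

easy = [0] * 200                     # predictable labels compress well (low MDL)
hard = [i % 2 for i in range(200)]   # alternating labels resist compression (high MDL)
```

A representation from which $z$ is hard to predict behaves like the second stream: the probe's codelength, and hence the MDL, stays high.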
| SETUP | DIAL Race (z): $\Delta_z$ | $\mathrm{Acc}_y$ | PAN16 Gender (z): $\Delta_z$ | $\mathrm{Acc}_y$ | PAN16 Age (z): $\Delta_z$ | $\mathrm{Acc}_y$ |
| --- | --- | --- | --- | --- | --- | --- |
| w/o adversary LSTM | 14.5 | 67.4 | 10.1 | 77.5 | 9.4 | 74.7 |
| Elazar and Goldberg (2018) | 4.8 | 63.8 | 4.1 | 74.3 | 5.7 | 70.1 |
| w/o adversary BERT | 31.2 | 76.4 | 18.5 | 89.7 | 10.1 | 89.3 |
| ADS - s(h(x)) | 8.2 | 72.9 | 0.8 | 89.8 | 4.7 | 89.2 |
Table 4: Comparing ADS with an existing baseline. The best and second-best performances are in bold and underlined, respectively. ADS - $s(h(x))$ achieves the best performance in both settings on the PAN16 dataset and reduces $\Delta_z$ more than the baseline on DIAL.

F1-score significantly above the random baseline; (iii) the "ADS - $h(x)$" setup achieves a similar F1-score for predicting $y$, but its F1-score for $z$ is still significantly above the random baseline; (iv) "ADS - $s(h(x))$" performs the best in terms of guarding the protected attribute $z$ (lowest prediction F1-score and highest MDL), achieving a near-random F1-score across all datasets. It is also able to maintain performance on the target task, as we observe only a slight drop compared to the fine-tuning performance ("w/o adversary $h(x)$") for predicting $y$.

DIAL & PAN16: Next, we focus on the Twitter-based datasets DIAL & PAN16, where the target task is sentiment classification/mention detection and the protected attribute is one of the demographic associations (race/gender/age) of the author. The evaluation results are reported in Table 3 (third block). For these datasets, we observe that: (i) "w/o adversary $h(x)$" representations have a higher F1 and lower MDL for predicting $z$ compared to "Pre-trained $h(x)$". This shows that fine-tuning on the target task $y$ encodes information about the protected attribute $z$. (ii) "ADS - $h(x)$" performs similar to "w/o adversary $h(x)$" representations on the target task but still leaks significant information about $z$, unlike the previous datasets. (iii) "ADS - $s(h(x))$" achieves the best performance in terms of guarding the protected variable $z$ (achieving almost random performance on the PAN16 dataset), without much performance drop in the target task.
Comparison with Prior Work: We report two metrics following Elazar and Goldberg (2018): (i) $\Delta_z$, the probing performance above the random baseline for $z$ (50% for both PAN16 and DIAL), and (ii) $\mathrm{Acc}_y$, the probing accuracy on the
| SETUP | PAN16 Age ($z_1$): F1 ↓ | MDL ↑ | Gender ($z_2$): F1 ↓ | MDL ↑ | Mention ($y$): F1 ↑ | MDL ↓ |
| --- | --- | --- | --- | --- | --- | --- |
| Random | 50.0 | - | 50.0 | - | 50.0 | - |
| w/o adversary h(x) | 66.5 | 196.4 | 69.3 | 192.0 | 88.6 | 6.8 |
| ADS s(h(x)) - (age) | 61.5 | 224.2 | 62.6 | 218.7 | 88.7 | 14.3 |
| ADS s(h(x)) - (gender) | 60.6 | 222.6 | 64.2 | 216.8 | 88.6 | 12.9 |
| ADS s(h(x)) - (both) | 53.8 | 231.5 | 54.4 | 230.9 | 88.6 | 5.5 |
+ +target task. Our framework cannot be directly compared with Elazar and Goldberg (2018) as they have used LSTM Encoder. Therefore, we report the baseline Encoder performances as well. In Table 4, we observe that it possible to retrieve $z$ and $y$ from "w/o adversary BERT" with a higher performance compared to "w/o adversary LSTM". This indicates that BERT encodes more information pertaining to both $y$ and $z$ compared to LSTM. In the DIAL dataset, ADS is able to reduce $\Delta_z$ by an absolute margin of $25\%$ compared to $9.7\%$ by Elazar and Goldberg (2018), while the absolute drop in $\mathrm{Acc}_y$ is $3.5\%$ compared to $3.6\%$ by Elazar and Goldberg (2018). In PAN16 dataset, ADS achieves the best $\Delta_z$ and $\mathrm{Acc}_y$ performance for both setups with protected attributes: age and gender respectively. ADS - $s(h(x))$ also achieves performance comparable to the "w/o adversary BERT" setup, which is fine-tuned on the target task. Therefore, ADS is successful in scrubbing information about $z$ from the representations of a stronger encoder compared to Elazar and Goldberg (2018). + +# 6.1 Scrubbing multiple protected attributes + +In this experiment, we show that using ADS it is possible to guard information about multiple protected attributes. $\mathcal{L}_s$ in this setup is defined as: + +$$ +\begin{array}{l} \mathcal {L} _ {s} (e _ {i}, y _ {i}) = \mathcal {L} _ {c} (c (u _ {i}), y _ {i}) - \lambda_ {1} \sum_ {n = 1} ^ {N} H (d _ {n} (u _ {i})) \\ + \lambda_ {2} \sum_ {n = 1} ^ {N} \delta \left(d _ {n} \left(u _ {i}\right)\right) \\ \end{array} +$$ + +where $N$ is the number of protected attributes and $d_{n}(\cdot)$ is the Bias discriminator corresponding to the $n^{th}$ protected attribute $z_{n}$ . + +We evaluate on PAN16 dataset considering two protected attributes $z_{1}$ (age) and $z_{2}$ (gender). The target task is mention prediction. We consider the + +Table 5: Evaluation results of protecting multiple attributes using ADS. 
Statistically significant best performances are in bold. Expected trends for a metric are shown in $\uparrow$ - higher scores and $\downarrow$ - lower scores. "ADS $s(h(x))$ - (both)" achieves the best performance. $^7$ + +
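The Section 6.1 loss combines the downstream task objective with one entropy bonus and one $\delta$-loss term per Bias discriminator. The following NumPy sketch is illustrative only: the paper's implementation is in PyTorch, the names `softmax` and `scrubber_loss` and all tensor shapes are assumptions, and the $\delta$ term here uses the standard logits-plus-Gumbel-noise softmax.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def scrubber_loss(task_logits, y, disc_logits, z_labels, lam1=1.0, lam2=1.0, tau=1.0):
    """Multi-attribute Scrubber loss: task cross-entropy, minus an entropy
    bonus and plus a delta-loss penalty for each Bias discriminator d_n."""
    p = softmax(task_logits)
    loss = -np.log(p[np.arange(len(y)), y] + 1e-12).mean()  # L_c(c(u_i), y_i)
    for o, z in zip(disc_logits, z_labels):
        q = softmax(o)
        entropy = -(q * np.log(q + 1e-12)).sum(-1).mean()   # H(d_n(u_i))
        # delta term: Gumbel-softmax probability assigned to the true z_n class
        g = softmax((o + rng.gumbel(size=o.shape)) / tau)
        delta = g[np.arange(len(z)), z].mean()
        loss += -lam1 * entropy + lam2 * delta
    return loss
```

With uniform (maximum-entropy) discriminator outputs the loss drops relative to confident, correct ones, matching the intent that the Scrubber is rewarded for confusing every discriminator.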
| Scrubber loss | Gender (z): F1↓ | P↓ | R↓ | Sentiment (y): F1↑ | P↑ | R↑ |
|---|---|---|---|---|---|---|
| Random | 33.3 | 33.3 | 33.3 | 33.3 | 33.3 | 33.3 |
| δ-loss (w/o entropy) | 49.5 | 47.7 | 53.9 | 91.2 | 91.2 | 91.2 |
| Entropy (w/o δ-loss) | 35.7 | 36.4 | 53.2 | 91.5 | 91.6 | 91.5 |
| Entropy + δ-loss | 29.8 | 33.3 | 27.0 | 90.2 | 90.5 | 89.9 |
+ +Table 6: Ablation experiments on Funpedia using F1-score (F1), Precision (P) and Recall (R). Expected trends for a metric are shown in $\uparrow$ - higher scores and $\downarrow$ - lower scores. ADS with both loss components performs the best in guarding $z$ . + +subset of PAN16 that contains samples with both gender and age labels. This subset has 120K training instances and 30K test instances. Evaluation results are reported in Table 5. Similar to previous experiments, we observe that "w/o adversary $h(x)$ " (fine-tuned BERT) leaks information about both protected attributes age and gender. We evaluate the information leak when "ADS $s(h(x))$ " is retrieved from a setup with a single Bias discriminator (age/gender). We observe a significant gain in MDL for the corresponding $z_{n}$ in both cases, indicating that the respective $z_{n}$ is being protected. Finally, we train ADS using two Bias discriminators, and "ADS - $s(h(x))$ (both)" representations achieve the best performance in guarding $z_{1} \& z_{2}$ , while performing well on the target task. This shows that the ADS framework is scalable and can be leveraged to guard multiple protected attributes simultaneously. + +# 6.2 Efficacy of different losses + +We experiment with different configurations of the Scrubber loss $\mathcal{L}_s$ to figure out the efficacy of individual components. We show the experimental results on the Funpedia dataset in Table 6 (with $\lambda_1 = \lambda_2 = 1$ ). We observe that most leakage in $z$ (increase in prediction F1-score) occurs when the entropy loss is removed. Removing $\delta$ -loss also results in a slight increase in leakage accompanied by a gain in performance for predicting $y$ . This shows that both losses are important for guarding $z$ . + +Empirically, we found that $\delta$ -loss is not suitable for binary protected attributes. 
This is because during training, when the Scrubber is encouraged to learn representations that do not have information about $z$ , it learns to encode representations in a manner such that the Bias discriminator predicts the opposite $z$ class. Hence, the information about $z$ is still present and is retrievable using a probing network $q$ . For this reason, we use $\delta$ -loss only for Funpedia ( $\lambda_2$ values in Table 2), where we + +![](images/e8d198be5f1e03d2a7678e96813a0d6ad91ec78fc7b615a3f2d1c24a568dcb16.jpg) +(a) Pre-trained + +![](images/b4df0201da0d7d4244919154819de14fd2d30a80015cc5bd5bff4ea08abe7581.jpg) +(b) After training +Figure 3: UMAP projection of Scrubber output representations $s(h(x))$ from Biographies corpus with profession as "professor". Blue and red labels indicate female and male biographies respectively. (a) Pre-trained BERT representations (b) BERT representations post training in ADS. + +considered 3 gender label classes. + +# 6.3 Visualization + +We visualize the UMAP (McInnes et al., 2018) projection of Encoder output representations, $h(x)$ , in Figure 3. Blue and red labels indicate female and male biographies respectively. Figure 3a and Figure 3b show representations before and after ADS training. In Figure 3a, male and female labeled instances are clearly separated in space. This shows that text representations encode information relating to gender attributes. In Figure 3b, we observe that after training in our adversarial framework, both male and female labeled instances are difficult to segregate. This indicates that post training in ADS, it is difficult to identify biography representations on the basis of gender. 
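The separation visible in Figure 3 can also be quantified with a probing classifier: a probe should recover gender from separable (pre-trained-style) representations but stay near chance on scrubbed ones. A self-contained sketch on synthetic 2-D data (the paper uses sklearn's MLPClassifier as the probe $q$; the hand-rolled logistic probe and all data below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
z = rng.integers(0, 2, size=n)  # binary protected attribute

# "Separable" representations encode z in the first coordinate (as in Figure 3a);
# "scrubbed" representations are pure noise with no trace of z (as in Figure 3b).
separable = np.stack([z + 0.3 * rng.normal(size=n), rng.normal(size=n)], axis=1)
scrubbed = rng.normal(size=(n, 2))

def probe_accuracy(x, z, steps=500, lr=0.5):
    """Train a logistic-regression probe by gradient descent; return train accuracy."""
    w, b = np.zeros(x.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
        w -= lr * (x.T @ (p - z)) / len(z)
        b -= lr * (p - z).mean()
    return (((x @ w + b) > 0.0) == (z == 1)).mean()

print(probe_accuracy(separable, z))  # high: z is still recoverable
print(probe_accuracy(scrubbed, z))   # near 0.5: chance level
```

The same probe also exposes the binary-attribute failure mode discussed above: representations that are merely anti-correlated with $z$ remain perfectly linearly recoverable.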
We extend previous evaluation metrics to evaluate fairness of representations by using MDL. Experimental evaluations on 8 datasets show that ADS is better at protecting demographic attributes than baselines. We show that our approach is scalable and can be used to remove multiple protected attributes simultaneously. Future work can explore leveraging ADS towards learning fair representations in other NLP tasks. + +# 8 Acknowledgement + +This work was supported in part by grants NIH 1R01AA02687901A1 and NSF IIS2133595. + +# Ethical considerations + +We propose ADS, an adversarial framework to prevent text classification modules from making biased decisions. ADS is intended to be used in scenarios where the user is already aware of the input attributes they want to protect. ADS can only be trained on data where protected attributes are annotated. It is possible that representations retrieved from ADS contain sensitive information that was not defined as a protected variable. Even in such a scenario, ADS won't reveal more information than is already available in the dataset. One potential way of misusing ADS would be to define relevant features for a task (e.g. experience for a job application) as a protected attribute; the classification system may then be forced to rely on sensitive demographic information for predictions. In such cases, it is possible to flag systems by evaluating the difference in True Positive Rate (TPR) when the protected attribute is changed (the $\mathrm{GAP}_{z,y}^{\mathrm{TPR}}$ metric (De-Arteaga et al., 2019)). All experiments were performed on publicly available data, where the identity of the author was anonymous. We did not perform any additional data annotation. + +# References + +Kanadpriya Basu, Treena Basu, Ron Buckmire, and Nishu Lal. 2019. Predictive models of student college commitment decisions using machine learning. Data, 4(2):65. +Su Lin Blodgett, Solon Barocas, Hal Daumé III, and Hanna Wallach. 2020. 
Language (technology) is power: A critical survey of "bias" in NLP. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5454-5476, Online. Association for Computational Linguistics. +Su Lin Blodgett, Lisa Green, and Brendan O'Connor. 2016. Demographic dialectal variation in social media: A case study of African-American English. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1119-1130, Austin, Texas. Association for Computational Linguistics. +Tolga Bolukbasi, Kai-Wei Chang, James Y. Zou, Venkatesh Saligrama, and Adam Tauman Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In + +Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pages 4349-4357. +John D. Burger, John Henderson, George Kim, and Guido Zarrella. 2011. Discriminating gender on Twitter. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1301-1309, Edinburgh, Scotland, UK. Association for Computational Linguistics. +Aaron Chalfin, Oren Danieli, Andrew Hillis, Zubin Jelveh, Michael Luca, Jens Ludwig, and Sendhil Mullainathan. 2016. Productivity and selection of human capital with machine learning. American Economic Review, 106(5):124-27. +Maria De-Arteaga, Alexey Romanov, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, and Adam Tauman Kalai. 2019. Bias in bios: A case study of semantic representation bias in a high-stakes setting. In Proceedings of the Conference on Fairness, Accountability, and Transparency, pages 120-128. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics. +Emily Dinan, Angela Fan, Ledell Wu, Jason Weston, Douwe Kiela, and Adina Williams. 2020. Multi-dimensional gender bias classification. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 314-331, Online. Association for Computational Linguistics. +Lucas Dixon, John Li, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. 2018. Measuring and mitigating unintended bias in text classification. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, pages 67-73. +Yanai Elazar and Yoav Goldberg. 2018. Adversarial removal of demographic attributes from text data. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 11-21, Brussels, Belgium. Association for Computational Linguistics. +Yanai Elazar, Shauli Ravfogel, Alon Jacovi, and Yoav Goldberg. 2021. Amnesic probing: Behavioral explanation with amnesic counterfactuals. Transactions of the Association for Computational Linguistics, 9:160-175. + +Joel Escudé Font and Marta R Costa-Jussa. 2019. Equalizing gender biases in neural machine translation with word embeddings techniques. arXiv preprint arXiv:1901.03116. +Omar Ghailan, Hoda MO Mokhtar, and Osman Hegazy. 2016. Improving credit scorecard modeling through applying text analysis. institutions, 7(4). +Ian J Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial networks. arXiv preprint arXiv:1406.2661. +John Hewitt and Percy Liang. 2019. Designing and interpreting probes with control tasks. 
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2733-2743, Hong Kong, China. Association for Computational Linguistics. +Clayton Hutto and Eric Gilbert. 2014. Vader: A parsimonious rule-based model for sentiment analysis of social media text. In Proceedings of the International AAAI Conference on Web and Social Media, volume 8. +Eric Jang, Shixiang Gu, and Ben Poole. 2017. Categorical reparameterization with gumbel-softmax. In 5th International Conference on Learning Representations, ICLR 2017, Toulouse, France, April 24-26, 2017, Conference Track Proceedings. Open-Review.net. +Svetlana Kiritchenko and Saif Mohammad. 2018. Examining gender and race bias in two hundred sentiment analysis systems. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, pages 43-53, New Orleans, Louisiana. Association for Computational Linguistics. +Moshe Koppel, Shlomo Argamon, and Anat Rachel Shimoni. 2002. Automatically categorizing written texts by author gender. _Literary and linguistic computing_, 17(4):401-412. +Yitong Li, Timothy Baldwin, and Trevor Cohn. 2018. Towards robust and privacy-preserving text representations. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 25-30, Melbourne, Australia. Association for Computational Linguistics. +Haochen Liu, Wei Jin, Hamid Karimi, Zitao Liu, and Jiliang Tang. 2021. The authors matter: Understanding and mitigating implicit bias in deep text classification. arXiv e-prints, pages arXiv-2105. +Haochen Liu, Wentao Wang, Yiqi Wang, Hui Liu, Zitao Liu, and Jiliang Tang. 2020. Mitigating gender bias for neural dialogue generation with adversarial learning. In Proceedings of the 2020 Conference on + +Empirical Methods in Natural Language Processing (EMNLP), pages 893-903, Online. 
Association for Computational Linguistics. +Chandler May, Alex Wang, Shikha Bordia, Samuel R. Bowman, and Rachel Rudinger. 2019. On measuring social biases in sentence encoders. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 622-628, Minneapolis, Minnesota. Association for Computational Linguistics. +Leland McInnes, John Healy, and James Melville. 2018. Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426. +Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. 2019. A survey on bias and fairness in machine learning. arXiv preprint arXiv:1908.09635. +Dong Nguyen, Rilana Gravel, Dolf Trieschnigg, and Theo Meder. 2013. "how old do you think i am?" a study of language and age in twitter. In Proceedings of the International AAAI Conference on Web and Social Media, volume 7. +Ji Ho Park, Jamin Shin, and Pascale Fung. 2018. Reducing gender bias in abusive language detection. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2799-2804, Brussels, Belgium. Association for Computational Linguistics. +Francisco Rangel, Paolo Rosso, Ben Verhoeven, Walter Daelemans, Martin Potthast, and Benno Stein. 2016. Overview of the 4th author profiling task at pan 2016: cross-genre evaluations. Working Notes Papers of the CLEF, 2016:750-784. +Jorma Rissanen. 1984. Universal coding, information, prediction, and estimation. IEEE Transactions on Information theory, 30(4):629-636. +Rachel Rudinger, Chandler May, and Benjamin Van Durme. 2017. Social bias in elicited natural language inferences. In Proceedings of the First ACL Workshop on Ethics in Natural Language Processing, pages 74-79, Valencia, Spain. Association for Computational Linguistics. +Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, and Noah A. Smith. 
2019. The risk of racial bias in hate speech detection. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1668-1678, Florence, Italy. Association for Computational Linguistics. +Danielle Saunders and Bill Byrne. 2020. Reducing gender bias in neural machine translation as a domain adaptation problem. In Proceedings of the 58th Annual Meeting of the Association for + +Computational Linguistics, pages 7724-7736, Online. Association for Computational Linguistics. +Deven Santosh Shah, H. Andrew Schwartz, and Dirk Hovy. 2020. Predictive biases in natural language processing models: A conceptual framework and overview. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5248-5264, Online. Association for Computational Linguistics. +Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng. 2019. The woman worked as a babysitter: On biases in language generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3407-3412, Hong Kong, China. Association for Computational Linguistics. +Gabriel Stanovsky, Noah A. Smith, and Luke Zettlemoyer. 2019. Evaluating gender bias in machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1679-1684, Florence, Italy. Association for Computational Linguistics. +Ben Verhoeven and Walter Daelemans. 2014. CLiPS stylometry investigation (CSI) corpus: A Dutch corpus for the detection of age, gender, personality, sentiment and deception in text. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 3081-3085, Reykjavik, Iceland. European Language Resources Association (ELRA). +Ben Verhoeven, Walter Daelemans, and Barbara Plank. 2016. 
TwiSty: A multilingual Twitter stylometry corpus for gender and personality profiling. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 1632-1637, Portorož, Slovenia. European Language Resources Association (ELRA). +Elena Voita and Ivan Titov. 2020. Information-theoretic probing with minimum description length. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 183-196, Online. Association for Computational Linguistics. +Edson RD Weren, Anderson U Kauer, Lucas Mizusaki, Viviane P Moreira, J Palazzo M de Oliveira, and Alejandro K Wives. 2014. Examining multiple features for author profiling. Journal of information and data management, 5(3):266-266. +Richard S. Zemel, Yu Wu, Kevin Swersky, Toniann Pitassi, and Cynthia Dwork. 2013. Learning fair representations. In Proceedings of the 30th International Conference on Machine Learning, ICML 2013, Atlanta, GA, USA, 16-21 June 2013, volume 28 of JMLR Workshop and Conference Proceedings, pages 325-333. JMLR.org. + +Brian Hu Zhang, Blake Lemoine, and Margaret Mitchell. 2018. Mitigating unwanted biases with adversarial learning. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, pages 335-340. + +# A Appendix + +# A.1 Minimum Description Length + +Minimum Description Length (MDL) measures the description length of labels given a set of representations. MDL captures the amount of effort required to achieve a certain probing accuracy, characterizing either the complexity of the probing model or the amount of data required. + +Estimating MDL involves a dataset $\{(x_1, y_1), \ldots, (x_n, y_n)\}$ , where $x_i$ 's are data representations from a model and $y_i$ 's are task labels. Now, a sender Alice wants to transmit labels $\{y_1, \ldots, y_n\}$ to a receiver Bob, when both of them have access to the data representations $x_i$ 's. 
In order to transmit the labels efficiently, Alice needs to encode $y_i$ 's in an optimal manner using a probabilistic model $p(y|x)$ . The minimum codelength (Shannon-Huffman code) required to transmit the labels losslessly is: $\mathcal{L}_p(y_{1:n}|x_{1:n}) = -\sum_{i=1}^{n} \log_2 p(y_i|x_i)$ . + +There are two ways of evaluating MDL for transmitting the labels $y_{1:n}$ : (a) variational code - transmit $p(y|x)$ explicitly and then use it to encode the labels; (b) online code - encode the model and labels without explicitly transmitting the model. In our experiments, we evaluate the online code for estimating MDL. In the online setting, the labels are transmitted in blocks in $n$ timesteps $\{t_0,\dots ,t_n\}$ . Alice encodes the first block of labels $y_{1:t_1}$ using a uniform code. Bob learns a model $p_{\theta_1}(y|x)$ using the data $\{(x_i,y_i)\}_{i = 1}^{t_1}$ ; Alice then transmits the next block of labels $y_{t_1 + 1:t_2}$ using $p_{\theta_1}(y|x)$ . In the next iteration, the receiver trains a new model using a larger chunk of data $\{(x_i,y_i)\}_{i = 1}^{t_2}$ , which is used to encode $y_{t_2 + 1:t_3}$ . This continues until the whole set of labels $y_{1:n}$ is transmitted. The total codelength required for transmission using this setting is given as: + +$$ +\mathcal{L}_{\text{online}}(y_{1:n} \mid x_{1:n}) = t_1 \log_2 C - \sum_{i=1}^{n-1} \log_2 p_{\theta_i}\left(y_{t_i+1:t_{i+1}} \mid x_{t_i+1:t_{i+1}}\right) \tag{12} +$$ + +where $y_{i}\in \{1,2,\ldots ,C\}$ . The online codelength $\mathcal{L}_{online}(y_{1:n}|x_{1:n})$ is shorter if the probing model is + +
| DATASET | Time/epoch (min.) |
|---|---|
| FUNPEDIA | 2 |
| WIZARD | 1 |
| CONVAI2 | 14 |
| LIGHT | 4 |
| OPENSUB | 15 |
| BIOGRAPHIES | 260 |
| DIAL | 16 |
| PAN16 (gender) | 15 |
| PAN16 (age) | 15 |
+ +Table 7: Runtime for each dataset. + +able to perform well using fewer training instances, therefore capturing the effort needed to achieve a given prediction performance. + +# A.2 Theoretical Analysis + +Proposition. Minimizing $\delta$ -loss is equivalent to increasing the Bias discriminator loss $\mathcal{L}_d$ . + +Proof: The $\delta$ -loss function can be written as: + +$$ +\delta(o_i) = m_i^T \operatorname{softmax}_{\text{gumbel}}(o_i) = \frac{\exp\left(\frac{\log o_i^k + g_k}{\tau}\right)}{\sum_j \exp\left(\frac{\log o_i^j + g_j}{\tau}\right)} \tag{13} +$$ + +where $o_i^j$ is the raw logit assigned to the $j^{th}$ output class, the true output class is $k = z_{i}$ , and $g_{j},g_{k}$ are i.i.d. samples from the Gumbel(0,1) distribution. The cross-entropy loss of the Bias discriminator $\mathcal{L}_d$ can be written as: + +$$ +\mathcal{L}_d = -\log \frac{\exp\left(o_i^k\right)}{\sum_j \exp\left(o_i^j\right)} \tag{14} +$$ + +The Gumbel softmax generates a peaked version of the normal softmax distribution. But the individual Gumbel softmax logit values (Equation 13) are still proportional to vanilla softmax logits (Equation 14): $\delta(o_i) \propto \frac{\exp o_i^k}{\sum_j \exp o_i^j}$ . Then, the Bias discriminator loss $\mathcal{L}_d$ can be written as: + +$$ +\mathcal{L}_d \propto -\log \delta\left(o_i\right) \tag{15} +$$ + +Therefore, minimizing $\delta (o_i)$ increases $\mathcal{L}_d$ . + +# A.3 Implementation Details + +All experiments are conducted in the PyTorch framework using an Nvidia GeForce RTX2080 GPU with + +
| DATASET | Pre-trained $h(x)$: $\overrightarrow{\mathrm{MDL}}_z$ | $\overrightarrow{\mathrm{MDL}}_y$ | w/o adversary $h(x)$: $\overrightarrow{\mathrm{MDL}}_z$ | $\overrightarrow{\mathrm{MDL}}_y$ | ADS $h(x)$: $\overrightarrow{\mathrm{MDL}}_z$ | $\overrightarrow{\mathrm{MDL}}_y$ | ADS $s(h(x))$: $\overrightarrow{\mathrm{MDL}}_z$ | $\overrightarrow{\mathrm{MDL}}_y$ |
|---|---|---|---|---|---|---|---|---|
| FUNPEDIA | 1.03 | 1.94 | 1.29 | 0.12 | 1.48 | 0.43 | 1.73 | 0.45 |
| WIZARD | 1.08 | 2.15 | 1.47 | 0.06 | 1.84 | 0.09 | 1.95 | 0.06 |
| CONVAI2 | 1.46 | 1.94 | 1.58 | 0.09 | 1.94 | 0.16 | 1.93 | 0.16 |
| LIGHT | 1.21 | 2.28 | 1.44 | 0.21 | 1.89 | 0.43 | 1.92 | 0.42 |
| OPENSUB | 0.92 | 2.03 | 1.49 | 0.12 | 1.77 | 0.18 | 1.78 | 0.18 |
| BIOGRAPHIES | 0.11 | 1.94 | 1.74 | 0.01 | 1.73 | 0.01 | 1.74 | 0.01 |
| DIAL | 1.46 | 1.81 | 1.06 | 0.60 | 1.65 | 0.31 | 1.75 | 0.34 |
| PAN16 (gender) | 1.87 | 1.62 | 1.67 | 0.03 | 1.90 | 0.04 | 1.96 | 0.05 |
| PAN16 (age) | 1.89 | 1.64 | 1.85 | 0.03 | 1.89 | 0.03 | 1.97 | 0.04 |
+ +Table 8: Probing performance of representations retrieved from different settings in terms of $\overrightarrow{\mathrm{MDL}}$ . + +12GB memory. We use an off-the-shelf MLPClassifier from sklearn as our probing network $q$ . ADS has a total of 110M parameters (all 4 modules combined). The average runtime per epoch for each dataset is reported in Table 7. + +# A.4 Measuring Fairness in Representations + +MDL scales linearly with the dataset size (Equation 12), therefore making it hard to compare across different datasets. In order to make it comparable, we measure a normalized description length for transmitting 1000 labels: + +$$ +\overrightarrow{\mathrm{MDL}} = \frac{1000 \times \mathrm{MDL}}{|\mathcal{D}|} \tag{16} +$$ + +where $|\mathcal{D}|$ is the dataset size. Performances using this measure are reported in Table 8 for all datasets. In all experiments, we report the MDL required for transmitting the labels in the training set. \ No newline at end of file diff --git a/adversarialscrubbingofdemographicinformationfortextclassification/images.zip b/adversarialscrubbingofdemographicinformationfortextclassification/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..4d04feb293a441c33550a5aefdfcf923e7d63bfc --- /dev/null +++ b/adversarialscrubbingofdemographicinformationfortextclassification/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:224eec6c80cdc61acbaea1c9d4996085a2ecfbb823a69449bfbd070e866c7486 +size 646873 diff --git a/adversarialscrubbingofdemographicinformationfortextclassification/layout.json b/adversarialscrubbingofdemographicinformationfortextclassification/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..c4a409537473ae37e0ec72f2fa742ae6544d7b98 --- /dev/null +++ b/adversarialscrubbingofdemographicinformationfortextclassification/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid
sha256:daa65476d063128595e4c68a1b867046bc17a5a365dc4dc4e24292c97da14a1f +size 663489 diff --git a/aesopparaphrasegenerationwithadaptivesyntacticcontrol/01cf985f-1ea4-401f-8cae-ff50ce2a393d_content_list.json b/aesopparaphrasegenerationwithadaptivesyntacticcontrol/01cf985f-1ea4-401f-8cae-ff50ce2a393d_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..8f310eca7204cee789fe6b55df1698eeaeef298a --- /dev/null +++ b/aesopparaphrasegenerationwithadaptivesyntacticcontrol/01cf985f-1ea4-401f-8cae-ff50ce2a393d_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:74dce01bda1ccd68b685288645a495236ebab7e27192d14fe51df8fd0d5cccf2 +size 94501 diff --git a/aesopparaphrasegenerationwithadaptivesyntacticcontrol/01cf985f-1ea4-401f-8cae-ff50ce2a393d_model.json b/aesopparaphrasegenerationwithadaptivesyntacticcontrol/01cf985f-1ea4-401f-8cae-ff50ce2a393d_model.json new file mode 100644 index 0000000000000000000000000000000000000000..fb460119ad8fd2308eaddcadf64ad938b3f6843e --- /dev/null +++ b/aesopparaphrasegenerationwithadaptivesyntacticcontrol/01cf985f-1ea4-401f-8cae-ff50ce2a393d_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4ffedd8a6b9c005f33a6def5a64132f10dc02bdd8507b7c78c84d27f7b469566 +size 108746 diff --git a/aesopparaphrasegenerationwithadaptivesyntacticcontrol/01cf985f-1ea4-401f-8cae-ff50ce2a393d_origin.pdf b/aesopparaphrasegenerationwithadaptivesyntacticcontrol/01cf985f-1ea4-401f-8cae-ff50ce2a393d_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..7a49a56eba3251417a2dceb055298e0f92739986 --- /dev/null +++ b/aesopparaphrasegenerationwithadaptivesyntacticcontrol/01cf985f-1ea4-401f-8cae-ff50ce2a393d_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:33451011de2461634808d4bc16a79c4353a29316b2cb2070b5686de656f80c25 +size 1612464 diff --git a/aesopparaphrasegenerationwithadaptivesyntacticcontrol/full.md 
b/aesopparaphrasegenerationwithadaptivesyntacticcontrol/full.md new file mode 100644 index 0000000000000000000000000000000000000000..ab3b01656b8c5db7b68190d9b6e0af66f28aad25 --- /dev/null +++ b/aesopparaphrasegenerationwithadaptivesyntacticcontrol/full.md @@ -0,0 +1,325 @@ +# AESOP: Paraphrase Generation with Adaptive Syntactic Control + +Jiao Sun $^{1,2}$ , Xuezhe Ma $^{1,2}$ and Nanyun Peng $^{1,2,3}$ + +$^{1}$ Computer Science Department, University of Southern California + +$^{2}$ Information Sciences Institute, University of Southern California + +$^{3}$ Computer Science Department, University of California, Los Angeles + +jiaosun@usc.edu, xuezhema@usc.edu, violetpeng@cs.ucla.edu + +# Abstract + +We propose to control paraphrase generation with carefully chosen target syntactic structures to generate more proper and higher-quality paraphrases. Our model, AESOP, leverages a pretrained language model and purposefully selected syntactic control via a retrieval-based selection module to generate fluent paraphrases. Experiments show that AESOP achieves state-of-the-art performance on semantic preservation and syntactic conformation on two benchmark datasets with ground-truth syntactic control from human-annotated exemplars. Moreover, with the retrieval-based target syntax selection module, AESOP generates paraphrases with even better quality than the current best model using human-annotated target syntactic parses according to human evaluation. We further demonstrate the effectiveness of AESOP to improve classification models' robustness to syntactic perturbation by data augmentation on two GLUE tasks. + +# 1 Introduction + +Syntactically-controlled paraphrase generation, which aims to generate paraphrases that conform with given syntactic structures, has drawn increasing attention in the community. 
On the one hand, paraphrase generation has benefited a wide range of NLP applications, such as neural machine translation (Yang et al., 2019) and dialogue generation (Gao et al., 2020), as well as improving model robustness (Huang et al., 2021) and interpretability (Jiang et al., 2019). On the other hand, syntactically-controlled paraphrasing has been used for diverse question generation (Yu and Jiang, 2021), diversifying creative generation (Tian et al., 2021) and improving model robustness (Iyyer et al., 2018; Huang and Chang, 2021). + +However, selecting suitable target syntactic structures to control paraphrase generation for diverse and high-quality results is a less-explored direction. Prior works usually use a fixed set of syntactic + +![](images/680c38b37d2d109e4a46418761187d688723e0b9dc198de3a9905eb7c3ba80f.jpg) +Figure 1: Given a source sentence, AESOP selects target syntactic parses adaptively to guide paraphrase generation. Paraphrases here are all generated by AESOP, which preserve the semantics from source sentences and conform with the selected syntactic parses. + +structures for all input sentences (Iyyer et al., 2018; Huang and Chang, 2021). A challenge with this method is that not all sentences can be paraphrased into the same set of syntactic structures. For example, it is impossible to turn a long sentence with multiple clauses into a noun phrase. Thus, Chen et al. (2019b) proposed to use crowd-sourcing to collect exemplars that can provide compatible syntax with the source sentence to guide generation. The disadvantages of this method are that the crowd-sourcing process is costly and that one exemplar sentence can only provide one specific form of syntactic guidance, while many different syntactic parses can properly guide the paraphrase generation (as shown in Figure 1). + +In contrast, we propose to automatically select multiple syntactic parse structures to control paraphrase generation for more diverse and higher-quality generation. 
Our first contribution is the proposal of AESOP (Adaptive Syntactically-Controlled Paraphrasing), a model that integrates pretrained Language Models (LMs) with a novel retrieval-based target syntactic parse selection module to control paraphrase generation. By leveraging the expressiveness of pretrained LMs and + +![](images/3846412b543187648e1b8d48fccaceda4847a43465a3e85fc99c1d851763c78f.jpg) +Figure 2: Pruning a constituency parse tree at different heights $H$. + +
| Height | Pruned parse |
|---|---|
| H=2 | (ROOT (S (NP) (VP) (.))) |
| H=3 | (ROOT (S (NP (DT)) (VP (VBZ) (ADJP)) (.))) |
| H=4 | (ROOT (S (NP (DT)) (VP (VBZ) (ADJP (JJ))) (.))) |
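The pruning illustrated above can be sketched in a few lines of plain Python. `parse`, `prune`, and `linearize` are hypothetical helper names working on bracketed parse strings (a real pipeline would more likely use a parser library such as NLTK's `Tree`); terminal words are dropped, matching the label-only pruned parses:

```python
def parse(s):
    """Parse a bracketed constituency string into (label, children) tuples."""
    tokens = s.replace("(", " ( ").replace(")", " ) ").split()
    def read(i):
        assert tokens[i] == "("
        label, kids, i = tokens[i + 1], [], i + 2
        while tokens[i] != ")":
            if tokens[i] == "(":
                child, i = read(i)
                kids.append(child)
            else:
                i += 1  # terminal word: dropped
        return (label, kids), i + 1
    tree, _ = read(0)
    return tree

def prune(node, h):
    """Keep nodes up to depth h below the root; discard deeper structure."""
    label, kids = node
    return (label, [] if h == 0 else [prune(k, h - 1) for k in kids])

def linearize(node):
    label, kids = node
    inner = "" if not kids else " " + " ".join(linearize(k) for k in kids)
    return f"({label}{inner})"

t = parse("(ROOT (S (NP (DT The)) (VP (VBZ is) (ADJP (JJ nice))) (. .)))")
print(linearize(prune(t, 2)))  # (ROOT (S (NP) (VP) (.)))
```

Pruning the same tree at $H=3$ and $H=4$ yields the progressively deeper label-only parses shown above.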
the adaptive selection module, AESOP is capable of generating fluent and syntactically-diverse paraphrases. With ground-truth target syntactic parses from human-annotated exemplars, AESOP achieves the state-of-the-art performance on both semantic preservation and syntactic conformation metrics. By human evaluation, we show that AESOP can generate paraphrases with even better quality than the current best model using human-annotated exemplars, which points out the importance of studying adaptive target parse selection in future work on controlled paraphrase generation. + +Our second contribution is the construction of two datasets containing adversarial examples with syntactic perturbation generated by AESOP that are further validated and labeled by crowd workers. Experiments show that the two datasets are challenging to current classification models, and using AESOP to augment the training data can effectively improve classification models' robustness to syntactic attacks. $^{1}$ + +# 2 Task Formulation + +We formulate the task of adaptive syntactically-controlled paraphrase generation as: given an input sentence $X$ , find a set of proper syntactic controls $Y$ to generate paraphrases $Z$ , such that $Z$ 's syntax conforms to $Y$ while retaining the semantics of $X$ . + +We use the term target syntactic parses to refer to the syntactic structure that guides the generation, which could be from exemplar sentences, a set of fixed templates, or our adaptive selection module. + +Algorithm 1 Adaptive Target Parse Selection +Input: source parse at level $H$ : $T_s^H$ ; all (source parse, target parse) combinations in the training data $\{(T_{s1}^H, T_{t1}^H), \dots, (T_{sn}^H, T_{tn}^H)\}$ ; frequencies for each combination $\{F_1, \dots, F_n\}$ . 
Output: $k$ target parses $T_t^H$
1: for $i \in \{1, 2, \dots, N\}$ do
2: compute the similarity score $S$ of $(T_s^H, T_i^H)$
3: end for
4: keep the $m$ parses with the highest $S$ with $T_s^H$: $\{T_{s1}'^H, \dots, T_{sm}'^H\}$
5: for $T_{si}'^H \in \{T_{s1}'^H, \dots, T_{sm}'^H\}$ do
6: // build the frequency distribution of possible target parses for $T_{si}'^H$
7: sample $k/m$ target parses for $T_{si}'^H$ from this distribution
8: end for

# 3 AESOP: Adaptive Syntactically-Controlled Paraphrasing

AESOP has two components: i) a retrieval-based module that adaptively selects a set of target syntactic parses to guide the paraphrase generation; ii) an encoder-decoder architecture that leverages BART (Lewis et al., 2020) to generate paraphrases.

# 3.1 Adaptive Target Syntactic Parse Selection

In AESOP, we propose a retrieval-based strategy to select target syntactic parses adaptively (i.e., Algorithm 1). For the syntactic parse of a source sentence pruned at height $H$ (as shown in Figure 2), denoted as $T_{s}^{H}$, we aim to find $k$ suitable target syntactic parses to guide the generation. First, we collect (source sentence $X$, paraphrase $Z$) pairs from the training data. Then, we prune $X$'s and $Z$'s constituency parse trees at height $H$ simultaneously and get corresponding $(T_{s}^{H}, T_{t}^{H})$ pairs. By counting, we obtain the frequencies of all unique paired combinations of pruned source parses with the target syntactic parses from their paraphrases, i.e., $\{(T_{s1}^{H}, T_{t1}^{H}), \dots, (T_{sn}^{H}, T_{tn}^{H})\}$.

Ranker. For a pruned source parse $T_{s}^{H}$, we calculate the similarity between $T_{s}^{H}$ and all other unique parses at height $H$ in the training data $\{T_{1}^{H},\dots,T_{i}^{H},\dots,T_{N}^{H}\}$, where $N$ is the number of unique parses pruned at height $H$.
We linearize both $T_{s}^{H}$ and $T_{i}^{H}$ into constituency parse strings and compute their similarity score $S$ as a weighted sum of ROUGE scores (Lin, 2004) between the parse strings:

$$
S\left(T_{s}^{H}, T_{i}^{H}\right) = a \cdot \text{ROUGE-1} + b \cdot \text{ROUGE-2} + c \cdot \text{ROUGE-L}. \tag{1}
$$

Retriever. We rank and get the $m$ parses that have the highest similarity scores with $T_{s}^{H}$, denoted as

![](images/1059d5574ee0f2d1efed5c26023ee93e02d48d506620078512d52509e4bee371.jpg)
Figure 3: AESOP Framework. With a source sentence as input, AESOP has i) a retrieval-based selection module that adaptively chooses a set of target syntactic parses as control signals, together with ii) an encoder-decoder architecture to generate fluent paraphrases. With ground-truth target syntactic parses from exemplars, AESOP leverages the syntactic information at different heights from exemplars to guide the generation.

$\{T_{s1}^{\prime H}, \dots, T_{sm}^{\prime H}\}$. Then, for each parse $T_{si}^{\prime H}$, we retrieve all possible target syntactic parses from the pairwise parse combinations in the training data. For each combination, we count how many times it occurs in the training data. For a given combination with occurrence frequency $\# (T_{si}^{\prime H}, T_t^H)$, we divide its frequency by the sum of the frequencies over all possible target syntactic parses for $T_{si}^{\prime H}$ and get a list of frequency ratios. We use this ratio distribution as probabilities to select $k / m$ target syntactic parses $T_t^H$ for each of the $m$ parses $T_{si}^{\prime H}$, as shown in Equation 2, which results in $k (= m \cdot k / m)$ target syntactic parses in total.

$$
T_{t}^{H} \sim P(T_{t}^{H} \mid T_{si}^{\prime H}) = \frac{\#(T_{si}^{\prime H}, T_{t}^{H})}{\sum_{j=1}^{N} \#(T_{si}^{\prime H}, T_{tj}^{H})}.
\tag{2}
$$

In our later experiments, we use the ranker in Equation 1 to retrieve top-ranked target syntactic parses and their corresponding paraphrases. By using this two-step strategy instead of ranking all syntactic parses by similarity alone, we aim to find diverse target syntactic parses that are suitable for the source sentence. We use the weighted sampling strategy rather than directly choosing the most frequent combinations so that compatible combinations that occur less often in a specific dataset still have a chance of being selected.

# 3.2 Architecture of AESOP

AESOP takes as inputs the source sentence $X$, its full syntactic parse $T_{s}$ and target syntactic parse(s) $Y$, and generates as outputs a paraphrase $Z$ of $X$ together with a duplication of the target parse $Y$. Specifically, given source sentences $X$, we tokenize them and obtain their constituency-based parse trees$^{3}$, denoted as $T_{s}$ (shown as source parse tree in Figure 3). Similar to previous works (Iyyer et al., 2018; Chen et al., 2019a; Kumar et al., 2020), we linearize the constituency parse tree into a sequence (shown as source full syntactic parse in Figure 3).

To utilize the encoder-decoder BART (Lewis et al., 2020) model for syntactically-controlled paraphrase generation, we propose an effective design that concatenates the source sentence, the source full syntactic parse, and the target syntactic parse as the input sequence for the encoder. The output sequence from the decoder is the target syntactic parse followed by the paraphrase. We will showcase the effectiveness of our model design in Section 4 and provide a visual interpretation showing that AESOP successfully disentangles the semantic and syntactic information in Section 5. During training, we get gold target syntactic parses directly from parallel-annotated paraphrases.
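Putting Section 3.1 together, Algorithm 1 with Equations 1 and 2 can be sketched as below. This is a simplified illustration, not the authors' implementation: the n-gram F1 stands in for a real ROUGE package (with ROUGE-L approximated by unigram F1), and the weights `a`, `b`, `c` as well as the `pair_freqs` mapping are placeholders.

```python
import random
from collections import Counter

def ngram_f1(ref, hyp, n):
    """Simplified ROUGE-n: n-gram overlap F1 between two token lists."""
    r = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    h = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
    overlap = sum((r & h).values())
    if overlap == 0:
        return 0.0
    p, rec = overlap / sum(h.values()), overlap / sum(r.values())
    return 2 * p * rec / (p + rec)

def similarity(src_parse, cand_parse, a=1.0, b=1.0, c=1.0):
    """Eq. 1: weighted ROUGE between linearized parse strings
    (ROUGE-L is approximated by unigram F1 here for brevity)."""
    s, t = src_parse.split(), cand_parse.split()
    return a * ngram_f1(s, t, 1) + b * ngram_f1(s, t, 2) + c * ngram_f1(s, t, 1)

def select_targets(src, pair_freqs, m=2, k=4, seed=0):
    """Algorithm 1: rank unique source parses by similarity to `src`, keep
    the top m, then sample k/m target parses per kept parse from the
    frequency distribution of its (source, target) combinations (Eq. 2).
    `pair_freqs` maps (source parse, target parse) -> training frequency."""
    rng = random.Random(seed)
    uniq = sorted({s for s, _ in pair_freqs})
    top_m = sorted(uniq, key=lambda p: similarity(src, p), reverse=True)[:m]
    targets = []
    for s in top_m:
        cands = {t: f for (s2, t), f in pair_freqs.items() if s2 == s}
        total = sum(cands.values())
        weights = [f / total for f in cands.values()]   # Eq. 2 ratios
        targets += rng.choices(list(cands), weights=weights, k=k // m)
    return targets
```

The weighted sampling (rather than always picking the most frequent target) is what keeps less common but compatible combinations in play.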
| Dataset | Model | BLEU↑ | ROUGE-1↑ | ROUGE-2↑ | ROUGE-L↑ | METEOR↑ | TED-R↓ | TED-E↓ |
|---|---|---|---|---|---|---|---|---|
| QQP-Pos | source-as-output | 17.2 | 51.9 | 26.3 | 52.9 | 31.1 | 16.2 | 16.7 |
| QQP-Pos | exemplar-as-output | 16.8 | 38.2 | 20.5 | 43.2 | 17.6 | 4.8 | 0.0 |
| QQP-Pos | CGEN (Chen et al., 2019a) | 34.9 | 62.6 | 42.7 | 65.4 | 37.4 | 6.7 | 6.0 |
| QQP-Pos | SGCP-F (Kumar et al., 2020) | 36.7 | 66.9 | 45.0 | 69.6 | 39.8 | 4.8 | 1.8 |
| QQP-Pos | ⊕ SGCP-R (Kumar et al., 2020) | 38.0 | 67.6 | 45.3 | 70.0 | 24.8 | 6.6 | 5.7 |
| QQP-Pos | AESOP-H2 | 36.8 | 67.1 | 43.8 | 69.0 | 42.2 | 8.0 | 8.6 |
| QQP-Pos | AESOP-H3 | 43.4 | 71.3 | 50.9 | 73.1 | 46.5 | 6.7 | 7.0 |
| QQP-Pos | AESOP-H4 | 47.3 | 73.3 | 54.1 | 75.1 | 49.7 | 5.6 | 5.6 |
| QQP-Pos | AESOP-F | 40.5 | 69.6 | 49.3 | 72.0 | 43.8 | 4.8 | 1.9 |
| ParaNMT-small | source-as-output | 18.8 | 50.6 | 23.2 | 47.7 | 28.8 | 12.0 | 13.1 |
| ParaNMT-small | exemplar-as-output | 3.3 | 24.4 | 7.5 | 29.1 | 5.9 | 6.0 | 0.0 |
| ParaNMT-small | CGEN (Chen et al., 2019a) | 13.6 | 44.8 | 21.0 | 48.3 | 24.8 | 6.7 | 3.3 |
| ParaNMT-small | SGCP-F (Kumar et al., 2020) | 15.3 | 46.6 | 21.8 | 49.7 | 25.9 | 6.1 | 1.4 |
| ParaNMT-small | ⊕ SGCP-R (Kumar et al., 2020) | 16.4 | 49.4 | 22.9 | 50.3 | 28.8 | 8.7 | 7.0 |
| ParaNMT-small | AESOP-H2 | 20.7 | 51.4 | 27.1 | 53.1 | 30.6 | 8.7 | 9.5 |
| ParaNMT-small | AESOP-H3 | 21.3 | 53.0 | 28.3 | 55.2 | 31.9 | 7.5 | 7.2 |
| ParaNMT-small | AESOP-H4 | 22.9 | 54.4 | 29.8 | 56.4 | 32.7 | 6.9 | 5.7 |
| ParaNMT-small | AESOP-F | 20.4 | 52.0 | 27.8 | 55.3 | 30.0 | 6.1 | 1.9 |
Table 1: Performance comparison with ground-truth syntactic control. With coarse syntactic control from a shallow pruning height, $\spadesuit$ AESOP starts to outperform the current state-of-the-art model $\oplus$ SGCP. $\spadesuit$ AESOP-H4 outperforms $\oplus$ SGCP across all semantic preservation (BLEU, ROUGE scores and METEOR) and syntactic conformation metrics (TED-R and TED-E). $\uparrow$ means higher is better, while $\downarrow$ means lower is better. With the full syntactic parse (-F), AESOP achieves its best controllability, which is comparable to the previous best performance. source-as-output and exemplar-as-output are for quality-check purposes and not for comparison.

In our setting, we train separate models using pruned trees of target parses at different heights $H$. During inference, the target syntactic parses come either from exemplar sentences, fixed templates or our adaptive selection module.

# 4 Paraphrase Generation with Syntactic Control

We train and evaluate AESOP on ParaNMT-small (Chen et al., 2019b) and QQP-Pos (Kumar et al., 2020). Our train/dev/test split follows previous work (Kumar et al., 2020). During our experiments, we aim to answer three research questions:

- Q1: Will AESOP conform with the syntactic control while preserving the semantics, given ground-truth target parses? (Section 4.1, Table 1)
- Q2: Can AESOP generate fluent paraphrases with the adaptive target parse selection module when ground-truth target parses are unavailable? (Section 4.2, Table 2)
- Q3: Does the adaptive selection module produce high-quality target parses? (Section 4.3, Table 3)

Baselines. For supervised models that utilize exemplar sentences to get target parses, we compare with CGEN (Chen et al., 2019a) and two versions of SGCP (Kumar et al., 2020): SGCP-R and SGCP-F. SGCP prunes constituency parse trees of exemplar sentences from height 3 up to 10.
During the evaluation, SGCP-R chooses the best paraphrase out of many, and SGCP-F uses the full parse tree. To the best of our knowledge, SGCP-R is the current state-of-the-art model under this setting. For models that utilize a fixed set of target syntactic parses, we compare with SCPN (Iyyer et al., 2018) that proposes 10 syntactic parses at height 2 to guide the generation. + +# 4.1 Ground-truth Syntactic Control + +To answer Q1, we evaluate AESOP on both datasets with ground-truth target syntactic parses from exemplar sentences. + +Experiment Setup. First, we get the constituency parse trees of exemplar sentences. Then, we remove all leaf nodes (i.e., tokens in the sentences) from the constituency parse trees to prevent any semantics propagating from exemplar sentences into generation. We further prune the parse trees of exemplars at different heights to get different levels of syntactic specifications. Technically, the deeper we prune the parse tree, the more fine-grained syntactic information the model can use. Practically, it is less likely to provide fine + +
| Dataset | Model | BLEU↑ | ROUGE-1↑ | ROUGE-2↑ | ROUGE-L↑ | METEOR↑ | TED-E@2↓ | Valid@100↑ | Votes↑ |
|---|---|---|---|---|---|---|---|---|---|
| QQP-Pos | ⊕ SGCP-R | 38.0 | 67.6 | 45.3 | 70.0 | 24.8 | 0.8 | 41.0 | 19.3 |
| QQP-Pos | SCPN | 14.9 | 45.9 | 20.9 | 48.1 | 25.4 | 0.7 | 32.0 | 15.3 |
| QQP-Pos | AESOP-static | 18.5 | 52.5 | 27.6 | 52.0 | 30.6 | 2.5 | 57.0 | 28.3 |
| QQP-Pos | AESOP | 24.6 | 56.2 | 31.5 | 57.6 | 32.8 | 1.1 | 61.0 | 37.0 |
| ParaNMT-small | ⊕ SGCP-R | 16.4 | 49.4 | 22.9 | 50.3 | 28.8 | 0.7 | 30.0 | 12.0 |
| ParaNMT-small | SCPN | 12.1 | 35.7 | 15.1 | 32.9 | 23.3 | 0.5 | 54.0 | 30.0 |
| ParaNMT-small | AESOP-static | 14.4 | 46.0 | 20.5 | 46.5 | 25.5 | 2.9 | 62.0 | 22.0 |
| ParaNMT-small | AESOP | 15.0 | 47.0 | 21.3 | 47.3 | 26.1 | 2.6 | 68.0 | 36.0 |
Table 2: Performance of AESOP without ground-truth target parses. Valid@100 is the validity check for the best paraphrases of the first 100 test instances, and Votes is the percentage of received votes for a paraphrase from one model to be the best among the 4 models. Human evaluation indicates that AESOP generates even better-quality paraphrases than the current best model $\oplus$ SGCP, which uses the human-annotated target syntactic parse from exemplars.

grained target syntactic parses. For example, it is easy to provide a target syntactic parse at height 2 containing a verb phrase and a noun phrase as (ROOT (S (NP) (VP) (.))), but it is hard to provide more fine-grained syntactic information, even for experts. In AESOP, we try to use syntactic information from exemplar sentences that is as shallow as possible. We train separate models by using target syntactic parses from pruning the constituency parse trees of paraphrases at heights 2, 3 and 4. Correspondingly, we denote them as AESOP(-H2/H3/H4). During evaluation, we only use the target syntactic parse from the exemplar sentences at the corresponding height.

Evaluation Metrics. We evaluate the quality of paraphrases with: 1) alignment-based metrics that examine semantic preservation, including BLEU (Papineni et al., 2002), ROUGE scores (Lin, 2004) and METEOR (Iyer et al., 2016) between the generated paraphrase and the gold paraphrase; 2) syntactic conformation metrics: Tree-Edit Distance (TED) scores (Zhang and Shasha, 1989) between the constituency parse trees of generated paraphrases versus exemplar sentences (TED-E) and parallel-annotated paraphrases (TED-R).

Quality Check. We use source sentences and exemplar sentences to check the quality of the datasets in Table 1. Using the source sentences as paraphrases will lead to high semantic-preservation scores, but they have syntactic structures distinct from the paraphrases, so the TED-R scores are poor.
On the other hand, exemplar sentences have semantics distinct from both the source sentences and the paraphrases, which leads to poor semantic-preservation metrics. From the TED-R scores, we can see that the tree-edit distance between the parse trees of exemplar sentences and paraphrases is low but not 0. This indicates that the quality of such human-annotated exemplar sentences is good yet imperfect.

Experiment Results. Table 1 shows the performance comparison. Unsurprisingly, the deeper we prune the target syntactic parse from exemplars, the more syntactic information AESOP gets and the better controllability it achieves. With the full target syntactic parse tree, AESOP achieves its best syntactic controllability, which is comparable to the previous best performance. On the other hand, AESOP outperforms SGCP-R in semantic-preservation metrics by only using coarse syntactic information from height 2 (AESOP-H2) for ParaNMT-small and height 3 (AESOP-H3) for QQP-Pos. With more syntactic information, AESOP-H4 outperforms the current state-of-the-art SGCP-R in both semantic preservation and syntactic conformation metrics. This showcases AESOP's great ability at syntactically-controlled paraphrase generation.

# 4.2 Adaptive Target Parse Selection

To answer Q2, we evaluate AESOP without annotated exemplars. By having SGCP-R in our experiments, we aim to evaluate whether AESOP can generate even better paraphrases compared to the current best model with human-annotated exemplars.

Experiment Setup. How to select suitable target syntactic parses to guide the generation is still an open problem in the paraphrase generation community. To fairly compare with SCPN, which proposes 10 syntactic templates at height 2, we also adopt AESOP trained at height 2 (shown as AESOP-H2
| Dataset | Model | top-1 | top-3 | top-5 | top-7 | top-10 |
|---|---|---|---|---|---|---|
| QQP-Pos | SCPN | 32.2 (±7.8) | 32.2 (±3.3) | 33.4 (±1.3) | 32.6 (±0.0) | 33.0 (±0.0) |
| QQP-Pos | AESOP-static | 58.6 (±4.5) | 58.7 (±1.8) | 57.5 (±2.1) | 57.9 (±0.9) | 58.0 (±0.0) |
| QQP-Pos | AESOP | 100.0 (±0.0) | 94.7 (±0.0) | 90.8 (±0.0) | 84.3 (±0.0) | 65.0 (±0.0) |
| ParaNMT-small | SCPN | 16.2 (±4.1) | 16.9 (±1.5) | 18.0 (±1.1) | 17.2 (±0.9) | 17.4 (±0.0) |
| ParaNMT-small | AESOP-static | 47.0 (±6.2) | 48.9 (±2.0) | 48.6 (±1.3) | 48.6 (±1.3) | 49.0 (±0.0) |
| ParaNMT-small | AESOP | 90.0 (±0.0) | 86.7 (±0.0) | 84.4 (±0.0) | 80.0 (±0.0) | 70.6 (±0.0) |
+ +Table 3: Human validity check of top-k selected target syntactic parses. All numbers are 10-round mean with standard deviation. In AESOP, we use the ranker in Equation 1 to sort and get top-k target parses, while others use random selection. High validity rate of paraphrases indicate the high quality of our retrieved target syntactic parses. The trend that higher-ranked syntactic parses have higher validity rates verifies the efficiency of our ranker. + +in Table 1). Unlike previous work, AESOP uses the adaptive selection module to decide a set of target syntactic parses automatically. For a fair comparison, we also feed the same 10 syntactic target parses from SCPN to AESOP, denoted as AESOP-static. It is hard to evaluate retrieved target syntactic parses because paraphrases are intrinsically diverse, so that many target syntactic parses could be reasonable. Therefore, we use the quality of generated paraphrases, which is our end goal, to reflect the quality of retrieved target syntactic parses. For evaluation, we use automatic metrics together with extensive human evaluations. + +Automatic Metrics. First, we generate 10 paraphrases from each model. To establish a strong baseline, we chose the best paraphrase with the highest BLEU scores with source sentences across all models. As shown in Table 2, the improvement from AESOP-static to AESOP indicates the effectiveness of our adaptive selection strategy. SCPN performs better at TED-E@2 metrics on both datasets. After qualitative checks, we share the same finding with previous works (Kumar et al., 2020; Chen et al., 2019a) that SCPN tends to strictly adhere to syntactic parses at the cost of semantics. On the other hand, AESOP leans towards generating fluent paraphrases and can make up for the case when the target syntactic parse is less reasonable – AESOP achieves a better syntactic conformation when the syntactic control signal is more accurate, indicated by the decreases of TED-E@2 scores in Table 2. 
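The best-of-10 selection used for the automatic metrics above is straightforward to replicate. The sketch below is illustrative only: a toy sentence-BLEU (uniform n-gram precisions with a brevity penalty and crude smoothing) stands in for a full implementation such as sacrebleu.

```python
import math
from collections import Counter

def simple_bleu(cand, ref, max_n=4):
    """Toy sentence-BLEU: geometric mean of n-gram precisions (n=1..4)
    times a brevity penalty; unmatched orders get a large log penalty."""
    c, r = cand.split(), ref.split()
    logs = []
    for n in range(1, max_n + 1):
        cn = Counter(tuple(c[i:i + n]) for i in range(len(c) - n + 1))
        rn = Counter(tuple(r[i:i + n]) for i in range(len(r) - n + 1))
        match = sum((cn & rn).values())          # clipped n-gram matches
        total = max(sum(cn.values()), 1)
        logs.append(math.log(match / total) if match else -9.0)
    bp = min(1.0, math.exp(1 - len(r) / max(len(c), 1)))
    return bp * math.exp(sum(logs) / max_n)

def pick_best(candidates, source):
    """Choose the candidate paraphrase with the highest BLEU w.r.t. the source."""
    return max(candidates, key=lambda z: simple_bleu(z, source))
```

Given 10 generated candidates per model, `pick_best` selects the one scored against the source sentence, mirroring the baseline-selection step described above.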
Human Evaluation. We validate the chosen paraphrases for the first 100 instances in the test sets on Amazon MTurk, and report the results as Valid@100 in Table 2. Besides, we show workers 4 paraphrases from all models and ask them to vote for which one is the best. Then we report the percentage of votes that each model got as Votes. As a result, AESOP generates more valid paraphrases than all baselines and gets the most votes, even more than SGCP-R, which utilizes human-annotated exemplars. This finding demonstrates the effectiveness of AESOP and points out the importance of studying automatic target parse selection in paraphrase generation. $^{8}$

# 4.3 Quality of Retrieved Syntactic Parses

To answer Q3, we evaluate the quality of the retrieved top-$k$ target syntactic parses by checking the validity of their corresponding paraphrases. We generate 10 paraphrases for each of the first 50 test instances (500 in total) using SCPN, AESOP-static, and AESOP and ask workers to validate them. After annotation, we use the similarity ranker in Equation 1 to rank and get the top-$k$ target syntactic parses and their corresponding paraphrases for AESOP. For the other baselines, as they use a fixed set of target syntactic parses and do not have any ranking mechanism, we apply a random permutation to rank target parses and get the top-$k$ paraphrases. We run the experiments for 10 rounds and report the validity rate of paraphrases for the top-$k$ target syntactic parses in Table 3. Compared to pre-designed syntactic parses, the higher validity rates of paraphrases from AESOP indicate the better quality of our retrieved target syntactic parses. The trend that higher-ranked syntactic parses have higher validity rates also verifies the efficiency of our ranker.

# 5 Model Analysis and Interpretation

Ablation Studies. We take out each part of the sequence in both the encoder and decoder and conduct
| Dataset | Model | BLEU↑ | ROUGE-1↑ | ROUGE-2↑ | ROUGE-L↑ | METEOR↑ | TED-R↓ | TED-E↓ |
|---|---|---|---|---|---|---|---|---|
| QQP-Pos | 1 AESOP | 47.3 | 73.3 | 54.1 | 75.1 | 49.7 | 5.6 | 5.6 |
| QQP-Pos | 2 w/o tp in dec | 39.9 (+7.4) | 68.4 (+4.9) | 49.0 (+5.1) | 70.5 (+4.6) | 44.5 (+5.2) | 8.1 (+2.5) | 8.1 (+2.5) |
| QQP-Pos | 3 w/o fp in enc | 42.3 (+5.0) | 71.6 (+1.7) | 50.9 (+3.2) | 73.4 (+1.7) | 45.3 (+4.4) | 6.4 (+0.9) | 6.2 (+0.6) |
| QQP-Pos | 4 w/o fp, tp in enc, tp in dec | 23.9 (+23.4) | 56.2 (+17.1) | 32.2 (+21.9) | 57.6 (+17.5) | 34.0 (+15.7) | 12.9 (+7.3) | 13.4 (+7.8) |
| QQP-Pos | 5 w/o fp in enc, tp in dec | 38.2 (+9.1) | 67.7 (+5.6) | 47.5 (+6.6) | 70.0 (+5.1) | 42.4 (+7.3) | 8.0 (+2.4) | 7.9 (+2.3) |
| ParaNMT-small | 1 AESOP | 22.9 | 54.4 | 29.8 | 56.4 | 32.7 | 6.9 | 5.7 |
| ParaNMT-small | 2 w/o tp in dec | 19.2 (+3.7) | 51.3 (+3.1) | 27.3 (+2.5) | 53.5 (+2.9) | 30.8 (+1.9) | 9.7 (+1.8) | 8.8 (+2.9) |
| ParaNMT-small | 3 w/o fp in enc | 24.0 (-1.1) | 54.8 (-0.4) | 30.5 (-0.7) | 57.1 (-0.7) | 33.4 (-0.7) | 6.8 (-0.1) | 5.7 (0.0) |
| ParaNMT-small | 4 w/o fp, tp in enc, tp in dec | 16.7 (+6.2) | 49.8 (+4.6) | 25.2 (+4.6) | 50.4 (+6.0) | 29.1 (+3.6) | 11.7 (+4.8) | 12.8 (+7.1) |
| ParaNMT-small | 5 w/o fp in enc, tp in dec | 20.0 (+2.9) | 53.7 (+0.7) | 29.3 (+0.5) | 55.7 (+0.7) | 31.6 (+1.1) | 8.7 (+1.8) | 7.7 (+2.0) |
Table 4: Ablation studies that justify our model design. + shows how much better AESOP is compared to that design, while - shows how much worse (dec, enc: decoder, encoder; tp: target parse; fp: source full parse).

several ablation studies on AESOP-H4 with exemplars. We show how each part of the sequences influences AESOP's performance in Table 4. The takeaways from our ablation studies are: 1) AESOP's performance plummets without any syntactic specifications (rows 1 & 4). 2) Taking out the target parse $(tp)$ in the output sequence leads to worse performance in both semantic preservation and syntactic controllability (rows 1 & 2, rows 3 & 4). We will visually interpret the benefit of this design later in this section. 3) Taking out each part of the input sequence for the encoder leads to a significant performance drop for AESOP on the QQP-Pos dataset for both criteria (i.e., semantic preservation and syntactic controllability). The trend is the same for the ParaNMT-small dataset, except that taking out only the full parse $(fp)$ leads to around $1\%$ improvement on semantic preservation metrics, while the syntactic controllability stays almost the same. Considering the much larger performance drops on the other criteria, we kept the current design of AESOP.

Interpretation. In Figure 4, we visualize cross attentions between the encoder and decoder for two designs, i.e., AESOP with (right) and without (left) the target syntactic parse in the decoder, on the test set of ParaNMT-small. Technically, we search for the final output with $\text{beam} = 4$ and take the average of the cross attention scores of the 12 attention heads from the last layer of the decoder. Finally, we add the attention of all tokens within each component ($ss$, $fp$ and $tp$). To manifest the difference, we denote the highest attention score as 100 and report all cross attention scores relative to it.
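The aggregation just described (head-averaging, per-component summing, rescaling so the maximum is 100) can be written down directly. The sketch below is a schematic re-implementation of the visualization recipe, not the authors' code; `spans` marks hypothetical token ranges of the $ss$, $fp$ and $tp$ components in the input sequence.

```python
def relative_component_attention(cross_attn, spans):
    """cross_attn: nested lists of shape [heads][tgt_len][src_len].
    Average over heads, sum the attention mass that falls on each input
    component's token span, then rescale so the largest score is 100."""
    heads = len(cross_attn)
    tgt_len, src_len = len(cross_attn[0]), len(cross_attn[0][0])
    # average over attention heads
    avg = [[sum(cross_attn[h][t][s] for h in range(heads)) / heads
            for s in range(src_len)] for t in range(tgt_len)]
    # total attention mass per component, summed over all target tokens
    totals = {name: sum(avg[t][s] for t in range(tgt_len)
                        for s in range(start, end))
              for name, (start, end) in spans.items()}
    top = max(totals.values())
    return {name: 100.0 * v / top for name, v in totals.items()}
```

In practice the `cross_attn` tensor would come from the last decoder layer of the model; here plain lists keep the sketch self-contained.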
Compared to the design without the target syntactic parse in the decoder, the cross attention between paraphrases and source sentences stays the highest in AESOP. However, the ratio of the cross attention scores of (paraphrases, target parses) and (paraphrases, full source parses) decreases. Such decreases indicate that having target parses in the decoder helps to disentangle semantic and syntactic information from the input sequence. Instead, AESOP learns the syntactic information from target syntactic parses through self-attention in the decoder. As a result, it leads to a performance boost in Table 4. At the same time, target parses influence paraphrase generation directly during decoding through the decoder's self-attention, which leads to better controllability of AESOP. Take the example in Figure 4: without the target parse in the decoder, the model outputs "a large black dog sits in the corner beside him." as the paraphrase of "by his side crouched a huge black wolfish dog .". After adding the target parse in the decoder, the model no longer generates the prepositional phrase "in the corner" and outputs "a large black dog sits beside him.", which matches better with the input target parse.

# 6 Improve Robustness

Recent works show that powerful LMs (e.g., BERT (Devlin et al., 2019)) capture superficial lexical features (McCoy et al., 2019) and are vulnerable to simple perturbations (Jin et al., 2020). Motivated by this, we first test whether BERT is robust to syntactic perturbations introduced by paraphrasing.

We fine-tune BERT models on two GLUE (Wang et al., 2018) tasks (SST-2 and RTE). Then, we generate 10 paraphrases using AESOP-H2 for each test instance in the dev set and choose the top 5 to get 2 larger dev sets. We run the trained BERT models on the new dev sets again.

Human Annotation.
We collect the paraphrases where models fail but succeed on their original

![](images/b549f2ccefba15e95d6b14c24480aaabfd3b21ead498226329b2badd9f1d0f82.jpg)

![](images/b587f1388290ca2a4cc574e646463b167dfec95c255bf520df67be42141a713b.jpg)

![](images/dbcff30306139d41a5e4e042968d563dacfeb469be16576e7975fca11e0e6749.jpg)
Figure 4: Cross Attention without and with $tp$ (target parse) in the decoder. Line thickness is proportional to relative cross attention scores. By duplicating $tp$ in the decoder, relative cross attention scores for both (paraphrases, full source parse) and (paraphrases, target parse) decrease. It indicates that duplicating target syntactic parses in the decoder lets AESOP disentangle the semantic and syntactic information from the input sequence.
| Dataset | Model | Orig. Before | Orig. After | Orig. ParaGAP | Coll. Before | Coll. After | Coll. ParaGAP | Comb. Before | Comb. After | Comb. ParaGAP |
|---|---|---|---|---|---|---|---|---|---|---|
| SST-2 | SCPN (Iyyer et al., 2018) | 91.9 | 89.7 | -2.2 | 18.6 | 46.5 | +27.9 | 68.0 | 76.1 | +8.1 |
| SST-2 | SynPG (Huang and Chang, 2021) | – | 85.3 | -6.6 | – | 47.0 | +28.7 | – | 73.3 | +5.3 |
| SST-2 | AESOP-tp | – | 88.9 | -3.0 | – | 49.5 | +30.9 | – | 76.4 | +8.4 |
| SST-2 | AESOP | – | 91.1 | -0.8 | – | 48.5 | +29.9 | – | 77.6 | +9.6 |
| RTE | SCPN (Iyyer et al., 2018) | 62.8 | 68.6 | +5.8 | 46.9 | 49.6 | +2.7 | 56.0 | 58.1 | +2.1 |
| RTE | SynPG (Huang and Chang, 2021) | – | 61.7 | -1.1 | – | 49.0 | +2.1 | – | 56.6 | +0.6 |
| RTE | AESOP-tp | – | 60.3 | -2.5 | – | 55.7 | +8.8 | – | 57.8 | +1.8 |
| RTE | AESOP | – | 62.5 | -0.3 | – | 58.4 | +11.5 | – | 61.0 | +5.0 |
sentence as adversarial examples. We then put all these examples on MTurk and ask workers to re-associate labels. $^{10}$ For SST-2, we ask workers to assign sentiment labels as positive, negative or undecided (mixed sentiments). For RTE, one test instance has sentence1 and sentence2 with a label indicating whether sentence1 entails sentence2. We generate paraphrases for sentence2 and ask workers to make a binary decision on whether sentence1 entails the generated paraphrases. We show the statistics of the collected adversarial sets and the original dev sets in Table 6. Researchers can test their models' robustness to syntactic perturbations on our collected datasets.

Augmentation. We augment each training instance with the 5 best paraphrases from AESOP-H2. For SynPG and SCPN, since the pre-designed templates of SynPG are a subset of SCPN's, we generate 5 paraphrases using the templates selected in SynPG. Then, we retrain BERT models with the augmented training data from each model. Then, we re

Table 5: ParaGAP is the accuracy difference between BERT models after and before using paraphrases to augment the training data. Among the 4 models, AESOP improves BERT's robustness to syntactic perturbations the most.
| Dataset | Original | Collected | Combined |
|---|---|---|---|
| SST-2 | 872 | 404 | 1276 |
| RTE | 277 | 341 | 618 |
Table 6: Dataset statistics. Combined is the combination of the original dev set and the collected data.

train BERT models after augmentation and get their test accuracies. We define ParaGAP as the accuracy difference between after- and before-augmentation models for each paraphrase generation model. ParaGAP indicates how effective the augmentation is at improving the model's robustness to syntactic perturbations.

Experiment Result. As shown in Table 5, BERT models perform poorly on our collected datasets before augmentation, which indicates that our collected adversarial datasets are challenging and that BERT is vulnerable to syntactic perturbations. After using the 4 different paraphrasing models to augment the training data, the models' robustness to such perturbations improves in all cases. Among all models, AESOP yields the best ParaGAP on the combined dataset of original dev sets and collected datasets, which shows that using AESOP improves the classification model's robustness to syntactic perturbations most effectively. $^{11}$

# 7 Related Work

Recent advances have been using neural models for syntactically controlled paraphrase generation. From the modeling perspective, there are roughly two categories: unsupervised and supervised methods. Unsupervised models do not use parallel paraphrases during training. Wieting and Gimpel (2018); Wieting et al. (2017) use back-translation to generate paraphrases. Huang and Chang (2021) propose a transformer-based model SynPG for paraphrase generation. AESOP is a supervised paraphrase generation model, which means that we require parallel paraphrases during training. Previous supervised paraphrase models are mostly RNN-based models, including SCPN (Iyyer et al., 2018), CGEN (Chen et al., 2019a) and SGCP (Kumar et al., 2020). Such models suffer from generating long sentences and do not utilize the power of recent pretrained language models.
Goyal and Durrett (2020a) is a concurrent work with ours that also builds on BART to generate paraphrases but has a different model design. For syntactic control, Goyal and Durrett (2020b) use target syntactic parses to reorder source sentences to guide the generation, while other works, including AESOP, directly use target syntactic parses to guide the generation. CGEN (Chen et al., 2019a) and SGCP (Kumar et al., 2020) use target syntactic parses from crowd-sourced exemplars, SCPN (Iyyer et al., 2018) and SynPG (Huang and Chang, 2021) use pre-designed templates, while AESOP retrieves target syntactic parses automatically.

# 8 Conclusion and Future Works

In this work, we propose AESOP for paraphrase generation with adaptive syntactic control. One interesting and surprising finding of this paper is that using automatically retrieved parses to control paraphrase generation can result in better quality than the current best model using human-annotated exemplars. This finding demonstrates the benefits of adaptive target parse selection for controlled paraphrase generation: it not only generates diverse paraphrases but also higher-quality ones. This suggests that future work on syntactically controlled paraphrase generation should pay more attention to target parse selection, and we hope AESOP can serve as a strong baseline for this direction. In our work, we use generated paraphrases to reflect the quality of automatically-selected target parses; future works can design specific metrics to evaluate the quality of retrieved syntactic parses. In addition, we find that having the control signal in the decoder can lead to better controllability of AESOP. Future works can test the generalizability of this modeling strategy in other controlled generation tasks. In addition, we show that AESOP can effectively attack classification models and contribute two datasets to test models' robustness to syntactic perturbations.
We find that using AESOP to augment training data can effectively improve classification models' robustness to syntactic perturbations. + +# Acknowledgments + +Many thanks to I-Hung Hsu for his constructive suggestion and fruitful discussion for AESOP. We thank Kuan-Hao Huang, Sarik Ghazarian, Yu Hou and anonymous reviewers for their great feedback to improve our work. + +# Ethical Consideration + +Our proposed model AESOP utilizes a pretrained language model to generate paraphrases. Trained on massive online texts, it is well-known that such pretrained language models could capture the bias reflecting the training data. Therefore, AESOP could potentially generate offensive or biased content. We suggest interested parties carefully check the generated content before deploying AESOP in any real-world applications. Note that AESOP might be used for malicious purposes because it does not have a filtering mechanism that checks the toxicity, bias, or offensiveness of source sentences from the input. Therefore, AESOP can generate paraphrases for harmful content that may offend certain groups or individuals. + +Our collected datasets are based on the development sets of two public classification tasks on GLUE, including SST-2 for sentiment analysis and RTE for textual entailment. These do not contain any explicit detail that leaks information about a user's name, health, racial or ethnic origin, religious or philosophical affiliation or beliefs. + +# References + +Mingda Chen, Qingming Tang, Sam Wiseman, and Kevin Gimpel. 2019a. Controllable paraphrase generation with a syntactic exemplar. Association for Computational Linguistics. +Mingda Chen, Qingming Tang, Sam Wiseman, and Kevin Gimpel. 2019b. A multi-task approach for disentangling syntax and semantics in sentence representations. Association for Computational Linguistics. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. 
BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Association for Computational Linguistics. +Silin Gao, Yichi Zhang, Zhijian Ou, and Zhou Yu. 2020. Paraphrase augmented task-oriented dialog generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics. +Tanya Goyal and Greg Durrett. 2020a. Neural syntactic preordering for controlled paraphrase generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics. +Tanya Goyal and Greg Durrett. 2020b. Neural syntactic preordering for controlled paraphrase generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics. +James Y. Huang, Kuan-Hao Huang, and Kai-Wei Chang. 2021. Disentangling semantics and syntax in sentence embeddings with pre-trained language models. In *NAACL (short)*. +Kuan-Hao Huang and Kai-Wei Chang. 2021. Generating syntactically controlled paraphrases without using annotated parallel pairs. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume. Association for Computational Linguistics. +Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, and Luke Zettlemoyer. 2016. Summarizing source code using a neural attention model. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics. +Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. Adversarial example generation with syntactically controlled paraphrase networks. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers). Association for Computational Linguistics. + +Zhengbao Jiang, F. F. Xu, J. Araki, and Graham Neubig. 2019. How can we know what language models know? Transactions of the Association for Computational Linguistics, 8:423-438. +Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2020. Is bert really robust? a strong baseline for natural language attack on text classification and entailment. In AAAI. +A. Kumar, Kabir Ahuja, Raghuram Vadapalli, and P. Talukdar. 2020. Syntax-guided controlled generation of paraphrases. Transactions of the Association for Computational Linguistics, 8:330-345. +Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics. +Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out. Association for Computational Linguistics. +Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations. Association for Computational Linguistics. +Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics. +M. McHugh. 2012. Interrater reliability: the kappa statistic. 
Biochemia Medica, 22:276 - 282. +Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics. +Yufei Tian, Arvind krishna Sridhar, and Nanyun Peng. 2021. Hypogen: Hyperbole generation with commonsense and counterfactual knowledge. In Findings of the Association for Computational Linguistics: EMNLP. +Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. + +GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP. Association for Computational Linguistics. +John Wieting and Kevin Gimpel. 2018. ParaNMT-50M: Pushing the limits of paraphrastic sentence embeddings with millions of machine translations. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics. +John Wieting, Jonathan Mallinson, and Kevin Gimpel. 2017. Learning paraphrastic sentence embeddings from back-translated bitext. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics. +Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. Association for Computational Linguistics. 
+Xuewen Yang, Yingru Liu, Dongliang Xie, Xin Wang, and Niranjan Balasubramanian. 2019. Latent part-of-speech sequences for neural machine translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Association for Computational Linguistics. +Xiaojing Yu and Anxiao Jiang. 2021. Expanding, retrieving and infilling: Diversifying cross-domain question generation with flexible templates. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume. Association for Computational Linguistics. +K. Zhang and D. Shasha. 1989. Simple fast algorithms for the editing distance between trees and related problems. SIAM J. Comput., 18:1245-1262. + +# A Appendix + +# A.1 Implementation Details + +Parameters. We use a learning rate of $3 \times 10^{-5}$ to train AESOP. We use 6 layers of encoder and 6 layers of decoder with a model dimension of 768 and 12 heads. We set the max input sequence length to 128 and the max output sequence length to 62. We train 25 epochs for each model. It takes about one day to finish training for ParaNMT-small and about half a day for QQP-Pos on one NVIDIA GeForce RTX 2080. + +Optimization. We use Adam $(\beta_{1} = 0.9, \beta_{2} = 0.999)$ with a linear learning rate decay schedule for optimization. All experiments are done using the Huggingface library (Wolf et al., 2020). $^{12}$ + +# A.2 Diverse Syntax with Deeper Pruning + +Table 7 supplements Table 2. AESOP-H2 yields the best performance in terms of the semantic preservation metrics. We share the same finding as in Section 4.1 that syntactic controllability gets better when we use deeper heights of the syntactic parse trees.
However, the semantic preservation metrics get worse with more fine-grained syntactic control. We hypothesize that this is because deeper-level control signals can be misleading, and such signals restrict the models to generating paraphrases that conform to the provided misleading syntax, which impairs the ability of the pretrained language model to generate fluent text. + +# A.3 Validity Check on Paraphrases + +Section A.3.1 gives more details of the human validity check reported in Table 2, and Section A.3.2 gives more details of the human evaluation reported in Table 3. + +# A.3.1 Validity@100 and Votes + +We choose the best paraphrase among 10 generated paraphrases from SCPN, AESOP-static and AESOP for the first 100 test instances in both datasets. For SGCP, we take its output paraphrases that use the exemplar sentences. Then, we perform a human validity check of these 400 paraphrases on the Amazon MTurk platform. For each source sentence, we provide all 4 paraphrases from these four models to three workers. In our instructions, we ask them to annotate three levels of validity: invalid paraphrase, imperfect paraphrase that does not lose key information, and perfect paraphrase. We binarize the workers' labels, counting both imperfect and perfect paraphrases as valid and everything else as invalid. Then, we take the majority vote among the three workers as the final label. We calculate the ratio of valid instances over 100 and report it as Validity@100 in Table 2. As a supplement, Table 8 shows the breakdown of the three-level validity annotation. In addition, we ask workers to vote for the best paraphrase among the four, and report the ratio of the votes each model gets over all 300 votes as Votes in Table 2 to reduce the influence of personal preference. We use Fleiss's kappa (McHugh, 2012) to measure Inter-Annotator Agreement (IAA). The IAA for Validity@100 is 0.63, which indicates substantial agreement among the workers.
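The aggregation described above (binarize the three-level labels, take the majority vote among three workers, then report the valid ratio) can be sketched as follows; the label names and toy annotations are illustrative placeholders, not the actual study data:

```python
from collections import Counter

def validity_at_k(annotations):
    """annotations: one entry per instance, each a list of three worker
    labels drawn from {"invalid", "imperfect", "perfect"}."""
    valid = 0
    for workers in annotations:
        # Binarize: both "imperfect" and "perfect" count as valid.
        votes = ["valid" if lab in ("imperfect", "perfect") else "invalid"
                 for lab in workers]
        # Majority vote among the three workers gives the final label.
        final, _ = Counter(votes).most_common(1)[0]
        valid += final == "valid"
    return valid / len(annotations)

# Toy annotations for four instances (NOT the actual study data):
labels = [
    ["perfect", "imperfect", "invalid"],    # majority: valid
    ["invalid", "invalid", "perfect"],      # majority: invalid
    ["imperfect", "imperfect", "perfect"],  # majority: valid
    ["invalid", "invalid", "invalid"],      # majority: invalid
]
print(validity_at_k(labels))  # 0.5
```

With three workers and two binarized classes a tie is impossible, so the majority vote is always well defined.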
+ +MTurk Setup Details. We require that workers be located in the US, have completed no fewer than 5000 HITs, and have an approval rate of at least $98\%$ . One HIT contains 10 instances, and three respondents (workers) work on each HIT. For payment, we pay workers \$0.8 per HIT with a potential bonus of \$1 if they participate in more than 5 HITs published by us. + +# A.3.2 Validity@500 + +The annotators of the human evaluation in Section 4.3 are three graduate students from our institute. None of them is involved in this project. Two of them work on the validity checks for ParaNMT-small and QQP-Pos, and one student worked on both. We check their understanding of paraphrases before the study and instruct them to only label a paraphrase as valid when the paraphrase is natural, fluent, and preserves the semantics of the source sentence. To understand the Inter-Annotator Agreement (IAA), we randomly selected 50 samples of (source sentence, paraphrase) pairs and asked them to annotate independently whether they are valid paraphrases. After the annotation, we count it as an agreement if they agree on the same label (either valid or invalid). The average IAA between the three of them is 0.9, which indicates good agreement. Then, we have these three students annotate all instances sampled in Table 2. After annotation, we count a paraphrase as valid only if both of its two annotators think it is valid.
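The two aggregation rules just described (pairwise agreement for IAA, and an AND-combination of the two annotators for the final validity label) can be sketched as below; the binary labels are toy placeholders, not the real annotations:

```python
def pairwise_agreement(a, b):
    """Fraction of instances on which two annotators give the same label."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def final_validity(a, b):
    """A paraphrase counts as valid only if BOTH annotators label it valid."""
    return [x and y for x, y in zip(a, b)]

# Toy binary labels (True = valid paraphrase); NOT the actual annotations.
ann1 = [True, True, False, True, False]
ann2 = [True, False, False, True, True]
print(pairwise_agreement(ann1, ann2))   # 0.6
print(sum(final_validity(ann1, ann2)))  # 2 paraphrases kept as valid
```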
| Dataset | Model | BLEU | ROUGE-1 | ROUGE-2 | ROUGE-L | METEOR | TED-E |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ParaNMT-small | AESOP-H2 | 15.0 | 47.0 | 21.3 | 47.3 | 26.1 | 2.6 |
| | AESOP-H3 | 12.1 | 41.5 | 16.7 | 42.6 | 22.8 | 2.3 |
| | AESOP-H4 | 8.4 | 33.2 | 10.9 | 35.3 | 18.3 | 1.5 |
| QQP-Pos | AESOP-H2 | 24.6 | 56.2 | 31.5 | 57.6 | 32.8 | 0.7 |
| | AESOP-H3 | 22.5 | 54.8 | 29.7 | 56.1 | 31.5 | 0.8 |
| | AESOP-H4 | 19.9 | 51.4 | 25.9 | 52.0 | 30.6 | 1.1 |
+ +Table 7: Supplementary to Table 2. When we use deeper levels of the syntactic parse trees, the syntactic controllability of AESOP gets better. However, the semantic preservation metrics get worse, because deeper control signals can be misleading, yet they restrict the models to generating paraphrases that conform to the control signal. +
| Dataset | Model | Invalid | Imperfect | Perfect |
| --- | --- | --- | --- | --- |
| QQP-Pos | SGCP | 59 | 19 | 22 |
| | SCPN | 68 | 12 | 20 |
| | AESOP-static | 43 | 23 | 34 |
| | AESOP | 39 | 24 | 37 |
| ParaNMT-small | SGCP | 70 | 21 | 9 |
| | SCPN | 46 | 24 | 30 |
| | AESOP-static | 38 | 20 | 42 |
| | AESOP | 32 | 27 | 41 |
+ +Table 8: Three-level validity annotation breakdown for Validity@100. + +# A.4 Case Study with Invalid Target Syntax + +Strict conformance to an inappropriate target syntactic parse sometimes leads to lost semantics and abrupt termination of sentences, which hurts the goal of generating fluent and natural paraphrases, as indicated in Section 4.2. For example, given the input sentence *i had a dream yesterday and it was about you* and a target syntactic parse with height 2 (ROOT(S(ADVP)(NP)(VP)(.))), SCPN generates *maybe it was about you .*, which has the same syntactic parse as the target, while AESOP generates *you were in my dream last night .*, whose syntactic parse at height 2 is (ROOT(S(NP)(VP)(.))). + +# A.5 Qualitative Comparison + +We provide a qualitative comparison between AESOP and other competitive paraphrase generation models, under both settings with or without exemplar sentences, in Table 9. We show that with ground-truth syntactic control (Setting I), AESOP can generate paraphrases that are closer to the ground-truth paraphrases. Without ground truth, AESOP can generate diverse paraphrases that are more natural and better preserve the semantics than SCPN. + +# A.6 Adversarial Set Collection + +We contribute two datasets constructed from AESOP in Section 6 by crowd-sourcing. We collect all adversarial examples that successfully attacked the models, as shown in the all column of Table 5, and put them on Amazon MTurk to annotate whether the paraphrases are valid. We require that workers be located in the US, have completed no fewer than 5000 HITs, and have an approval rate of at least $98\%$ . One HIT contains 12 instances and has 3 respondents (workers) working on it. For payment, we pay workers $\$0.4$ per HIT as a qualification test. After selecting qualified workers, we pay them $\$1$ per HIT with another potential bonus of $\$1$ if they participate in more than 5 HITs published by us.
On average, experienced workers spent around 10 minutes completing one HIT, which means our payment is above the federal minimum wage in the US. + +Instruction and Annotation. As sentiment analysis on SST-2 is intuitive, we list examples as instructions to guide the annotation. We count an instance as an agreement if all three workers give it the same label (i.e., positive, negative or undecided), and we calculate IAA as the ratio of agreements over all instances for qualified workers. The average IAA of the three workers over all instances is 0.8, which indicates good agreement. During dataset collection, we use the majority vote to decide the final label of each instance. For textual entailment on the RTE dataset, we refer to the original guideline of RTE-4 $^{13}$ to explain the textual entailment task with examples. The IAA for the RTE annotation is 0.71. + +# A.7 AESOP Helps to Improve the Decision Boundary + +We conduct a study on how augmenting the training data influences models' decision boundaries. More specifically, we test BERT models before and after augmentation with AESOP on the combination of the original gold test set and our collected adversarial datasets on two downstream tasks. For
| Model | Exemplar/Selected Target Parses | Generated Paraphrases |
| --- | --- | --- |
| **QQP-Pos** | source sentence: *what is the best way to get manchester united tickets?* | |
| SGCP | (ROOT (FRAG (NP (NP (NNS )) (NP (DT ) (NN ))) (.))) | whats the way ? |
| AESOP-H4 | (exemplar parse shared with SGCP) | how can i get free manchester united tickets ? |
| SCPN | (ROOT (S (NP ) (VP ) (.)) | that 's the best way to get manchester united tickets ? |
| SCPN | (ROOT (FRAG (SBAR ) (.)) | what 's the best way to get manchester ? |
| AESOP | (ROOT (SBARQ (WHADVP ) (SQ ) (.))) | how can i get free manchester united tickets ? |
| AESOP | (ROOT (SQ (VBZ ) (NP ) (VP ) (.))) | is there any way to get free manchester united tickets ? |
| **ParaNMT-small** | source sentence: *by his side crouched a huge black wolfish dog .* | |
| SGCP | (ROOT (S (NP (DT ) (JJ ) (JJ ) (NN )) (VP (VBZ ) (PP (IN ) (NP )) (.))) | his side waving a huge black dog . |
| AESOP-H4 | (exemplar parse shared with SGCP) | a large black dog sits beside him . |
| SCPN | (ROOT (S (NP ) (VP ) (.)) | his side was a huge black dog . |
| SCPN | (ROOT (NP (NP ) (.)) | a huge black dog on his side . |
| AESOP | (ROOT (S (S ) (NP ) (VP ) (.))) | there was a big black wolf lying next to him . |
| AESOP | (ROOT (NP (NP ) (.))) | a large , black , wolf like dog lay beside him . |
+ +Table 9: A qualitative comparison of paraphrases generated by AESOP with or without exemplar sentences. SGCP and AESOP-H4 use target syntactic parses from exemplar sentences to guide the generation. SCPN uses fixed target syntactic templates, while AESOP retrieves target syntactic parses automatically. + +![](images/e079d3956d3dcd8c6cde1a627ca1aed61b797bd7713deb4d34aeaa3a4805237d.jpg) +(a) SST before augmentation + +![](images/f254f2c1430135b7baa39060833d2e6372566937a417534a87336f409544c297.jpg) +(b) SST after augmentation + +![](images/a2ddd160b95e2eb6ba512c6cd2849a6041162930c7bb64b32ecff1129c0abbbb.jpg) +(c) RTE before augmentation + +![](images/b0151e3578f73075cd8d6c07836ae83356eb3ccd761b8b2ee3bd6c4327fdd1e8.jpg) +(d) RTE after augmentation +Figure 5: AESOP helps to improve the model decision boundary. For visualization, we use TSNE to reduce the dimension of the [CLS] token from the last layer of the BERT model on the combination of the collected data and the dev sets of SST-2 and RTE. + +visualization, we use TSNE to reduce the dimension of the [CLS] token from the last layer of the BERT model. Figure 5 shows that AESOP helps BERT models learn a clearer decision boundary, which is also indicated by Table 5 in the main content.
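The visualization step can be sketched with scikit-learn as below; the embeddings here are random stand-ins for the [CLS] vectors from BERT's last layer, so everything except the TSNE call itself is an assumption:

```python
import numpy as np
from sklearn.manifold import TSNE

# Stand-in for [CLS] vectors from BERT's last layer (hidden size 768);
# in the paper these come from the fine-tuned classifier.
rng = np.random.default_rng(0)
cls_embeddings = rng.normal(size=(60, 768))
class_labels = rng.integers(0, 2, size=60)  # used to color the scatter plot

tsne = TSNE(n_components=2, perplexity=10.0, init="random", random_state=0)
points = tsne.fit_transform(cls_embeddings)  # 2-D coordinates per example
print(points.shape)  # (60, 2)
```

The resulting `points` would then be scattered, colored by `class_labels`, to inspect how separable the two classes are before and after augmentation.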
\ No newline at end of file diff --git a/aesopparaphrasegenerationwithadaptivesyntacticcontrol/images.zip b/aesopparaphrasegenerationwithadaptivesyntacticcontrol/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..4f1a09021d6e49b273c1b9a6984333521f255bb2 --- /dev/null +++ b/aesopparaphrasegenerationwithadaptivesyntacticcontrol/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8bbc22db7ade5fefcad245b5f1af1ae7c0a9ae266c672bcf50ac8cc620a6b19e +size 882263 diff --git a/aesopparaphrasegenerationwithadaptivesyntacticcontrol/layout.json b/aesopparaphrasegenerationwithadaptivesyntacticcontrol/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..c6c4e348bce8e33679589e034743f03c2c591d62 --- /dev/null +++ b/aesopparaphrasegenerationwithadaptivesyntacticcontrol/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:519caa5a8abfbc032dbe1a741a5fa5e4709abf50e10989b54150e38062c8b2b1 +size 407166 diff --git a/afinegraineddomainadaptionmodelforjointwordsegmentationandpostagging/820849dc-81f0-468f-a78f-08c5b047087b_content_list.json b/afinegraineddomainadaptionmodelforjointwordsegmentationandpostagging/820849dc-81f0-468f-a78f-08c5b047087b_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..ac771f3dd045f7c51398d612089c3a97a4675ccd --- /dev/null +++ b/afinegraineddomainadaptionmodelforjointwordsegmentationandpostagging/820849dc-81f0-468f-a78f-08c5b047087b_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:491c33f42cee81803441d87d63713fda630bb4c17c880ab05e9d31f1be09218e +size 86399 diff --git a/afinegraineddomainadaptionmodelforjointwordsegmentationandpostagging/820849dc-81f0-468f-a78f-08c5b047087b_model.json b/afinegraineddomainadaptionmodelforjointwordsegmentationandpostagging/820849dc-81f0-468f-a78f-08c5b047087b_model.json new file mode 100644 index 
0000000000000000000000000000000000000000..5e4971fb555ba127f8f104085b8682c52faa8e89 --- /dev/null +++ b/afinegraineddomainadaptionmodelforjointwordsegmentationandpostagging/820849dc-81f0-468f-a78f-08c5b047087b_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3d1f16a9f6e2226f7d93520c183c93a52f8ab01966cb04350d647a5728183802 +size 104328 diff --git a/afinegraineddomainadaptionmodelforjointwordsegmentationandpostagging/820849dc-81f0-468f-a78f-08c5b047087b_origin.pdf b/afinegraineddomainadaptionmodelforjointwordsegmentationandpostagging/820849dc-81f0-468f-a78f-08c5b047087b_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..5410c858af70a70dda299d9ebcb81742f5ce1a4c --- /dev/null +++ b/afinegraineddomainadaptionmodelforjointwordsegmentationandpostagging/820849dc-81f0-468f-a78f-08c5b047087b_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:35aa015e4623f5395f824ceeca5f61105d32be38b131e64764459aaed2d4a988 +size 424820 diff --git a/afinegraineddomainadaptionmodelforjointwordsegmentationandpostagging/full.md b/afinegraineddomainadaptionmodelforjointwordsegmentationandpostagging/full.md new file mode 100644 index 0000000000000000000000000000000000000000..f0e2a46e90ac9efaac843edb6c3e9cdc8c8db0af --- /dev/null +++ b/afinegraineddomainadaptionmodelforjointwordsegmentationandpostagging/full.md @@ -0,0 +1,360 @@ +# A Fine-Grained Domain Adaption Model for Joint Word Segmentation and POS Tagging + +Peijie Jiang $^{1}$ Dingkun Long Yueheng Sun $^{2}$ Meishan Zhang $^{1*}$ Guangwei Xu Pengjun Xie + +$^{1}$ School of New Media and Communication, Tianjin University, China + +$^{2}$ College of Intelligence and Computing, Tianjin University, China + +{jzx555,yhs,zhangmeishan}@tju.edu.cn + +{longdingkun1993,ahxgwOnePiece,xpjandy}@gmail.com + +# Abstract + +Domain adaption for word segmentation and POS tagging is a challenging problem for Chinese lexical processing. 
Self-training is one promising solution; its key challenge is to construct a set of high-quality pseudo training instances for the target domain. Previous work usually assumes a universal source-to-target adaption to collect such a pseudo corpus, ignoring the different gaps from the target sentences to the source domain. In this work, we start from joint word segmentation and POS tagging, presenting a fine-grained domain adaption method to model the gaps accurately. We measure the gaps with one simple and intuitive metric, and use it to incrementally develop a pseudo target-domain corpus based on fine-grained subdomains. A novel domain-mixed representation learning model is proposed accordingly to encode the multiple subdomains effectively. The whole process is performed progressively for both corpus construction and model training. Experimental results on a benchmark dataset show that our method gains significant improvements over a variety of baselines. Extensive analyses are performed to show the advantages of our final domain adaption model as well. + +# 1 Introduction + +Chinese Word Segmentation (CWS) and Part-Of-Speech (POS) tagging are two fundamental tasks for natural language processing (NLP) in Chinese (Emerson, 2005; Jin and Chen, 2008), serving as backbones for a number of downstream NLP tasks. Joint models of the two tasks can lead to better performance because the two tasks are closely related, and the joint architecture alleviates the error propagation problem that pipeline models suffer from (Ng and Low, 2004; Zhang and Clark, 2008; Wang et al., 2011; Zeng et al., 2013; Zhang et al., 2018; Tian et al., 2020a). + +![](images/71c6f43a2e9240d6344a73bc03b20f64af888c2dcdff6b608caf948724f26d13.jpg) +Figure 1: The idea of fine-grained domain adaption. + +Currently, joint CWS and POS tagging has achieved great performance with BERT inputs (Tian et al., 2020a,b).
Our preliminary results show that the F1-score of joint POS tagging can be close to $95\%$ when the training and test corpora both belong to a standard newswire domain. Unfortunately, this is not always the case in real applications. The performance might degrade dramatically when the source and target domains are highly different. Taking ZhuXian (an Internet novel) as an example (Zhang et al., 2014), the same model can only obtain an F1-score of $89\%$ for POS tagging according to our results. + +This is a typical domain adaption problem for joint CWS and POS tagging. Self-training could be one promising solution (Inoue et al., 2018; Zou et al., 2019; Saito et al., 2020), which can accomplish the goal in a fully automatic manner without any human intervention (Liu and Zhang, 2012). By using a source model to automatically label a large-scale raw corpus of the target domain, and then selecting a set of high-confidence pseudo-labeled instances as additional training data, we can obtain boosted performance on the target domain. The quality of the pseudo corpus is the key to success. For target sentences which are far from the source domain, the corpus generated from them might be of extremely low quality (Shu et al., 2018; Zhao et al., 2019). Thus, these sentences are either filtered, resulting in a corpus biased against the target domain, or kept with great noise that degrades the overall target performance. + +In this work, we suggest a fine-grained domain adaption method to alleviate the above problem of self-training. We define a simple and intuitive metric to measure the distance (gap) of a target sentence to the source domain. Based on this metric, we create a set of high-quality training corpora incrementally, according to the distances of the target sentences to the source domain. Figure 1 shows the main idea.
The process is conducted over several iterations in a progressive manner: at each new iteration, we add a small set of high-quality instances whose distance is only slightly larger than that of the previous iteration. Finally, we arrive at a training corpus that fully covers the target domain at various distances. At each iteration we go only a little further in distance, so the quality of the pseudo corpus can be largely ensured by the previous model. + +By the fine-grained domain adaption, we can obtain a training corpus of multiple types from different iterations, where each type differs from the others in both quality and input distribution. During the early iterations, the produced instances tend to have higher quality and be close to the source domain, while in the later iterations the quality might be lower and the distance to the source domain larger. To make full use of this corpus together with the source training set, we present a domain-mixed model for sophisticated representation learning that captures domain-aware and domain-invariant features (Daumé III, 2007; Ganin et al., 2016; Tzeng et al., 2017), and that is also strengthened progressively by the incremental style of the fine-grained domain adaption. + +We conduct experiments on the benchmark ZhuXian dataset (Zhang et al., 2014) to show the effectiveness of our method. In detail, the Penn Chinese Treebank 6.0 (CTB6; Xue et al., 2005) is used as the source corpus, belonging to the newswire domain, while the target ZhuXian corpus is from an Internet novel. Experimental results show that our fine-grained domain adaption is significantly better than previous self-training approaches. Moreover, we find that our domain-mixed representation learning model suits the fine-grained framework perfectly. We also conduct extensive analyses to understand our model comprehensively. We will release our code at github.com/JZX555/FGDA under Apache License 2.0 to facilitate reproduction.
+ +# 2 Joint CWS and POS Tagging + +This section describes the basic model of our joint CWS and POS tagging. Concretely, we regard our joint task as a character-level sequence labeling problem following Tian et al. (2020a). Given an input character sequence $X = [x_{1},\dots,x_{n}]$ , the output labels $Y = [y_{1},\dots,y_{n}]$ are concatenations of word boundaries (i.e., BMES) and POS tags for all sentential characters. We exploit an ADBERT-BiLSTM-CRF model as our basic model, which is very strong in performance and highly parameter-efficient. The model includes two parts in sequence: (1) ADBERT for character representation, and (2) BiLSTM-CRF for feature extraction, label inference and training. Below, we introduce ADBERT; the BiLSTM-CRF is exactly the same as in Tian et al. (2020a), to which we refer readers for details. + +Adapter $\circ$ BERT We exploit BERT (Devlin et al., 2019) to derive character representations for a given sentence $X = [x_{1},\dots,x_{n}]$ , as it brings state-of-the-art performance for a range of Chinese language processing tasks. In particular, we patch BERT with adapters (Houlsby et al., 2019) inside all the included transformer units. In this way, fine-tuning the BERT parameters is no longer necessary across different tasks, and we only need to tune the adapter parameters. More particularly, we let all adapters across different transformer units share a single set of parameters, to reduce the scale of tunable model parameters for our joint task. We refer to this method as ADBERT:

$$
\boldsymbol{e}_{1}, \dots, \boldsymbol{e}_{n} = \mathrm{ADBERT}(X = [x_{1}, \dots, x_{n}]), \tag{1}
$$

where the detailed network of the transformer with adapters is illustrated in Appendix A. + +# 3 Our Method + +The above joint CWS and POS tagging model performs well in the standard setting, where the test domain is similar to the training domain (Tian et al., 2020a,b).
However, the performance might degrade dramatically when the test (i.e., target) domain differs significantly from the training (i.e., source) domain. There have been two studies on cross-domain joint CWS and POS tagging (Liu and Zhang, 2012; Zhang et al., 2014), both of which exploited self-training due to its effectiveness as well as its simplicity for domain adaption. Self-training aims to produce a set of high-confidence training instances of the target domain, which are used to train a target model. Here we follow this line of work, presenting a novel fine-grained domain adaption strategy. + +The fine-grained domain adaption is an extension of standard self-training, aiming to produce a helpful pseudo training corpus of the target domain. This line of work is essentially orthogonal to representation learning methods, which aim to learn sophisticated (e.g., domain-aware and domain-invariant) features for domain adaption. Thus, we also present a novel domain-mixed model based on the basic ADBERT-BiLSTM-CRF for effective exploitation of our fine-grained domain adaption. In the following, we first describe the fine-grained domain adaption method in detail, and then introduce our representation learning model. + +# 3.1 Fine-Grained Domain Adaption + +The overall flow of self-training includes three steps: (1) we train an initial model on the source corpus; (2) we apply the source model to a large-scale raw corpus, obtaining auto-labeled pseudo instances of the target domain; (3) we select a set of high-confidence instances from the pseudo corpus, which are added to the training data of the target model. The flow can be repeated for several iterations, where the model in step 1 is trained on the progressively added step-3 instances. However, according to our preliminary results, plain iterative self-training achieves only very marginal improvement.
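The plain three-step loop can be sketched generically as follows; the helper callables and toy data are placeholders for illustration, not the authors' implementation:

```python
def self_train(source_data, raw_target, train, label, confidence,
               threshold=0.9, iterations=3):
    """Generic self-training: train, auto-label raw text, keep confident data."""
    train_set = list(source_data)                 # start from the source corpus
    for _ in range(iterations):
        model = train(train_set)                  # (1) train the current model
        pseudo = [(x, label(model, x)) for x in raw_target]   # (2) auto-label
        kept = [(x, y) for x, y in pseudo
                if confidence(model, x, y) >= threshold]      # (3) select
        train_set += kept
        kept_inputs = {x for x, _ in kept}
        raw_target = [x for x in raw_target if x not in kept_inputs]
    return train_set

# Toy usage: the "model" is simply the majority label of its training set.
train = lambda data: max([y for _, y in data], key=[y for _, y in data].count)
label = lambda model, x: model           # predict the majority label
confidence = lambda model, x, y: 1.0     # trivially confident toy scorer
result = self_train([("a", 1), ("b", 1), ("c", 0)], ["d", "e"],
                    train, label, confidence, iterations=1)
print(len(result))  # 5 = 3 source + 2 pseudo-labeled instances
```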
The reason may be that the above process can hardly ensure the quality of the selected instances, especially when the input target sentences are very distant from the source domain (Sohn et al., 2020). The step-1 models do not perform well on these sentences without any specialization. If these sentences are excluded because of their low quality, the final target model is trained on a biased corpus; if they are added into the target training corpus, great noise is introduced, which degrades the overall performance. To address this problem, we propose a fine-grained domain adaption strategy that alleviates the influence of the large gaps during automatic corpus construction. + +Concretely, we guide the iterative self-training by a specific explicit distance metric. At each iteration, we add a set of high-confidence pseudo instances whose distances are only a little larger than those of the previous iteration. The sentences of each selection can be regarded as coming from a particular fine-grained subdomain of the target domain. In this way, the target model is gradually adapted to

Algorithm 1: Fine-Grained Adaption
Data: Source domain training dataset $S$ ; target domain raw corpus $D_{1}$
Output: Latest model $M$
1 Initial training dataset $\mathrm{T_1} = S$
2 for $i = 1,2,3\dots$ until convergence do
3   Model training: $M_{i} = \mathrm{Train}(T_{i})$
4   Data auto-labeling: $\hat{D}_i = M_i(D_i)$
5   Lexicon: $L_{\mathrm{tgt}} = L_{\mathrm{tgt}}\cup L_{\mathrm{top - K}}(\hat{D}_i)$
6   Process the $i$ th auto instances: $\mathrm{ST}_i = \{\}$
7   foreach instance $(\hat{X},\hat{Y})$ in $\hat{D}_i$ do
8     $\mathrm{C_{oov}}$ : numOOV $\leq i$
9     $\mathrm{C_{lex}}$ : all OOV words in $L_{\mathrm{tgt}}$
10    $\mathrm{C_{conf}}$ : $p(\hat{Y} |\hat{X})\geq p_{\mathrm{threshold}}$
11    if $\mathrm{C_{oov}}$ && $\mathrm{C_{lex}}$ && $\mathrm{C_{conf}}$ then
12      $\mathrm{ST}_i = \mathrm{ST}_i + \{(\hat{X},\hat{Y})\}$
13    end
14  end
15  $\mathrm{T}_{i + 1} = \mathrm{T}_i + \mathrm{ST}_i$
16  $D_{i + 1} = D_{i}\backslash \mathrm{ST}_{i}.X$
+17 end + +the distant sentences far away from the source domain, producing a higher-quality corpus at various distances. In contrast to the direct source-to-target adaption, we adopt the number of OOV words (i.e., newly generated words that fall outside the training vocabulary) as the distance measurement, which is highly simple and intuitive. We construct a set of high-quality automatic corpora by progressing from target sentences with zero or one OOV word to target sentences with many OOV words. + +Algorithm 1 shows the pseudo-code of fine-grained domain adaption. Initially, we set the first-iteration training dataset to the source corpus $S$ , and then execute lines 3-16 repeatedly. First, we train a model $M_{i}$ on the current-iteration training dataset $\mathrm{T}_i$ and apply the model to the remaining raw corpus of the target domain, resulting in the auto-labeled corpus $\hat{D}_i$ , as shown at lines 3-4. Next, we conduct a lexicon-building process at line 5, which is used for quality assurance. At each iteration, we collect a set of top-K confident word-POS pairs $L_{\mathrm{top - K}}$ by their weighted frequencies in $\hat{D}_i$ , $^{1}$ which are added to the target lexicon $L_{\mathrm{tgt}}$ . Then, the key part arrives at lines 6-15, the selection of the new training data $\mathrm{ST}_i$ , which advances the training corpus to $\mathrm{T}_{i + 1}$ . We traverse all instances in $\hat{D}_i$ , and add the instances which satisfy $\mathrm{C_{oov}}$ , $\mathrm{C_{lex}}$ and $\mathrm{C_{conf}}$ together, where $\mathrm{C_{oov}}$ constrains the OOV number to control the distance to the source domain, and $\mathrm{C_{lex}}$ and $\mathrm{C_{conf}}$ ensure the instance quality. + +![](images/4f3de971ba9065c24dc6245ed9858e2354ee39c44a925e09518bbdaa68a4c5cd.jpg) +Figure 2: The structure of the domain-mixed model, where the four objectives are defined in Equation 4.
Finally, at line 16, we remove the selected instances from the target domain corpus and start the next iteration.

# 3.2 Our Domain-Mixed Model

By fine-grained domain adaptation, we can obtain a training corpus of multiple types (i.e., $S$, $\mathrm{ST}_1, \dots, \mathrm{ST}_n$ in Algorithm 1, where $n$ denotes the last iteration), where each type corresponds to a domain (i.e., $S$) or a subdomain (i.e., $\mathrm{ST}_i$). Thus, the exploitation of this training corpus can be regarded as multi-source domain adaption (Zhang et al., 2015; Sun et al., 2015). To better exploit the corpus, we propose a novel domain-mixed model to fully benefit from the fine-grained domain adaptation.

Our domain-mixed model follows a standard representation-learning framework of domain adaptation, which attempts to capture effective domain-aware and domain-invariant features. Figure 2 shows the overall architecture of the model, which includes two individual ADBERT-BiLSTM-CRF components used for domain-aware and domain-invariant feature learning, respectively. Both components perform adaptation inside the ADBERT, and a shared BiLSTM-CRF is exploited across the two components. Below, we introduce the (sub)domain-aware and (sub)domain-invariant components, respectively, and then describe the overall inference and training.

The (Sub)Domain-Aware Component A major problem of our basic ADBERT-BiLSTM-CRF model is that it treats all (sub)domain types of our final training corpus equally. Here we take the (sub)domain types as inputs along with the sentences to derive domain-aware features. Concretely, we follow Jia et al. (2019) and Üstün et al. (2020), exploiting a Parameter Generator Network (PGN) on the adapter layers, which generates (sub)domain-aware parameters for the adapters inside the ADBERT.
We pack all parameters of the adapter layers into a single vector $V$ by reshaping and concatenation, which can be unpacked in reverse without loss for the adapter computation. As shown in Figure 2(a), we refer to ADBERT with PGN as PGN-ADBERT. Taking the input sentence and (sub)domain type pair as $(X,\mathrm{dt})$, the overall calculation of the (sub)domain-aware character representations is formalized as follows:

$$
\begin{aligned}
e_1^{\mathrm{dm}}, \dots, e_n^{\mathrm{dm}} &= \mathrm{PGN\text{-}ADBERT}(X, \mathrm{dt}) \\
&= \mathrm{ADBERT}(X, \boldsymbol{V} = \Theta e^{\mathrm{dt}}),
\end{aligned} \tag{2}
$$

where $\Theta$ is a learnable parameter of this component, $e^{\mathrm{dt}}$ is the (sub)domain type embedding, and PGN-ADBERT is a special case of ADBERT with the specified module parameters $\boldsymbol{V}$. The resulting representations are then fed into the BiLSTM-CRF for our joint task.

The (Sub)Domain-Invariant Component Domain-invariant features have been extensively investigated because of their generalization capability across different domains (Daumé III, 2007). Here we present a (sub)domain-invariant component to learn these general features across our source domain and the fine-grained target subdomains, parallel to the (sub)domain-aware component. Figure 2(b) shows the architecture of this part. First, the character inputs $X$ go through ADBERT, deriving the domain-invariant features $e_1^{\mathrm{iv}}, \ldots, e_n^{\mathrm{iv}}$; we then reconstruct the domain-aware features $\bar{e}_1^{\mathrm{dm}}, \ldots, \bar{e}_n^{\mathrm{dm}}$ by specifying the input (sub)domain type dt, which are then fed into the BiLSTM-CRF for our joint task, following our basic model.

The domain-invariant features $e_1^{\mathrm{iv}}, \ldots, e_n^{\mathrm{iv}}$ are learned in an adversarial manner (Ganin and Lempitsky, 2015; Ganin et al., 2016) via sentence-level (sub)domain type classification.
We derive the sentence-level representation $v$ by average pooling over these features, and then determine the (sub)domain type of the input sentence with a simple linear classifier. Note that we intentionally deceive the classifier so that $v$ becomes domain-irrelevant, aiming to obtain good domain-invariant features.

Naturally, the domain-invariant component tries to reconstruct and approximate the domain-aware component, since the two share the same decoding part. We unite the domain-invariant features $e_1^{\mathrm{iv}}, \ldots, e_n^{\mathrm{iv}}$ and the (sub)domain type dt to reconstruct the domain-aware features, which are then used for our joint task. The advantage of this design is that we can maximize the capacity of the domain-invariant features and further enhance the interaction between the domain-aware and domain-invariant features.

Concretely, the reconstruction is implemented by a variational module with reparameterization (Kingma and Welling, 2014). Given the (sub)domain type dt and the character representation $e_i^{\mathrm{iv}}$ ($i \in [1, n]$), the domain-aware representation can be calculated by:

$$
\begin{aligned}
\boldsymbol{\mu}_i &= \mathrm{BiAffine}_{\mathrm{mean}}\left(\boldsymbol{e}_i^{\mathrm{iv}}, \boldsymbol{e}^{\mathrm{dt}}\right), \\
\log\left(\sigma_i^2\right) &= \mathrm{BiAffine}_{\mathrm{var}}\left(\boldsymbol{e}_i^{\mathrm{iv}}, \boldsymbol{e}^{\mathrm{dt}}\right), \\
\bar{\boldsymbol{e}}_i^{\mathrm{dm}} &\sim \mathcal{N}(\boldsymbol{\mu}_i, \sigma_i^2),
\end{aligned} \tag{3}
$$

where we use BiAffine operations to generate a Gaussian distribution and then sample the domain-aware features $\bar{e}_i^{\mathrm{dm}}$ from it.

# 3.3 Inference and Training

We regard the (sub)domain-aware component as our major component, which outputs the final joint CWS and POS tagging results.
The (sub)domain-invariant component is an auxiliary component that helps the learning of the major one. Intuitively, through an alignment between the major and auxiliary components, the learned features of our major component can be naturally decomposed into domain-aware and domain-invariant features.

Inference For inference, we use the (sub)domain types of $S$ and $\mathrm{ST}_n$ (i.e., the last fine-grained subdomain type) to perform decoding for the source and target domains, respectively.

Training We exploit four optimization objectives for training, as shown in Figure 2:

$$
\begin{aligned}
\mathcal{L}_{\mathrm{major}}(X, Y, \mathrm{dt}) &= -\log p_{\mathrm{major}}(Y \mid X, \mathrm{dt}), \\
\mathcal{L}_{\mathrm{aux}}(X, Y, \mathrm{dt}) &= -\log p_{\mathrm{aux}}(Y \mid X, \mathrm{dt}), \\
\mathcal{L}_{\mathrm{adv}}(X, \mathrm{dt}) &= \log p_{\mathrm{adv}}(\mathrm{dt} \mid X), \\
\mathcal{L}_{\mathrm{mse}}(X) &= \left\| \boldsymbol{E}^{\mathrm{dm}} - \bar{\boldsymbol{E}}^{\mathrm{dm}} \right\|^2,
\end{aligned} \tag{4}
$$
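As a concrete numeric illustration, the four objectives of Equation 4 combine into the final loss of Equation 5. The sketch below uses hypothetical probability values and hypothetical $\lambda$ weights (the paper does not report them here); note the positive sign on the adversarial term, so minimizing the total loss deceives the (sub)domain classifier:

```python
import math

def combined_loss(p_major, p_aux, p_adv_dt, mse, lam1=0.1, lam2=0.1):
    """Combine the four objectives of Equation 4 as in Equation 5.

    p_major, p_aux: probabilities of the gold tag sequence Y given (X, dt)
        from the (sub)domain-aware and (sub)domain-invariant components.
    p_adv_dt: probability the adversarial classifier assigns to the true
        (sub)domain type dt; its log is *added*, not subtracted.
    mse: squared distance between the domain-aware features of the two
        components (the alignment term).
    lam1, lam2: hypothetical hyperparameter values for illustration.
    """
    l_major = -math.log(p_major)        # CRF loss of the major component
    l_aux = -math.log(p_aux)            # CRF loss of the auxiliary component
    l_adv = math.log(p_adv_dt)          # adversarial loss (positive sign)
    l_mse = mse                         # feature-alignment loss
    return l_major + l_aux + lam1 * l_adv + lam2 * l_mse
```

For instance, `combined_loss(0.8, 0.7, 0.5, 0.04)` sums the two negative log-likelihoods with the weighted adversarial and alignment terms.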
| Data Set | | #sents | #words | #chars |
|---|---|---|---|---|
| CTB6 | Train | 23,401 | 641,372 | 1,055,586 |
| | Devel | 2,078 | 59,929 | 100,276 |
| | Test | 2,795 | 81,579 | 134,149 |
| ZX | Test | 1,394 | 34,355 | 48,075 |
| | Raw | 32,023 | N/A | 1,417,418 |
Table 1: Data statistics of CTB6 and ZhuXian.

where the first two are the losses of the two components for joint CWS and POS tagging, the third is the adversarial loss that deceives the (sub)domain type classification, and the last minimizes the distance between the domain-aware features of our two components, so that the variational reconstruction yields closely aligned character representations. Further, we sum the four objectives together:

$$
\begin{aligned}
\mathcal{L} = \ & \mathcal{L}_{\mathrm{major}}(X, Y, \mathrm{dt}) + \mathcal{L}_{\mathrm{aux}}(X, Y, \mathrm{dt}) \\
& + \lambda_1 \mathcal{L}_{\mathrm{adv}}(X, \mathrm{dt}) + \lambda_2 \mathcal{L}_{\mathrm{mse}}(X),
\end{aligned} \tag{5}
$$

resulting in the final objective of our domain-mixed model, where $\lambda_{1}$ and $\lambda_{2}$ are two hyperparameters.

# 4 Experiment

# 4.1 Datasets

We use the CTB6 dataset as the source domain (newswire), splitting it into training, development and test sections following Tian et al. (2020a). To verify the effectiveness of our proposed domain adaption method, we exploit the ZhuXian dataset (Zhang et al., 2014) as the target domain, which is drawn from an Internet novel and is the only benchmark dataset for domain adaption of joint CWS and POS tagging. We strictly follow unsupervised domain adaptation, where only a test corpus of the target domain is available. Table 1 shows the data statistics, reporting the detailed sentence, word and character counts. For the ZhuXian dataset, we use only the raw text and test corpus, which are available from Zhang et al. (2014).

# 4.2 Setting

Evaluation We adopt the standard word-level matching method to evaluate the performance of CWS and POS tagging. In particular, the joint strategy is used for POS tagging evaluation, considering word boundaries as well as POS tags as a whole.
We calculate precision (P) and recall (R) values, and use their F1-score as the major evaluation metric.
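The word-level joint evaluation described above can be sketched as follows; the span-based data representation (a word as a `(start, end, pos)` tuple) is an assumption for illustration, not the paper's actual scoring script:

```python
def word_pos_f1(gold, pred):
    """Word-level P/R/F1 for joint CWS and POS tagging.

    gold, pred: lists of (start, end, pos) spans for one or more sentences.
    Under the joint strategy, a predicted word counts as correct only if
    both its boundaries and its POS tag match a gold word.
    """
    gold_set, pred_set = set(gold), set(pred)
    correct = len(gold_set & pred_set)
    p = correct / len(pred_set) if pred_set else 0.0
    r = correct / len(gold_set) if gold_set else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1
```

For CWS-only evaluation, the same function applies after dropping the POS field from each span.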
| Model | CTB6 CWS (P / R / F1) | CTB6 POS (P / R / F1) | ZhuXian CWS (P / R / F1) | ZhuXian POS (P / R / F1) |
|---|---|---|---|---|
| (1) Baseline | | | | |
| Vanilla | 97.29 / 96.85 / 97.07 | 94.73 / 94.30 / 94.51 | 94.12 / 93.61 / 93.87 | 89.19 / 88.70 / 88.94 |
| (2) Self-Training | | | | |
| Vanilla | 97.18 / 96.76 / 96.97 | 94.31 / 93.91 / 94.11 | 94.23 / 93.94 / 94.08 | 89.24 / 88.96 / 89.10 |
| +Iterative | 97.17 / 96.85 / 97.01 | 94.44 / 94.13 / 94.29 | 94.30 / 93.89 / 94.10 | 89.36 / 88.96 / 89.16 |
| +Domain-PGN | 97.21 / 96.84 / 97.03 | 94.40 / 94.04 / 94.22 | 94.25 / 94.03 / 94.14 | 89.27 / 89.06 / 89.17 |
| +Domain-Mixed | 97.29 / 96.90 / 97.09 | 94.55 / 94.17 / 94.36 | 94.45 / 94.12 / 94.28 | 89.61 / 89.29 / 89.45 |
| (3) Fine-Grained Domain Adaption | | | | |
| Vanilla | 97.17 / 96.9 / 97.03 | 94.51 / 94.24 / 94.37 | 94.44 / 94.86 / 94.65 | 89.67 / 90.07 / 89.87 |
| +Domain-PGN | 97.33 / 97.04 / 97.19 | 94.57 / 94.29 / 94.43 | 94.74 / 94.71 / 94.72 | 90.07 / 90.04 / 90.06 |
| +Domain-Mixed | 97.44 / 97.18 / 97.31 | 94.83 / 94.58 / 94.71 | 94.99 / 95.14 / 95.07 | 90.51 / 90.65 / 90.58 |
Table 2: Main results, where the instance selection of self-training is implemented simply by ranking the auto-labeled sentences according to their output probabilities during decoding; Vanilla refers to the ADBERT-BiLSTM-CRF model, Iterative indicates the vanilla model with iterative self-training, and Domain-PGN indicates the model with only the (sub)domain-aware component.

Considering that no development corpus is available for the target domain in a real scenario, we use the CTB6 development set to select the best-performing models.

Hyperparameters All hyperparameters are set empirically according to previous studies as well as our preliminary findings (Tian et al., 2020a,b). Most importantly, our fine-grained domain adaption takes 12 iterations to reach its peak; the values of all other hyperparameters are described in Appendix B.

# 4.3 Main Results

Table 2 shows the main results on the test datasets of both CTB6 and ZhuXian. The CTB6 results are reported to show whether the domain-adapted models can handle the source domain as well. First, we examine the F1 values of the baseline performances. Our vanilla (i.e., ADBERT-BiLSTM-CRF) model obtains performances on both CWS and POS tagging comparable to state-of-the-art models such as Tian et al. (2020a)${}^{2}$. We can see that the model performances drop significantly on the ZhuXian domain, with decreases of ${97.07} - {93.87} = {3.20}$ and ${94.51} - {88.94} = {5.57}$ for CWS and POS tagging, respectively. This observation indicates that domain adaption is very important for our joint task.

Next, we compare fine-grained domain adaption with the various self-training variants. Based on the vanilla model, self-training obtains only very small performance gains (including iterative self-training), i.e., close to $0.2\%$, which is insignificant. This result is inconsistent with Zhang et al.
(2014), which shows large improvements from simple self-training. The main reason might be our strong baseline with BERT representations.

With fine-grained domain adaption, we can generate a higher-quality pseudo corpus. Therefore, the gains of the vanilla model over the baseline are very significant, with improvements of 0.78 and 0.93 for CWS and POS tagging, respectively, clearly better than the vanilla self-training systems due to the quality differences of the pseudo corpora. By using the final domain-mixed model, our fine-grained domain adaption can be improved further, leading to another improvement of 0.42 and 0.71 for CWS and POS tagging. These observations indicate that our method is highly effective for domain adaption of joint CWS and POS tagging.

We can see that our domain-mixed model helps normal self-training as well, showing the effectiveness of the representation learning for domain adaption. We also compare our proposed domain-mixed model with the major component alone (Domain-PGN for short), where the latter has been demonstrated to be effective in a different scenario (Jia et al., 2019). According to the results, Domain-PGN gives slightly better performances on CWS and POS tagging for both self-training and fine-grained domain adaption compared with the counterpart baseline. Our final domain-mixed
| Model | CTB6 CWS | CTB6 POS | ZhuXian CWS | ZhuXian POS | Trainable Params |
|---|---|---|---|---|---|
| Finetuning | 97.24 | 94.74 | 93.91 | 88.95 | 120M |
| Adapter | 97.36 | 94.81 | 93.81 | 88.98 | 35M |
| Adapter (shared) | 97.07 | 94.51 | 93.87 | 88.94 | 14M |
model is much better, leading to significant performance increases on both tasks, especially with fine-grained domain adaption.

Interestingly, we find that our final model also brings better performances on the source CTB6 test dataset, unlike the self-training models, which can hurt the source performance to a certain extent. This finding indicates that our final model has strong practical value, since it enables one model to perform well on multiple domains.

# 4.4 Analysis

In this subsection, we conduct detailed experimental analyses for an in-depth understanding of our method.

The Exploration of BERT Our work exploits ADBERT instead of standard BERT fine-tuning. Here we examine the differences between them considering both performance and the number of trainable model parameters. Since ADBERT freezes all parameters of BERT, the number of trainable model parameters is reduced greatly. Table 3 shows the comparison results, where Finetuning indicates the standard BERT-CRF model with tunable BERT parameters, Adapter denotes the ADBERT model in which all adapters have separate parameters, and Adapter (shared) indicates our final ADBERT in which all adapters across different transformer layers share the same parameters. As shown, our final choice achieves performance comparable to the others with a much smaller number of trainable parameters; thus, our final ADBERT is highly parameter-efficient.

The Instance Selection Strategy As mentioned in Algorithm 1, we include three conditions for instance selection at each iteration: $C_{\mathrm{oov}}$, $C_{\mathrm{lex}}$ and $C_{\mathrm{conf}}$. Here we conduct ablation experiments to check their necessity. Note that when $C_{\mathrm{oov}}$ is excluded, we select at most 2K instances at each

Table 3: Comparisons between BERT fine-tuning and ADBERT.
| Model | P | R | F1 | ΔF1 |
|---|---|---|---|---|
| Final | 90.51 | 90.65 | 90.58 | - |
| $-\mathrm{C_{oov}}$ | 90.20 | 90.16 | 90.18 | -0.40 |
| $-\mathrm{C_{lex}}$ | 90.39 | 90.25 | 90.32 | -0.26 |
| $-\mathrm{C_{conf}}$ | 90.28 | 90.38 | 90.33 | -0.25 |
| $-\mathrm{C_{oov}} - \mathrm{C_{lex}}$ | 90.02 | 89.99 | 90.00 | -0.58 |
| Self-Training | 89.61 | 89.29 | 89.45 | -1.13 |
Table 4: Ablation study of the instance selection strategies of our final model (F1 values of POS tagging are reported).

![](images/75669a8693bbd4edb431d3ceaf666fa3e669c5bf4e1254b10833fd80c32010f0.jpg)
Figure 3: The POS tagging performance with respect to the number of pseudo training instances.

iteration, ranked by their probabilities from high to low. Table 4 shows the results. As shown, all conditions are useful, and all results outperform the plain iterative self-training method. In particular, the model $-\mathrm{C_{oov}} - \mathrm{C_{lex}}$ degenerates into iterative self-training combined with the domain-mixed model. The comparison further demonstrates the advantage of our domain-mixed model.

The Size of Pseudo Training Corpus It is interesting to compare fine-grained domain adaption with (one-iteration) self-training from the perspective of the pseudo training dataset size. We align the iterations of fine-grained domain adaption with self-training by the added training corpus size of the ZhuXian domain. Figure 3 shows the comparison results. As shown, the performance of self-training hardly increases after 3K instances, while our fine-grained method gives significant improvements continually until iteration 12 (consuming 20K instances). The comparison shows that our fine-grained domain adaption is much more effective than self-training. However, our iterative fine-grained domain adaption needs more training time than non-iterative self-training${}^{4}$.
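The three selection conditions ablated above (lines 7-14 of Algorithm 1) can be sketched as follows; the data shapes and the `p_threshold` value are assumptions for illustration:

```python
def select_instances(auto_labeled, train_vocab, target_lexicon,
                     iteration, p_threshold=0.9):
    """Sketch of the per-iteration selection in Algorithm 1.

    auto_labeled: list of (words, tags, confidence) triples produced by
        the current model M_i on the raw target corpus.
    An instance is kept only if it satisfies all three conditions:
      C_oov : at most `iteration` OOV words w.r.t. the source vocabulary,
      C_lex : every OOV word already appears in the mined target lexicon,
      C_conf: the model confidence p(Y|X) reaches the threshold.
    """
    selected = []
    for words, tags, conf in auto_labeled:
        oov = [w for w in words if w not in train_vocab]
        c_oov = len(oov) <= iteration           # distance to source domain
        c_lex = all(w in target_lexicon for w in oov)   # quality check
        c_conf = conf >= p_threshold                    # quality check
        if c_oov and c_lex and c_conf:
            selected.append((words, tags))
    return selected
```

Raising `iteration` gradually widens the admissible OOV distance, which is exactly the fine-grained progression the ablation compares against plain self-training.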
| Model | | P | R | F1 |
|---|---|---|---|---|
| Baseline | Vanilla | 93.99 | 93.55 | 93.77 |
| Self-Training | Vanilla | 94.01 | 94.08 | 94.04 |
| | +Iterative | 94.18 | 94.06 | 94.12 |
| | +Domain-PGN | 94.20 | 94.01 | 94.11 |
| | +Domain-Mixed | 94.47 | 94.07 | 94.27 |
| Fine-Grained Adaption | Vanilla | 94.65 | 94.47 | 94.51 |
| | +Domain-PGN | 94.70 | 94.51 | 94.60 |
| | +Domain-Mixed | 95.27 | 94.64 | 94.86 |
Table 5: The results of the independent CWS task using our method on the ZhuXian dataset.

The Independent CWS Task Our major goal is joint CWS and POS tagging, but it is worthwhile to examine our method on the CWS task alone. Here we also use the CTB6 dataset as the source corpus and the ZhuXian dataset as the target domain. The basic model can remain exactly the same. Table 5 shows the final results. Our method achieves significant improvements on CWS alone, with an increase of $94.86 - 93.77 = 1.09$, which means that our fine-grained domain adaption method is suitable for CWS as well. The other model tendencies are consistent with the joint task. Interestingly, we find that the independent CWS model has a lower improvement in recall. The reason may be that POS tagging provides several additional features, which lead the joint model to prefer more fine-grained segmentation, resulting in a larger recall value.

Domain-Aware vs. Domain-Invariant It is interesting to compare our (sub)domain-aware (PGN) and (sub)domain-invariant (VAR) components comprehensively. In fact, beyond our integrated usage, each of the two components alone can serve for domain adaption as well. PGN can be used directly for inference, while for VAR, we can perform decoding by setting $\bar{e}_i^{\mathrm{dm}} = \mu_i$ in Equation 3. Here we analyze four models: PGN and VAR alone, and the integrated model performing inference with PGN (Final-PGN) and VAR (Final-VAR), respectively. All four models are trained gradually on the same full training corpus (i.e., $S + \mathrm{ST}_1$, ..., up to $S + \mathrm{ST}_1 + \dots + \mathrm{ST}_n$). Figure 4 shows the results. As shown, PGN and VAR are actually comparable to each other, and in our final model, PGN is slightly better than VAR.
We find that in our integrated model, both PGN and VAR are much better than when used alone, which shows the importance of the joint learning via the carefully-designed $\mathcal{L}_{\mathrm{mse}}$.

![](images/dcf9030eaedbd7a38f45bf7bfc63d2d5984081ac77ac34e6a6094e689bd35c9a.jpg)
Figure 4: Comparisons between the (sub)domain-aware (PGN) and (sub)domain-invariant (VAR) components, where PGN and VAR indicate that they are exploited separately for representation learning, and Final-PGN and Final-VAR denote our final model using PGN/VAR for decoding, respectively.

![](images/e640f0731d1ed05ef776801f2df007ab4b18307e44a2a30bb33e383d0da76f48.jpg)
Figure 5: The results of the domain-mixed model on test data with different OOV distributions, under self-training and fine-grained adaption.

The Sentential OOV Number Our fine-grained domain adaption is mainly driven by the sentential OOV numbers with respect to the source training dataset. Thus, it is meaningful to examine the model performance on sentences with different OOV numbers. We divide the ZhuXian test dataset into four categories according to the per-sentence OOV number, namely [0-1], [2-3], [4-5] and $\geq 6$. All categories include a sufficient number of sentences for statistical comparison. Based on this division, we compare the performance of the fine-grained adaption, self-training and baseline models. Figure 5 shows the results. We can see that the model performance decreases overall as the OOV number increases, which is reasonable. In addition, our final model significantly improves the performance on sentences with higher OOV numbers.

The Subdomain Type of Our Final Inference For the training of our final model, we have several fine-grained subdomain types of the target domain, and we select the last subdomain type for the final inference, which might not match the real subdomain type. Here we analyze the input domain
| Domain Type | ZhuXian CWS (P / R / F1) | ZhuXian POS (P / R / F1) |
|---|---|---|
| $\mathrm{ST}_1$ | 95.00 / 95.17 / 95.08 | 90.49 / 90.63 / 90.56 |
| $\mathrm{ST}_6$ | 94.98 / 95.16 / 95.07 | 90.49 / 90.65 / 90.57 |
| $\mathrm{ST}_{11}$ | 94.99 / 95.14 / 95.07 | 90.51 / 90.65 / 90.58 |
Table 6: The influence of using different domain types.

type selection in depth by comparing the model performance with the first $(\mathrm{ST}_1)$, median $(\mathrm{ST}_6)$ and last $(\mathrm{ST}_{11})$ subdomain types. Table 6 shows the results. As shown, there is almost no difference among the three selections for the ZhuXian domain, indicating that the selection of fine-grained subdomain types is not important in our final model. The observation is reasonable since the test corpus covers a range of the specified subdomains and any fixed selection faces the same mismatch issue; thus, the final selection can be purely empirical.

# 5 Related Work

CWS and POS tagging are closely-related tasks for Chinese processing, which can be handled either jointly or in a pipeline way (Ng and Low, 2004; Shi and Wang, 2007; Zhang and Clark, 2008; Jiang et al., 2008; Kruengkrai et al., 2009; Jiang et al., 2009; Sun, 2011). The joint models are able to obtain better performances, as they can alleviate the error propagation problem between the two tasks (Ng and Low, 2004; Zhang and Clark, 2008; Jiang et al., 2009; Wang et al., 2011). Recently, neural models have led to the state of the art for joint CWS and POS tagging (Zheng et al., 2013; Shao et al., 2017; Zeng et al., 2013; Tian et al., 2020a). In particular, the BERT representations (Devlin et al., 2019) and the BiLSTM neural network (Graves et al., 2013; Huang et al., 2015) have shown impressive results for the joint task (Zhang et al., 2018; Diao et al., 2019; Tian et al., 2020a,b). In this work, we adopt both BERT and BiLSTM to reach a strong baseline for cross-domain adaption.

Domain adaptation has been extensively studied in both the machine learning and NLP communities (Daumé III, 2007; Ben-David et al., 2007; Chen et al., 2011; Søgaard, 2013; Zou et al., 2019; Saito et al., 2020). The typical methods of domain adaptation can mainly be divided into two categories.
The first category aims to create a set of pseudo training corpora for the target domain, while the second category attempts to learn transferable features from the source domain to the target. Self-training is one of the most representative methods of the first category

(McClosky et al., 2006; Yu et al., 2015; Zou et al., 2019). For the second category, the representation learning of domain-specific and domain-invariant features has received the most attention recently (Glorot et al., 2011; Ganin et al., 2016; Tzeng et al., 2017; Long et al., 2017; Hoffman et al., 2018).

For the joint CWS and POS tagging task, Liu and Zhang (2012) and Zhang et al. (2014) investigate the task under the cross-domain adaption setting, both of which exploit self-training. In particular, Zhang et al. (2014) suggest a lexicon-based type-supervised model for further enhancement, and meanwhile publish a benchmark dataset which is publicly available for cross-domain adaption of joint CWS and POS tagging. Unfortunately, there has been no follow-up work on the joint task since then, while the majority of studies focus on cross-domain adaption of the two individual tasks (Liu et al., 2014; Schnabel and Schütze, 2014; Peng and Dredze, 2016; Huang et al., 2017; Zhou et al., 2017; Gui et al., 2017; Ding et al., 2020). We propose a novel fine-grained domain adaption method with a domain-mixed representation learning model for the joint task.

# 6 Conclusion

We suggested a novel fine-grained domain adaption method for joint word segmentation and POS tagging. We started from the self-training strategy, which exploits model transfer to generate pseudo training instances for the target domain, and argued that this strategy might lead to low-quality auto-labeled instances when the target sentences are distant from the source domain.
To address this problem, we proposed fine-grained domain adaption, regarding the OOV number with respect to the source training corpus as the main progression indicator to construct a higher-quality corpus progressively. In addition, we combined our method with the other line of work on representation learning for domain adaption, presenting a domain-mixed model to fully exploit the produced training instances. We evaluated our method on the benchmark ZhuXian dataset using CTB6 as the source domain. The results showed that our method is highly effective, and our final model achieves significant improvements on the joint task.

# Acknowledgments

This work is supported by grants from the National Key Research and Development Program of China (No. 2018YFC0832101) and the National Natural Science Foundation of China (No. 62176180).

# References

Shai Ben-David, John Blitzer, Koby Crammer, Fernando Pereira, et al. 2007. Analysis of representations for domain adaptation. Advances in neural information processing systems, 19:137.
Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew Dai, Rafal Jozefowicz, and Samy Bengio. 2016. Generating sentences from a continuous space. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 10-21.
Minmin Chen, Kilian Q Weinberger, and John Blitzer. 2011. Co-training for domain adaptation. In Proceedings of NeurIPS.
Hal Daumé III. 2007. Frustratingly easy domain adaptation. In Proceedings of the 45th ACL, pages 256-263.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the NAACL, pages 4171-4186.
Shizhe Diao, Jiaxin Bai, Yan Song, Tong Zhang, and Yonggang Wang. 2019. ZEN: Pre-training Chinese text encoder enhanced by n-gram representations. arXiv preprint arXiv:1911.00720.
Ning Ding, Dingkun Long, Guangwei Xu, Muhua Zhu, Pengjun Xie, Xiaobin Wang, and Haitao Zheng. 2020.
Coupling distant annotation and adversarial training for cross-domain Chinese word segmentation. In Proceedings of the ACL, pages 6662-6671.
Thomas Emerson. 2005. The second international Chinese word segmentation bakeoff. In Proceedings of the fourth SIGHAN workshop on Chinese language processing.
Yaroslav Ganin and Victor S. Lempitsky. 2015. Unsupervised domain adaptation by backpropagation. In Proceedings of the ICML, pages 1180-1189.
Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. 2016. Domain-adversarial training of neural networks. JMLR, 17(1):2096-2030.
Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Domain adaptation for large-scale sentiment classification: A deep learning approach. In Proceedings of the ICML.
Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. 2013. Speech recognition with deep recurrent neural networks. In 2013 IEEE international conference on acoustics, speech and signal processing, pages 6645–6649. IEEE.

Tao Gui, Qi Zhang, Haoran Huang, Minlong Peng, and Xuan-Jing Huang. 2017. Part-of-speech tagging for Twitter with adversarial neural networks. In Proceedings of the EMNLP, pages 2411-2420.
Judy Hoffman, Eric Tzeng, Taesung Park, Jun-Yan Zhu, Phillip Isola, Kate Saenko, Alexei Efros, and Trevor Darrell. 2018. CyCADA: Cycle-consistent adversarial domain adaptation. In Proceedings of the ICML, pages 1989-1998.
Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. In Proceedings of the ICML, pages 2790-2799.
Shen Huang, Xu Sun, and Houfeng Wang. 2017. Addressing domain adaptation for Chinese word segmentation with global recurrent structure. In Proceedings of the IJCNLP, pages 184-193.
Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging.
arXiv preprint arXiv:1508.01991. +Naoto Inoue, Ryosuke Furuta, Toshihiko Yamasaki, and Kiyoharu Aizawa. 2018. Cross-domain weakly-supervised object detection through progressive domain adaptation. In Proceedings of the CVPR, pages 5001-5009. +Chen Jia, Xiaobo Liang, and Yue Zhang. 2019. Cross-domain ner using cross-domain language modeling. In Proceedings of the ACL, pages 2464-2474. +Wenbin Jiang, Liang Huang, and Qun Liu. 2009. Automatic adaptation of annotation standards: Chinese word segmentation and pos tagging-a case study. In Proceedings of the ACL-IJCNLP, pages 522-530. +Wenbin Jiang, Liang Huang, Qun Liu, and Yajuan Lu. 2008. A cascaded linear model for joint chinese word segmentation and part-of-speech tagging. In Proceedings of ACL, pages 897-904. +Guangjin Jin and Xiao Chen. 2008. The fourth international chinese language processing bakeoff: Chinese word segmentation, named entity recognition and chinese pos tagging. In Proceedings of the sixth SIGHAN workshop on Chinese language processing. +Diederik P. Kingma and Max Welling. 2014. Autoencoding variational bayes. In Proceedings of the ICLR. +Canasai Kruengkrai, Kiyotaka Uchimoto, Yiou Wang, Kentaro Torisawa, Hitoshi Isahara, et al. 2009. An error-driven word-character hybrid model for joint chinese word segmentation and pos tagging. In Proceedings of the ACL-IJCNLP, pages 513-521. +Yang Liu and Yue Zhang. 2012. Unsupervised domain adaptation for joint segmentation and POS-tagging. In Proceedings of COLING 2012: Posters, pages 745-754. + +Yijia Liu, Yue Zhang, Wanxiang Che, Ting Liu, and Fan Wu. 2014. Domain adaptation for crf-based chinese word segmentation using free annotations. In Proceedings of the EMNLP, pages 864-874. +Mingsheng Long, Han Zhu, Jianmin Wang, and Michael I Jordan. 2017. Deep transfer learning with joint adaptation networks. In Proceedings of the ICML, pages 2208-2217. +David McClosky, Eugene Charniak, and Mark Johnson. 2006. Reranking and self-training for parser adaptation. 
In Proceedings of the ACL-COLING, pages 337-344. +Hwee Tou Ng and Jin Kiat Low. 2004. Chinese part-of-speech tagging: One-at-a-time or all-at-once? word-based or character-based? In Proceedings of the EMNLP, pages 277-284. +Nanyun Peng and Mark Dredze. 2016. Multi-task domain adaptation for sequence tagging. In Proceedings of the 2nd Workshop on Representation Learning for NLP. +Kuniaki Saito, Donghyun Kim, Stan Sclaroff, and Kate Saenko. 2020. Universal domain adaptation through self supervision. arXiv preprint arXiv:2002.07953. +Tobias Schnabel and Hinrich Schütze. 2014. Flors: Fast and simple domain adaptation for part-of-speech tagging. TACL, 2:15-26. +Yan Shao, Christian Hardmeier, Jörg Tiedemann, and Joakim Nivre. 2017. Character-based joint segmentation and POS tagging for Chinese using bidirectional RNN-CRF. In Proceedings of the IJCNLP, pages 173-183. +Yanxin Shi and Mengqiu Wang. 2007. A dual-layer crfs based joint decoding method for cascaded segmentation and labeling tasks. In Proceedings of the IJCAI, pages 1707-1712. +Rui Shu, Hung H Bui, Hirokazu Narui, and Stefano Ermon. 2018. A dirt-t approach to unsupervised domain adaptation. +Anders Søgaard. 2013. Semi-supervised learning and domain adaptation in natural language processing. Synthesis Lectures on Human Language Technologies, 6(2):1-103. +Kihyuk Sohn, David Berthelot, Chun-Liang Li, Zizhao Zhang, Nicholas Carlini, Ekin D Cubuk, Alex Kurakin, Han Zhang, and Colin Raffel. 2020. Fixmatch: Simplifying semi-supervised learning with consistency and confidence. arXiv preprint arXiv:2001.07685. +Shiliang Sun, Honglei Shi, and Yuanbin Wu. 2015. A survey of multi-source domain adaptation. Information Fusion, 24:84-92. +Weiwei Sun. 2011. A stacked sub-word model for joint Chinese word segmentation and part-of-speech tagging. In Proceedings of the ACL, pages 1385-1394. + +Yuanhe Tian, Yan Song, Xiang Ao, Fei Xia, Xiao-jun Quan, Tong Zhang, and Yonggang Wang. 2020a. 
Joint Chinese word segmentation and part-of-speech tagging via two-way attentions of auto-analyzed knowledge. In Proceedings of the ACL, pages 8286-8296.
Yuanhe Tian, Yan Song, and Fei Xia. 2020b. Joint Chinese word segmentation and part-of-speech tagging via multi-channel attention of character n-grams. In Proceedings of the 28th COLING, pages 2073-2084.
Eric Tzeng, Judy Hoffman, Kate Saenko, and Trevor Darrell. 2017. Adversarial discriminative domain adaptation. In Proceedings of the CVPR, pages 7167-7176.
Ahmet Üstün, Arianna Bisazza, Gosse Bouma, and Gertjan van Noord. 2020. UDapter: Language adaptation for truly Universal Dependency parsing. In Proceedings of the EMNLP, pages 2302-2315.
Yiou Wang, Yoshimasa Tsuruoka, Wenliang Chen, Yujie Zhang, Kentaro Torisawa, et al. 2011. Improving Chinese word segmentation and POS tagging with semi-supervised methods using large auto-analyzed data. In Proceedings of the 5th IJCNLP, pages 309-317.
Naiwen Xue, Fei Xia, Fu-Dong Chiou, and Martha Palmer. 2005. The Penn Chinese Treebank: Phrase structure annotation of a large corpus. Natural Language Engineering, 11(2):207.
Juntao Yu, Mohab El-karef, and Bernd Bohnet. 2015. Domain adaptation for dependency parsing via self-training. In Proceedings of the 14th International Conference on Parsing Technologies, pages 1-10.
Xiaodong Zeng, Derek F. Wong, Lidia S. Chao, and Isabel Trancoso. 2013. Graph-based semi-supervised model for joint Chinese word segmentation and part-of-speech tagging. In Proceedings of the ACL, pages 770-779.
Kun Zhang, Mingming Gong, and Bernhard Schölkopf. 2015. Multi-source domain adaptation: A causal view. In Proceedings of the AAAI, volume 29.
Meishan Zhang, Nan Yu, and Guohong Fu. 2018. A simple and effective neural model for joint word segmentation and POS tagging. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 26(9):1528-1538.
Meishan Zhang, Yue Zhang, Wanxiang Che, and Ting Liu. 2014.
Type-supervised domain adaptation for joint segmentation and POS-tagging. In Proceedings of the 14th EACL, pages 588-597.
Yue Zhang and Stephen Clark. 2008. Joint word segmentation and POS tagging using a single perceptron. In Proceedings of ACL-08: HLT, pages 888-896.
Han Zhao, Remi Tachet Des Combes, Kun Zhang, and Geoffrey Gordon. 2019. On learning invariant representations for domain adaptation. In Proceedings of the ICML, pages 7523-7532.
Xiaoqing Zheng, Hanyang Chen, and Tianyu Xu. 2013. Deep learning for Chinese word segmentation and POS tagging. In Proceedings of the EMNLP, pages 647-657.
Hao Zhou, Zhenting Yu, Yue Zhang, Shujian Huang, Xinyu Dai, and Jiajun Chen. 2017. Word-context character embeddings for Chinese word segmentation. In Proceedings of the EMNLP, pages 760-766.
Yang Zou, Zhiding Yu, Xiaofeng Liu, BVK Kumar, and Jinsong Wang. 2019. Confidence regularized self-training. In Proceedings of the ICCV, pages 5982-5991.

# A Transformer with Adapters

Figure 6 illustrates the internal network structure of the transformer unit in ADBERT. As shown, two adapter layers are inserted inside each transformer unit:

$$
\boldsymbol{h}_{\text{mid}} = \operatorname{GELU}\left(\boldsymbol{W}_{1}^{\text{share}} \boldsymbol{h}_{\text{in}} + \boldsymbol{b}_{1}^{\text{share}}\right), \tag{6}
$$

$$
\boldsymbol{h}_{\text{out}} = \boldsymbol{W}_{2}^{\text{share}} \boldsymbol{h}_{\text{mid}} + \boldsymbol{b}_{2}^{\text{share}} + \boldsymbol{h}_{\text{in}},
$$

where $W_1^{\mathrm{share}}$ , $W_2^{\mathrm{share}}$ , $b_1^{\mathrm{share}}$ , $b_2^{\mathrm{share}}$ are adapter parameters, which are much smaller in scale than those of BERT.
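As a concrete illustration, the bottleneck adapter of Eq. (6) can be sketched in a few lines of NumPy. The layer sizes follow Appendix B (768-dimensional hidden states, 192-dimensional shared adapters), but the variable names and the zero initialization are our assumptions, not taken from the released code.

```python
import numpy as np

def gelu(x):
    # tanh approximation of GELU (Hendrycks & Gimpel)
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def adapter(h_in, W1, b1, W2, b2):
    # Eq. (6): down-project to the bottleneck, GELU, up-project, residual add.
    h_mid = gelu(h_in @ W1.T + b1)      # (seq, bottleneck)
    return h_mid @ W2.T + b2 + h_in     # (seq, hidden), residual connection

hidden, bottleneck, seq = 768, 192, 4   # sizes from Appendix B
rng = np.random.default_rng(0)
h_in = rng.standard_normal((seq, hidden))
W1 = 0.01 * rng.standard_normal((bottleneck, hidden))
b1 = np.zeros(bottleneck)
W2 = np.zeros((hidden, bottleneck))     # zero-init up-projection: adapter starts as identity
b2 = np.zeros(hidden)
h_out = adapter(h_in, W1, b1, W2, b2)

# Adapter parameter count: 2*768*192 + 768 + 192, roughly 0.3M weights,
# versus roughly 110M for BERT-base, matching the "much smaller in scale" claim.
```

With the up-projection zero-initialized, the adapter is an exact identity map at the start of training, so inserting it does not perturb the frozen BERT representations.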
Here we further emphasize that when BERT is powered with adapters, it can be regarded as a static knowledge source for downstream tasks by freezing all of its pretrained parameters, since the BERT parameter values can then be shared across these tasks.

# B Hyperparameters

For the model part, we set all the hidden sizes of BiLSTM to 400, and set the hidden sizes of all shared adapters to 192. We exploit the pretrained BERT-base-Chinese model for the character representations, so the character representations have an output dimension of 768. The domain-type embedding has a dimension of 50. For fine-grained domain adaptation, the number of high-confidence word-tag pairs in Top-K is set to 1000, and the probability threshold $p_{\text{threshold}}$ is set to 0.8.

For training, we exploit online learning with a batch size of 16 to update the model parameters, and use the Adam algorithm with a constant learning rate of $2 \times 10^{-5}$ to optimize the parameters. Gradient clipping with a maximum value of 5.0 is adopted to avoid gradient explosion. We apply sequential-level dropout to the character representations to avoid overfitting, where the sequential hidden vectors are randomly set to zero with a probability of 0.2. In particular, we have two hyperparameters $\lambda_{1}$ and $\lambda_{2}$ in our overall training objective, which are auto-adjusted during training from 0 to 1 by exponential annealing in the first 5,000 steps (Bowman et al., 2016).

![](images/6d4581c38c861e75f9ae60b167c8f645c7abc24bccf463a4aeaafc5d9bcae1a8.jpg)
Figure 6: The structure of ADBERT.
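The text gives only the endpoints of the $\lambda_1/\lambda_2$ schedule (0 to 1 over the first 5,000 steps), not the exact curve; a minimal sketch of one plausible exponential annealing schedule, with the rate constant chosen by us, might look like:

```python
import math

def annealed_lambda(step, total_steps=5000):
    # Hypothetical exponential annealing: rises from 0 toward 1 over the first
    # `total_steps` updates, then stays at 1. The exact curve used in the paper
    # is not specified; only the 0-to-1 range and the 5,000-step window are.
    if step >= total_steps:
        return 1.0
    # rate chosen so the value is within ~1% of 1.0 by total_steps
    return 1.0 - math.exp(-5.0 * step / total_steps)
```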
\ No newline at end of file diff --git a/afinegraineddomainadaptionmodelforjointwordsegmentationandpostagging/images.zip b/afinegraineddomainadaptionmodelforjointwordsegmentationandpostagging/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..da0e9a47129dfbbd34861c32841c2b13b46b2b62 --- /dev/null +++ b/afinegraineddomainadaptionmodelforjointwordsegmentationandpostagging/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3b07878bd60c1f5561ac159c5f4c4678742c6835ad8d982e88a34c79032153ce +size 498581 diff --git a/afinegraineddomainadaptionmodelforjointwordsegmentationandpostagging/layout.json b/afinegraineddomainadaptionmodelforjointwordsegmentationandpostagging/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..e5e2335b83a96e2d7ae8e5b8fb1967df96c9c707 --- /dev/null +++ b/afinegraineddomainadaptionmodelforjointwordsegmentationandpostagging/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ae65c9f7aac0455264652d87e28447793dcf98a8c263dd053e7998bde2a75285 +size 402626 diff --git a/afromtpretrainingstrategiesandreproduciblebenchmarksfortranslationof8africanlanguages/3c25d021-abd5-474f-8646-5d9b0d2e9c58_content_list.json b/afromtpretrainingstrategiesandreproduciblebenchmarksfortranslationof8africanlanguages/3c25d021-abd5-474f-8646-5d9b0d2e9c58_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..6cf315bc868549c7adfb03eb09b55448c1004f87 --- /dev/null +++ b/afromtpretrainingstrategiesandreproduciblebenchmarksfortranslationof8africanlanguages/3c25d021-abd5-474f-8646-5d9b0d2e9c58_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f9eb5a83ee078cd3f24f06213bba67c96613d6a6f38aaa38d51472e3b7b96760 +size 95937 diff --git a/afromtpretrainingstrategiesandreproduciblebenchmarksfortranslationof8africanlanguages/3c25d021-abd5-474f-8646-5d9b0d2e9c58_model.json 
b/afromtpretrainingstrategiesandreproduciblebenchmarksfortranslationof8africanlanguages/3c25d021-abd5-474f-8646-5d9b0d2e9c58_model.json new file mode 100644 index 0000000000000000000000000000000000000000..3d674d99f31213bee6a8f6edbb5d369b8511bba4 --- /dev/null +++ b/afromtpretrainingstrategiesandreproduciblebenchmarksfortranslationof8africanlanguages/3c25d021-abd5-474f-8646-5d9b0d2e9c58_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5e94262212a82770c5b6e607e3514d164342974293fcfde202b1ae8267736264 +size 116688 diff --git a/afromtpretrainingstrategiesandreproduciblebenchmarksfortranslationof8africanlanguages/3c25d021-abd5-474f-8646-5d9b0d2e9c58_origin.pdf b/afromtpretrainingstrategiesandreproduciblebenchmarksfortranslationof8africanlanguages/3c25d021-abd5-474f-8646-5d9b0d2e9c58_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..f4c304f4b1e23a61308f204ee4ab191f053aa70e --- /dev/null +++ b/afromtpretrainingstrategiesandreproduciblebenchmarksfortranslationof8africanlanguages/3c25d021-abd5-474f-8646-5d9b0d2e9c58_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ba84bd8046fc88ea59b4bf6f950ba244e9b1e3926c389efa05472f96e31b5f84 +size 472323 diff --git a/afromtpretrainingstrategiesandreproduciblebenchmarksfortranslationof8africanlanguages/full.md b/afromtpretrainingstrategiesandreproduciblebenchmarksfortranslationof8africanlanguages/full.md new file mode 100644 index 0000000000000000000000000000000000000000..68d9b8ff7db9d3da477a95c54c650febf90f0ebb --- /dev/null +++ b/afromtpretrainingstrategiesandreproduciblebenchmarksfortranslationof8africanlanguages/full.md @@ -0,0 +1,350 @@ +# AFROMT: Pretraining Strategies and Reproducible Benchmarks for Translation of 8 African Languages + +Machel Reid1, Junjie Hu2, Graham Neubig2, Yutaka Matsuo1 + +1The University of Tokyo, 2Carnegie Mellon University + +{machelreid, matsuo}@weblab.t.u-tokyo.ac.jp + +{junjieh,gneubig}@cs.cmu.edu + +# 
Abstract

Reproducible benchmarks are crucial for driving progress in machine translation research. However, existing machine translation benchmarks have mostly been limited to high-resource or well-represented languages. Despite an increasing interest in low-resource machine translation, there are no standardized reproducible benchmarks for many African languages, many of which are used by millions of speakers but have little digitized textual data. To tackle these challenges, we propose AFROMT, a standardized, clean, and reproducible machine translation benchmark for eight widely spoken African languages. We also develop a suite of analysis tools for system diagnosis taking into account the unique properties of these languages. Furthermore, we explore the newly considered case of low-resource-focused pretraining and develop two novel data augmentation-based strategies, leveraging word-level alignment information and pseudo-monolingual data for pretraining multilingual sequence-to-sequence models. We demonstrate significant improvements when pretraining on 11 languages, with gains of up to 2 BLEU points over strong baselines. We also show gains of up to 12 BLEU points over cross-lingual transfer baselines in data-constrained scenarios. All code and pretrained models will be released as further steps towards larger reproducible benchmarks for African languages. $^{1}$

# 1 Introduction

Accuracy of machine translation systems in many languages has improved greatly over the past several years due to the introduction of neural machine translation (NMT) techniques (Bahdanau et al., 2015; Sutskever et al., 2014; Vaswani et al., 2017), as well as scaling to larger models (Ott et al., 2018).
However, many of these advances have been demonstrated in settings where very large parallel datasets are available (Meng et al., 2019; Arivazhagan et al., 2019), and NMT systems often underperform in low-resource settings when given small amounts of parallel corpora (Koehn and Knowles, 2017; Guzmán et al., 2019). One solution to this has been leveraging multilingual pretraining on large sets of monolingual data (Conneau and Lample, 2019; Song et al., 2019; Liu et al., 2020), leading to improvements even with smaller parallel corpora. However, this thread of work has focused on scenarios with the following two properties: (1) pretraining on a plurality of European languages and (2) cases in which the monolingual pretraining data greatly exceeds the parallel data used for finetuning (often by over 100 times) (Guzmán et al., 2019; Liu et al., 2020).

For many of the world's languages, the above two properties are often not satisfied. In particular, taking the example of African languages (the focus of our work), existing (small) parallel corpora for English-to-African language pairs often comprise the majority of available monolingual data in the corresponding African languages. In addition, African languages are often morphologically rich and come from completely different language families, being quite distant from European languages. Moreover, despite the importance of reproducible benchmarks for measuring progress on various tasks in an empirical setting, there exists no standardized machine translation benchmark for the majority of African languages.
In this work, we introduce (1) a new machine translation benchmark for African languages, (2) pretraining techniques for the previously unexplored case where the monolingual data available for pretraining is similar or equal in size to the parallel data available for finetuning, and (3) evaluation tools designed to measure how well machine translation systems handle the unique grammar of these languages.

Our proposed benchmark, AFROMT, consists of translation tasks between English and 8 African languages — Afrikaans, Xhosa, Zulu, Rundi, Sesotho, Swahili, Bemba, and Lingala — four of which are not included in commercial translation systems such as Google Translate (as of Feb. 2021). In §2, we describe the detailed design of our benchmark, including the language selection criteria and the methodology to collect, clean, and normalize the data for training and evaluation purposes. In §3, we provide a set of strong baselines for our benchmark, including denoising sequence-to-sequence pretraining (Lewis et al., 2020; Liu et al., 2020), transfer learning with similar languages (Zoph et al., 2016; Neubig and Hu, 2018), and our proposed data augmentation methods for pretraining on low-resource languages. Our first method leverages bilingual dictionaries to augment data in high-resource languages (HRL), and our second method iteratively creates pseudo-monolingual data in low-resource languages (LRL) for pretraining. Extensive experiments in §4 show that our proposed methods outperform our baselines by up to $\sim 2$ BLEU points over all language pairs and up to $\sim 15$ BLEU points in data-constrained scenarios.

# 2 AFROMT Benchmark

In this section, we detail the construction of our new benchmark, AFROMT. We first introduce our criteria for selecting the languages (§2.1), and then describe the steps to prepare the dataset (§2.2, 2.3).
# 2.1 Language Selection Criteria

Given AFROMT's goal of providing a reproducible evaluation of African language translation, we select languages based on the following criteria:

Coverage of Speakers & Language Representation We select languages largely based on the coverage of speakers as well as how represented they are in commercial translation systems. In total, the AFROMT benchmark covers 225 million L1 and L2 speakers combined, covering a large number of speakers within Sub-Saharan Africa.

Linguistic Characteristics With the exception of English and Afrikaans, which belong to the Indo-European language family, all of the considered languages belong to the Niger-Congo family, which is Africa's largest language family in terms of geographical area and speaking population (see Appendix). Similar to English, the Niger-Congo family generally follows the SVO word order. One characteristic feature of these languages is their morphosyntax, especially their system of noun classification, with noun classes often exceeding 10, ranging from markers denoting male/female/animate/inanimate and more $^2$ . These noun classes can be likened in some sense to the male/female designation found in Romance languages. In contrast with those languages, however, noun markers in Niger-Congo languages are often integrated within the word, usually as a prefix (Bendor-Samuel and Hartell, 1989). For example, in Zulu, isiZulu refers to the Zulu language, whereas amaZulu refers to the Zulu people. Additionally, these languages also use "verb extensions", verb suffixes used to modify the meaning of the verb. These qualities contribute to the morphological richness of these languages — a stark contrast with European languages.

# 2.2 Data Sources

For our benchmark, we leverage existing parallel data for each of our language pairs.
This data is derived from two main sources: (1) the open-source repository of parallel corpora OPUS3 (Tiedemann, 2012), and (2) ParaCrawl (Esplà et al., 2019). From OPUS, we use the JW300 corpus (Agić and Vulić, 2019), OpenSubtitles (Lison and Tiedemann, 2016), XhosaNavy, Memat, and QED (Abdelali et al., 2014). Despite the existence of this parallel data, these text datasets were often collected from large, relatively unclean multilingual corpora, e.g., JW300, which was extracted from Jehovah's Witnesses text, or QED, which was extracted from transcribed educational videos. This leads to many sentences with high lexical overlap, inconsistent tokenization, and other undesirable properties for a clean, reproducible benchmark.

# 2.3 Data Preparation

Training machine translation systems with small and noisy corpora for low-resource languages is challenging, and often leads to inaccurate translations. These noisy examples include sentences which contain only symbols and numbers, sentences which consist of only one token, sentences which are the same on both the source and target sides, etc. Furthermore, in these noisy extractions
| Language | ISO Code | Lang. Family | # Noun Classes | Sources | AFROMT Train | AFROMT Valid | AFROMT Test | Gold Mono. | Pseudo Mono. |
|---|---|---|---|---|---|---|---|---|---|
| Afrikaans | Af | Indo-European | — | J, O | 743K | 3000 | 3000 | 1.3G | — |
| Bemba | Bem | Niger-Congo | 9/6/15 | J | 275K | 3000 | 3000 | 38M | 1.0G |
| Lingala | Ln | Niger-Congo | 9/6/15 | J | 382K | 3000 | 3000 | 67M | 1.4G |
| Rundi | Run | Niger-Congo | 9/7/16 | J | 253K | 3000 | 3000 | 26M | 1.1G |
| Sesotho | St | Niger-Congo | 6/5/11 | J | 595K | 3000 | 3000 | 84M | 1.0G |
| Swahili | Sw | Niger-Congo | 9/9/18 | J, P | 700K | 3000 | 3000 | 1.8G | 1.2G |
| Xhosa | Xh | Niger-Congo | 8/7/15 | J, X, M, Q | 610K | 3000 | 3000 | 203M | 1.2G |
| Zulu | Zu | Niger-Congo | 6/10/16 | J | 664K | 3000 | 3000 | 121M | 1.4G |
Table 1: Language characteristics and dataset statistics for AFROMT. Statistics for AFROMT are measured in terms of sentences. Monolingual data sizes are measured on the raw, pretokenized corpora. We abbreviate the sources for our benchmark as follows: J=JW300, O=OpenSubtitles, P=ParaCrawl, X=XhosaNavy, M=Memat, Q=QED. The # Noun Classes column shows the number of singular/plural/total noun classes.

from large multilingual corpora such as JW300, there is a key issue of high text overlap across sentences. Given the risk of data leakage, this prevents one from naively splitting the corpus into random train/validation/test splits.

To mitigate these issues, when preparing our data, we use a combination of automatic filtering techniques and manual human verification at each step to produce clean parallel data for the construction of our benchmark. For consistency across language pairs, we perform cleaning mainly based on the English side of the noisy parallel corpora. We list the automatic filtering techniques below:

Removal of extremely short sentences Since we focus on sentence-level machine translation,4 we remove sentences containing fewer than three whitespace-tokenized tokens, excluding numerical symbols and punctuation. Additionally, we remove pairs that are missing either the source or the target sentence.

Removal of non-sentences We remove sentences containing no letters, i.e., pairs that contain only numbers and symbols.

Tokenization normalization We normalize tokenization on all corpora using the detokenization script provided in the Moses (Koehn et al., 2007) toolkit5. Given that we collect data from various sources, this step is important to allow for consistent tokenization across corpora.

Removal of sentences with high text overlap To prevent data leakage, we remove sentences with high text overlap. To do this, we use Levenshtein-based fuzzy string matching $^{6}$ and remove sentences that have a similarity score of over 60.
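The overlap filter just described might be sketched as follows. The paper uses a Levenshtein-based fuzzy score (footnote 6); here `difflib.SequenceMatcher`, scaled to 0-100, serves as a stdlib stand-in, and the alphabetical window mirrors one of the speed-up heuristics the authors describe. Function names and the toy corpus are ours.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    # Stand-in for the paper's Levenshtein-based fuzzy score, scaled to 0-100.
    return 100.0 * SequenceMatcher(None, a, b).ratio()

def filter_high_overlap(sentences, window=50, threshold=60.0):
    # Sort alphabetically and compare each sentence only against the previous
    # `window` kept sentences, keeping the filter far from quadratic cost.
    kept = []
    for sent in sorted(sentences):
        if all(similarity(sent, prev) <= threshold for prev in kept[-window:]):
            kept.append(sent)
    return kept

corpus = [
    "God is love and love is patient .",
    "God is love and love is patient!",  # near-duplicate: should be filtered out
    "The quick brown fox jumps over the lazy dog .",
]
clean = filter_high_overlap(corpus)
```

On this toy corpus the near-duplicate pair collapses to a single sentence while the unrelated one survives, which is exactly the leakage-prevention behavior the benchmark relies on.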
Given that measuring this score against all sentences in a corpus grows quadratically with respect to corpus length, we use the following two heuristics to remove sentences with high overlap in an efficient manner: (1) scoring similarity against the previous 50 sentences after alphabetical sorting, and (2) extracting the top 100K four-grams and computing the similarity score within each group of sentences containing at least one instance of a given four-gram.

Data Split The resulting benchmark is constructed using the data that passes our automatic filtering checks, and we further split the data into train, validation, and test sets for each language pair. We select the 3,000 sentences with the least four-gram overlap (with the corpus) for both validation and testing, leaving the rest of the corpus for training. Validation and test sentences are all further verified for quality. The resulting dataset statistics for each language pair can be seen in Table 1.

# 2.4 Impact of Cleaning Process

Given the non-trivial cleaning process and the standardization of key components such as tokenization, splits, and data leakage, this cleaning yields a more representative corpus for the languages considered. We demonstrate this with an experiment comparing randomly initialized English-Zulu models trained on (a) the original noisy data (including some test data leakage), (b) a model trained
Additional quantification of data leakage can be found in the Appendix. + +# 3 AfroBART + +Given that we aim to provide strong baselines for our benchmark, we resort to multilingual sequence-to-sequence training. However, existing pretraining techniques have often been focused on the situation where monolingual data can be found in a larger quantity than parallel data. In this section we describe our proposed multilingual sequence-to-sequence pretraining techniques developed for the novel scenario where even monolingual data is scarce. + +# 3.1 Existing Methods + +The most widely used methods for multilingual sequence-to-sequence pretraining (Song et al., 2019; Xue et al., 2020; Liu et al., 2020) make a core assumption that the amount of monolingual data in all languages exceeds the amount of parallel data. However, in the case of many African languages, digitized textual data is not widely available, leading this approach to be less effective in these scenarios as shown in Table 2. To mitigate this issue, we build on existing denoising pretraining techniques, particularly BART (Lewis et al., 2020; Liu et al., 2020) and propose two data augmentation methods using dictionaries to augment high-resource monolingual data ( $\S 3.2$ ), and leveraging pseudo monolingual data in low-resource languages ( $\S 3.3$ ). Finally, we iterate the data augmentation with the model training ( $\S 3.4$ ) as shown in Figure 2. + +# 3.2 Dictionary Augmentation + +Given that existing monolingual corpora in low-resource languages are small, we aim to increase the usage of words from the low-resource language in diverse contexts. 
To do so, we propose to take sentences from a high-resource language and replace words with their corresponding translations from a dictionary extracted from our parallel corpora.

![](images/16bf8e5cf5e626fbbfd44d3a4340577a8ecaeca5f5a71a782d8db32ccf8d3b79.jpg)
Figure 1: Transforming monolingual high-resource data to augmented code-switched data using an English-Swahili bilingual dictionary

Dictionary Extraction As our data augmentation technique requires a dictionary, we propose to extract one from parallel corpora using the statistical word aligner eflomal7 (Östling and Tiedemann, 2016). Once we have produced word alignments between tokens in our parallel corpora, we simply keep word alignments that appear over 20 times to produce our bilingual dictionary.

Monolingual Data Augmentation We assume access to three sources of data, i.e., a high-resource corpus $H = \{H_0,\dots ,H_T\}$ , a low-resource corpus $L = \{L_{0},\ldots ,L_{M}\}$ , and a bilingual dictionary $D = \{(D_0^h,D_0^l),\ldots ,(D_{N_d}^h,D_{N_d}^l)\}$ with $N_{d}$ pairs mapping a high-resource term $D_{i}^{h}$ to a low-resource term $D_{i}^{l}$ . Given this, for every high-resource sentence $H_{i}$ we replace $30\%$ of the tokens that match high-resource terms contained in $D$ with their respective low-resource terms. When more than one low-resource term exists for a high-resource term, we randomly select one as the replacement. Notably, under the assumption that high-resource monolingual data is more diverse in its content given its greater size, this augmentation technique is an effective way to increase the coverage of words from the low-resource lexicon in diverse settings.
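A minimal sketch of the 30% replacement rule follows, with a toy English-to-Swahili dictionary; the illustrative entries, function names, and tie-breaking details are ours, not taken from the paper's extracted dictionary.

```python
import random

def dictionary_augment(sentence, bilingual_dict, replace_frac=0.3, seed=0):
    # Replace ~replace_frac of the tokens that have a dictionary entry with a
    # randomly chosen low-resource translation, producing code-switched text.
    rng = random.Random(seed)
    tokens = sentence.split()
    candidates = [i for i, t in enumerate(tokens) if t.lower() in bilingual_dict]
    n_replace = max(1, round(replace_frac * len(candidates))) if candidates else 0
    for i in rng.sample(candidates, n_replace):
        tokens[i] = rng.choice(bilingual_dict[tokens[i].lower()])
    return " ".join(tokens)

# Toy English->Swahili entries for illustration only.
en_sw = {"book": ["kitabu"], "books": ["vitabu"], "people": ["watu"]}
out = dictionary_augment("The people read many books", en_sw)
```

On this input exactly one of the two dictionary-covered tokens is switched into Swahili, yielding code-switched text of the kind depicted in Figure 1.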
![](images/199c13e98a8eca5056af50e2af578790d151e059346c0019de9f6b78b7e0f17.jpg)
Figure 2: Iterative approach to pretraining using pseudo-monolingual data and dictionaries

# 3.3 Leveraging Pseudo-Monolingual Data

Although leveraging dictionaries to produce code-switched monolingual data is a useful technique for introducing low-resource words in a wider variety of contexts, the code-switched sentences still lack the fluency and consistency of pure monolingual data. To further mitigate these fluency and data scarcity issues in the LRL, we propose to create fluent pseudo-monolingual data by translating the HRL monolingual data into the low-resource language using a pretrained machine translation model.

Specifically, given a pretrained sequence-to-sequence model $M$ , we finetune $M$ for translation from HRL to LRL on a parallel corpus, i.e., $\mathcal{D}_{ft} = \{(\mathcal{D}_0^h,\mathcal{D}_0^l),\ldots ,(\mathcal{D}_{N_{ft}}^h,\mathcal{D}_{N_{ft}}^l)\}$ , and obtain a machine translation model $M_{ft}$ . With the finetuned translation model $M_{ft}$ , we then translate sentences from the high-resource corpus $H$ into our low-resource language $l$ to produce the pseudo LRL monolingual corpus $\tilde{L}$ :

$$
\tilde{L} = M_{ft}(H; \Theta_{ft}) \tag{1}
$$

Following this, we concatenate the existing low-resource corpus $L$ with $\tilde{L}$ and continue training our pretrained sequence-to-sequence model on this new pseudo-monolingual corpus.8

# 3.4 Iterative Multilingual Denoising Pretraining

Given the pseudo-monolingual data synthesis step detailed in §3.3, we can simply transform this into an iterative pretraining procedure (Tran et al., 2020).
That is, we can leverage the monolingual data synthesis procedure to produce a cycle in which a pretrained model is used to initialize an MT model that synthesizes pseudo-monolingual data, and the produced data is used to further train the pretrained model (depicted in Figure 2).

# 4 Experimental Setup

In this section, we describe our experimental setup for both pretraining and finetuning strong baselines for our benchmark. Furthermore, we evaluate the efficacy of our proposed pretraining techniques and whether they improve downstream performance on AFROMT.

# 4.1 Pretraining

Dataset We pretrain AfroBART on 11 languages: Afrikaans, English, French, Dutch9, Bemba, Xhosa, Zulu, Rundi, Sesotho, Swahili, and Lingala. To construct the original monolingual corpora, we use a combination of the training sets in AFROMT and data derived from CC10010 (Wenzek et al., 2020; Conneau et al., 2020). We only perform dictionary augmentation on our English monolingual data. We list monolingual and pseudo-monolingual corpora statistics in Table 1.

Balancing data across languages As we are training on different languages with widely varying amounts of text, we use the exponential sampling technique of Conneau and Lample (2019); Liu et al. (2020), where the text is re-sampled according to a smoothing parameter $\alpha$ as shown below:

$$
q_{k} = \frac{p_{k}^{\alpha}}{\sum_{j = 1}^{N} p_{j}^{\alpha}} \tag{2}
$$

where $q_{k}$ refers to the re-sampling probability for language $k$ , given the multinomial distribution $\{q_{k}\}_{k = 1\dots N}$ with original sampling probability $p_{k}$ $^{11}$ . As we work with many extremely low-resource languages, we choose smoothing parameter $\alpha = 0.25$ (compared with the $\alpha = 0.7$ used in mBART) to alleviate model bias towards the overwhelmingly higher proportion of data in the higher-resource languages.
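Eq. (2) can be sketched directly; the corpus sizes below are illustrative only, not the paper's exact counts.

```python
def smoothed_sampling_probs(sizes, alpha=0.25):
    # Eq. (2): exponentiate each language's natural share p_k by alpha,
    # then renormalize, which up-samples the low-resource languages.
    total = sum(sizes.values())
    p = {k: v / total for k, v in sizes.items()}
    z = sum(pk ** alpha for pk in p.values())
    return {k: pk ** alpha / z for k, pk in p.items()}

# Illustrative sentence counts, not the paper's exact corpus sizes.
sizes = {"en": 10_000_000, "sw": 700_000, "run": 253_000}
q = smoothed_sampling_probs(sizes, alpha=0.25)
```

With $\alpha = 0.25$ the distribution is strongly flattened: the lowest-resource language is sampled well above its natural share, while the ordering of languages by size is preserved.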
Hyperparameters We use the following setup to train our AfroBART models, utilizing the mBART implementation in the fairseq library (Ott et al., 2019). We tokenize the concatenated data with SentencePiece (Kudo and Richardson, 2018), using an 80K subword vocabulary. We use the Transformer-base architecture with a hidden dimension of 512, a feedforward size of 2048, and 6 layers for both the encoder and decoder. We set the maximum sequence length to 512, using a batch size of 1024 for 100K iterations with 32 NVIDIA V100
| Direction | En-Run BLEU | En-Run chrF | En-Zu BLEU | En-Zu chrF | En-Af BLEU | En-Af chrF | En-Xh BLEU | En-Xh chrF |
|---|---|---|---|---|---|---|---|---|
| Random | 22.92 | 51.89 | 34.84 | 65.54 | 48.33 | 68.11 | 24.36 | 52.91 |
| mNMT | 21.53 | 50.62 | 31.53 | 62.95 | 43.39 | 64.73 | 22.28 | 54.81 |
| AfroBART Baseline | 24.33 | 52.87 | 35.59 | 66.14 | 49.09 | 68.54 | 25.65 | 58.09 |
| AfroBART-Dictionary | 24.42 | 53.22 | 35.48 | 66.16 | 49.25 | 68.75 | 25.77 | 58.15 |
| AfroBART | 24.62 | 53.24 | 35.58 | 66.30 | 49.80 | 69.03 | 25.80 | 58.22 |

| Direction | En-Ln BLEU | En-Ln chrF | En-Bem BLEU | En-Bem chrF | En-St BLEU | En-St chrF | En-Sw BLEU | En-Sw chrF |
|---|---|---|---|---|---|---|---|---|
| Random | 28.23 | 52.62 | 18.96 | 45.85 | 43.04 | 62.68 | 33.61 | 58.56 |
| mNMT | 27.29 | 53.16 | 18.54 | 46.20 | 40.26 | 60.65 | 30.55 | 56.44 |
| AfroBART Baseline | 29.12 | 54.31 | 20.07 | 47.50 | 43.79 | 63.22 | 34.19 | 59.08 |
| AfroBART-Dictionary | 29.13 | 54.40 | 20.48 | 47.69 | 43.74 | 63.33 | 34.30 | 59.08 |
| AfroBART | 29.46 | 54.68 | 20.60 | 48.00 | 43.87 | 63.42 | 34.36 | 59.11 |
Table 2: Results on AFROMT's En-XX Machine Translation

GPUs for one day. When we continue training using pseudo-monolingual data, we use a learning rate of $7 \times 10^{-5}$ , warm up over 5K iterations, and train for 35K iterations.

# 4.2 Finetuning

Baselines We use the following baselines for our benchmark:

- AfroBART Baseline We pretrain a model using only the original monolingual corpora, in a similar fashion to Liu et al. (2020).
- AfroBART-Dictionary We pretrain a model using the original data in addition to English monolingual corpora dictionary-augmented into Afrikaans, Bemba, Sesotho, Xhosa, Zulu, Lingala, and Swahili.
- AfroBART We continue training the dictionary-augmented AfroBART model using pseudo-monolingual data produced by its finetuned counterparts. Due to computational constraints we only perform one iteration of our iterative approach. Statistics for the pseudo-monolingual data can be seen in Table 1.
- Cross-Lingual Transfer (CLT) When experimenting with the effect of pretraining under various amounts of finetuning data, we use strong cross-lingual transfer models, trained from scratch on a combination of our low-resource data and a similar, relatively high-resource language, following Neubig and Hu (2018).
- Multilingual Neural Machine Translation (mNMT) We also experiment with a vanilla
Both metrics are measured using the SacreBLEU library$^{13}$ (Post, 2018).

# 5 Results and Discussion

# 5.1 Performance on En-XX Translation

Table 2 shows the results on En-XX translation on the AFROMT benchmark, comparing random initialization with various pretrained AfroBART configurations. We find that initializing with pretrained AfroBART weights results in performance gains of $\sim 1$ BLEU across all language pairs. Furthermore, we observe that augmenting our pretraining data with a dictionary results in performance gains across all pairs in terms of chrF and on 6/8 pairs in terms of BLEU. The gain is especially clear on languages with smaller amounts of monolingual data,

![](images/273a115735799e3f8327e3adeeff8a9c6e472f6e1455c26effeec52b3f4291ca.jpg)
Figure 3: Visualization of results using various amounts of parallel data on English-Xhosa and English-Zulu. We compare AfroBART, random initialization and cross-lingual transfer.

![](images/fea8b25d55bc2753c75743f745626e3ae9cd1637cf7d05db5579bb7f0ca6b741.jpg)

such as Rundi and Bemba, demonstrating the effectiveness of our data augmentation techniques on low-resource translation. Moreover, we see further improvements when augmenting with pseudo-monolingual data, especially on pairs with less data, which validates the usage of this technique.

# 5.2 Performance vs Amount of Parallel Data

We perform experiments to demonstrate the effect of pretraining with various amounts of parallel data (10k, 50k, and 100k pairs) on two related language pairs: English-Xhosa and English-Zulu. We compare AfroBART (with both dictionary augmentation and pseudo-monolingual data) with randomly initialized models and with cross-lingual transfer models (Neubig and Hu, 2018) jointly trained with a larger amount of parallel data (full AFROMT data) in a related language.
In Figure 3, a pretrained AfroBART model finetuned on 10K pairs can almost double the performance of the other models (with a significant performance increase over random initialization of $15+$ BLEU on English-Zulu), outperforming both cross-lingual transfer and randomly initialized models trained on 5x the data. Furthermore, we notice that CLT performs worse than Random on English-Xhosa as the data size increases. Although we do not have an exact explanation for this, we believe it has to do with the other-language data adding noise rather than additional supervision as the data size increases. We detail these results in Table 3 of the Appendix.

Comparison on convergence speed In contrast to the cross-lingual transfer baseline, which involves the usage of more data, and the random initialization baseline, which needs to learn from scratch, AfroBART is able to leverage the knowledge gained during training for fast adaptation even with small amounts of data. For example, AfroBART converged within 1,000 iterations when finetuning on 10K pairs on English-Zulu, whereas the random initialization and cross-lingual transfer baselines converged within 2.5K and 12K iterations, respectively. This is promising, as it indicates that we can quickly leverage these models for other tasks where there is much less parallel data.
# 5.3 Fine-grained Language Analysis

We further provide a suite of fine-grained analysis tools to compare the baseline systems. In particular, we are interested in evaluating the translation accuracy of noun classes in the considered African languages in the Niger-Congo family, as these languages are morphologically rich and often have more than 10 classes based on the prefix of the word. For example, kitabu and vitabu in Swahili refer to book and books in English, respectively. Based on this language characteristic, our fine-grained analysis tool calculates the translation accuracy of the nouns with the top 10 most frequent prefixes in the test data. To do so, one of the challenges is to identify nouns in a sentence written in the target African language; however, there is no available part-of-speech (POS) tagger for these languages. To tackle this challenge, we propose to use a label projection method based on word alignment. Specifically, we first leverage an existing English POS tagger in the spaCy library to annotate the English source sentences. We then use the fast_align tool (Dyer et al., 2013) to train a word alignment model on the training data for the En-XX language pair, and use the alignment model to obtain the word-level alignment for the test data. We assign the POS tags of the source words in English to their aligned target words in the African language. We then measure the translation accuracy of the nouns in the African language by checking whether the correct nouns are included in the sentences translated by the systems under comparison. Notably, our analysis tool can also measure the translation accuracy of words with other POS tags (e.g., verbs, adjectives), which often agree with different noun classes.

Figure 4 compares the AfroBART and Random baselines in terms of the translation accuracy of nouns in Swahili. First, we find that both systems perform worse on translating nouns with the prefix "ku-", which usually represents the infinitive form of verbs, e.g., kula for eating. Secondly, we find that AfroBART significantly improves translation accuracy for nouns with the prefixes "ki-" (describing man-made tools/languages, e.g., kitabu for book) and "mw-" (describing a person, e.g., mwalimu for teacher). Finally, AfroBART improves the translation accuracy on average over the ten noun classes by $1.08\%$ over the Random baseline.

We also perform this analysis in our data-constrained scenario for English-Xhosa, shown in Figure 7.
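The label-projection and noun-accuracy computation described above can be sketched as follows. The toy sentence, tags, and alignment are illustrative assumptions (not from the paper's data); in practice the tags come from spaCy and the alignments from fast_align:

```python
from collections import defaultdict

def project_nouns(src_pos, alignment):
    """Project source POS tags onto target tokens via word alignment.
    `alignment` is a list of (src_idx, tgt_idx) pairs, fast_align style."""
    tgt_pos = {}
    for s, t in alignment:
        tgt_pos[t] = src_pos[s]
    return tgt_pos

def noun_accuracy_by_prefix(ref_tokens, tgt_pos, sys_tokens, prefix_len=2):
    """For each reference noun (identified via projected tags), check whether
    it appears in the system output; aggregate accuracy by noun prefix."""
    hits, totals = defaultdict(int), defaultdict(int)
    sys_set = set(sys_tokens)
    for idx, pos in tgt_pos.items():
        if pos != "NOUN":
            continue
        noun = ref_tokens[idx]
        prefix = noun[:prefix_len]
        totals[prefix] += 1
        if noun in sys_set:
            hits[prefix] += 1
    return {p: hits[p] / totals[p] for p in totals}

# Toy example: English "I read the book" -> Swahili "nilisoma kitabu"
src_pos = ["PRON", "VERB", "DET", "NOUN"]   # from an English POS tagger
alignment = [(1, 0), (3, 1)]                # read->nilisoma, book->kitabu
ref = ["nilisoma", "kitabu"]
sys_out = ["nilisoma", "kitabu"]
acc = noun_accuracy_by_prefix(ref, project_nouns(src_pos, alignment), sys_out)
# acc -> {'ki': 1.0}
```

Running this per sentence over the test set and keeping only the top-10 most frequent prefixes yields the per-class accuracies plotted in Figure 4.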
It can be seen that leveraging cross-lingual transfer models (trained on both Xhosa and Zulu) improved noun class accuracy on classes such as uku (infinitive noun class), izi (plural for objects), and ama (plural for body parts), which are shared between the languages. This can be contrasted with iin (plural for animals), which is only used in Xhosa, where CLT decreases performance. These analyses, which require knowledge of the unique grammar found in these languages, can be used for diagnosing cross-lingual transfer for these languages. Also, we note that AfroBART almost doubles the accuracy (an improvement of $16.33\%$) of the cross-lingual transfer baseline on these noun classes.

# 5.4 Shortcomings of AFROMT

Although we believe AFROMT to be an important step in the right direction, we acknowledge it is far from being the end-all-be-all. Specifically, we note the following: (1) the lack of domain diversity for many languages (the data being largely from religion-oriented corpora) and (2) the corpora may still contain some finer-grained forms of translation noise given their origin. Given this, in the future we look to include more diverse data

![](images/b0d3f30ad5d721ecdec2de35eee8cac818bdb1c7e90c233e9a67031278787114.jpg)
Figure 4: Translation accuracy of the AfroBART and Random baseline systems on Swahili noun classes with the top 10 most frequent 2-character prefixes.

![](images/2a24a020ebdee6b3f5ca3deef407de0b1efbe585e90b3d3a7106cbda9b1c1c8b.jpg)
Figure 5: Translation accuracy of the AfroBART and Random baseline systems on Xhosa (10k pairs) noun classes with the top 10 most frequent 3-character prefixes.

sources and more languages, and encourage the community to do so as well.

# 6 Related Work

Machine Translation Benchmarks Previous work in benchmarking includes the commonly used WMT (Bojar et al., 2017) and IWSLT (Federico et al., 2020) shared tasks. Recent work on MT benchmarks for low-resource languages, such as that of Guzmán et al.
(2019), has been used for the purpose of studying current NMT techniques for low-resource languages.

Multilingual Pretraining Multilingual encoder pretraining (Devlin et al., 2019; Conneau and Lample, 2019; Conneau et al., 2020) has been demonstrated to be an effective technique for cross-lingual transfer on a variety of classification tasks (Hu et al., 2020; Artetxe et al., 2020). More recently, sequence-to-sequence pretraining has emerged as a prevalent method for achieving better performance (Lewis et al., 2020; Song et al., 2019) on generation tasks. Liu et al. (2020) proposed a multilingual approach to BART (Lewis et al., 2020) and demonstrated increased performance on MT. Building on these works, we extend this to an LRL-focused setting, developing two new techniques for improved performance given monolingual data scarcity. In concurrent work, Liu et al. (2021) and Reid and Artetxe (2021) also look at using code-switched corpora for sequence-to-sequence pretraining.

NLP for African Languages Benchmarking machine translation for African languages was first done by Abbott and Martinus (2019) for southern African languages and Abate et al. (2018) for Ethiopian languages. Recent work in NLP for African languages has largely revolved around the grassroots translation initiative Masakhane (Orife et al., 2020; Nekoto et al., 2020). This bottom-up approach to dataset creation (Nekoto et al., 2020), while very valuable, has tended to result in datasets with somewhat disparate data splits and quality standards. In contrast, AFROMT provides a cleaner corpus for the 8 supported languages. We plan to open source the entire benchmark (splits included) to promote reproducible results in the community.

# 7 Conclusion

In this work we proposed a standardized, clean, and reproducible benchmark for 8 African languages, AFROMT, as well as novel pretraining strategies in the previously unexplored low-resource-focused setting.
Our benchmark and evaluation suite are a step towards larger, reproducible benchmarks in these languages, helping to provide insights on how current MT techniques work for these under-explored languages. We will release this benchmark, our pretrained AfroBART models, dictionaries, and pseudo-monolingual data to the community to facilitate further work in this area.

In future work we look to use similar methodology to advance in both of these directions. We look to increase the number of language pairs in AFROMT to be more representative of the African continent. Additionally, we look to scale up our pretraining approaches for increased performance.

# Acknowledgements

We thank Antonios Anastasopoulos, Edison Marrese-Taylor, and the anonymous reviewers for feedback and comments. We also thank Aditi Chaudhary and Kathleen Siminyu for helpful discussions in the early stages of this work. MR is grateful to the Masason Foundation for their support.

# References

Solomon Teferra Abate, Michael Melese, Martha Yifiru Tachbelie, Million Meshesha, Solomon Atinafu, Wondwossen Mulugeta, Yaregal Assabie, Hafte Abera, Binyam Ephrem, Tewodros Abebe, Wondimagegnhue Tsegaye, Amanuel Lemma, Tsegaye Andargie, and Seifedin Shifaw. 2018. Parallel corpora for bi-lingual English-Ethiopian languages statistical machine translation. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3102-3111, Santa Fe, New Mexico, USA. Association for Computational Linguistics.

Jade Abbott and Laura Martinus. 2019. Benchmarking neural machine translation for Southern African languages. In Proceedings of the 2019 Workshop on Widening NLP, pages 98-101, Florence, Italy. Association for Computational Linguistics.

Ahmed Abdelali, Francisco Guzmán, Hassan Sajjad, and Stephan Vogel. 2014. The AMARA corpus: Building parallel language resources for the educational domain.
In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 1856-1862, Reykjavik, Iceland. European Language Resources Association (ELRA).

Željko Agić and Ivan Vulić. 2019. JW300: A wide-coverage parallel corpus for low-resource languages. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3204-3210, Florence, Italy. Association for Computational Linguistics.

Naveen Arivazhagan, Ankur Bapna, Orhan Firat, Dmitry Lepikhin, Melvin Johnson, Maxim Krikun, Mia Xu Chen, Yuan Cao, George Foster, Colin Cherry, et al. 2019. Massively multilingual neural machine translation in the wild: Findings and challenges. arXiv preprint arXiv:1907.05019.

Mikel Artetxe, Sebastian Ruder, and Dani Yogatama. 2020. On the cross-lingual transferability of monolingual representations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4623-4637, Online. Association for Computational Linguistics.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.

John T. Bendor-Samuel and Rhonda L. Hartell, editors. 1989. The Niger-Congo Languages: A classification and description of Africa's largest language family. University Press of America, Lanham, MD.

Ondřej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Shujian Huang, Matthias Huck, Philipp Koehn, Qun Liu, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Raphael Rubino, Lucia Specia, and Marco Turchi. 2017. Findings of the 2017 conference on machine translation (WMT17). In Proceedings of the Second Conference on Machine Translation, pages 169-214, Copenhagen, Denmark. Association for Computational Linguistics.
+Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440-8451, Online. Association for Computational Linguistics. +Alexis Conneau and Guillaume Lample. 2019. Cross-lingual language model pretraining. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 7057-7067. +Raj Dabre, Chenhui Chu, and Anoop Kunchukuttan. 2020. A survey of multilingual neural machine translation. ACM Comput. Surv., 53(5). +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics. +Chris Dyer, Victor Chahuneau, and Noah A. Smith. 2013. A simple, fast, and effective reparameterization of IBM model 2. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 644-648, Atlanta, Georgia. Association for Computational Linguistics. +Miquel Esplà, Mikel Forcada, Gema Ramírez-Sánchez, and Hieu Hoang. 2019. ParaCrawl: Web-scale parallel corpora for the languages of the EU. In Proceedings of Machine Translation Summit XVII Volume 2: Translator, Project and User Tracks, pages 118-119, Dublin, Ireland. European Association for Machine Translation. 
Marcello Federico, Robert Enyedi, Roberto Barra-Chicote, Ritwik Giri, Umut Isik, Arvindh Krishnaswamy, and Hassan Sawaf. 2020. From speech-to-speech translation to automatic dubbing. In Proceedings of the 17th International Conference on Spoken Language Translation, pages 257-264, Online. Association for Computational Linguistics.

Mitchell A Gordon, Kevin Duh, and Jared Kaplan. 2021. Data and parameter scaling laws for neural machine translation. In ACL Rolling Review - May 2021.

Francisco Guzmán, Peng-Jen Chen, Myle Ott, Juan Pino, Guillaume Lample, Philipp Koehn, Vishrav Chaudhary, and Marc'Aurelio Ranzato. 2019. The FLORES evaluation datasets for low-resource machine translation: Nepali-English and Sinhala-English. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6098-6111, Hong Kong, China. Association for Computational Linguistics.

Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson. 2020. XTREME: A massively multilingual multi-task benchmark for evaluating cross-lingual generalisation. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 4411-4421. PMLR.

Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.

Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondřej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation.
In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions, pages 177-180, Prague, Czech Republic. Association for Computational Linguistics.

Philipp Koehn and Rebecca Knowles. 2017. Six challenges for neural machine translation. In Proceedings of the First Workshop on Neural Machine Translation, pages 28-39, Vancouver. Association for Computational Linguistics.

Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium. Association for Computational Linguistics.

Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics.

Pierre Lison and Jörg Tiedemann. 2016. OpenSubtitles2016: Extracting large parallel corpora from movie and TV subtitles. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 923-929, Portorož, Slovenia. European Language Resources Association (ELRA).

Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pre-training for neural machine translation. Transactions of the Association for Computational Linguistics, 8:726-742.

Zihan Liu, Genta Indra Winata, and Pascale Fung. 2021. Continual mixed-language pre-training for extremely low-resource neural machine translation.
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021. +Yuxian Meng, Xiangyuan Ren, Zijun Sun, Xiaoya Li, Arianna Yuan, Fei Wu, and Jiwei Li. 2019. Large-scale pretraining for neural machine translation with tens of billions of sentence pairs. arXiv preprint arXiv:1909.11861. +Wilhelmina Nekoto, Vukosi Marivate, Tshinondiwa Matsila, Timi Fasubaa, Taiwo Fagbohungbe, Solomon Oluwole Akinola, Shamsuddeen Muhammad, Salomon Kabongo Kabenamualu, Salomey Osei, Freshia Sackey, Rubungo Andre Niyongabo, Ricky Macharm, Perez Ogayo, Orevaoghene Ahia, Musie Meressa Berhe, Mofetoluwa Adeyemi, Masabata Mokgesi-Selinga, Lawrence Okegbemi, Laura Martinus, Kolawole Tajudeen, Kevin Degila, Kelechi Ogueji, Kathleen Siminyu, Julia Kreutzer, Jason Webster, Jamiil Toure Ali, Jade Abbott, Iroro Orife, Ignatius Ezeani, Idris Abdulkadir Dangana, Herman Kamper, Hady Elsahar, Goodness Duru, Ghollah Kioko, Murhabazi Espoir, Elan van Biljon, Daniel Whitenack, Christopher Onyefuluchi, Chris Chinenye Emezue, Bonaventure F. P. Dossou, Blessing Sibanda, Blessing Bassey, Ayodele Olabiyi, Arshath Ramkilowan, Alp Oktem, Adewale Akinfaderin, and Abdallah Bashir. 2020. Participatory research for low-resourced machine translation: A case study in African languages. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 2144-2160, Online. Association for Computational Linguistics. +Graham Neubig and Junjie Hu. 2018. Rapid adaptation of neural machine translation to new languages. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages + +875-880, Brussels, Belgium. Association for Computational Linguistics. 
Iroro Orife, Julia Kreutzer, Blessing Sibanda, Daniel Whitenack, Kathleen Siminyu, Laura Martinus, Jamiil Toure Ali, Jade Abbott, Vukosi Marivate, Salomon Kabongo, Musie Meressa, Espoir Murhabazi, Orevaoghene Ahia, Elan van Biljon, Arshath Ramkilowan, Adewale Akinfaderin, Alp Oktem, Wole Akin, Ghollah Kioko, Kevin Degila, Herman Kamper, Bonaventure Dossou, Chris Emezue, Kelechi Ogueji, and Abdallah Bashir. 2020. Masakhane - machine translation for Africa. arXiv preprint arXiv:2003.11529.

Robert Östling and Jörg Tiedemann. 2016. Efficient word alignment with Markov Chain Monte Carlo. Prague Bulletin of Mathematical Linguistics, 106:125-146.

Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 48-53, Minneapolis, Minnesota. Association for Computational Linguistics.

Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018. Scaling neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 1-9, Brussels, Belgium. Association for Computational Linguistics.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.

Maja Popović. 2015. chrF: character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 392-395, Lisbon, Portugal. Association for Computational Linguistics.

Matt Post. 2018. A call for clarity in reporting BLEU scores.
In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186-191, Brussels, Belgium. Association for Computational Linguistics.

Machel Reid and Mikel Artetxe. 2021. PARADISE: Exploiting parallel data for multilingual sequence-to-sequence pretraining. ArXiv, abs/2108.01887.

Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. 2019. MASS: Masked sequence to sequence pre-training for language generation. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Proceedings of Machine Learning Research, pages 5926-5936. PMLR.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 3104-3112.

Jörg Tiedemann. 2012. Parallel data, tools and interfaces in OPUS. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), pages 2214-2218, Istanbul, Turkey. European Language Resources Association (ELRA).

Chau Tran, Yuqing Tang, Xian Li, and Jiatao Gu. 2020. Cross-lingual retrieval for iterative self-supervised training. In Advances in Neural Information Processing Systems.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998-6008.

Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, Armand Joulin, and Edouard Grave. 2020. CCNet: Extracting high quality monolingual datasets from web crawl data.
In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4003-4012, Marseille, France. European Language Resources Association.

Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2020. mT5: A massively multilingual pre-trained text-to-text transformer. arXiv preprint arXiv:2010.11934.

Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. 2016. Transfer learning for low-resource neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1568-1575, Austin, Texas. Association for Computational Linguistics.

# A AFROMT

We provide extra information (Script, Language Family, L1 and L2 speakers, Location, and Word Order) in Table 4.

We upload AFROMT as well as the data generated using the pseudo-monolingual data synthesis$^{16}$.

# B Pretraining

Data In addition to the monolingual data for the languages in AFROMT (shown in Table 1 of the main paper), we use 14 GB of English data, and 7 GB each of French and Dutch data.

Additional Hyperparameters We optimize the model using Adam (Kingma and Ba, 2015) with hyperparameters $\beta = (0.9, 0.98)$ and $\epsilon = 10^{-6}$. We warm up the learning rate to a peak of $3 \times 10^{-4}$ over 10K iterations and then decay it using the polynomial schedule for 90K iterations. For regularization, we use a dropout value of 0.1 and weight decay of 0.01.

# C Finetuning Hyperparameters

Training from scratch When training using random initialization (or CLT), we use a batch size of 32K (or 64K in the case of CLT) tokens, warm up the learning rate to $5 \times 10^{-4}$ over 10K iterations, and decay with the inverse square root schedule. We use a dropout value of 0.3, a weight decay value of 0.01, and a label smoothing value of $\epsilon = 0.1$.
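The warmup-then-inverse-square-root schedule described above can be written out explicitly. This is one common parameterization (the shape fairseq's `inverse_sqrt` scheduler uses); the exact implementation in the authors' setup may differ in details:

```python
import math

def inverse_sqrt_lr(step, peak_lr=5e-4, warmup_steps=10_000):
    """Linear warmup to `peak_lr` over `warmup_steps`, then decay
    proportionally to 1/sqrt(step)."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps  # linear warmup
    # At step = warmup_steps this equals peak_lr; it then decays smoothly.
    return peak_lr * math.sqrt(warmup_steps) / math.sqrt(step)
```

For example, with the values above the rate is half the peak at 5K steps (mid-warmup) and again at 40K steps (4x the warmup horizon).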
Finetuning from AfroBART We train using a batch size of 32K tokens and a smaller learning rate of $3 \times 10^{-4}$. We use a polynomial learning rate schedule, reaching the peak learning rate at 5K iterations and finishing training after 50K iterations. We perform early stopping, stopping training if the best validation loss remains constant for over 10 epochs. We use a label smoothing value of $\epsilon = 0.2$, a dropout value of 0.3, and weight decay of 0.01.

# D Training Infrastructure

For finetuning models on AFROMT we use between 1 and 8 NVIDIA V100 16GB GPUs on a DGX-1 machine running Ubuntu 16.04 with dual 20-core Intel Xeon E5-2698 v4 2.2 GHz CPUs. For pretraining we make use of a compute cluster using 8 nodes with 4 NVIDIA V100 16GB GPUs per node.

# E Quantification of Potential Data Leakage

In low-resource machine translation, data leakage is a key concern, as it can produce misleading results. We quantify data leakage for our benchmark. We measured the target-side train-test data leakage using the 4-gram overlap between the training and test sets: we take the most frequent 100k 4-grams from the training set, compare them with all 4-grams in the test set, and obtain an average 4-gram overlap of $5.01 \pm 2.56\%$ (measured against all test-set 4-grams). To put this value in context, we ran the same procedure on other widely used low-resource datasets from IWSLT (En-Vi, Ja-En, Ar-En) and obtained $9.50\%$, $5.49\%$, and $5.53\%$, respectively. We believe this to be reasonable evidence of the lack of train-test data leakage.

Furthermore, we also quantify source-target leakage as follows: we compute BLEU between the source and target over all training sets, obtaining an average of $4.5 \pm 1.3$ before cleaning (indicating heavy overlap in certain source-target pairs in the corpus) and $0.7 \pm 0.2$ after cleaning, indicating a significant decrease in such overlap.
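The train-test 4-gram overlap statistic described above can be sketched as follows (whitespace tokenization is an assumption; the paper does not specify the tokenizer used for this check):

```python
from collections import Counter

def ngrams(tokens, n=4):
    # All contiguous n-grams of a token list, as hashable tuples.
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def train_test_overlap(train_sents, test_sents, n=4, top_k=100_000):
    """Percentage of test-set n-grams that appear among the `top_k`
    most frequent training-set n-grams."""
    train_counts = Counter(
        g for s in train_sents for g in ngrams(s.split(), n))
    top = {g for g, _ in train_counts.most_common(top_k)}
    test_grams = [g for s in test_sents for g in ngrams(s.split(), n)]
    if not test_grams:
        return 0.0
    return 100 * sum(g in top for g in test_grams) / len(test_grams)
```

A low value indicates that the test set is not simply reproducing frequent training-set phrases, which is the leakage mode this check targets.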
# F Parameter Count

We keep the parameter count of 85M consistent throughout our experiments, as we use the same model architecture. We ran experiments on scaling up randomly initialized models, with a hidden size of 768 and a feed-forward dimension of 3072, with 6 layers in both the encoder and decoder, on three language pairs. The results of these experiments can be seen in Table 3.
| Lang. Pair | Model | Param. Count | BLEU | chrF |
| --- | --- | --- | --- | --- |
| En-Run | Random | 85M | 22.92 | 51.89 |
| En-Run | Random | 160M | 22.12 | 51.22 |
| En-Sw | Random | 85M | 33.61 | 58.56 |
| En-Sw | Random | 160M | 33.62 | 58.65 |
| En-Ln | Random | 85M | 28.37 | 53.65 |
| En-Ln | Random | 160M | 27.58 | 53.29 |

Table 3: Scalability comparison

It can be seen that increasing the parameter count for random initialization doesn't provide
| Languages | ISO 639-2 code | Script | Language Family | L1 speakers | L2 speakers |
| --- | --- | --- | --- | --- | --- |
| Afrikaans | Afr | Latin, Arabic | Indo-European: Germanic | 7.2M | 10.3M |
| Bemba | Bem | Latin | Niger-Congo: Bantu Zone M | 4M | 2M |
| Lingala | Lin | Latin | Niger-Congo: Bantu Zone C | 20M | 25M |
| Rundi | Run | Latin | Niger-Congo: Bantu Zone D | 11.9M | |
| Sotho | Sot | Latin | Niger-Congo: Bantu Zone S | 5.6M | 7.9M |
| Swahili | Swa | Latin | Niger-Congo: Bantu Zone G | 150M | 90M |
| Xhosa | Xho | Latin | Niger-Congo: Bantu Zone S | 8.2M | 11M |
| Zulu | Zul | Latin | Niger-Congo: Bantu Zone S | 12M | 16M |
| Languages | Location | Noun Classes (Singular/Plural/Total) | Word Order |
| --- | --- | --- | --- |
| Afrikaans | South Africa, Namibia | | SVO |
| Bemba | North-Eastern Zambia | 9/6/15 | SVO |
| Lingala | DR Congo, Congo | 9/6/15 | SVO |
| Rundi | Burundi | 9/7/16 | SVO |
| Sotho | Lesotho, South Africa, Zimbabwe | 6/5/11 | SVO |
| Swahili | African Great Lakes region, East/Southern Africa | 9/9/18 | SVO |
| Xhosa | South Africa | 8/7/15 | SVO |
| Zulu | South Africa, Lesotho, Eswatini | 6/10/16 | SVO |
Table 4: Extra information on all the languages contained within AFROMT

an effective performance/compute tradeoff, harming performance on English-Rundi and English-Lingala, while minimally improving performance on English-Swahili. This being said, we believe that if we scale up AfroBART, given the insights from Liu et al. (2020) and Gordon et al. (2021), we can provide a good initialization that allows us to scale to these model sizes for greater performance.

# G Fine-grained morphological analysis in a data-constrained regime

![](images/bb5b4596ceeb85f442945d1a49038fe2ffbc53f019dcfb0952d54f657a0433cd.jpg)
Figure 6: Translation accuracy of the AfroBART and Random baseline systems on Zulu (10k pairs) noun classes with the top 10 most frequent 2-character prefixes.

We perform our fine-grained morphological analysis (described in Section 5.3 of the main paper) in the data-constrained scenario (described in Section 5.2 of the main paper). We perform the analysis on English-Xhosa and English-Zulu (10k parallel sentence pairs) side by side and visualize the results in

![](images/e93849c916b1fe0ab02b3bbc3a8e91d0a14e1982bc45ffda5e34cb2423135391.jpg)
Figure 6 and Figure 7. It can be seen that cross-lingual transfer improves accuracy in this data-constrained scenario over a random baseline, and is in turn improved upon by AfroBART.
Figure 7: Translation accuracy of the AfroBART and Random baseline systems on Xhosa (10k pairs) noun classes with the top 10 most frequent 3-character prefixes. (Same as Figure 5 of the main paper)

Additionally, we report the BLEU and chrF scores of the data-constrained experiments (shown in Figure 3 of the main paper) in Table 5.
| Lang. Pair | # Data | Model | BLEU | chrF |
| --- | --- | --- | --- | --- |
| En-Zu | 10k | Random | 4.06 | 28.26 |
| En-Zu | 10k | CLT | 8.08 | 37.9 |
| En-Zu | 10k | AfroBART | 20.44 | 51.35 |
| En-Zu | 50k | Random | 18.01 | 50.55 |
| En-Zu | 50k | CLT | 20.41 | 51.52 |
| En-Zu | 50k | AfroBART | 26.95 | 58.56 |
| En-Zu | 100k | Random | 23.09 | 55.63 |
| En-Zu | 100k | CLT | 24.50 | 55.81 |
| En-Zu | 100k | AfroBART | 29.41 | 60.81 |
| En-Xh | 10k | Random | 2.82 | 26.29 |
| En-Xh | 10k | CLT | 6.35 | 32.31 |
| En-Xh | 10k | AfroBART | 13.98 | 43.19 |
| En-Xh | 50k | Random | 11.94 | 42.62 |
| En-Xh | 50k | CLT | 10.12 | 39.73 |
| En-Xh | 50k | AfroBART | 18.54 | 49.70 |
| En-Xh | 100k | Random | 16.00 | 47.92 |
| En-Xh | 100k | CLT | 11.64 | 41.19 |
| En-Xh | 100k | AfroBART | 20.45 | 52.35 |
+ +Table 5: Comparing performance with various amounts of parallel data \ No newline at end of file diff --git a/afromtpretrainingstrategiesandreproduciblebenchmarksfortranslationof8africanlanguages/images.zip b/afromtpretrainingstrategiesandreproduciblebenchmarksfortranslationof8africanlanguages/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..daf5ef11ac9cbcf0b2719b2abb453791b90f25c7 --- /dev/null +++ b/afromtpretrainingstrategiesandreproduciblebenchmarksfortranslationof8africanlanguages/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c245c259139e37a31bbe7c71a3c87591e9670ab3b6c7e97dc7423b0ae885663a +size 517668 diff --git a/afromtpretrainingstrategiesandreproduciblebenchmarksfortranslationof8africanlanguages/layout.json b/afromtpretrainingstrategiesandreproduciblebenchmarksfortranslationof8africanlanguages/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..e492695cd7d66b8989991d753b819e3a00334d55 --- /dev/null +++ b/afromtpretrainingstrategiesandreproduciblebenchmarksfortranslationof8africanlanguages/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d489c140dea404a990bda99b8597c697cac6b22b3bae30c0e35f3e9aa75a466f +size 408537 diff --git a/agenerativeframeworkforsimultaneousmachinetranslation/34ba040d-f73f-41bb-a391-648f202fd1b2_content_list.json b/agenerativeframeworkforsimultaneousmachinetranslation/34ba040d-f73f-41bb-a391-648f202fd1b2_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..cb9d36398cf6a8eaa3765e94afffa1929a368c6c --- /dev/null +++ b/agenerativeframeworkforsimultaneousmachinetranslation/34ba040d-f73f-41bb-a391-648f202fd1b2_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:74c6854af11f1a0ac095e37016c121fc9f62f25391d23c527e4b0f172eb26a3c +size 71756 diff --git 
a/agenerativeframeworkforsimultaneousmachinetranslation/34ba040d-f73f-41bb-a391-648f202fd1b2_model.json b/agenerativeframeworkforsimultaneousmachinetranslation/34ba040d-f73f-41bb-a391-648f202fd1b2_model.json new file mode 100644 index 0000000000000000000000000000000000000000..1a2c6a462df5b3cd0ae951d2d6d77e21673ef011 --- /dev/null +++ b/agenerativeframeworkforsimultaneousmachinetranslation/34ba040d-f73f-41bb-a391-648f202fd1b2_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:61838eb624049619fffa5ad552ce74107e0a397e27322f1f3fe41dffe0c4041d +size 85979 diff --git a/agenerativeframeworkforsimultaneousmachinetranslation/34ba040d-f73f-41bb-a391-648f202fd1b2_origin.pdf b/agenerativeframeworkforsimultaneousmachinetranslation/34ba040d-f73f-41bb-a391-648f202fd1b2_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..acb8b3f20911c819ce345a772fa92604d665a537 --- /dev/null +++ b/agenerativeframeworkforsimultaneousmachinetranslation/34ba040d-f73f-41bb-a391-648f202fd1b2_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c6a3287ff0d1f7a802ad5f69d75e9bc2895b9475a3d9b52cd05b02367b9a105c +size 514608 diff --git a/agenerativeframeworkforsimultaneousmachinetranslation/full.md b/agenerativeframeworkforsimultaneousmachinetranslation/full.md new file mode 100644 index 0000000000000000000000000000000000000000..2ece1afcfbb890acbefd74c3f7c62bd3035e3941 --- /dev/null +++ b/agenerativeframeworkforsimultaneousmachinetranslation/full.md @@ -0,0 +1,346 @@ +# A Generative Framework for Simultaneous Machine Translation + +Yishu Miao +Imperial College London +ByteDance + +Phil Blunsom +University of Oxford +DeepMind + +Lucia Specia +Imperial College London +University of Sheffield + +ym713@ic.ac.uk phil.blunsom@cs.ox.ac.uk l.specia@ic.ac.uk + +# Abstract + +We propose a generative framework for simultaneous machine translation.
Conventional approaches use a fixed number of source words to translate, or learn dynamic policies for the number of source words by reinforcement learning. Here we formulate simultaneous translation as a structural sequence-to-sequence learning problem. A latent variable is introduced to model read or translate actions at every time step, which is then integrated out to consider all the possible translation policies. A re-parameterised Poisson prior is used to regularise the policies, which allows the model to explicitly balance translation quality and latency. The experiments demonstrate the effectiveness and robustness of the generative framework, which achieves the best BLEU scores given different average translation latencies on benchmark datasets.

# 1 Introduction

The fundamental challenge of simultaneous machine translation (SiMT) is the balance between translation quality and latency. It is non-trivial to find an optimal translation strategy, as there is generally a rivalry between the two objectives, i.e. reading more source words before translating leads to better translation quality, but it in turn results in higher latency due to the longer time spent reading.

Conventional Wait-$k$ policies (Ma et al., 2019) put a hard limitation on the buffer size $k$, which guarantees low latency but weakens flexibility and scalability when handling long and complicated language pairs. Alternatively, reinforcement learning (RL) approaches (Gu et al., 2017; Satija and Pineau, 2016; Arthur et al., 2021) learn a dynamic policy using a combined reward of a quality metric like the BLEU score and AL (average lagging).

![](images/548d9aef9d5a913f67ceeefb2ee1d4400acd8325c22f5ad4a3277351e0367959.jpg)

![](images/f1c21cc84ac35370705a8551b9a6642180ca981b76178e6943ce48213b042991.jpg)
(a) Wait-k $(\mathrm{k} = 3)$
(c) RL method.
Figure 1: Example of translation paths of different simultaneous translation models.
![](images/674e3f134b71d0f34f47a502170c1ade44dbf248d7b8bdf825cb64bb30a77f2f.jpg)

![](images/681f14fdf301b2975fb265f44121248d3e572f1820d6a7f9d36306576fd93197.jpg)
(b) Adaptive wait-k $(\mathrm{k} = 3)$.
(d) GSiMT.

However, the poor sample efficiency makes it very difficult to learn a robust SiMT model with RL.

In this paper we propose a generative framework with a latent variable that dynamically decides between the actions of read or translate at every time step, enabling the formulation of SiMT as a structural sequence-to-sequence learning task. Figure 1 depicts examples of possible translation paths of different models. Wait-$k$ only explores one hypothesis, while adaptive wait-$k$ ensembles the other hypotheses with lower $k$. However, the hypotheses of reading more than $k$ words before translating are not considered (e.g. inversion and reordering in long-sequence translations). The RL models apply dynamic policies which can explore all the possible hypotheses, but the gradient estimator conditioned on discrete samples has large variance, and the variance issue gets worse for long sequences. Instead, our proposed generative simultaneous machine translation model (GSiMT) integrates out all the hypotheses by a dynamic programming algorithm (Algorithm 1) with the help of the introduced latent variable. It does not suffer from such a large variance issue, and can be easily and efficiently learned by gradient backpropagation on GPU hardware.

The generative model can be modelled as a neural transducer (Graves, 2012; Yu et al., 2016). However, the vanilla neural transducer is not designed for SiMT. Because it is optimised by the cross-entropy of target words, it naturally prefers read actions over translate actions in order to see more context before translating, which intuitively can result in better translation quality but high latency.
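For concreteness, the fixed schedule behind Wait-$k$ and the AL latency metric it trades against can be sketched as follows. This is our own minimal illustration; the AL formula follows Ma et al. (2019), with $g(t)$ the number of source words read before emitting the $t$-th target word:

```python
def wait_k_read_counts(k, src_len, tgt_len):
    """g(t) for a fixed Wait-k policy: number of source words read
    before emitting the t-th target word (t is 1-indexed)."""
    return [min(k + t - 1, src_len) for t in range(1, tgt_len + 1)]

def average_lagging(g, src_len, tgt_len):
    """Average lagging (Ma et al., 2019): the mean lag, in source words,
    behind an ideal translator that keeps pace with the source stream.
    The sum stops at tau, the first step where the full source is read."""
    gamma = tgt_len / src_len
    tau = next(t for t, g_t in enumerate(g, start=1) if g_t == src_len)
    return sum(g[t - 1] - (t - 1) / gamma for t in range(1, tau + 1)) / tau
```

For $k = 3$ on a 6-word pair this gives $g = [3, 4, 5, 6, 6, 6]$ and an AL of 3.0, i.e. the translator lags a constant three source words behind; a dynamic policy instead varies $g(t)$ per step.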
Here, we propose to extend the neural transducer framework to modern Transformer-based translation models (Vaswani et al., 2017), and introduce a re-parameterised Poisson distribution to regularise the latency (i.e. how many source words are read before translating a target word). As observed in the fast-alignment work of Dyer et al. (2013), translation models generally favor word alignments distributed close to the diagonal. We hypothesise that the optimal sequence of translate actions in SiMT is also located close to the diagonal. Thus the Poisson prior acts as a context-independent regularisation on the buffer size, proportional to the distance between the current position and the diagonal. This ensures that the number of read source words will not grow indefinitely without translating any target words, while the soft boundary due to the regularisation still allows the model to consider complicated/long simultaneous translation cases.

To demonstrate the effectiveness of the proposed framework, we evaluate our generative models on two benchmark datasets: WMT15 (Bojar et al., 2015) for text-only SiMT and Multi30K (Elliott et al., 2016) for multimodal SiMT. Compared to a number of strong baseline models, Wait-$k$, Adaptive Wait-$k$ and an RL-trained policy, our proposed model achieves the best performance on both BLEU scores and average lagging (AL). Our contributions can be summarised as follows:

- A Transformer-based neural transducer model for simultaneous machine translation.
- A Poisson prior for effectively balancing translation quality and latency.
- State-of-the-art SiMT results (BLEU & AL) on benchmark datasets, with BLEU scores on par with consecutive MT models.

# 2 Related Work

Conventional SiMT methods are based on heuristic waiting criteria (Cho and Esipova, 2016) or a fixed buffering strategy (Ma et al., 2019) to trade off translation quality for lower latency.
Although the heuristic approaches are simple and straightforward, they lack scalability and cannot generalise well to longer sequences. There is also a body of work attempting to improve the attention mechanism (Arivazhagan et al., 2019) and re-translation strategies (Niehues et al., 2018) for better translation quality. Recently, Zheng et al. (2020) extends the fixed Wait-$k$ policies into an adaptive version and ensembles multiple models with lower latency to improve performance, but one still needs to choose a hard boundary on the maximum value of $k$. By contrast, our GSiMT model considers all the possible paths with a soft boundary modelled by a Poisson distribution, which leads to a more flexible balance between quality and latency.

RL has been explored (Gu et al., 2017) to learn an agent that dynamically decides to read or translate conditioned on different translation contexts. Arthur et al. (2021) further applies extra knowledge on word alignments as the oracle to improve the learning. However, the high variance of the estimator is still a bottleneck that hinders the applicability of RL in structural sequence-to-sequence learning. The proposed GSiMT model combines the merits of both the Wait-$k$ policies and RL.

Deep learning with structures has been explored in many NLP tasks, especially for sequence-to-sequence learning. Kim et al. (2017) implements structural dependencies on attention networks, which gives the ability to attend to partial segmentations or subtrees without changing the sequence-to-sequence structure. Tran et al. (2016) parameterises the transition and emission probabilities of an HMM with explicit neural components, and Jiang et al. (2016) applies deep structural latent variables to implement the dependency model with valence (Klein and Manning, 2004) and integrates out all the structures in end-to-end learning. Our GSiMT model is based on the neural transducer model.
Previously, Graves (2012) presents an RNN-based neural transducer for phoneme recognition, and Yu et al. (2016) explores an LSTM-based neural transducer for MT. The uni-directional variant of Yu et al. (2016) is similar to our proposed GSiMT model; however, it is implemented as a vanilla neural transducer, which is not optimised for low latency and hence performs poorly on SiMT. Therefore, the Poisson prior for regularising the latency is the key component that enables neural transducer models to work on SiMT.

![](images/0ccd984b72f0beeecadc1f47ba9fbe2b5ac5760afbb5964d5f54204ec6853c7a.jpg)
Figure 2: During training, all the contextualised representations $\mathbf{S}_{i,j}$ will be used to compute the translation distribution $p(y_{j}|X_{:i},Y_{:j-1})$ and the action distribution $p(a_{i,j}|X_{:i},Y_{:j-1})$, while in testing the model takes the inputs $X$ in real time and dynamically produces $y_{j}$ and $a_{i,j}$ until all the inputs have been read.

![](images/cc01115e9f24662b2990f8f49b668f2a247acb5f38939488a31aabb67cdf0beb.jpg)

# 3 Model

# 3.1 Generative Model

We use $X_{:m}$ and $Y_{:n}$ to represent the source language sequence and the target language sequence with lengths $m$ and $n$. $X_{:i}$ represents the sub-sequence $\{x_1, x_2, \ldots, x_i\}$. The structural latent variable $a_{i,j}$ (0 or 1) represents the action (read or translate). Specifically, $a_{i,j} = 0$ means reading an extra source word and $a_{i,j} = 1$ means translating the target word $y_j$. The translation position $Z$ is introduced as an auxiliary variable to simplify the equations, where $z_j = i$ denotes that $i$ source words have been read when decoding the $j$-th word $y_j$. Similar to the neural transducer (Graves, 2012; Yu et al., 2016), the generative model can be formulated as:

$$
p(Y_{:j} \mid X) = \sum_{i=1}^{|X|} p(y_j \mid X_{:i}, Y_{:j-1}) \, p(z_j = i, Y_{:j-1} \mid X_{:i})
$$

Translation distribution.
Given the contextualised representation $\mathbf{S}_{i,j}$, the translation distribution of $y_j$ is:

$$
p(y_j \mid X_{:i}, Y_{:j-1}) = \mathrm{softmax}\left(\mathbf{S}_{i,j} \cdot \mathbf{W}_y^T\right) \tag{1}
$$

Specifically, $\mathbf{W}_y^T$ is the projection matrix for word prediction, and we leave out the bias terms for simplicity. $\mathbf{S}_{i,j}$ is the state output conditioned on source words $X_{:i}$ and target words $Y_{:j-1}$:

$$
\mathbf{S}_{i,j} = g\left(\mathrm{Enc}(X_{:i}), \mathrm{Dec}(Y_{:j-1})\right) \tag{2}
$$

where Enc and Dec are a uni-directional Transformer-based encoder and decoder. Different from a conventional consecutive NMT model, e.g. T5 (Raffel et al., 2020), whose encoder is a bi-directional Transformer, our model has no access to the full input stream when translating. Figure 2 shows the training process, where $\mathbf{S}_{i,j}$ is computed for all the sub-sequences at positions $i, j$.

Position distribution. The position distribution jointly models the translation position $z_j$ and the subsequence $Y_{:j-1}$:

$$
p(z_j = i, Y_{:j-1} \mid X_{:i}) = \sum_{i'=1}^{i} p(z_j = i \mid z_{j-1} = i', X_{:i}, Y_{:j-1}) \cdot p(Y_{:j-1} \mid X_{:i'}) \tag{3}
$$

Here, we can recurrently decompose the position distribution into a sum of products over all the possible sub-sequences $Y_{:j-1}$, given the read source sequence $X_{:i'}$ and the transitions from $z_{j-1} = i'$ to $z_j = i$, i.e. there are $i - i'$ source words newly read before translating $y_j$.

Switch distribution.
To model all the possible transitions from $z_{j-1} = i'$ to $z_j = i$, we employ the switch distribution:

$$
p(z_j = i \mid z_{j-1} = i', X_{:i}, Y_{:j-1}) =
\begin{cases}
0 & \text{if } i < i' \\
\alpha_{i,j} & \text{if } i = i' \\
\alpha_{i,j} \cdot \prod_{k=i'}^{i-1} (1 - \alpha_{k,j}) & \text{if } i > i'
\end{cases} \tag{4}
$$

and

$$
\alpha_{i,j} = p(a_{i,j} = 1 \mid X_{:i}, Y_{:j-1}) = \mathrm{sigmoid}\left(\mathbf{S}_{i,j} \cdot \mathbf{W}_a^T\right)
$$

![](images/f4414363cce4e7ae5239584a49a58dce7a6112c46e9730b391fe69f2e3.jpg)
Figure 3: An example of decomposing a position distribution into a sum of products of switch distributions and subsequence generation probabilities.

![](images/ce421d571a9f223533906c9b4305f811a443b05040f9bc05c93815b278822bc8.jpg)

![](images/987dc8b80d50d629ebaa5c9752b6744e9c1b8372c63e5efa66a1deb.jpg)

![](images/406b0635975b9b30a297cca200b998616ad5613164a472a0514a62f3b3a787a4.jpg)

$$
p(z_4 = 3, Y_{:3} \mid X_{:3}) = \sum_{i'=1}^{3} \underbrace{p(z_4 = 3 \mid z_3 = i', X_{:3}, Y_{:3})}_{\text{switch distribution}} \cdot \underbrace{p(Y_{:3} \mid X_{:i'})}_{\text{subsequence}}
$$

where $\mathbf{W}_a^T$ is the linear projection to the action space, and $Z$ is a monotonic sequence ($z_j \geq z_{j-1}$); hence, for the transitions $i < i'$, the switch probability is zero.

Figure 3 shows a simple example of decomposing a position distribution into switch distributions and sub-sequence translations.
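The switch distribution admits a useful sanity check: summed over all reachable positions $i \geq i'$, the probabilities telescope to $1 - \prod_k (1 - \alpha_{k,j})$, so they form a proper (sub-)distribution. A minimal sketch with made-up $\alpha$ values; the helper names and 1-based indexing convention are our own:

```python
def switch_prob(alpha_j, i, i_prime):
    """Eq. (4): p(z_j = i | z_{j-1} = i') for one decoding step j.
    alpha_j[i] is the translate probability alpha_{i,j}; index 0 is
    unused so that indices match the 1-based positions in the paper."""
    if i < i_prime:
        return 0.0  # Z is monotonic, so moving backwards is impossible
    p = alpha_j[i]                    # one translate action at position i
    for k in range(i_prime, i):       # i - i' read actions before it
        p *= 1.0 - alpha_j[k]
    return p

def position_forward(prev, alpha_j, i):
    """Eq. (3): p(z_j = i, Y_{:j-1} | X_{:i}) as a sum of products,
    where prev[i'] = p(Y_{:j-1} | X_{:i'}) from the previous step."""
    return sum(switch_prob(alpha_j, i, ip) * prev[ip] for ip in range(1, i + 1))
```

Running `position_forward` for every $i$ at each step $j$ is the dynamic-programming marginalisation over all translation paths that the model relies on.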
For the transitions $i > i'$, it accumulates $i - i'$ read actions plus one translate action, so the switch probability is $\alpha_{i,j} \cdot \prod_{k=i'}^{i-1}(1 - \alpha_{k,j})$. Here, the read or translate actions are conditionally independent given the translation history.

Objective. In SiMT, we explicitly assume the last target word $y_n$ is translated after reading all source words, hence the final objective can be simplified as:

$$
\begin{aligned}
p(Y \mid X) &= p(y_n \mid X_{:m}, Y_{:n-1}) \cdot p(z_n = m, Y_{:n-1} \mid X_{:m}) \\
&= p(y_n \mid X_{:m}, Y_{:n-1}) \cdot \sum_{i=1}^{m} p(z_n = m \mid z_{n-1} = i, X_{:m}, Y_{:n-1}) \cdot p(Y_{:n-1} \mid X_{:i})
\end{aligned}
$$

One caveat is that this objective does not encourage low-latency translations when optimised by maximum log-likelihood, since the model can read as many source words as possible in order to achieve the best translation quality. Ideally, the lowest latency means that for every target word $y_j$ the model reads one source word at each time step after translating a target word (i.e. the translation positions $z_j = i$ stay as close as possible to the diagonal of an $m \times n$ matrix). Therefore, we need an extra regularisation to focus the probability mass of the translation positions along the diagonal.

# 3.2 Poisson Prior

Dyer et al. (2013) proposes a log-linear diagonal reparameterisation for fast word alignments, which helps IBM Model 2 by encouraging the probability mass to be around the diagonal. This in turn also notably improves efficiency over the vanilla IBM Model 2. Although SiMT is more complex than word alignment, the diagonal reparameterisation can act as a strong regularisation that favors translate actions happening around the diagonal, which can yield balanced actions resulting in high quality and low latency.
Therefore, we introduce a prior distribution to regularise the maximum number of source words that can be stored ($b_j$) when decoding the $j$-th word ($y_j$). To that end, we apply the Poisson distribution, as it is generally used for modelling the number of events in specified intervals such as distance, area or volume. The distance between the absolute positions ($i$ and $j$) and the diagonal can be easily modelled as discrete values to be regularised by the Poisson distribution, where the probability decreases as the distance grows. Here we re-parameterise a Poisson distribution:

$$
p(b_j = i; m, n) =
\begin{cases}
0 & \text{if } d(i,j) < 0 \\
\dfrac{e^{-\lambda} \lambda^{d(i,j)}}{d(i,j)!} & \text{if } d(i,j) \geq 0
\end{cases}
$$

and

$$
d(i,j) = \left\lfloor i - j \cdot \frac{m}{n} - \zeta \right\rceil \tag{5}
$$

where $d(i,j)$ is the distance of the current position to the diagonal, which is rounded for simplicity. The free parameter $\lambda$ is the mean of the Poisson distribution, and $\zeta$ is a free parameter denoting the default offset of the current position from the diagonal. Different from the translation positions $z_j = i$, which depend on the inputs $X_{:i}$ and $Y_{:j-1}$, $b_j = i$ is independent of the translation context and is only conditioned on the absolute positions $i$, $j$.
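The prior can be transcribed almost directly. This is a sketch in our own notation; the helper names and the use of Python's built-in `round` are our own choices (the paper only says the distance is rounded):

```python
import math

def diag_distance(i, j, m, n, zeta):
    """d(i, j) of Eq. (5): rounded distance of position (i, j)
    to the diagonal, shifted by the free offset zeta."""
    return round(i - j * m / n - zeta)

def poisson_prior(i, j, m, n, lam, zeta):
    """p(b_j = i; m, n): zero behind the offset diagonal, and
    Poisson-decaying as the buffer size grows past it."""
    d = diag_distance(i, j, m, n, zeta)
    if d < 0:
        return 0.0
    return math.exp(-lam) * lam ** d / math.factorial(d)
```

With $\lambda = 3$ and $\zeta = 0$, the mass peaks a couple of words past the diagonal and then decays, matching the soft-boundary behaviour visualised in Figure 4.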
Therefore, we modify the position distribution:

$$
p(z_j = i, Y_{:j-1} \mid X_{:i}) = \sum_{i''=1}^{m} p(z_j = i, Y_{:j-1} \mid X_{:i}, b_j = i'') \cdot p(b_j = i''; m, n) \tag{6}
$$

![](images/d3568aa4cd99a184e578c4fd57643c834677484abcef6df00a275d71c243307f.jpg)

![](images/dd67203ef7007dab14d15c51695797a9b5c81e3fa2057a2c2ddc7a10a45fb869.jpg)

![](images/ed270521de9bcd13c533ec7ffd11839d61aa66dc378153c9d442ca75919470d2.jpg)

![](images/ad0868e074434c022ab9b58f142338a041a20bd9288de0234953497432741d5f.jpg)
(a) $p(y_{j}|X_{:i},Y_{:j - 1})$

![](images/71a80aff86a70df999979ef58c969b980367377cc73976b58a7abe506d84b747.jpg)
(b) $p(a_{i,j}|X_{:i},Y_{:j - 1})$

![](images/adb893b4ec0eacc549a7c354f4e092a2963c780cd45497cdb6ff95245028b949.jpg)
(c) $p(z_{j},Y_{:j - 1}|X_{:i})$

![](images/110349865ad173eb4d38f686021bf2fd7f6ba483a39195634e76ef26363a7115.jpg)
(d) $p(b_{i,j};\lambda = 5,\zeta = 3)$
(g) $p(z_{j},Y_{:j - 1}|X_{:i};\lambda = 5,\zeta = 3)$
Figure 4: Visualisations of the distributions in a generative SiMT model example. The depth of color represents the probability mass in each slot. In the sub-figures (c) (g) (h) and (i), the red rectangles highlight the argmax along the 1-st dimension. In the first row, (a), (b) and (c) show the translation distribution, the translate action probability and the position distribution respectively. The indices of each slot correspond to the actual positions of $i$ and $j$. Specifically, the position distribution (c) is generated by the vanilla GSiMT without the Poisson prior distribution. In the second row, (d), (e) and (f) are the re-parameterised Poisson distribution under different $\lambda$ and $\zeta$. In the third row, (g), (h) and (i) are the position distribution after integrating out the Poisson prior (d), (e) and (f).
Compared to the original position distribution (c), the generative models notably put more emphasis on the positions along the diagonal, which acts as a flexible regularisation to balance translation quality and latency.

![](images/5d39198fd6237b47cbde5da73ca6b3eba972a5836f5025952ede2b8709004d54.jpg)
(e) $p(b_{i,j};\lambda = 3,\zeta = 3)$
(h) $p(z_{j},Y_{:j - 1}|X_{:i};\lambda = 3,\zeta = 3)$

![](images/076184c0391533e0fd429d3d042ca26ae4439516820fe54659f222538da8d1c1.jpg)
(f) $p(b_{i,j};\lambda = 3,\zeta = 0)$
(i) $p(z_{j},Y_{:j - 1}|X_{:i};\lambda = 3,\zeta = 0)$

Here, we make the assumption that the number of source words that have been read, $i$, cannot exceed the maximum size $b_j$. Hence, for all the cases $b_j < i$, the probability $p(z_j = i, Y_{:j-1} \mid X_{:i}, b_j)$ is set to zero.

The checkpoints with the best performance in 5 runs on the development datasets are chosen for testing BLEU (Papineni et al., 2002) and AL (average lagging) (Ma et al., 2019). For the GSiMT models, we empirically fix $\lambda = 3$ for all the experiments, and use $\zeta$ as the free parameter to achieve different AL.

For Multi30K (Elliott et al., 2016), we use all three language pairs EN $\rightarrow$ FR, EN $\rightarrow$ DE and EN $\rightarrow$ CZ, with the image data from Flickr30k as an extra modality and flickr2016 as the test dataset. We build multimodal models with the goal of testing the generalisation ability of the generative models with extra modalities. To that end, we concatenate the object detection features applied in Caglayan et al. (2020) into the state representation $\mathbf{S}_{i,j}$ and keep the rest of the neural network the same as in the unimodal SiMT model. The other models (RL, Wait-k and Adaptive Wait-k) incorporate the same features as well. Here, as the dataset is small, we apply a smaller Transformer with 4 layers, 4 heads, a model dimension of 512 and 1024 for the feed-forward connection.
# 4.2 Translation Quality & Latency

Table 1 shows the SiMT performance of the benchmark models and our proposed generative models on the WMT15 DE $\rightarrow$ EN dataset. RL is our implementation of Gu et al. (2017) with the policy gradient method. All the numbers for Wait-$k$ and Adaptive Wait-$k$ are quoted from Zheng et al. (2020).
| Model (German-English, WMT15) | BLEU ↑ | AL ↓ | BLEU ↑ | AL ↓ | BLEU ↑ | AL ↓ | BLEU ↑ | AL ↓ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| RL (Gu et al., 2017) | 22.12 | 3.16 | 23.81 | 4.66 | 24.31 | 5.52 | 25.22 | 6.71 |
| Wait-k (Ma et al., 2019) | 25.22 | 3.76 | 26.29 | 4.70 | 27.42 | 5.77 | 27.73 | 6.66 |
| Adaptive-Wait-k (Zheng et al., 2020) | 26.73 | 3.63 | 27.84 | 4.79 | 28.41 | 5.33 | 29.20 | 6.60 |
| GSiMT-Poisson-T5 | 28.31 | 3.79 | 29.18 | 4.61 | 29.59 | 5.41 | 29.30 | 6.25 |
| GSiMT-Poisson | 28.82 | 3.64 | 29.50 | 4.45 | 29.78 | 5.13 | 29.63 | 6.24 |
| GSiMT-NT | 29.79 | 9.75 | - | - | - | - | - | - |
| Consecutive NMT | 30.24 | 28.58 | - | - | - | - | - | - |
Table 1: SiMT performance on WMT15 DE $\rightarrow$ EN. The models in the first group are the benchmark models for simultaneous machine translation. The second group contains the variants of our proposed GSiMT. The third group is the consecutive NMT model, which provides the upper bound on the BLEU score as it has access to the entire source stream. To fairly compare BLEU under different AL, we use 4 column pairs that limit the AL to a similar range and compare the BLEU scores within each. The numbers of the Wait-k and Adaptive-Wait-k models are achieved by training different models with $k$ from 1 to 10 and $k_{min} = 1$, $k_{max} = 10$ (Zheng et al., 2020). For both GSiMT-Poisson-T5 and GSiMT-Poisson, we apply $\zeta = 4, 5, 6, 7$ respectively to achieve the corresponding AL scores in each block. We highlight the best performance by BLEU score with bold numbers in each block. The underlined results are from the models that are not optimised for translation latency and are used for reference only.
| Model | En-Fr BLEU ↑ | En-Fr AL ↓ | En-Cz BLEU ↑ | En-Cz AL ↓ | En-De BLEU ↑ | En-De AL ↓ |
| --- | --- | --- | --- | --- | --- | --- |
| RL (Gu et al., 2017) | 54.39 | 4.01 | 23.30 | 2.24 | 31.23 | 3.08 |
| Wait-k (Ma et al., 2019) | 56.20 | 3.38 | 23.31 | 3.54 | 33.75 | 3.47 |
| Adaptive-Wait-k (Zheng et al., 2020) | 57.16 | 3.32 | 26.9 | 3.11 | 33.68 | 2.99 |
| DEC-OD (Caglayan et al., 2020) | 57.90 | 3.65 | 28.13 | 2.83 | 34.40 | 2.37 |
| GSiMT-Poisson-T5 | 58.45 | 3.28 | 28.92 | 3.06 | 36.23 | 2.58 |
| GSiMT-Poisson | 58.89 | 3.17 | 29.93 | 2.71 | 36.11 | 2.65 |
| GSiMT-NT | 58.81 | 7.32 | 29.22 | 5.21 | 35.78 | 6.55 |
| Consecutive NMT | 59.29 | 13.10 | 30.65 | 13.10 | 36.84 | 13.10 |
Table 2: SiMT performance on the Multi30K dataset (all language pairs from English). The models in the first group are the benchmark models for multimodal simultaneous machine translation. In addition to the models in Table 1, DEC-OD (Caglayan et al., 2020) is an RNN-based model with an extra attention layer to attend to object detection features while carrying out translation. The numbers of the other models in the first group are from our implementations, whose state outputs are concatenated with the same visual features from Caglayan et al. (2020) for multimodal SiMT. For better comparison, we only report the BLEU scores with AL around 3. Similarly, the underlined results are from the models that are not optimised for translation latency and are used for reference only. For both GSiMT-Poisson-T5 and GSiMT-Poisson, we apply $\zeta = 3$ for all of the language pairs.

![](images/a4602f523457c8d86925a287756568d15d8cb97feb6efcced4517ccdf8060bb0.jpg)
Figure 5: Overview of the performance of different models: BLEU scores versus average lagging (AL).

GSiMT-Poisson is our proposed generative model with the Poisson prior. GSiMT-Poisson-T5 is a variant of GSiMT-Poisson which takes the top 5 history paths during dynamic programming when decoding a new target word. It is similar to having a sparse 'attention' over the previous histories, which in turn highlights the simultaneous translation paths with higher confidence. GSiMT-NT is the vanilla neural transducer model without the Poisson prior.

According to the experimental results in Table 1, GSiMT-Poisson obtains a good balance between translation quality and latency. More importantly, it achieves the best BLEU given different AL scores in the same range. Especially when the AL is very low, the GSiMT-Poisson model maintains its high performance on BLEU scores. Interestingly, the performance of GSiMT-Poisson-T5 is very similar to that of the GSiMT-Poisson model, which updates all the possible translation paths instead of only the top 5. It shows that the model can be further optimised in terms of efficiency without much loss in performance. As expected, GSiMT-NT is able to achieve high performance on BLEU scores (close to the upper-bound BLEU score obtained by the consecutive NMT) but suboptimal AL, because it is able to read as many source words as possible. Figure 5 further compares the overall performance on BLEU and AL on the test dataset.

Table 2 further demonstrates the good performance of the proposed generative models on multimodal SiMT. For all three language pairs, the GSiMT-Poisson model maintains the best performance. More importantly, by simply concatenating the visual features, the GSiMT models perform better than the state-of-the-art multimodal SiMT model DEC-OD (Caglayan et al., 2020).

# 4.3 Generalisation Ability

To further verify the effectiveness of the soft boundary modelled by the Poisson distribution, we also test the performance in a test-only setup. In this case, we first pretrain a GSiMT-Poisson and a GSiMT-Poisson-T5 with $\zeta = 8$ as the base models. Then, we directly set different free parameters $\zeta$ to dynamically adjust the translation latency during testing.

| Model | Setting | BLEU ↑ | AL ↓ |
| --- | --- | --- | --- |
| Wait-k (Ma et al., 2019) | k = 2 | 22.64 | 3.95 |
| Wait-k (Ma et al., 2019) | k = 3 | 22.96 | 4.73 |
| Wait-k (Ma et al., 2019) | k = 4 | 23.60 | 5.56 |
| Wait-k (Ma et al., 2019) | k = 5 | 24.48 | 6.41 |
| GSiMT-Poisson-T5 | ζ = 0 | 27.14 | 3.88 |
| GSiMT-Poisson-T5 | ζ = 1 | 27.88 | 4.36 |
| GSiMT-Poisson-T5 | ζ = 2 | 28.00 | 5.43 |
| GSiMT-Poisson-T5 | ζ = 3 | 27.83 | 6.15 |
| GSiMT-Poisson | ζ = 0 | 27.20 | 4.01 |
| GSiMT-Poisson | ζ = 1 | 27.75 | 4.69 |
| GSiMT-Poisson | ζ = 2 | 28.05 | 5.51 |
| GSiMT-Poisson | ζ = 3 | 28.20 | 6.37 |

Table 3: Test-only performance on SiMT. For both the GSiMT-Poisson-T5 and GSiMT-Poisson models, we apply different offsets $\zeta$ to parameterise the prior distribution.

![](images/748a8ca36ba4f7fc0c261baf6265db843a290db38c1696a85bb68f7e4bf6d8e4.jpg)
(a) A translation example of GSiMT with $\zeta = 0$

![](images/7c16a02bcbc5ac35f2cc0c0bd181ad8d6bf9d7f017f8ecdc1f74da256beb7a1c.jpg)
(b) A translation example of GSiMT with prior $\zeta = 3$

Figure 6: Visualisation of decoded sentences with different Poisson parameters. $\downarrow$ represents a state sampled with the read action, while $\rightarrow$ represents the translate action. The decoding is carried out by the pretrained GSiMT-Poisson ($\zeta = 8$), applying $\zeta = 0$ and $\zeta = 3$ to generate the decoded sentences in the test-only setup.
The test-only model of Wait-$k$ (Zheng et al., 2020) pretrains a consecutive NMT as the base model and applies the Wait-$k$ policies during testing. Table 3 shows the results on the WMT15 dataset. Compared to Wait-k, both GSiMT-Poisson-T5 and GSiMT-Poisson have stronger generalisation ability in the test-only setup. It demonstrates the great potential of adjusting the translation latency on-the-fly, without much loss in translation quality, given a pretrained GSiMT model.

Figure 6 shows decoded sentences under different sets of parameters. As we can see, even in the test-only setup, the generative model can effectively adjust the translation latency to decode the target sentences. Interestingly, the translation quality is not affected much when pursuing lower latency, and with less restrictive latency ($\zeta = 3$ compared to $\zeta = 0$), the generative model is able to re-arrange the sub-sequences and produce a word order that is more natural in the target language.

# 5 Conclusions

This paper proposes a generative framework for simultaneous MT, which we demonstrated achieves the best translation quality and latency to date on common datasets. The introduction of the Poisson prior over the buffer size bridges the gap between simultaneous MT and structural sequence-to-sequence learning. More importantly, the overall algorithm is simple and easy to implement, which allows it to be widely applied to various real-world tasks. It has the potential to become the standard framework for SiMT, and we will release the code to the public for future research.

# References

Naveen Arivazhagan, Colin Cherry, Wolfgang Macherey, Chung-Cheng Chiu, Semih Yavuz, Ruoming Pang, Wei Li, and Colin Raffel. 2019. Monotonic infinite lookback attention for simultaneous machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1313-1323, Florence, Italy. Association for Computational Linguistics.
+Philip Arthur, Trevor Cohn, and Gholamreza Haffari. 2021. Learning coupled policies for simultaneous machine translation using imitation learning. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2709-2719, Online. Association for Computational Linguistics. +Ondrej Bojar, Rajen Chatterjee, Christian Federmann, Barry Haddow, Matthias Huck, Chris Hokamp, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Carolina Scarton, Lucia Specia, and Marco Turchi. 2015. Findings of the 2015 workshop on statistical machine translation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 1-46, Lisbon, Portugal. Association for Computational Linguistics. +Ozan Caglayan, Julia Ive, Veneta Haralampieva, Pranava Madhyastha, Loic Barrault, and Lucia Specia. 2020. Simultaneous machine translation with visual context. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2350-2361, Online. Association for Computational Linguistics. +Kyunghyun Cho and Masha Esipova. 2016. Can neural machine translation do simultaneous translation? arXiv preprint arXiv:1606.02012. +Chris Dyer, Victor Chahuneau, and Noah A. Smith. 2013. A simple, fast, and effective reparameterization of IBM model 2. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 644-648, Atlanta, Georgia. Association for Computational Linguistics. +Desmond Elliott, Stella Frank, Khalil Sima'an, and Lucia Specia. 2016. Multi30K: Multilingual English-German image descriptions. In Proceedings of the 5th Workshop on Vision and Language, pages 70-74, Berlin, Germany. Association for Computational Linguistics. +Alex Graves. 2012. Sequence transduction with recurrent neural networks. arXiv preprint arXiv:1211.3711. 
+Jiatao Gu, Graham Neubig, Kyunghyun Cho, and Victor O.K. Li. 2017. Learning to translate in real-time with neural machine translation. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume + +1, Long Papers, pages 1053-1062, Valencia, Spain. +Association for Computational Linguistics. +Yong Jiang, Wenjuan Han, and Kewei Tu. 2016. Unsupervised neural dependency parsing. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 763-771, Austin, Texas. Association for Computational Linguistics. +Yoon Kim, Carl Denton, Luong Hoang, and Alexander M Rush. 2017. Structured attention networks. arXiv preprint arXiv:1702.00887. +Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. +Dan Klein and Christopher Manning. 2004. Corpus-based induction of syntactic structure: Models of dependency and constituency. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04), pages 478-485, Barcelona, Spain. +Mingbo Ma, Liang Huang, Hao Xiong, Renjie Zheng, Kaibo Liu, Baigong Zheng, Chuanqiang Zhang, Zhongjun He, Hairong Liu, Xing Li, Hua Wu, and Haifeng Wang. 2019. STACL: Simultaneous translation with implicit anticipation and controllable latency using prefix-to-prefix framework. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3025-3036, Florence, Italy. Association for Computational Linguistics. +Jan Niehues, Ngoc-Quan Pham, Thanh-Le Ha, Matthias Sperber, and Alex Waibel. 2018. Low-latency neural speech translation. In Proc. Interspeech 2018, pages 1293-1297. +Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. 
Association for Computational Linguistics. +Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. Pytorch: An imperative style, high-performance deep learning library. arXiv preprint arXiv:1912.01703. +Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67. +Harsh Satija and Joelle Pineau. 2016. Simultaneous machine translation using deep reinforcement learning. In ICML 2016 Workshop on Abstraction in Reinforcement Learning. + +Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Linguistics. +Ke M. Tran, Yonatan Bisk, Ashish Vaswani, Daniel Marcu, and Kevin Knight. 2016. Unsupervised neural hidden Markov models. In Proceedings of the Workshop on Structured Prediction for NLP, pages 63-71, Austin, TX. Association for Computational Linguistics. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc. +Lei Yu, Phil Blunsom, Chris Dyer, Edward Grefenstette, and Tomas Kocisky. 2016. The neural noisy channel. arXiv preprint arXiv:1611.02554. +Baigong Zheng, Kaibo Liu, Renjie Zheng, Mingbo Ma, Hairong Liu, and Liang Huang. 2020. Simultaneous translation policies: From fixed to adaptive. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2847-2853, Online. 
Association for Computational Linguistics.

# A Top-k Strategy

As an alternative to the full-paths dynamic programming computation, each step of generative simultaneous machine translation can also be formulated as a sum of the top-k sub-sequence generation probabilities. The original position distribution in Equation 3 can be reformed as:

$$
p (z_{j} = i, Y_{:j-1} \mid X_{:i}) = \max_{|K| = k} \sum_{i' \in K} p (z_{j} = i \mid z_{j-1} = i', X_{:i}, Y_{:j-1}) \cdot p (Y_{:j-1} \mid X_{:i'}) \tag{8}
$$

where $K$ is a set of $k$ position indices that are equal to or lower than $i$ . This yields a biased estimator of the log-likelihood of the target sequences. The difference is that it acts as an inductive bias, placing a sparse 'attention' over the previous sub-sequence generation probabilities. According to the experiments, the top-k strategy achieves performance comparable to the full-paths strategy.
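Because every term in the sum is non-negative, the maximisation over $K$ reduces to selecting the $k$ largest per-position products; a minimal sketch of one such step (function and variable names are illustrative):

```python
import numpy as np

def topk_position_prob(trans_probs, prefix_probs, k):
    """One step of Eq. 8 (illustrative names): keep only the k largest
    contributions among the previous positions i' instead of summing
    over all of them.
    trans_probs[i']  ~ p(z_j = i | z_{j-1} = i', X_{:i}, Y_{:j-1})
    prefix_probs[i'] ~ p(Y_{:j-1} | X_{:i'})"""
    contrib = np.asarray(trans_probs) * np.asarray(prefix_probs)
    # All terms are non-negative, so the maximising size-k set K is
    # simply the set of the k largest per-position products.
    return np.sort(contrib)[-k:].sum()

# With k equal to the number of positions, this reduces to the
# full-paths sum.
t = np.array([0.2, 0.5, 0.3])
p = np.array([0.1, 0.6, 0.3])
assert np.isclose(topk_position_prob(t, p, 3), (t * p).sum())
```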
# A Graph-Based Neural Model for End-to-End Frame Semantic Parsing

Zhichao Lin $^{1}$ , Yueheng Sun $^{2}$ , Meishan Zhang $^{1*}$

$^{1}$ School of New Media and Communication, Tianjin University, China

$^{2}$ College of Intelligence and Computing, Tianjin University, China

{chaosmyth,yhs,zhangmeishan}@tju.edu.cn

# Abstract

Frame semantic parsing is a semantic analysis task based on FrameNet which has received great attention recently. The task usually involves three subtasks performed sequentially: (1) target identification, (2) frame classification and (3) semantic role labeling. The three subtasks are closely related, yet previous studies model them individually, which ignores their internal connections and induces an error propagation problem.
In this work, we propose an end-to-end neural model to tackle the task jointly. Concretely, we exploit a graph-based method, regarding frame semantic parsing as a graph construction problem. All predicates and roles are treated as graph nodes, and their relations are taken as graph edges. Experimental results on two benchmark datasets of frame semantic parsing show that our method is highly competitive, resulting in better performance than pipeline models.

# 1 Introduction

Frame semantic parsing (Gildea and Jurafsky, 2002) aims to analyze all sentential predicates as well as their FrameNet roles as a whole, and has received great interest recently. This task can be helpful for a number of downstream tasks, including information extraction (Surdeanu et al., 2003), question answering (Shen and Lapata, 2007), machine translation (Liu and Gildea, 2010) and others (Coyne et al., 2012; Chen et al., 2013; Agarwal et al., 2014). Figure 1 shows an example, where all predicates as well as their semantic frames and roles in the sentence are depicted.

Previous studies (Das et al., 2014; Swayamdipta et al., 2017; Bastianelli et al., 2020) usually divide the task into three subtasks: target identification, frame classification and semantic role labeling (SRL), respectively. By performing the three subtasks sequentially, the whole frame semantic parsing can be accomplished. The majority of

![](images/e9b7058e093cfbaa8ab2549071c25d1faaefe00f60843ee16181253f7b3f5365.jpg)
Figure 1: An example involving frame semantic structures, taken from FrameNet (Baker et al., 1998). Frame-evoking predicates are highlighted in the sentence, and corresponding frames are shown in colored blocks below. The frame-specific roles are underlined, with their frames in the same row.

works focus on either one or two of the three subtasks, treating them separately (Yang and Mitchell, 2017; Botschen et al., 2018; Swayamdipta et al., 2018; Peng et al., 2018).
The above formalization has two weaknesses. First, the individual modeling of the three subtasks fails to utilize the relationships among them. Apparently, the earlier subtasks cannot exploit information from the later subtasks. Second, the pipeline strategy can suffer from the error propagation problem, where errors occurring in earlier subtasks influence the later subtasks as well. To address the two weaknesses, end-to-end modeling is one promising alternative, which has been widely adopted in natural language processing (NLP) (Cai et al., 2018; He et al., 2018; Sun et al., 2019; Fu et al., 2019; Fei et al., 2020).

In this work, we propose a novel graph-based model to tackle frame semantic parsing in an end-to-end way, using a single model to perform the three subtasks jointly. We organize all predicates and their FrameNet semantics as a graph, and then design an end-to-end neural model to construct the graph incrementally. An encoder-decoder model is presented to achieve the graph-building goal, where the encoder is equipped with contextualized BERT representations (Devlin et al., 2019), and the decoder performs node generation and edge building sequentially. Our final model is elegant and easy to understand as a whole.

We conduct experiments on two benchmark datasets to evaluate the effectiveness of our proposed model. First, we study our graph-based framework in two settings, the end-to-end scenario and the pipeline manner, where node building and edge building are trained separately. Results show that end-to-end modeling is much better. Besides, we also compare our model with several other pipelines, where similar findings can be observed. Second, we compare our graph-based framework with previous methods on the three subtasks individually, finding that the graph-based architecture is highly competitive. We obtain the best performance in the literature, leading to a new state-of-the-art result.
Further, we conduct extensive analyses to understand our method in depth.

In summary, we make the following two major contributions in this work:

(1) We propose a novel graph-based model for frame semantic parsing which can achieve competitive results for the end-to-end task as well as the individual subtasks.
(2) To the best of our knowledge, we present the first work of end-to-end frame semantic parsing to solve all included subtasks together in a single model.

We will release our code as well as the experimental settings publicly at https://github.com/Ch4osMy7h/FramenetParser to help result reproduction and facilitate future research.

# 2 Related Work

Frame-Semantic Parsing Frame-semantic parsing has received great interest since being released as an evaluation task of SemEval 2007 (Baker et al., 2007). The task attempts to predict semantic frame structures defined in FrameNet (Baker et al., 1998), which are composed of frame-evoking predicates, their corresponding frames and semantic roles. Most of the previous works (Das et al., 2014; Swayamdipta et al., 2017; Bastianelli et al., 2020) adopt a pipeline framework to solve the task, training target identification, frame classification and semantic role labeling models separately. In this work, to the best of our knowledge, we present the first end-to-end model to handle the task jointly.

Among the three subtasks of frame semantic parsing, semantic role labeling has been researched most extensively (Kshirsagar et al., 2015; Yang and Mitchell, 2017; Peng et al., 2018; Swayamdipta et al., 2018; Marcheggiani and Titov, 2020). It is also highly related to Propbank-style semantic role labeling (Palmer et al., 2005), with the only difference lying in the frame definition. Thus models for the two types of semantic role labeling can be mutually borrowed.
There are several end-to-end Propbank-style semantic role labeling models as well (Cai et al., 2018; He et al., 2018; Li et al., 2019; Fu et al., 2019). However, these models are difficult to apply directly to frame semantic parsing due to the additional frame classification as well as the discontinuous predicates. In this work, we present a totally different graph-construction-style model to solve end-to-end frame semantic parsing elegantly.

Graph-Based Methods Recently, graph-based methods have been widely used in a range of other tasks, such as dependency parsing (Dozat and Manning, 2016; Kiperwasser and Goldberg, 2016; Ji et al., 2019), AMR parsing (Flanigan et al., 2014; Lyu and Titov, 2018; Zhang et al., 2019a,b) and relation extraction (Sun et al., 2019; Fu et al., 2019; Dixit and Al-Onaizan, 2019). In this work, we aim at frame semantic parsing, organizing the three included subtasks into a well-designed graph and naturally converting the task into graph-based parsing.

# 3 Method

# 3.1 Task Formulation

The goal of frame-semantic parsing is to extract semantic predicate-argument structures from texts, where each predicate-argument structure includes a predicate given by a span of words, a well-defined semantic frame to express the key roles of the predicate, and the values of these roles given by word spans. Formally, given a sentence $X$ with $n$ words $w_{1},w_{2},\ldots ,w_{n}$ , frame-semantic parsing aims to output a set of tuples $\mathcal{V} = \{y_i\}_{i = 1}^{K}$ , where each $y_{i}$ consists of the following elements:

- $p_i = (p_{i,1}, \ldots, p_{i,d_i})$ , where $p_{i,*}$ are word spans in $X$ and $d_i$ indicates the number of pieces of the predicate, since it might be discontinuous.
- $f_{i} \in \mathcal{F}$ , where $\mathcal{F}$ is the frame set which is well defined in FrameNet.
![](images/de84f3aaae54a4a7b198832769ca64de6dd8cfe3cc8e9bbb596852506304d36e.jpg)
Figure 2: The overall architecture of our graph-based end-to-end model.

- $r_i = ([r_{i,1}, v_{i,1}], \ldots, [r_{i,m_i}, v_{i,m_i}])$ , where $r_{i,*}$ are frame roles derived from $f_i$ and $v_{i,*}$ are also word spans in $X$ .

The full frame semantic parsing task is usually divided into the following three subtasks:

- Target Identification (also known as predicate identification), which is to identify all valid frame-evoking predicates from $X$ , outputting $P = \{p_1, \dots, p_K\}$ .
- Frame Classification, which is to predict the concrete evoked frame $f_{i}$ of a certain predicate $p_{i} \in P$ .
- Semantic Role Labeling, which is to assign concrete values to the roles $r_i$ given a predicate-frame pair $(p_i, f_i)$ .

Previously, the majority of work on frame semantic parsing performs the three subtasks individually, ignoring their close connections and also being vulnerable to the error propagation problem. Thus, we present an end-to-end graph-based model to accomplish the three subtasks with a single model.

# 3.2 The Graph-Based Methodology

We formalize the frame-semantic parsing task as a graph construction problem, and further present an encoder-decoder model to perform the task in an end-to-end way. The encoder aims at representation learning for frame semantic parsing, and the decoder constructs the semantic graph incrementally. Concretely, for the encoder, we compute span representations since the basic processing units of our model are word spans, and for the decoder, we first generate all graph nodes, and then build edges among the graph nodes. Figure 2 shows the overall architecture of our method.

# 3.2.1 Encoding

Due to the strong capability of BERT (Devlin et al., 2019) for representation learning, we adopt it as the backbone of our model.
Given a sentence $X = \{w_{1}, w_{2}, \dots, w_{n}\}$ , BERT converts each word $w_{i}$ into word pieces, and feeds them into deep transformer encoders to get the piece-level representations. To obtain word-level representations, we average all piece vectors of word $w_{i}$ as its final representation $\mathbf{e}_{i}$ .

For further feature abstraction, we exploit BiHLSTM (Srivastava et al., 2015) to compose high-level features based on the word-level outputs $\mathbf{e}_1,\dots ,\mathbf{e}_n$ , following Swayamdipta et al. (2018):

$$
\mathbf{h}_{1}, \dots, \mathbf{h}_{n} = \operatorname{BiHLSTM}(\mathbf{e}_{1}, \dots, \mathbf{e}_{n}), \tag{1}
$$

where gated highway connections are applied to the BiLSTMs.

Span Representation We enumerate all possible spans $S = \{s_1, s_2, \ldots, s_m\}$ in a sentence and limit the maximum span length to $L$ . Then, each span $s_i \in S$ is represented by:

$$
\mathbf{g}_{i} = \left[\mathbf{h}_{\mathrm{START}(i)}; \mathbf{h}_{\mathrm{END}(i)}; \mathbf{h}_{\mathrm{ATTN}}; \phi(s_{i})\right], \tag{2}
$$

where $\phi(s_i)$ represents the learned embedding of span width features, $\mathbf{h}_{\mathrm{ATTN}}$ is computed by a self-attention mechanism which weights the vector representations of the words in the span by normalized attention scores, and $\mathrm{START}(i)$ and $\mathrm{END}(i)$ denote the start and end indices of $s_i$ .

# 3.2.2 Node Building

Node Generation We exploit a preliminary classification to achieve the goal of node generation. First, a span can either be a graph node or not. Further, a graph node can be a full or partial predicate node, and the node can also be a role node. In total, we define four types for a given span:

- FPRD: a full predicate span.
- PPRD: a partial predicate span.
- ROLE: a role span.
- NULL: a span that is not a graph node.
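The span representation in Equation 2 can be sketched minimally as follows (the attention scorer and all shapes below are illustrative stand-ins for the learned components):

```python
import numpy as np

def span_representation(h, start, end, width_emb):
    """Sketch of Eq. 2: concatenate the boundary states, an
    attention-pooled summary of the span, and a learned width feature.
    h: (n, d) token states from Eq. 1; width_emb: (L, dw) width table."""
    span = h[start:end + 1]                    # token states inside the span
    scores = span @ np.ones(span.shape[1])     # stand-in for a learned scorer
    attn = np.exp(scores - scores.max())
    attn /= attn.sum()                         # normalized attention weights
    h_attn = attn @ span                       # weighted average over the span
    return np.concatenate([h[start], h[end], h_attn, width_emb[end - start]])

h = np.random.default_rng(0).normal(size=(6, 4))   # n = 6 words, d = 4
width_emb = np.zeros((15, 2))                      # max span length L = 15
g = span_representation(h, 1, 3, width_emb)        # span covering words 1..3
assert g.shape == (3 * 4 + 2,)                     # [h_start; h_end; h_attn; width]
```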
The type of a span can be any non-empty combination of elements in the set {FPRD, PPRD, ROLE}, or NULL. Thus, each span can be classified into eight types (i.e., FPRD, PPRD, ROLE, FPRD-PPRD, FPRD-ROLE, PPRD-ROLE, FPRD-PPRD-ROLE, NULL).

Given an input span $s_i$ with vectorial representation $\mathbf{g}_i$ , we exploit one MLP layer with softmax to classify the span type:

$$
\mathbf{p}_{n} = \operatorname{softmax}(\operatorname{MLP}_{n}(\mathbf{g}_{i})), \tag{3}
$$

where $\mathbf{p}_n$ indicates the probabilities of the span types. By this classification, all non-NULL-type spans are graph nodes, reaching the goal of node generation.

Frame Classification of Predicate Nodes Node generation detects all graph nodes roughly, assigning each node a single label to indicate whether it can serve as a predicate or role. Here we go further to recognize the semantic frames of all predicate nodes, which can be regarded as an in-depth analysis of node attributes. This step corresponds to the frame classification subtask.

Given an input span $s_i$ of a predicate node (FPRD or PPRD), with representation $\mathbf{g}_i$ , we use another MLP layer together with softmax to output the probabilities of each candidate frame for the predicate node:

$$
\mathbf{p}_{c} = \operatorname{softmax}(\operatorname{MLP}_{c}(\mathbf{g}_{i})), \tag{4}
$$

where $\mathbf{p}_c$ gives the output probabilities of the semantic frames. Specially, frames are constrained by the lexical units defined in FrameNet. For example, a predicate with the lexical unit "meeting" only evokes the frames Social_event and Discussion.

We also adopt the pseudo-lexical-unit strategy following Swayamdipta et al. (2017) to optimize the classification. First, we use the spaCy lemmatizer (Honnibal et al., 2020) to translate an input sentence into lemmas.
Then, if a word span is a predicate node, we treat the corresponding lemma span as the pseudo lexical unit and index the corresponding semantic frame set by it. Finally, we reduce the search space by masking frames outside the set. In our experiments, we find this strategy practical to apply.

# 3.2.3 Edge Building

After the graph nodes are ready, we build edges to accomplish frame semantic parsing accordingly. There are two types of edges in our model.

Predicate-Predicate Edge For extracting discontinuous mentions, we build edges between nodes which are predicate fragments (i.e., PPRD nodes). In detail, we treat it as a binary classification problem deciding whether the two nodes alongside the edge form parts of the same predicate or not. Formally, given two PPRD nodes with corresponding spans $\mathbf{s}_i^p$ and $\mathbf{s}_j^p$ and their encoded representations $\mathbf{g}_i^p$ and $\mathbf{g}_j^p$ , we utilize one MLP layer to classify their edge type:

$$
\mathbf{p}_{pe} = \operatorname{softmax}(\operatorname{MLP}_{pe}([\mathbf{g}_{i}^{p}, \mathbf{g}_{j}^{p}, \mathbf{g}_{i}^{p} * \mathbf{g}_{j}^{p}])), \tag{5}
$$

where $\mathbf{p}_{pe}$ indicates the probabilities of the two types, namely Connected and NULL (i.e., cannot be connected), and the feature representation is borrowed from Zhao et al. (2020).

Predicate-Role Edge For extracting frame-specific roles, we build edges between predicate nodes (i.e., node type FPRD or PPRD) and role nodes (i.e., node type ROLE).
Given a predicate node $\mathbf{s}_i^p$ and a role node $\mathbf{s}_j^r$ , with neural representations $\mathbf{g}_i^p$ and $\mathbf{g}_j^r$ , respectively, we utilize another MLP layer to determine their edge type by multi-class classification:

$$
\mathbf{p}_{re} = \operatorname{softmax}(\operatorname{MLP}_{re}([\mathbf{g}_{i}^{p}, \mathbf{g}_{j}^{r}, \mathbf{g}_{i}^{p} * \mathbf{g}_{j}^{r}])), \tag{6}
$$

where $\mathbf{p}_{re}$ indicates the probabilities of the predicate-role edge types (i.e., the frame roles as well as a NULL label indicating no relation).

# 3.3 Joint Training

To train the joint model, we employ the negative log-likelihood loss function for both the node building and edge building steps:

$$
\mathcal{L}_{n} = -\sum \log \mathbf{p}_{n}(y_{n}) - \sum \log \mathbf{p}_{c}(y_{c})
$$

$$
\mathcal{L}_{e} = -\sum \log \mathbf{p}_{pe}(y_{pe}) - \sum \log \mathbf{p}_{re}(y_{re}), \tag{7}
$$

where $y_{n}$ and $y_{c}$ are the gold labels for the text spans and predicate nodes, and $y_{pe}$ and $y_{re}$ indicate the gold edge labels for the predicate-predicate and predicate-role node pairs. Further, the losses from the two steps are summed, leading to the final training objective of our model:

$$
\mathcal{L} = \mathcal{L}_{n} + \mathcal{L}_{e} \tag{8}
$$

# 3.4 Decoding

The decoding aims to derive the frame semantic parsing results from the graph-based model. Here we describe the concrete process for the three subtasks.

Target Identification Target identification involves both the node building and edge building steps. First, all predicate nodes with type FPRD are predicates. Second, there is a small percentage of predicates composed of multiple nodes with type PPRD. If two or more such nodes are connected with predicate-predicate edges, we regard these nodes as one single valid predicate.
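Grouping connected PPRD nodes into one (possibly discontinuous) predicate is a connected-components computation; a minimal sketch using a tiny union-find (names are illustrative; spans are represented as (start, end) tuples):

```python
def group_partial_predicates(pprd_nodes, connected_edges):
    """Decoding sketch: PPRD nodes joined by Connected
    predicate-predicate edges form one (possibly discontinuous)
    predicate; group them into connected components."""
    parent = {n: n for n in pprd_nodes}

    def find(x):
        # follow parent pointers to the component root, compressing paths
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a, b in connected_edges:
        parent[find(a)] = find(b)   # union the two components

    groups = {}
    for n in pprd_nodes:
        groups.setdefault(find(n), []).append(n)
    return [sorted(g) for g in groups.values()]

# Two fragments joined by an edge become one predicate; the third
# fragment stays on its own.
predicates = group_partial_predicates(
    [(0, 1), (3, 4), (6, 7)], [((0, 1), (3, 4))])
assert [(0, 1), (3, 4)] in predicates and [(6, 7)] in predicates
```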
Frame Classification The frame classification decoding is performed straightforwardly for single-node predicates. For multi-node predicates, there may exist conflicts among the frame classifications of the different nodes. Concretely, given a multi-node predicate composed of two or more nodes, the max-scored frames evoked by them might differ. To address this issue, we first sum up the softmax distributions over all covered nodes and then fetch the max-scored frame.

Semantic Role Labeling The situation for semantic role labeling is similar to frame classification. For single-node predicates, the semantic role labeling output is directly determined. For multi-node predicates, we assign role values only for the candidate roles inside the predicted frame, and further select the concrete role node with the highest probability with respect to the covered predicate nodes.

# 4 Experiments

# 4.1 Setting

Dataset We adopt FrameNet versions 1.5 and $1.7^{2}$ (denoted by FN1.5 and FN1.7 for short, respectively) as the benchmark datasets to evaluate our models. FN1.5 is the widely used dataset in
| Dataset | Type | Train | Dev | Test |
| --- | --- | --- | --- | --- |
| FN1.5 | # Sentence | 2,713 | 326 | 982 |
| | # Predicate | 16,618 | 2,282 | 4,427 |
| | # Role | 29,449 | 4,039 | 7,146 |
| FN1.7 | # Sentence | 3,413 | 326 | 1,354 |
| | # Predicate | 19,384 | 2,270 | 6,714 |
| | # Role | 34,385 | 4,024 | 11,303 |
Table 1: Statistics of the datasets.

previous work, and FN1.7 is the latest version used recently, which involves more semantics. We follow the previous studies (Das et al., 2014; Swayamdipta et al., 2017) to divide the two datasets into training, validation and test sets, respectively. Table 1 shows the overall data statistics.

Evaluation We measure the performance of frame semantic parsing by its three subtasks, respectively. For target identification, we treat a predicate as correct only when all its included word spans exactly match the gold-standard spans of the predicate. For frame classification, we use the joint performance for evaluation, regarding a classification as correct only when the predicate as well as the frame are both correct. For semantic role labeling, we also use the joint performance, regarding a role as correct when the predicate, role span (exact match), and role type are all correct, which is treated as our major metric.

Derived Models Following previous studies and our graph-based method, we can derive a range of basic models for comparisons:

- Node, the node building submodel, which is the first step of our decoder module described in Section 3.2.2.
- Edge, the edge building submodel, which is the second step of our decoder module described in Section 3.2.3.
- Predicate, a graph-based predicate identification model, which is implemented by keeping only the predicate node generation and predicate-predicate edge building in our final graph-based model.
- Frame, our final graph-based model with only the frame classification submodel, assuming predicate nodes and their edges are given.
- Role, our graph-based model with only role node generation and predicate-role edge building, assuming predicate nodes and their frames are given.
- PredicateFrame, a joint model of Predicate
| Data | Model | Target P | Target R | Target F1 | Frame P | Frame R | Frame F1 | Role P | Role R | Role F1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| FN1.5 | Predicate+Frame+Role | 73.17 | 75.47 | 74.30 | 66.08 | 68.15 | 67.06 | 45.91 | 46.70 | 46.30 |
| | Predicate∘Frame+Role | 74.32 | 76.16 | 75.23 | 67.91 | 68.78 | 68.34 | 46.85 | 47.89 | 47.36 |
| | Predicate+Frame∘Role | 73.17 | 75.47 | 74.30 | 67.73 | 68.56 | 68.14 | 46.51 | 49.01 | 47.72 |
| | Predicate∘Frame+Semi-CRF | 74.32 | 76.16 | 75.23 | 67.91 | 68.78 | 68.34 | 46.33 | 50.87 | 48.50 |
| | Node+Edge | 74.99 | 75.85 | 75.42 | 68.01 | 68.79 | 68.40 | 46.64 | 49.35 | 47.96 |
| | Ours-Joint | 75.81 | 76.17 | 75.99 | 68.72 | 69.05 | 68.89 | 47.79 | 50.60 | 49.16 |
| FN1.7 | Predicate+Frame+Role | 78.62 | 69.92 | 74.02 | 71.05 | 63.06 | 66.82 | 49.32 | 45.60 | 47.38 |
| | Predicate∘Frame+Role | 76.79 | 72.92 | 74.81 | 69.29 | 66.16 | 67.69 | 49.21 | 46.03 | 47.57 |
| | Predicate+Frame∘Role | 78.62 | 69.92 | 74.02 | 69.79 | 65.06 | 67.34 | 49.46 | 46.01 | 47.67 |
| | Predicate∘Frame+Semi-CRF | 76.79 | 72.92 | 74.81 | 69.29 | 66.16 | 67.69 | 49.03 | 47.49 | 48.24 |
| | Node+Edge | 76.81 | 72.54 | 74.62 | 68.99 | 66.33 | 67.63 | 49.55 | 46.28 | 47.86 |
| | Ours-Joint | 76.16 | 74.98 | 75.56 | 69.39 | 68.30 | 68.84 | 49.09 | 48.81 | 48.95 |
Table 2: Main results of frame-semantic parsing on FN1.5 and FN1.7, where the pipeline and end-to-end methods are compared thoroughly.

and Frame, which is implemented by excluding the role node generation and predicate-role edge building in our final model.

- FrameRole, a joint model of Frame and Role, which is implemented by excluding the predicate node generation and predicate-predicate edge building in our final model.
- Semi-CRF, a span-level semi-Markov CRF (Sarawagi and Cohen, 2005) model for semantic role labeling borrowed from Swayamdipta et al. (2018), where the only difference is that we use BERT as the representation layer for fair comparison. $^3$

Note that the above derived models are trained individually. Based on these models, we can build five pipeline systems: (1) Predicate + Frame + Role, (2) Predicate $\circ$ Frame + Role, (3) Predicate + Frame $\circ$ Role, (4) Predicate $\circ$ Frame + Semi-CRF, and (5) Node + Edge, which are exploited for comparisons with our graph-based end-to-end model.

Hyperparameters All our code is based on the AllenNLP library (Gardner et al., 2017) and trained on a single RTX-2080ti GPU. We choose BERT-base-cased $^4$ , which consists of 12 transformer layers with a hidden size of 768 for all layers. We set all the hidden sizes of the BiHLSTM to 200, and the number of layers to 6. The MLP layers have a hidden size of 150 and a depth of 1, with the ReLU activation. We apply dropout of 0.4 to the BiHLSTM and 0.2 to the MLP layers. Following Swayamdipta et al. (2018), we also limit the maximum length of spans to 15 for efficiency, resulting in an oracle recall of $95\%$ on the development set.

For training, we exploit online batch learning with a batch size of 8 to update the model parameters, and use the BertAdamW algorithm with a learning rate of $1 \times 10^{-5}$ to fine-tune BERT and $1 \times 10^{-3}$ to fine-tune the other parts of our model.
Gradient clipping with a maximum value of 5.0 is applied to avoid gradient explosion. The training process is stopped early if the performance does not increase within 20 epochs.

# 4.2 Main Results

Table 2 shows the main results on the test sets of the FN1.5 and FN1.7 datasets respectively, where our end-to-end model is compared with the four strong pipeline methods mentioned in Section 4.1. We can see that the end-to-end joint model leads to significantly better performance (p-value below $10^{-5}$ by pair-wise t-test) as a whole on both datasets. Concretely, we obtain average improvements of $\frac{0.57 + 0.75}{2} = 0.66$ on target identification, $\frac{0.49 + 1.15}{2} = 0.82$ on frame classification, and $\frac{0.66 + 0.71}{2} = 0.69$ on semantic role labeling on the two datasets, compared with the best results of the pipeline systems.

Besides the overall advantage of the end-to-end joint model over the pipelines, we also find that joint models of two subtasks outperform their counterpart baselines. Concretely, as shown in Table 2, PredicateFrame is better than Predicate + Frame, and FrameRole is better than Frame + Role. The results further indicate the effectiveness
| Model | FN1.5 | FN1.7 |
| --- | --- | --- |
| Das et al. (2014) | 45.40 | - |
| Swayamdipta et al. (2017) | 73.23 | 73.25 |
| Bastianelli et al. (2020) (wo syntax) | 74.96 | - |
| Bastianelli et al. (2020) (w syntax) | 76.80 | - |
| Predicate | 76.09 | 75.34 |
| PredicateFrame | 76.47 | 75.88 |
| Ours-Joint | 76.90 | 76.27 |

Table 3: Target Identification Results.
| Model | FN1.5 | FN1.7 |
| --- | --- | --- |
| Das et al. (2014) | 83.60 | - |
| Hermann et al. (2014) | 88.41 | - |
| Hartmann et al. (2017) | 87.63 | - |
| Yang and Mitchell (2017) | 88.20 | - |
| Swayamdipta et al. (2017) | 86.40 | 86.55 |
| Botschen et al. (2018) | 88.82 | - |
| Peng et al. (2018) | 90.00 | 89.10 |
| Bastianelli et al. (2020) (wo syntax) | 89.90 | - |
| Bastianelli et al. (2020) (w syntax) | 89.83 | - |
| Frame | 90.16 | 90.34 |
| Ours-Joint | 90.62 | 90.64 |

Table 4: The accuracy of the Frame Identification task based on the gold targets.
| Model | FN1.5 | FN1.7 |
| --- | --- | --- |
| Das et al. (2014) | 59.10 | - |
| Kshirsagar et al. (2015) | 63.10 | - |
| Yang and Mitchell (2017) | 65.50 | - |
| Swayamdipta et al. (2017) | 59.48 | 61.36 |
| Swayamdipta et al. (2018) | 69.10 | - |
| Marcheggiani and Titov (2020) | 69.30 | - |
| Bastianelli et al. (2020) (wo syntax) | 72.85 | - |
| Bastianelli et al. (2020) (w syntax) | 75.56 | - |
| Semi-CRF | 73.56 | 72.22 |
| Ours-Joint | 73.28 | 72.06 |
Table 5: Pipeline Semantic Role Labeling results using gold targets and frames.

of joint learning. Further, by comparing our graph-based Role model with the Semi-CRF one, we can see that the Semi-CRF is better. The reason could be that the Semi-CRF model can exploit higher-order features among different frame roles, which are ignored by our simple edge building module. As our edge building considers all predicates and all roles together, incorporating such features remains inconvenient.

# 4.3 Individual Subtask Evaluation

Previous studies commonly focus only on individual subtasks of frame-semantic parsing. In order to compare with these studies, we simulate their scenarios by imposing constraints with gold-standard inputs in our joint models. In this way, we show the capability of our models on individual tasks. In particular, Bastianelli et al. (2020) report the best performance among the previous studies in the literature, based on BERT representations. They adopt constituency syntax, which can boost the individual model performances significantly. Since our final model uses no other knowledge except BERT, we report their model performances both with syntax (denoted as w syntax) and without syntax (denoted as wo syntax) for careful comparisons.

Target Identification We show the performance of previous studies on target identification in Table 3, and also report the results of three related models derived from this work. First, by comparing our three models (i.e. predicate only, predicate with frame, and the full graph parsing), the results show that both frame classification and semantic role labeling can help target identification. Second, we can see that our final model achieves the best performance among all compared systems.

Frame Classification Table 4 shows the results of the individual frame classification task, where all systems assume gold-standard predicates as inputs.
Similar to target identification, we achieve better performance than all previous studies. Peng et al. (2018) did not use BERT, but they used extra datasets from FrameNet (exemplar sentences) and semantic dependency parsing, which can also benefit our task greatly. As for the comparison between our two implemented models, Frame alone and our final joint model, the results show that semantic role labeling can benefit frame classification, which is reasonable.

Semantic Role Labeling Table 5 shows the results of various models on the semantic role labeling task. By constraining the outputs to gold-standard predicates and frames, our model degenerates to a normal semantic role labeling model. We also give the result of using Semi-CRF. As shown, our final semantic role labeling model is highly competitive in comparison with previous studies, except

![](images/dfa88964dd1b550cc456a9707945acded4858be5caa51f0428e2177ba5a80416.jpg)
(a) FN1.5

![](images/677dd19067e506e41c7d9d975b41c97ed2b7b26e58ff8afaf4e0b5d485ca7220.jpg)
(b) FN1.7

Figure 3: F1 scores of the frame metric for different predicate types. Single: single-word predicate. Multi: multi-word predicate.
| Model | Node | Frame | Edge |
| --- | --- | --- | --- |
| Predicate+Frame+Role | 64.17 | 68.39 | 50.14 |
| Predicate$\circ$Frame+Role | 64.32 | 68.89 | 50.97 |
| Predicate+Frame$\circ$Role | 64.24 | 68.97 | 51.09 |
| Node+Edge | 64.56 | 69.30 | 51.43 |
| Ours-Joint | 65.34 | 69.80 | 52.13 |
the model of Bastianelli et al. (2020) with syntax. This exception is expected, since syntax has previously been demonstrated to be highly effective for SRL (Swayamdipta et al., 2018; Peng et al., 2018; Bastianelli et al., 2020). In addition, the Semi-CRF model is better than our method, which is consistent with the results in Table 2.

# 4.4 Discussion

In this subsection, we conduct detailed experimental analyses to better understand our graph-based methods. Note that, if not specified, the analyses are based on the FN1.7 dataset, which has a larger scale of annotations to explore.

Effectiveness on recognizing different types of predicates For frame-semantic parsing, extracting correct frame-evoking predicates is the first step, and it influences the later subtasks directly. Here we perform a fine-grained analysis of predicate identification, splitting the predicates into two categories, i.e., single-word predicates (Single) and multi-word predicates (Multi). As shown in Figure 3, our joint model achieves consistent improvements over the pipeline models for all kinds of predicates, indicating that the information from frames and frame-specific roles is beneficial for target identification. In addition, the multi-word predicates are more difficult than the single-word predicates, leading to significant decreases as a whole.

Table 6: F1 score results of the modules.
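The Single/Multi breakdown in Figure 3 can be computed by bucketing gold and predicted predicate spans by their word count and scoring each bucket separately. A minimal illustrative sketch, assuming spans are inclusive (start, end) token offsets (this is not the paper's evaluation script):

```python
def f1_by_category(gold, pred):
    """Per-category F1 for predicate identification, bucketing spans
    into single-word (Single) and multi-word (Multi) predicates.
    Spans are inclusive (start, end) token offsets."""
    scores = {}
    for cat, in_cat in [("Single", lambda s: s[1] == s[0]),
                        ("Multi", lambda s: s[1] > s[0])]:
        g = {s for s in gold if in_cat(s)}
        p = {s for s in pred if in_cat(s)}
        tp = len(g & p)  # exact span matches
        prec = tp / len(p) if p else 0.0
        rec = tp / len(g) if g else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        scores[cat] = round(f1, 4)
    return scores

# Toy example: one single-word predicate matched, one missed, one spurious;
# the only multi-word predicate is matched exactly.
gold = {(0, 0), (2, 3), (5, 5)}
pred = {(0, 0), (2, 3), (6, 6)}
print(f1_by_category(gold, pred))  # → {'Single': 0.5, 'Multi': 1.0}
```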
| Model | Target | Frame | Role |
| --- | --- | --- | --- |
| Predicate+Frame+Role | 74.86 | 67.28 | 47.34 |
| Predicate$\circ$Frame+Role | 75.11 | 67.78 | 47.81 |
| Predicate+Frame$\circ$Role | 74.86 | 67.57 | 47.93 |
| Predicate$\circ$Frame+Semi-CRF | 75.11 | 67.78 | 48.26 |
| Ours-Joint | 75.72 | 68.86 | 48.89 |
Table 7: F1 score results of our proposed method ignoring discontinuous predicates.

![](images/2d7628bb13e9ea196c298098c9f8e50b588d338ee9cdd4e2b2eb9717586dd6b7.jpg)
Figure 4: F1 scores of roles in terms of their length.

Performance by the role length Frame-specific roles are the core structures that frame-semantic parsing intends to obtain. It is obvious that roles of different lengths affect the performance, with longer roles being much more difficult. Here we bucket the roles into seven categories and report the F1-score of our proposed methods on them. Figure 4 shows the results. We find that the overall curve declines as the length increases, which is consistent with our intuition, and our graph-based end-to-end model is better than the pipeline methods for all lengths.

Performances of node, frame and edge Our graph-based model builds nodes, determines node attributes (frames), and builds edges sequentially, which is different from the standard pipelines based on target identification, frame classification and semantic role labeling. Thus, it is interesting to see the performance of node building, frame classification $^6$ and edge building, respectively. Table 6 shows the results, where the joint model as well as four pipeline models are included. As shown, the full joint model is better than the partial joint models, and the full pipeline model gives the worst results.

Ignoring discontinuous predicates Although in both FN1.5 and FN1.7 datasets, discontinuous
| Model | FN1.5 | FN1.7 |
| --- | --- | --- |
| Semi-CRF | 1.94 sent/s | 1.72 sent/s |
| Ours-Joint | 15.72 sent/s | 16.51 sent/s |

Table 8: Comparison on decoding speed (sentences per second) for the semantic role labeling subtask.
| LU Lexicon | Text and Frames |
| --- | --- |
| $\checkmark$ | Up Tai Hang [Road]Roadways behind Causeway Bay is Aw Boon Haw (Tiger Balm) [Gardens]Locale_by_use. |
| $\times$ | Up Tai Hang [Road]Roadways [behind]Locative_relation Causeway Bay is Aw Boon Haw (Tiger Balm) [Gardens]Locale_by_use. |
| Ground Truth | Up Tai Hang [Road]Roadways behind Causeway [Bay]Natural_features is Aw Boon Haw (Tiger Balm) [Gardens]Locale_by_use. |
Table 9: An example of frame suggestion outside the scope of the predefined LU lexicon, where blue indicates the suggested frame outside the dictionary, and $\checkmark$ and $\times$ represent inference with and without the dictionary, respectively.

predicates are significantly smaller in amount than the others, we keep them in this work for a more comprehensive study, to demonstrate that our model can process them as well. Here we also add the results which ignore the discontinuous predicates (i.e., removing the predicate-predicate edges) to facilitate future studies. As shown in Table 7, our joint model performs better than the pipeline methods, which is consistent with the main results.

Comparison on decoding speed Table 8 compares the computational efficiency of the strong Semi-CRF baseline and our joint model on the semantic role labeling task, which is also an essential measurement of the proposed approach. Experimental results are all obtained by running the models on a single 2080ti GPU. We observe that our model reaches an almost ten times faster speed in comparison to Semi-CRF. Even though the Semi-CRF implementation uses dynamic programming to optimize the time complexity, it still needs to iterate over the segments of each sentence in the batch one by one, which might not take advantage of the GPU's parallel capabilities to accelerate the process. In contrast, our model as a whole adopts batch-based learning, which enables more efficient inference.

Frame classification without dictionary Following Swayamdipta et al. (2017), we also adopt the Lexical Unit (LU) dictionary in our model empirically. However, according to Punyakanok et al. (2008), the dictionary can sometimes be quite limited. Therefore, we offer one example in Table 9 to illustrate the capability of our model for frames not in the dictionary.
As shown, our model can predict an appropriate frame outside the dictionary as well, and might additionally enrich the gold-standard annotations (i.e., the blue texts which do not appear in the Ground Truth).

# 5 Conclusion

In this paper, we proposed a novel graph-based model to address the end-to-end frame semantic parsing task. The full frame semantic parsing result of one sentence is organized as a graph, and we propose an end-to-end neural model for building this graph. Our model first encodes the input sentence into span representations with BERT, and then constructs the graph nodes and edges incrementally. To demonstrate the effectiveness of our method, we derived several pipeline methods and used them to conduct experiments for comparison. Experimental results showed that our graph-based model achieved significantly better performance than the various pipeline methods. In addition, in order to compare our models with previous studies in the literature, we conducted experiments in the scenarios of the individual subtasks. The results showed that our proposed models are highly competitive.

# Acknowledgements

We thank all reviewers for their hard work. This work is supported by grants from the National Key Research and Development Program of China (No. 2018YFC0832101) and the National Natural Science Foundation of China (No. 62176180).

# References

Apoorv Agarwal, Sriramkumar Balasubramanian, Anup Kotalwar, Jiehan Zheng, and Owen Rambow. 2014. Frame semantic tree kernels for social network extraction from text. In Proceedings of the EACL, pages 211-219.

Collin Baker, Michael Ellsworth, and Katrin Erk. 2007. SemEval-2007 task 19: Frame semantic structure extraction. In Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), pages 99-104.

Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet project. In Proceedings of the ACL-COLING, pages 86-90.
Emanuele Bastianelli, Andrea Vanzo, and Oliver Lemon. 2020. Encoding syntactic constituency paths for frame-semantic parsing with graph convolutional networks. CoRR, abs/2011.13210.

Teresa Botschen, Iryna Gurevych, Jan-Christoph Klie, Hatem Mousselly-Sergieh, and Stefan Roth. 2018. Multimodal frame identification with multilingual evaluation. In Proceedings of the NAACL-HLT, pages 1481-1491.

Jiaxun Cai, Shexia He, Zuchao Li, and Hai Zhao. 2018. A full end-to-end semantic role labeler, syntactic-agnostic over syntactic-aware? In Proceedings of the COLING, pages 2753-2765.

Y. Chen, W. Y. Wang, and A. I. Rudnicky. 2013. Unsupervised induction and filling of semantic slots for spoken dialogue systems using frame-semantic parsing. In 2013 IEEE Workshop on Automatic Speech Recognition and Understanding, pages 120-125.

Bob Coyne, Alex Klapheke, Masoud Rouhizadeh, Richard Sproat, and Daniel Bauer. 2012. Annotation tools and knowledge representation for a text-to-scene system. In Proceedings of the COLING 2012, pages 679-694.

Dipanjan Das, Desai Chen, Andre F. T. Martins, Nathan Schneider, and Noah A. Smith. 2014. Frame-semantic parsing. Computational Linguistics, 40(1):9-56.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the NAACL-HLT.

Kalpit Dixit and Yaser Al-Onaizan. 2019. Span-level model for relation extraction. In Proceedings of the ACL, pages 5308-5314.

Timothy Dozat and Christopher D Manning. 2016. Deep biaffine attention for neural dependency parsing. arXiv preprint arXiv:1611.01734.

Hao Fei, Yafeng Ren, and Donghong Ji. 2020. High-order refining for end-to-end Chinese semantic role labeling. In Proceedings of the AACL, pages 100-105.

Jeffrey Flanigan, Sam Thomson, Jaime Carbonell, Chris Dyer, and Noah A. Smith. 2014. A discriminative graph-based parser for the Abstract Meaning Representation.
In Proceedings of the ACL, pages 1426-1436.

Tsu-Jui Fu, Peng-Hsuan Li, and Wei-Yun Ma. 2019. GraphRel: Modeling text as relational graphs for joint entity and relation extraction. In Proceedings of the ACL, pages 1409-1418.

Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke S. Zettlemoyer. 2017. AllenNLP: A deep semantic natural language processing platform.

Daniel Gildea and Daniel Jurafsky. 2002. Automatic labeling of semantic roles. Computational Linguistics, 28(3):245-288.

Silvana Hartmann, Ilia Kuznetsov, Teresa Martin, and Iryna Gurevych. 2017. Out-of-domain FrameNet semantic role labeling. In Proceedings of the 15th EACL, pages 471-482.

Luheng He, Kenton Lee, Omer Levy, and Luke Zettlemoyer. 2018. Jointly predicting predicates and arguments in neural semantic role labeling. In Proceedings of the ACL, pages 364-369.

Karl Moritz Hermann, Dipanjan Das, Jason Weston, and Kuzman Ganchev. 2014. Semantic frame identification with distributed word representations. In Proceedings of the ACL, pages 1448-1458.

Matthew Honnibal, Ines Montani, Sofie Van Landeghem, and Adriane Boyd. 2020. spaCy: Industrial-strength Natural Language Processing in Python.

Tao Ji, Yuanbin Wu, and Man Lan. 2019. Graph-based dependency parsing with graph neural networks. In Proceedings of the ACL, pages 2475-2485.

Eliyahu Kiperwasser and Yoav Goldberg. 2016. Simple and accurate dependency parsing using bidirectional LSTM feature representations. Transactions of the Association for Computational Linguistics, 4:313-327.

Meghana Kshirsagar, Sam Thomson, Nathan Schneider, Jaime Carbonell, Noah A. Smith, and Chris Dyer. 2015. Frame-semantic role labeling with heterogeneous annotations. In Proceedings of the ACL-IJCNLP, pages 218-224.

Zuchao Li, Shexia He, Hai Zhao, Yiqing Zhang, Zhuosheng Zhang, Xi Zhou, and Xiang Zhou. 2019. Dependency or span, end-to-end uniform semantic role labeling.
In Proceedings of the AAAI, volume 33, pages 6730-6737.

Ding Liu and Daniel Gildea. 2010. Semantic role features for machine translation. In Proceedings of the COLING 2010, pages 716-724.

Chunchuan Lyu and Ivan Titov. 2018. AMR parsing as graph prediction with latent alignment. In Proceedings of the ACL, pages 397-407.

Diego Marcheggiani and Ivan Titov. 2020. Graph convolutions over constituent trees for syntax-aware semantic role labeling. In Proceedings of the EMNLP, pages 3915-3928.

Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The Proposition Bank: An annotated corpus of semantic roles. Computational Linguistics, 31(1):71-106.

Hao Peng, Sam Thomson, Swabha Swayamdipta, and Noah A. Smith. 2018. Learning joint semantic parsers from disjoint data. In Proceedings of the NAACL-HLT, pages 1492-1502.

Vasin Punyakanok, Dan Roth, and Wen-tau Yih. 2008. The importance of syntactic parsing and inference in semantic role labeling. Computational Linguistics, 34(2):257-287.

Sunita Sarawagi and William W Cohen. 2005. Semi-Markov conditional random fields for information extraction. In Advances in Neural Information Processing Systems, volume 17. MIT Press.

Dan Shen and Mirella Lapata. 2007. Using semantic roles to improve question answering. In Proceedings of the EMNLP-CoNLL, pages 12-21, Prague, Czech Republic. Association for Computational Linguistics.

Rupesh K Srivastava, Klaus Greff, and Jürgen Schmidhuber. 2015. Training very deep networks. In Advances in Neural Information Processing Systems, volume 28. Curran Associates, Inc.

Changzhi Sun, Yeyun Gong, Yuanbin Wu, Ming Gong, Daxin Jiang, Man Lan, Shiliang Sun, and Nan Duan. 2019. Joint type inference on entities and relations via graph convolutional networks. In Proceedings of the ACL, pages 1361-1370.

Mihai Surdeanu, Sanda Harabagiu, John Williams, and Paul Aarseth. 2003. Using predicate-argument structures for information extraction. In Proceedings of the ACL, pages 8-15.
Swabha Swayamdipta, Sam Thomson, Chris Dyer, and Noah A Smith. 2017. Frame-semantic parsing with softmax-margin segmental RNNs and a syntactic scaffold. arXiv preprint arXiv:1706.09528.

Swabha Swayamdipta, Sam Thomson, Kenton Lee, Luke Zettlemoyer, Chris Dyer, and Noah A. Smith. 2018. Syntactic scaffolds for semantic structures. In Proceedings of the EMNLP, pages 3772-3782.

Bishan Yang and Tom Mitchell. 2017. A joint sequential and relational model for frame-semantic parsing. In Proceedings of the EMNLP, pages 1247-1256. Association for Computational Linguistics.

Sheng Zhang, Xutai Ma, Kevin Duh, and Benjamin Van Durme. 2019a. AMR parsing as sequence-to-graph transduction. In Proceedings of the ACL, pages 80-94.

Sheng Zhang, Xutai Ma, Kevin Duh, and Benjamin Van Durme. 2019b. Broad-coverage semantic parsing as transduction. In Proceedings of the EMNLP-IJCNLP, pages 3786-3798, Hong Kong, China. Association for Computational Linguistics.

He Zhao, Longtao Huang, Rong Zhang, Quan Lu, and Hui Xue. 2020. SpanMlt: A span-based multi-task learning framework for pair-wise aspect and opinion terms extraction. In Proceedings of the ACL, pages 3239-3248.
\ No newline at end of file diff --git a/agraphbasedneuralmodelforendtoendframesemanticparsing/images.zip b/agraphbasedneuralmodelforendtoendframesemanticparsing/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..e69c6311fa4f4fb8edb050aff32e55a460f5e2c7 --- /dev/null +++ b/agraphbasedneuralmodelforendtoendframesemanticparsing/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c7cf118538472c2e01bf734735652e94b688851ed626d93290399d13102a8594 +size 570256 diff --git a/agraphbasedneuralmodelforendtoendframesemanticparsing/layout.json b/agraphbasedneuralmodelforendtoendframesemanticparsing/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..8fa8eebb19f889649127a3ef27bc555f55f4b7fd --- /dev/null +++ b/agraphbasedneuralmodelforendtoendframesemanticparsing/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bf7836768813dac3a06c7c5943a25eb4ce469c31e272ae28758584660f561f7d +size 393372 diff --git a/agreeingtodisagreeannotatingoffensivelanguagedatasetswithannotatorsdisagreement/70ba5f43-fe50-4003-b363-8f7ae7e327af_content_list.json b/agreeingtodisagreeannotatingoffensivelanguagedatasetswithannotatorsdisagreement/70ba5f43-fe50-4003-b363-8f7ae7e327af_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..9585c392d36e01a3213961fc72ac42dc2fc17ea3 --- /dev/null +++ b/agreeingtodisagreeannotatingoffensivelanguagedatasetswithannotatorsdisagreement/70ba5f43-fe50-4003-b363-8f7ae7e327af_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:861a986ce00aa703c48f0ec526e7931846947bb5bc740157fdba9d1633024080 +size 87298 diff --git a/agreeingtodisagreeannotatingoffensivelanguagedatasetswithannotatorsdisagreement/70ba5f43-fe50-4003-b363-8f7ae7e327af_model.json b/agreeingtodisagreeannotatingoffensivelanguagedatasetswithannotatorsdisagreement/70ba5f43-fe50-4003-b363-8f7ae7e327af_model.json new file mode 
100644 index 0000000000000000000000000000000000000000..42a6bd4ca217f76206a18e174961a03dccf566d9 --- /dev/null +++ b/agreeingtodisagreeannotatingoffensivelanguagedatasetswithannotatorsdisagreement/70ba5f43-fe50-4003-b363-8f7ae7e327af_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:14c29f606b21ceb402180aa516e879edf5beae1a39cee64ddb4b8025f17823a1 +size 101122 diff --git a/agreeingtodisagreeannotatingoffensivelanguagedatasetswithannotatorsdisagreement/70ba5f43-fe50-4003-b363-8f7ae7e327af_origin.pdf b/agreeingtodisagreeannotatingoffensivelanguagedatasetswithannotatorsdisagreement/70ba5f43-fe50-4003-b363-8f7ae7e327af_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..f376bab02ca25bd37a32a48b2e87904dcc445020 --- /dev/null +++ b/agreeingtodisagreeannotatingoffensivelanguagedatasetswithannotatorsdisagreement/70ba5f43-fe50-4003-b363-8f7ae7e327af_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dd291a28022e0d813d707697212cd3110268eae020dab6687dfafa7e6b93489d +size 554314 diff --git a/agreeingtodisagreeannotatingoffensivelanguagedatasetswithannotatorsdisagreement/full.md b/agreeingtodisagreeannotatingoffensivelanguagedatasetswithannotatorsdisagreement/full.md new file mode 100644 index 0000000000000000000000000000000000000000..673164d1d22a7e841884b2cd43d729268a4b685e --- /dev/null +++ b/agreeingtodisagreeannotatingoffensivelanguagedatasetswithannotatorsdisagreement/full.md @@ -0,0 +1,285 @@ +# Agreeing to Disagree: Annotating Offensive Language Datasets with Annotators' Disagreement + +Elisa Leonardelli, Stefano Menini, Alessio Palmero Aprosio Marco Guerini, Sara Tonelli + +Fondazione Bruno Kessler, Trento, Italy + +{eleonardelli,menini,aproasio,guerini,satonelli}@fbk.eu + +# Abstract + +Since state-of-the-art approaches to offensive language detection rely on supervised learning, it is crucial to quickly adapt them to the continuously evolving scenario of social media. 
While several approaches have been proposed to tackle the problem from an algorithmic perspective, so as to reduce the need for annotated data, less attention has been paid to the quality of these data. Following a trend that has emerged recently, we focus on the level of agreement among annotators while selecting data to create offensive language datasets, a task involving a high level of subjectivity. Our study comprises the creation of three novel datasets of English tweets covering different topics and having five crowd-sourced judgments each. We also present an extensive set of experiments showing that selecting training and test data according to different levels of annotators' agreement has a strong effect on classifiers' performance and robustness. Our findings are further validated in cross-domain experiments and studied using a popular benchmark dataset. We show that such hard cases, where low agreement is present, are not necessarily due to poor-quality annotation, and we advocate for a higher presence of ambiguous cases in future datasets, particularly in test sets, to better account for the different points of view expressed online.

# 1 Introduction

When creating benchmarks for NLP tasks through crowd-sourcing platforms, it is important to consider possible issues with inter-annotator agreement. Indeed, crowd-workers do not necessarily have a linguistic background and are not trained to perform complex tasks, thus jeopardizing benchmark quality. Furthermore, some crowd-workers try to maximize their pay by supplying quick answers that have nothing to do with the correct label. This issue has been tackled in the past by proposing approaches to control for annotators' expertise and reliability (Hovy et al., 2013), trying to identify spammers and mitigate their effect on annotation, or by repeating labeling on targeted examples (Sheng et al., 2008).
However, not all tasks are the same: while in some cases, like for instance PoS-tagging or parsing, disagreement among annotators is more likely due to unclear annotation guidelines and can usually be reconciled through adjudication, full annotators' agreement should not necessarily be enforced in social computing tasks, whose goal is to study and manage social behavior and organizational dynamics, especially in virtual worlds built over the Internet (Wang, 2007). In these tasks – which include offensive language detection among others – subjectivity, bias and text ambiguity play an important role (Aroyo et al., 2019), and being an inherent component of the task, they should be measured and analysed rather than discarded (Klenner et al., 2020; Basile, 2020). Indeed, instead of aiming for a global consensus on what constitutes verbal abuse on social media, we investigate the impact of different degrees of disagreement, how classifiers behave with ambiguous training and test data, and the role of disagreement in current shared tasks. More specifically, we first collect and annotate three datasets of English tweets covering different domains, to test if agreement among a pool of generic classifiers can be considered a proxy for annotator agreement. We then focus on how annotator agreement (both in training and test set) impacts classifiers' performance, considering domain-specific and generic classifiers as well as in-domain and out-of-domain experiments. We also show that low agreement examples – no matter how difficult they can be – still provide useful signal for training offensive language detection systems and do not represent random annotations. So "coin-flipping" or example removal seems not to be the right strategy to solve these disagreement cases.
Then, we measure disagreement in the English test set of the last Offenseval shared task (Zampieri et al., 2020), and analyse to what extent the high performance achieved by most participating systems is related to high agreement in annotation.

We release the new annotated datasets upon request, including more than 10k tweets covering three domains. The messages have been labeled with 50k crowd-worker judgements and annotated with agreement levels. To our knowledge, this represents the first dataset explicitly created to cover different agreement levels in a balanced way. We also advocate for the release of more datasets like the one we propose, especially for highly subjective tasks, where the need to include different points of view should be accounted for.

NOTE: This paper contains examples of language which may be offensive to some readers. They do not represent the views of the authors.

# 2 Related Work

While there has been an extensive discussion on minimal standards for inter-annotator agreement to ensure data quality (Di Eugenio and Glass, 2004; Passonneau, 2004; Artstein and Poesio, 2008), recently an increasing number of works argue that disagreement is unavoidable because language is inherently ambiguous (Aroyo and Welty, 2015), proposing ways to tackle annotators' disagreement when building training sets (Dumitrache et al., 2019). Hsueh et al. (2009), for example, identify a set of criteria to select informative yet unambiguous examples for predictive modeling in a sentiment classification task. Rehbein and Ruppenhofer (2011) analyse the impact that annotation noise can have on active learning approaches. Other works along this line investigate the impact of uncertain or difficult instances on supervised classification (Peterson et al., 2019), while Beigman Klebanov and Beigman (2014) show that including hard cases in training data results in poorer classification of easy data in a word classification task.
Along the same lines, Jamison and Gurevych (2015) show that filtering instances with low agreement improves classifier performance in four out of five tasks. Both works observe that the presence of such instances leads to misclassifications.

Several approaches have been presented that implement strategies to deal with disagreement when training classifiers for diverse tasks. In most cases, disagreement has been treated as a consequence of low annotation quality, and addressed through methodologies aimed at minimising the effects of noisy crowdsourced data. Simpson et al. (2020), for example, present a Bayesian sequence combination approach to train a model directly from crowdsourced labels rather than aggregating them. They test their approach on tasks such as NER, where disagreement is mainly due to poor annotation quality. Other works have focused instead on uncertainty in PoS-tagging, integrating annotators' agreement in the modified loss function of a structured perceptron (Plank et al., 2014). Rodrigues and Pereira (2018) also propose an approach to automatically distinguish the good and the unreliable annotators and capture their individual biases. They propose a novel crowd layer in deep learning classifiers to train neural networks directly from the noisy labels of multiple annotators, using only backpropagation.

Other researchers have suggested removing hard cases from the training set (Beigman Klebanov and Beigman, 2009) because they may potentially lead to poor classification of easy cases in the test set. We argue instead that disagreement is inherent to the kind of task we are going to address (i.e. offensive language detection) and, in line with recent works, we advocate against forced harmonisation of annotators' judgements for tasks involving high levels of subjectivity (Klenner et al., 2020; Basile, 2020). Among recent proposals to embrace the uncertainty exhibited by human annotators, Gordon et al.
(2021) propose a novel metric to evaluate social computing tasks that disentangles stable opinions from noise in crowd-sourced datasets. Akhtar et al. (2020), instead, divide the annotators into groups based on their polarization, so that different gold standard datasets are compiled and each used to train a different classifier.

Compared to existing works, our contribution is different in that we are interested mainly in the process of dataset creation rather than in evaluation metrics or classification strategies. Indeed, our research is guided mainly by research questions concerning the data selection process, the composition of datasets and the evaluation using controlled levels of agreement. To this purpose, we create the first dataset for offensive language detection with three levels of agreement and balanced classes, encompassing three domains. This allows us to run comparative in-domain and out-of-domain evaluations, as well as to analyse existing benchmarks like the Offenseval dataset (Zampieri et al., 2020) using the same approach. While few crowd-sourced datasets for toxic and abusive language detection have been released with disaggregated labels (Davidson et al., 2017), they have not been created with the goal of analysing disagreement, therefore no attention has been paid to balancing the number of judgments across different dimensions, like in our case.

# 3 Data Selection and Annotation

In our study, we focus on three different domains, which have been very popular in online conversations in 2020: Covid-19, the US Presidential elections and the Black Lives Matter (BLM) movement. After an empirical analysis of online discussions, a set of hashtags and keywords for each domain is defined (e.g. #covid19, #election202, #blm). Then, using Twitter public APIs, tweets in English containing at least one of the above keywords are collected in a time span between January and November 2020 (for more details about data collection see Appendix D).
From this data collection, we randomly select 400,000 tweets (around 130,000 for each domain), which we then pre-process by splitting hashtags into words using the Ekphrasis tool (Gimpel et al., 2010) and replacing all mentions of users and urls with $\langle \text{user} \rangle$ and $\langle \text{url} \rangle$ respectively.

# 3.1 Ensemble of classifiers to select data for annotation

Since we do not know the real distribution of agreement levels in the data we collected, random sampling for annotation might be a sub-optimal choice. We therefore developed a strategy to pre-evaluate the tweets, trying to optimize annotators' effort by building a balanced dataset (the data might in fact be very skewed, leading to over-annotation of some classes and under-annotation of others). To pre-evaluate the tweets, we use a heuristic approach based on an ensemble of 5 different classifiers, all based on the same BERT configuration and fine-tuned starting from the same abusive language dataset (Founta et al., 2018). Since the original dataset contains four classes (Spam, Normal, Abusive and Hateful), we first remove the tweets from the Spam class and map the remaining ones onto a binary offensive or non-offensive label, merging Abusive and Hateful tweets into the offensive class and mapping the Normal class onto the non-offensive one. We then select 15k tweets from the Founta dataset (~100k tweets) to speed up the process, as we are not interested in the overall performance of the different classifiers, but rather in their relative performances. Each classifier of the ensemble is trained with a different balance of the training and evaluation sets, so as to yield slightly different predictions. In particular, all five classifiers are trained with the BERT-Base uncased model, a max sequence length of 64, a batch size of 16 and 15 epochs.
One classifier was trained using 12k tweets for training and 3k for validation; a second classifier used the same training instances repeated twice (24k), with the same validation set. In a third and a fourth configuration, we duplicate the offensive and the non-offensive training instances, respectively. Finally, in a fifth configuration we change the proportion between training and validation sets (10k for training, 5k for validation).

The rationale for this choice is twofold: (i) since we will collect 5 crowd-annotations for each tweet, we want an intuitive and possibly direct comparison between ensemble agreement and annotators' agreement (i.e. five votes per tweet coming from the classifiers and five from crowd-workers); (ii) the dataset in Founta et al. (2018) has been specifically created to encompass several types of offensive language, so we can consider it as general prior knowledge about verbal abuse online before adapting our systems to the three domains of interest.

In the following sections, we denote unanimous agreement with $A^{++}$ (i.e. agreement among 5 annotators or classifiers), mild agreement with $A^{+}$ (i.e. 4 out of 5 annotations agreeing on the same label), and weak agreement with $A^0$ (i.e. 3 of the 5 annotations in agreement and 2 in disagreement). When also focusing on the label, we use the same notation, representing offensive tweets as $O^{++/+/0}$ and non-offensive ones as $N^{++/+/0}$ respectively.

The pre-evaluation through the classifier ensemble resulted in the following agreement distribution: about $92\%$ of the data was classified as $A^{++}$; for about $5\%$, agreement among the classifiers was $A^{+}$; the remaining $3\%$ fell into the $A^0$ class.
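The $A^{++}/A^{+}/A^0$ notation can be computed mechanically from the five votes. A minimal sketch follows; the "O"/"N" vote encoding is our own convention, not the paper's code:

```python
from collections import Counter

def agreement_class(votes):
    """Map five binary votes ("O" / "N") to the paper's agreement notation:
    5-0 -> "++", 4-1 -> "+", 3-2 -> "0", together with the majority label."""
    assert len(votes) == 5
    label, count = Counter(votes).most_common(1)[0]
    return label, {5: "++", 4: "+", 3: "0"}[count]

# Example: four of five raters judge the tweet non-offensive.
print(agreement_class(["N", "N", "O", "N", "N"]))  # ('N', '+')
```

The same function applies to the five classifier votes and to the five crowd votes, which is what makes the two agreement distributions directly comparable.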
# 3.2 Data Annotation with AMT

In order to analyse the relation between automated and manual annotation with respect to agreement and disagreement, we select an equal number of tweets from each agreement class of the ensemble ($A^{++}$, $A^{+}$, $A^0$) to be manually annotated. For each domain and each agreement class we select 1,300 tweets, equally divided between offensive and non-offensive predictions, for a total of 3,900 tweets per domain.

Every tweet is annotated on Amazon Mechanical Turk by 5 native speakers from the US, whom we expect to be familiar with the topics. We follow the same annotation guidelines for all domains, aimed at collecting crowd-workers' judgements on the offensiveness of the messages using the binary labels offensive and not offensive (see the guidelines in Appendix A).

To ensure high-quality annotations, we select a pool of tweets from the three domains of interest and ask three expert linguists to annotate them. The tweets with perfect agreement are used as a gold standard. We then include a gold standard tweet in every HIT (a group of 5 tweets to be annotated). If a crowd-worker fails on the gold tweet, the HIT is discarded. Moreover, after task completion we remove all the annotations done by workers who did not reach a minimum overall accuracy of $70\%$ with respect to the gold standard. As a consequence of this quality control, for some tweets we could not collect five annotations, and they had to be removed from the final dataset. On the other hand, this process was crucial to minimise the possible impact of spam and low-quality annotations on disagreement, which is the focus of our analysis. The total number of tweets annotated using AMT is 10,753, including 3,472 for Covid-19, 3,490 for the US elections and 3,791 for BLM. Some (slightly modified) examples of tweets judged with different levels of agreement by crowd-annotators are reported in Table 1.
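The worker-level part of this quality control can be sketched as follows. The data layout is ours, and collapsing the per-HIT gold check into a single accuracy filter is a simplification of the two-step procedure described above:

```python
def filter_annotations(hits, gold, min_accuracy=0.70):
    """Drop workers whose accuracy on gold tweets is below the threshold.

    ``hits`` maps a worker id to a list of (tweet_id, label) pairs;
    ``gold`` maps a gold tweet_id to the expert label.
    Returns the surviving non-gold annotations per worker.
    """
    kept = {}
    for worker, annotations in hits.items():
        gold_pairs = [(t, l) for t, l in annotations if t in gold]
        correct = sum(1 for t, l in gold_pairs if gold[t] == l)
        # Remove all annotations by workers below the accuracy threshold.
        if gold_pairs and correct / len(gold_pairs) >= min_accuracy:
            kept[worker] = [(t, l) for t, l in annotations if t not in gold]
    return kept
```

Tweets left with fewer than five surviving annotations after this step are then removed from the final dataset.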
# 3.3 Annotators and Ensemble Agreement

If we aggregate the crowd-annotated data by majority vote, the datasets have an average distribution of $31\%$ offensive and $69\%$ non-offensive tweets, while it is $50\%$ each according to the ensemble annotation we used for sampling. This means that our classifiers tend to label more tweets as offensive compared to human annotators, as shown in the confusion matrix in Figure 1. It is interesting to note that, although the tweets to be annotated were selected evenly across the classifiers' agreement classes, the agreement between annotators is not uniformly distributed.

As regards annotators' agreement, for about $43\%$ of the annotated tweets we have full consensus between annotators ($A^{++}$). The vast majority of these tweets were judged unanimously as non-offensive ($34.12\%$ $N^{++}$), and only $8.05\%$ of the data were judged unanimously offensive ($O^{++}$), the least represented type of agreement. For the remaining data, $29.35\%$ has mild agreement ($A^{+}$, 4 out of 5 annotators agreed), with $19.00\%$ $N^{+}$ and $10.35\%$ $O^{+}$, and another $28.28\%$ of the data falls into the class $A^0$ (3 vs. 2 annotators), with $15.56\%$ $N^0$ and $12.92\%$ $O^0$.
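The cross-tabulation shown in Figure 1 amounts to counting (crowd class, ensemble class) pairs over the 10,753 tweets. A minimal sketch, with our own data layout:

```python
from collections import Counter

def cross_tab(crowd_classes, ensemble_classes):
    """Count tweets per (crowd agreement class, ensemble agreement class) pair.

    Both inputs are equal-length lists of strings such as "N++" or "O0".
    """
    return Counter(zip(crowd_classes, ensemble_classes))

counts = cross_tab(["N++", "O0", "N++"], ["N++", "O++", "N+"])
# e.g. counts[("N++", "N++")] == 1
```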
| Crowd ↓ \ Ensemble → | N++ | N+ | N0 | O0 | O+ | O++ | Total |
|---|---|---|---|---|---|---|---|
| N++ | 1434 (13.34%) | 695 (6.46%) | 542 (5.04%) | 485 (4.51%) | 369 (3.43%) | 144 (1.34%) | 3669 (34.12%) |
| N+ | 299 (2.78%) | 390 (3.63%) | 395 (3.67%) | 387 (3.60%) | 373 (3.47%) | 199 (1.85%) | 2043 (19.00%) |
| N0 | 118 (1.10%) | 284 (2.64%) | 325 (3.02%) | 328 (3.05%) | 313 (2.91%) | 305 (2.84%) | 1673 (15.56%) |
| O0 | 40 (0.37%) | 173 (1.61%) | 226 (2.10%) | 262 (2.44%) | 294 (2.73%) | 394 (3.66%) | 1389 (12.92%) |
| O+ | 19 (0.18%) | 101 (0.94%) | 133 (1.24%) | 139 (1.29%) | 212 (1.97%) | 509 (4.73%) | 1113 (10.35%) |
| O++ | 15 (0.14%) | 37 (0.34%) | 68 (0.63%) | 81 (0.75%) | 119 (1.11%) | 546 (5.08%) | 866 (8.05%) |
| Total | 1925 (17.90%) | 1680 (15.62%) | 1689 (15.71%) | 1682 (15.64%) | 1680 (15.62%) | 2097 (19.50%) | 10753 (100.00%) |
Figure 1: Confusion matrix (raw number of tweets and percentage) between classifiers ensemble agreement (columns) and crowd-annotators agreement (rows) on offensive ("O") and non-offensive ("N") labels.

We also compute Pearson's correlation coefficient between the agreement of the ensemble classifiers and that of the annotators. It shows a moderate correlation ($r = 0.51$), suggesting that training an ensemble of classifiers on generic data to pre-screen domain-specific tweets before manual annotation can help identify tweets that are either unambiguous or more challenging. A similar correlation ($r = 0.50$) was obtained with an ensemble of BiLSTM classifiers trained on the same training and development sets as the five BERT-based classifiers, suggesting that the pre-screening approach could also be used with other classifiers.

# 3.4 Qualitative analysis of (dis)agreement

Through a manual analysis of the tweets belonging to the $A^0$ class, we can identify a few phenomena that lead to disagreement in annotation. In many cases,
| Agreement | Example tweets |
|---|---|
| N++ | Stand for something or else fall for anything. #BlackLivesMatter |
| N++ | Hello world! What a great day to be alive #Trump2020 #MAGA |
| N+ | Come on man! Lock'em up!!! #maga |
| N+ | Not the first time. You all misspelled #blacklivesmatter. Speak up! @user |
| N0 | Set fire to Fox News (metaphorically) |
| N0 | @user is outing #BLACK_LIVES_MATTER as a cult! HE IS CORRECT! |
| O0 | #DISGUSTING #Democrats terrorize old folks just before #elections2020 |
| O0 | I love this shit! #BlackLivesMatter |
| O+ | @user You're a bumbling fool #elections2020 |
| O+ | Elections 2020: Red Rapist v. Blue Racist |
| O++ | Y'all trending about kpop stans instead of #BlackLivesMatter big fack you |
| O++ | Crazy idiots. This is batshit bullshit. #elections2020 |
Table 1: Examples of tweets with different degrees of crowd-workers' agreement. The messages have been created starting from real examples by slightly changing their wording, so as to make it impossible to retrieve the original ones on Twitter. $N =$ Not offensive, $O =$ Offensive. $++ / + / 0$ correspond to high, medium and low agreement respectively.

tweets are ambiguous and more context would be needed to fully understand whether the user wanted to offend someone or not. These cases include the presence of deictic expressions or pronouns referring to previous tweets, see for example:

(1) Shoulda thrown this clowns bike off the bridge!
(2) Won't work. Gangs will terrorize the city. Murder at will and maybe they'll shoot the Mayor.

Other cases include generic expressions of anger that are not targeted against a specific person or group, or expressions of negative feelings, see for example:

(3) Amen! Enough of this crap!

Finally, questions, and in particular rhetorical questions, are very frequent in the $A^0$ class and their interpretation seems to represent a challenging task for crowd-workers:

(4) if George Floyd was white would the cop have acted in the same violent, murderous way?
(5) What is it with these kids of leftist politicians?

Overall, disagreement does not seem to stem from poor annotation by some crowd-workers, but rather from genuine differences in the interpretation of the tweets. Additionally, BLM and the US elections are recent events, and annotators may have been biased by their personal opinion on the topic during annotation, an effect that has already been highlighted in Sap et al. (2019, 2020).

# 4 Classification experiments

After collecting information on human agreement on tweets covering three different domains, we aim at assessing the impact of (dis)agreement on classifier behaviour.
To this end, we create several balanced configurations of the datasets, so as to control for the effect of agreement level, label distribution and domain topic. We first split the data of each domain into a training set ($75\%$) and a test set ($25\%$). Then, to control for the effect of training data size, we downsample all sets to the size of the smallest one, so that each agreement class ($A^{++}$, $A^{+}$, $A^0$) is equally represented. In this way, we obtain three sets of training data, one per ambiguity level, containing 900 tweets each. Every set further contains 300 tweets from each domain, half with the offensive label and half with the non-offensive label, so as to also control for the effect of label distribution across domains and agreement levels.

# 4.1 Impact of (dis)agreement in training data

To assess the impact of agreement level in training data, we run a series of experiments comparing two different classifiers: the first relies on BERT directly fine-tuned on domain data, while the second also includes an intermediate fine-tuning step on the entire dataset from Founta et al. (2018), inspired by the supplementary training approach of Phang et al. (2018). BERT is used with the same parameters as the ensemble classifiers, reported in Section 3.1. The domain data used for fine-tuning are built from the training data described above, divided into different agreement levels ($A^{++}$, $A^{+}$, $A^0$, and their combinations).
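Building one such controlled training set can be sketched as below; the domain keys and dict layout are ours, not the paper's code. With the default `per_cell=150`, 150 offensive plus 150 non-offensive tweets per domain yield the 900-tweet set for one agreement level:

```python
import random

def balanced_sample(tweets, per_cell=150, seed=0):
    """Draw an equal number of offensive / non-offensive tweets per domain
    for a single agreement level (3 domains x 2 labels x per_cell tweets).

    ``tweets`` is a list of dicts with "domain" and "label" keys.
    """
    rng = random.Random(seed)
    sample = []
    for domain in ("covid", "elections", "blm"):  # hypothetical domain keys
        for label in ("O", "N"):
            pool = [t for t in tweets
                    if t["domain"] == domain and t["label"] == label]
            sample.extend(rng.sample(pool, per_cell))
    rng.shuffle(sample)
    return sample
```

Running this once per agreement level ($A^{++}$, $A^{+}$, $A^0$) gives the three 900-tweet training sets used in the experiments.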
| Training split | Training size | All domains | Founta + all domains |
|---|---|---|---|
| A++ | 900 | 0.746 | 0.757 |
| A+ | 900 | 0.734 | 0.753 |
| A0 | 900 | 0.639 | 0.683 |
| A++/+ | 1800 | 0.755 | 0.756 |
| A+/0 | 1800 | 0.728 | 0.724 |
| A++/0 | 1800 | 0.723 | 0.730 |
| A++/+/0 | 2700 | 0.745 | 0.752 |

Baseline (training only on Founta et al. data): 0.667 F1.

Table 2: Performance (F1) when training on data with different levels of human agreement (rows), fine-tuned either on domain data or using the dataset from Founta et al. (2018) and domain data.
Results are reported in Table 2. Note that, for training, the tweets in a given partition for all domains are merged, while testing is done on each domain separately. The reported F1 is the average of the three results (results for each domain can be found in the Appendix and are consistent with the ones reported here). We observe that, if we consider only one level of agreement, data with total agreement ($A^{++}$) are the best for prediction, to the point that $A^{++}$ data alone provide better results than using all the data available in the three splits ($A^{++/+/0}$), despite the difference in size (900 vs. 2700 instances). Additionally, the combination of high and mild agreement data ($A^{++/+}$) yields results in line with the best configuration obtained with two fine-tuning steps (0.755 vs. 0.757). This result clearly indicates that, for this kind of task, it is not necessary to collect huge datasets for fine-tuning, since few data from the target domain may suffice if properly selected. Finally, the effect of using low agreement data for training is detrimental, in line with findings reported in past works (Reidsma and op den Akker, 2008; Jamison and Gurevych, 2015). This is evident in two results: using generic data alone, as in our baseline, is better than using low agreement in-domain data (0.667 vs. 0.639), and all configurations where $A^0$ is added to mild and high agreement data perform worse than without $A^0$ (0.734 vs. 0.728 and 0.746 vs. 0.723).

# 4.2 Impact of (dis)agreement in test data

As a next step, we investigate how the classifier's performance varies as a function of annotators' agreement in the test data. To this end, we also divide our test set into subsets according to the same agreement levels ($A^{++}$, $A^{+}$, $A^0$) and calculate separate F1 scores on each of these splits. We run the classifier for 'all domains' described in Section 4.1, i.e. trained on the three domains and tested on one of them.
Results, reported in Table 3, are obtained by averaging the F1 for each domain.

We observe a dramatic drop in performance when agreement decreases in the test set, indicating that ambiguous data are the most challenging to classify. These results highlight the need to control for ambiguity also in the test set when creating offensive language benchmarks (for example in shared tasks), in order to avoid high system performance being due to a lack of challenging examples. The best performance on ambiguous data is obtained when training on unambiguous and mildly ambiguous data ($A^{++/+}$). Interestingly, adding $A^+$ data to $A^{++}$ data leads to the highest increase in performance exactly for $A^0$ test data (from 0.552 to 0.574). This rules out the possibility that a certain level of disagreement in the training set is more effective in classifying the same type of ambiguity in the test set (e.g. training and testing on $A^0$ data), and suggests that high or mild agreement training sets perform better in all cases.
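Scoring each agreement subset separately, as done for Table 3, can be sketched as below. Treating the offensive class as positive in a plain binary F1 is our assumption, and the per-domain averaging is omitted:

```python
def f1_binary(gold, pred, positive="O"):
    """Binary F1 with ``positive`` as the positive class."""
    tp = sum(1 for g, p in zip(gold, pred) if g == p == positive)
    fp = sum(1 for g, p in zip(gold, pred) if g != positive and p == positive)
    fn = sum(1 for g, p in zip(gold, pred) if g == positive and p != positive)
    if tp == 0:
        return 0.0
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def f1_per_agreement(examples, predictions):
    """Compute F1 separately on each agreement subset of the test data.

    ``examples`` is a list of dicts with "label" and "agreement" keys
    (agreement in {"A++", "A+", "A0"}); ``predictions`` is aligned with it.
    """
    scores = {}
    for level in ("A++", "A+", "A0"):
        idx = [i for i, e in enumerate(examples) if e["agreement"] == level]
        scores[level] = f1_binary([examples[i]["label"] for i in idx],
                                  [predictions[i] for i in idx])
    return scores
```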
| Training split | Training size | Tested on | F1 |
|---|---|---|---|
| A++/+ | 1800 | A++ | 0.860 |
| A++/+ | 1800 | A+ | 0.768 |
| A++/+ | 1800 | A0 | 0.574 |
| A++ | 900 | A++ | 0.847 |
| A++ | 900 | A+ | 0.763 |
| A++ | 900 | A0 | 0.552 |
| A0 | 900 | A++ | 0.662 |
| A0 | 900 | A+ | 0.639 |
| A0 | 900 | A0 | 0.567 |
Table 3: Performance on $A^{++/+}$, $A^{++}$ and $A^0$ data, classified with the "all domains" configuration of Table 2.

# 4.3 Impact of (dis)agreement on out-of-domain data

We then test the effect of cross-domain classification according to agreement levels, so as to minimise the impact of possible in-domain overfitting. We repeat the experiments described in the previous section using two domains for training and the third for testing. For example, a classifier was trained using $A^{++}$ data from Covid-19 and the US Presidential campaign, and tested on $A^{++}$ data from BLM. This was repeated for each domain and each agreement level. For conciseness, we report in Table 4 the F1 obtained by averaging the F1 on each domain (results for each domain can be found in the Appendix and are also in this case consistent with the ones reported here). Results confirm that (i) the classifier yields good performance when the training data have high agreement, even in an out-of-domain scenario, and (ii) adding $A^0$ data to the training set has a detrimental effect on performance. Finally, if we compare these results with Table 2, we observe that the effect of overfitting on in-domain data is very limited.
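The leave-one-domain-out protocol of this section can be sketched generically; `train_fn` and `eval_fn` are placeholders for the BERT fine-tuning and F1 evaluation steps, which are not reproduced here:

```python
def cross_domain_eval(data_by_domain, train_fn, eval_fn):
    """Train on all domains but one, test on the held-out one, and
    average the resulting scores (as in Table 4).

    ``train_fn(train_data)`` returns a model; ``eval_fn(model, test_data)``
    returns a score such as F1.
    """
    domains = list(data_by_domain)
    scores = []
    for held_out in domains:
        train = [x for d in domains if d != held_out
                 for x in data_by_domain[d]]
        model = train_fn(train)
        scores.append(eval_fn(model, data_by_domain[held_out]))
    return sum(scores) / len(scores)
```

Any classifier exposing a train and an evaluate step can be plugged in; the protocol itself is independent of the model.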
| Training split | Training size | Out of domain | Founta + out of domain |
|---|---|---|---|
| A++ | 600 | 0.719 | 0.747 |
| A+ | 600 | 0.677 | 0.716 |
| A0 | 600 | 0.567 | 0.658 |
| A++/+ | 1,200 | 0.732 | 0.748 |
| A+/0 | 1,200 | 0.659 | 0.715 |
| A++/0 | 1,200 | 0.714 | 0.722 |
| A++/+/0 | 1,800 | 0.722 | 0.737 |
Table 4: Performance (F1) in the out-of-domain setting. Results are the average F1 obtained by the classifier on each domain when trained on the other two.

# 4.4 (Dis)agreement versus Randomness

An additional question we want to address is whether low agreement data provide some useful information for training offensive language detection systems, or whether the effect of such data is no more than that of random annotation.

We therefore replicate the experiments of Table 2, replacing the labels of $A^0$ data with random ones. Since we want to keep the same controlled distribution, we assign the same probability to the $N$ and $O$ labels. Results are reported in Table 5. As can be seen, with $A^{0_{rand}}$ data the results worsen compared to $A^0$, indicating that the labels in $A^0$ are not assigned by chance and can contain useful, albeit challenging, signal for the classifier. Consistently with previous results, the more gold and high agreement data are added to the training set, the smaller the effect of $A^{0_{rand}}$. These results also show that coin-flipping, which has been suggested in past works to resolve hard disagreement cases (Beigman Klebanov and Beigman, 2009), may not be ideal because it leads to a loss of information.

# 5 Experiments on the Offenseval dataset

Our experiments show that when training and test data include tweets with different agreement levels, classification of offensive language is still a challenging task. Indeed, our classification results reported in Tables 2 and 4 suggest that on this kind
| Training split | Training size | All domains | Founta + all domains |
|---|---|---|---|
| A0 | 900 | 0.639 | 0.683 |
| A0rand | 900 | 0.505 | 0.576 |
| A+/0 | 1800 | 0.728 | 0.724 |
| A+/0rand | 1800 | 0.657 | 0.689 |
| A++/0 | 1800 | 0.723 | 0.730 |
| A++/0rand | 1800 | 0.684 | 0.703 |
| A++/+/0 | 2700 | 0.745 | 0.752 |
| A++/+/0rand | 2700 | 0.719 | 0.730 |
Table 5: Performance (F1) when training on data with different levels of human agreement (rows) and replacing $A^0$ labels with random ones ($A^{0_{rand}}$). The first line of each group is repeated from Table 2 for comparison.

of balanced data, F1 with Transformer-based models is $\approx 0.75$. However, system results reported for the last Offenseval shared task on offensive language identification in English tweets (Zampieri et al., 2020) show that the majority of submissions achieved an F1 score $>0.90$ on the binary classification task.

We hypothesize that this delta in performance may depend on a limited presence of low agreement instances in the Offenseval dataset used for evaluation (Zampieri et al., 2019). We therefore randomly sample 1,173 tweets from the task test data ($30\%$ of the test set) and annotate them with Amazon Mechanical Turk using the same process described in the previous sections (5 annotations per tweet). We slightly modify our annotation guidelines by including the cases of profanities, which were explicitly considered offensive in the Offenseval guidelines.

Results, reported in Table 6 (left column), show that the outcome of the annotation is clear-cut: more than $90\%$ of the tweets in the sample have either a high ($A^{+}$) or very high ($A^{++}$) agreement level. Furthermore, only $6.4\%$ of the annotations (75) have a different label from the original Offenseval dataset, $50\%$ of which are accounted for by the $A^0$ class alone. Our annotation is thus very consistent with the official one, and the distribution is heavily skewed towards high agreement levels, as initially hypothesized.

To understand whether this skewness can be generalised, i.e. whether this sample distribution might be representative of a population distribution, we also estimate the distribution of agreement levels in the initial pool of data (around 400k tweets) we collected using US Election, BLM and Covid-related hashtags (Section 3). The estimated distribution for the classes $A^{+}$, $A^{++}$ and $A^0$ is reported in Table 6 (right column).

| Agreement | Offenseval | 400k Tweets |
|---|---|---|
| A++ | 75.62% (887) | 68.52% (274,514) |
| A+ | 14.75% (173) | 19.08% (76,457) |
| A0 | 9.63% (113) | 12.40% (49,694) |
| N++ | 64.36% (755) | 65.12% (260,925) |
| N+ | 5.80% (68) | 15.50% (62,085) |
| N0 | 4.60% (54) | 7.94% (31,813) |
| O0 | 5.03% (59) | 4.46% (17,882) |
| O+ | 8.95% (105) | 3.59% (14,372) |
| O++ | 11.25% (132) | 3.39% (13,589) |

Table 6: Comparison of agreement distribution in the Offenseval sample and its projection on 400k tweets.

A comparison between the two columns shows that the disagreement distribution in the Offenseval sample is in line with the distribution in the data we initially collected before balancing, providing initial evidence that this distribution, with few disagreement cases, might be a 'natural' one for online conversations on Twitter.

Differences emerge when considering the ratio of offensive tweets. In the Offenseval data, the percentage of offensive tweets is more than double that in our data ($25.23\%$ vs. $11.44\%$), because the authors adopted several strategies to over-represent offensive tweets (Zampieri et al., 2019).

As a final analysis, we collect the runs submitted to Offenseval and compute the F1 score of each of these systems over the three levels of agreement separately. Overall, we consider all runs that obtained $F1 > 0.75$ in the task, i.e. 81 runs out of 85. Results are reported in Table 7 as the average of the F1 obtained by the different systems. This last evaluation confirms our previous findings, since F1 increases as the agreement level in the test data increases. This finding, together with the distribution of agreement levels, shows that the high performance obtained by the best systems in the shared task is most probably influenced by the prevalence of tweets with total agreement.
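The per-subset averaging over the submitted runs, reported in Table 7, can be sketched with the standard library; the data layout is ours:

```python
from statistics import mean, stdev

def summarize_runs(run_scores):
    """Average F1 and standard deviation across runs for each test subset.

    ``run_scores`` maps a subset name (e.g. "A0") to the list of F1 scores
    obtained by the individual runs on that subset.
    """
    return {subset: (round(mean(scores), 3), round(stdev(scores), 3))
            for subset, scores in run_scores.items()}
```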
| Offenseval 2020 test subset | F1 | StDev |
|---|---|---|
| A++ (887 tweets) | 0.915 | ± 0.055 |
| A+ (173 tweets) | 0.817 | ± 0.075 |
| A0 (113 tweets) | 0.656 | ± 0.067 |
Table 7: Average F1 obtained by the best systems at Offenseval 2020, ± standard deviation.

# 6 Discussion and Conclusions

We have presented a data annotation process and a thorough set of experiments assessing the effect of (dis)agreement in training and test data for offensive language detection. We showed that an ensemble of classifiers can be employed to preliminarily select potentially unambiguous or challenging tweets. By analysing these tweets, we found that they represent real cases of difficult decisions, deriving from interesting phenomena, and are usually not due to low-quality annotations. We also found that such challenging data are minimally present in a popular benchmark dataset, which partly accounts for the high system performance reported on it. We believe that such hard cases should be better represented in the benchmark datasets used for the evaluation of hate speech detection systems, especially in the test sets, so as to develop more robust systems and avoid overestimating classification performance. This goal can be achieved by complementing the common practice of oversampling the minority offensive class with the oversampling of minority agreement classes.

From a multilingual perspective, we also note that at Offenseval 2020 the best performing systems on Arabic scored 0.90 F1 with a training set of 8k tweets, 0.85 on Greek with less than 9k tweets, and 0.82 on Turkish despite having more than 32k examples for training. This shows that the amount of training data alone is not sufficient to ensure good classification quality, and that also in this case a study on disagreement levels could partly explain these differences (further corroborated by the fact that the lowest overall inter-annotator agreement score was reported for Turkish).

As future work, we plan to develop better approaches to classify (dis)agreement, in order to ease oversampling of low agreement classes.
Preliminary experiments (not reported in this paper) show that the task is not trivial, since supervised learning with LMs such as BERT does not work well when trying to discriminate between ambiguous and non-ambiguous tweets. Indeed, BERT-based classification performed poorly both in the binary task (ambiguous vs. not ambiguous) and in the three-way one (offensive vs. not offensive vs. ambiguous). This suggests that ambiguity is a complex phenomenon involving lexical, semantic and pragmatic aspects, which are difficult to capture with a language model.

This corpus, together with the experiments presented in this paper, will hopefully shed light on the important role played by annotators' disagreement, something that we need to understand better and to see as a novel perspective on data. Indeed, if we want to include diversity in the process of data creation and reduce both the exclusion of minorities' voices and demographic misrepresentation (Hovy and Spruit, 2016), disagreement should be seen as a signal and not as noise.

# 7 Ethics Statement

The tweets in this dataset have been annotated by crowd-workers using Amazon Mechanical Turk. All requirements introduced by the platform for tasks containing adult content were implemented, for example adding a warning in the task title. We further avoided putting any constraints on the minimum length of sessions or on the minimum amount of data to be labeled by each crowd-worker, so that they were not forced into prolonged exposure to offensive content. Indeed, we observed that crowd-workers tended to annotate in short sessions, on average 20 minutes, which suggests that annotating was not their main occupation. Crowd-workers were compensated with 6 US$ per hour on average.
Although we put in place strict quality control during data collection, we also compensated completed HITs when the annotations were ultimately discarded because they did not reach the minimum accuracy threshold of $70\%$ with respect to the gold standard. We also engaged in email conversations with crowd-workers when they were blocked because of mismatches with the gold standard tweets. In several cases, we clarified the issue with them and subsequently unlocked the task.

Concerning the annotated dataset, we support scientific reproducibility and would like to encourage other researchers to build upon our findings. However, we are aware that ethical issues may arise related to the complexity and delicacy of judgments of offensiveness in case they are made public. Therefore, in compliance with Twitter policy, we want to make sure that our dataset will be reused for non-commercial research only, avoiding any discriminatory purpose, event monitoring, profiling or targeting of individuals. The dataset, in the form of tweet IDs with accompanying annotation, can be obtained upon request following the process described at this link: https://github.com/dhfbk/annotators-agreement-dataset. Requesters will be asked to prove their compliance with Twitter policy concerning user protection and non-commercial purposes, as well as to declare that they will not use our dataset to collect any sensitive category of personal information. Also, releasing the tweet IDs instead of the text will enforce users' right to be forgotten, since it will make it impossible to retrieve tweets if their authors delete them or close their account. Although we are aware of the risks related to developing and releasing hate speech datasets, this research was carried out with the goal of improving conversational health on social media, and even of exposing the limitations of binary offensive language detection. We believe that our findings confirm the context- and perspective-dependent offensiveness of a message, and we therefore avoid binary labels, stressing the importance of taking multiple points of view (in our case, five raters) into account. Following the same principle of avoiding profiling, crowd-workers' IDs are not included in the dataset, so that it will not be possible to infer annotator-based preferences or biases.

# Acknowledgments

Part of this work has been funded by the KID ACTIONS REC-AG project (n. 101005518) on "Kick-off preventIng and responDing to children and AdolesCenT cyberbullyIng through innovative mOnitoring and educatioNal technologieS", https://www.kidactions.eu/.

# References

Sohail Akhtar, Valerio Basile, and Viviana Patti. 2020. Modeling annotator perspective and polarized opinions to improve hate speech detection. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 8(1):151-154.
Lora Aroyo, Lucas Dixon, Nithum Thain, Olivia Redfield, and Rachel Rosen. 2019. Crowdsourcing subjective tasks: The case study of understanding toxicity in online discussions. In Companion Proceedings of The 2019 World Wide Web Conference, WWW '19, pages 1100-1105, New York, NY, USA. Association for Computing Machinery.
Lora Aroyo and Chris Welty. 2015. Truth is a lie: Crowd truth and the seven myths of human annotation. AI Magazine, 36(1):15-24.
Ron Artstein and Massimo Poesio. 2008. Survey article: Inter-coder agreement for computational linguistics. Computational Linguistics, 34(4):555-596.

Valerio Basile. 2020. It's the end of the gold standard as we know it. On the impact of pre-aggregation on the evaluation of highly subjective tasks. In Proceedings of the AIxIA 2020 Discussion Papers Workshop co-located with the 19th International Conference of the Italian Association for Artificial Intelligence (AIxIA 2020), November 27th, 2020, volume 2776 of CEUR Workshop Proceedings, pages 31-40. CEUR-WS.org.
+Beata Beigman Klebanov and Eyal Beigman. 2009. From annotator agreement to noise models. Computational Linguistics, 35(4):495-503. +Beata Beigman Klebanov and Eyal Beigman. 2014. Difficult cases: From data to learning, and back. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 390-396, Baltimore, Maryland. Association for Computational Linguistics. +Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. In Proceedings of the 11th International AAAI Conference on Web and Social Media, ICWSM '17. +Barbara Di Eugenio and Michael Glass. 2004. Squibs and discussions: The kappa statistic: A second look. Computational Linguistics, 30(1):95-101. +Anca Dumitrache, Lora Aroyo, and Chris Welty. 2019. A crowdsourced frame disambiguation corpus with ambiguity. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2164-2170, Minneapolis, Minnesota. Association for Computational Linguistics. +Antigoni-Maria Founta, Constantinos Djouvas, Despoina Chatzakou, Ilias Leontiadis, Jeremy Blackburn, Gianluca Stringhini, Athena Vakali, Michael Sirivianos, and Nicolas Kourtellis. 2018. Large scale crowdsourcing and characterization of twitter abusive behavior. In 11th International Conference on Web and Social Media, ICWSM 2018. +Kevin Gimpel, Nathan Schneider, Brendan O'Connor, Dipanjan Das, Daniel Mills, Jacob Eisenstein, Michael Heilman, Dani Yogatama, Jeffrey Flanigan, and Noah A Smith. 2010. Part-of-speech tagging for twitter: Annotation, features, and experiments. Technical report, Carnegie-Mellon University, School of Computer Science. +Mitchell L. Gordon, Kaitlyn Zhou, Kayur Patel, Tatsunori Hashimoto, and Michael S. Bernstein. 2021. 
The disagreement deconvolution: Bringing machine learning performance metrics in line with reality. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, CHI '21, New York, NY, USA. Association for Computing Machinery. + +Dirk Hovy, Taylor Berg-Kirkpatrick, Ashish Vaswani, and Eduard Hovy. 2013. Learning whom to trust with MACE. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1120-1130, Atlanta, Georgia. Association for Computational Linguistics. +Dirk Hovy and Shannon L. Spruit. 2016. The social impact of natural language processing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 591-598, Berlin, Germany. Association for Computational Linguistics. +Pei-Yun Hsueh, Prem Melville, and Vikas Sindhwani. 2009. Data quality from crowdsourcing: A study of annotation selection criteria. In Proceedings of the NAACL HLT 2009 Workshop on Active Learning for Natural Language Processing, pages 27-35, Boulder, Colorado. Association for Computational Linguistics. +Emily Jamison and Iryna Gurevych. 2015. Noise or additional information? leveraging crowdsourced annotation item agreement for natural language tasks. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 291-297, Lisbon, Portugal. Association for Computational Linguistics. +Manfred Klenner, Anne Gohring, and Michael Amsler. 2020. Harmonization sometimes harms. In Proceedings of the 5th Swiss Text Analytics Conference and the 16th Conference on Natural Language Processing, SwissText/KONVENS 2020, Zurich, Switzerland, June 23-25, 2020 [online only], volume 2624 of CEUR Workshop Proceedings. CEUR-WS.org. +Rebecca J. Passonneau. 2004. Computing reliability for coreference annotation. 
In Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC'04), Lisbon, Portugal. European Language Resources Association (ELRA).

Joshua Peterson, Ruairidh Battleday, Thomas Griffiths, and Olga Russakovsky. 2019. Human uncertainty makes classification more robust. In 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pages 9616-9625.

Jason Phang, Thibault Févry, and Samuel R. Bowman. 2018. Sentence encoders on STILTs: Supplementary training on intermediate labeled-data tasks. arXiv preprint arXiv:1811.01088.

Barbara Plank, Dirk Hovy, and Anders Søgaard. 2014. Learning part-of-speech taggers with inter-annotator agreement loss. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 742-751, Gothenburg, Sweden. Association for Computational Linguistics.

Ines Rehbein and Josef Ruppenhofer. 2011. Evaluating the impact of coder errors on active learning. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 43-51, Portland, Oregon, USA. Association for Computational Linguistics.

Dennis Reidsma and Rieks op den Akker. 2008. Exploiting 'subjective' annotations. In Coling 2008: Proceedings of the Workshop on Human Judgements in Computational Linguistics, pages 8-16, Manchester, UK. Coling 2008 Organizing Committee.

Filipe Rodrigues and Francisco C. Pereira. 2018. Deep learning from crowds. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18), the 30th Innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 1611-1618. AAAI Press.

Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, and Noah A. Smith. 2019. The risk of racial bias in hate speech detection. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1668-1678, Florence, Italy. Association for Computational Linguistics.

Maarten Sap, Saadia Gabriel, Lianhui Qin, Dan Jurafsky, Noah A. Smith, and Yejin Choi. 2020. Social bias frames: Reasoning about social and power implications of language. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 5477-5490. Association for Computational Linguistics.

Victor S. Sheng, Foster Provost, and Panagiotis G. Ipeirotis. 2008. Get another label? Improving data quality and data mining using multiple, noisy labelers. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '08, pages 614-622, New York, NY, USA. Association for Computing Machinery.

Edwin Simpson, Jonas Pfeiffer, and Iryna Gurevych. 2020. Low resource sequence tagging with weak labels. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05):8862-8869.

Fei-Yue Wang. 2007. Toward a paradigm shift in social computing: The ACP approach. IEEE Intelligent Systems, 22(5):65-67.

Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Sara Rosenthal, Noura Farra, and Ritesh Kumar. 2019. Predicting the type and target of offensive posts in social media. In Proceedings of NAACL.

Marcos Zampieri, Preslav Nakov, Sara Rosenthal, Pepa Atanasova, Georgi Karadzhov, Hamdy Mubarak, Leon Derczynski, Zeses Pitenis, and Cagri Koltekin. 2020. SemEval-2020 Task 12: Multilingual offensive language identification in social media (OffensEval 2020). In Proceedings of SemEval.

# A Annotation Guidelines for AMT

This section contains the instructions provided to annotators on Amazon Mechanical Turk. The first part changes according to the domain:

Covid-19: The tweets in this task have been collected during the pandemic. Would you find the content of the messages offensive?
Try to judge the offensiveness of the tweets independently from your opinion but solely based on the abusive content that you may find.

US Presidential campaign: The tweets in this task have been collected during the last US Presidential campaign. Would you find the content of the messages offensive? Try to judge the aggressiveness of the tweets independently from your political orientation but solely based on the abusive content that you may find.

Black Lives Matter: These tweets are related to the Black Lives Matter protests. Would you find the content of the messages offensive? Try to judge the offensiveness of the tweets independently from your opinion but solely based on the abusive content that you may find.

The second part of the task description, instead, is the same for all the domains, containing a definition of what is offensive and informing the workers that there is a quality check on the answers:

Offensive: Profanity, strongly impolite, rude, violent or vulgar language expressed with angry, fighting or hurtful words in order to insult or debase a targeted individual or group. This language can be derogatory on the basis of attributes such as race, religion, ethnic origin, sexual orientation, disability, or gender. Also sarcastic or humorous expressions, if they are meant to offend or hurt one or more persons, are included in this category.

Normal: tweets that do not fall in the previous category.

Quality Check: the HIT may contain a gold-standard sentence, manually annotated by three different researchers whose outcome is in agreement. If that sentence is wrongly annotated by a worker, the HIT is automatically rejected.

Asking annotators to label the tweets independently of their views, opinions, or political orientation was inspired by recent work showing that making annotators' possible biases explicit contributes to reducing such bias (Sap et al., 2019).
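The quality-check rule described above amounts to a simple automatic review of each HIT. The following is an illustrative sketch only (the function name and data layout are ours, not the authors' actual AMT setup):

```python
def review_hit(worker_answers, gold_answers):
    """Approve a HIT unless a gold-standard item (pre-annotated by
    three researchers in agreement) was answered differently.

    worker_answers / gold_answers: dicts mapping a tweet id to
    "offensive" or "normal".
    """
    for tweet_id, gold_label in gold_answers.items():
        # any mismatch on a gold item triggers automatic rejection
        if worker_answers.get(tweet_id) != gold_label:
            return "rejected"
    return "approved"
```

For example, `review_hit({"t1": "offensive", "t2": "normal"}, {"t2": "normal"})` approves the HIT, while changing the answer on `t2` would reject it.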
| Training | Training Size | All domains: BLM | All domains: Covid | All domains: Election | Founta + all domains: BLM | Founta + all domains: Covid | Founta + all domains: Election |
|---|---|---|---|---|---|---|---|
| A++ | 900 | 0.756 | 0.752 | 0.730 | 0.768 | 0.752 | 0.752 |
| A+ | 900 | 0.745 | 0.724 | 0.734 | 0.774 | 0.736 | 0.748 |
| A0 | 900 | 0.647 | 0.644 | 0.626 | 0.689 | 0.652 | 0.707 |
| A++/+ | 1800 | 0.776 | 0.756 | 0.732 | 0.779 | 0.738 | 0.750 |
| A+/0 | 1800 | 0.738 | 0.738 | 0.707 | 0.744 | 0.698 | 0.729 |
| A++/0 | 1800 | 0.732 | 0.733 | 0.704 | 0.746 | 0.723 | 0.721 |
| A++/+/0 | 2700 | 0.758 | 0.736 | 0.742 | 0.766 | 0.748 | 0.742 |
Table 8: Test results on single domains using a model trained on all domains.
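The training splits in Tables 8 and 9 are named after annotator-agreement levels. Assuming, as the naming suggests, that A++ denotes unanimous agreement among the five raters, A+ a 4-out-of-5 majority, and A0 a bare 3-out-of-5 majority, bucketing a tweet's five binary judgments can be sketched as follows (an illustrative helper of ours, not the authors' code):

```python
def agreement_level(labels):
    """Map five binary offensiveness judgments to an agreement class.

    Assumed mapping: A++ = 5/5 agreement, A+ = 4/5, A0 = 3/5.
    """
    assert len(labels) == 5
    # size of the majority vote, regardless of which label won
    majority = max(sum(labels), 5 - sum(labels))
    return {5: "A++", 4: "A+", 3: "A0"}[majority]
```

For instance, `[1, 1, 1, 1, 0]` (four raters marking the tweet offensive) would fall into the A+ bucket.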
| Training | Training Size | Out of domain: BLM | Out of domain: Covid | Out of domain: Election | Founta + out of domain: BLM | Founta + out of domain: Covid | Founta + out of domain: Election |
|---|---|---|---|---|---|---|---|
| A++ | 600 | 0.699 | 0.734 | 0.723 | 0.760 | 0.736 | 0.746 |
| A+ | 600 | 0.681 | 0.720 | 0.631 | 0.718 | 0.706 | 0.725 |
| A0 | 600 | 0.557 | 0.603 | 0.542 | 0.674 | 0.629 | 0.672 |
| A++/+ | 1800 | 0.696 | 0.758 | 0.742 | 0.740 | 0.771 | 0.734 |
| A+/0 | 1800 | 0.641 | 0.686 | 0.649 | 0.733 | 0.696 | 0.716 |
| A++/0 | 1800 | 0.695 | 0.726 | 0.720 | 0.737 | 0.706 | 0.722 |
| A++/+/0 | 2700 | 0.737 | 0.736 | 0.692 | 0.734 | 0.756 | 0.720 |
Table 9: Results in the out-of-domain setting, testing the classifier on each domain when trained on the other two.

# B Impact of (dis)agreement on classification - results in detail

Table 8 displays the domain-specific results for the analysis shown in Section 4.1 of the main document, where for the sake of brevity we reported only an average over the three domains. The table confirms that, also on single domains, training data with a higher level of agreement improve predictions, while training data with a low level of agreement are detrimental. Classification took about 2 minutes on a Titan X for the runs using only domain-specific data. Adding the intermediate fine-tuning on data from Founta et al. (2018) increases the time to 1.5 hours.

# C Impact of (dis)agreement on out-of-domain data - results in detail

Similarly, Table 9 displays the out-of-domain results for the analysis shown in Section 4.3 of the main document, where we reported only an average over the three domains. The results are consistent with the average scores reported in the main document, i.e., training data with high agreement improve prediction, while training data with low agreement are detrimental. Classification took about the same time as the runs in the single-domain configuration.

# D Twitter data collection

Through its application programming interface (API), Twitter provides access to publicly available messages upon specific request. For each of the domains analysed, a set of hashtags and keywords was identified that unequivocally characterizes the domain and is collectively used. During a specific period of observation, all the tweets containing at least one item of this hashtag/keyword seed list were retrieved in real time (using "filter" as the query). The most relevant entries from the Covid-19 seed list are: covid-19, coronavirus, ncov, #Wuhan, covid19, sarscov2 and covid.
Data were collected in the time span between 25 January and 09 November 2020. The most relevant entries from the BLM seed list are: george floyd, #blm, black lives matter. Tweets were collected between 24 May 2020 and 16 June 2020. The most relevant entries from the US Elections seed list are: #maga, #elections2020, Trump, Biden, Harris, Pence. The tweets were collected between 30 September 2020 and 04 November 2020.

For each domain, a large bulk of data was collected in real time over the specific time span. From these, about 400,000 tweets were randomly selected and evaluated with the ensemble method described in Section 3 of the main paper.
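The keyword-based retrieval can be approximated offline by a simple membership test over the seed list. The sketch below uses the Covid-19 seed list quoted above; the streaming/API plumbing is omitted, and the helper name is ours:

```python
COVID_SEED = ["covid-19", "coronavirus", "ncov", "#wuhan",
              "covid19", "sarscov2", "covid"]

def matches_domain(tweet_text, seed=COVID_SEED):
    """Keep a tweet if it contains at least one seed hashtag/keyword.

    Case-insensitive substring match, a rough stand-in for
    Twitter's real-time "filter" query.
    """
    text = tweet_text.lower()
    return any(term in text for term in seed)
```

A collected stream would then be reduced to the domain-relevant subset with `[t for t in tweets if matches_domain(t)]` before sampling.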
# A Label-Aware BERT Attention Network for Zero-Shot Multi-Intent Detection in Spoken Language Understanding

Ting-Wei Wu, Ruolin Su, Biing-Hwang Juang

Department of Electrical and Computer Engineering

Georgia Institute of Technology

{waynewu, ruolinsu}@gatech.edu, juang@ece.gatech.edu

# Abstract

With the early success of query-answer assistants such as Alexa and Siri, research attempts to expand system capabilities for handling service automation are now abundant. However, preliminary systems quickly revealed the inadequacy of relying on simple classification techniques to accomplish the automation task effectively. The main challenge is that dialogue often involves complex user intents (or purposes) which are multi-pronged, subject to spontaneous change, and difficult to track. Furthermore, public datasets have not considered these complications, and general semantic annotations are lacking, which may result in the zero-shot problem. Motivated by the above, we propose a Label-Aware BERT Attention Network (LABAN) for zero-shot multi-intent detection. We first encode input utterances with BERT and construct a label embedding space by considering the embedded semantics of intent labels. An input utterance is then classified based on its projection weights onto each intent embedding in this space.
We show that it successfully extends to the few/zero-shot setting, where part of the intent labels are unseen in the training data, by also taking into account the semantics of these unseen intent labels. Experimental results show that our approach is capable of detecting many unseen intent labels correctly. It also achieves state-of-the-art performance on five multi-intent datasets in the normal setting.

# 1 Introduction

In spoken language understanding (SLU) for task-oriented dialog systems, each utterance is often interpreted as a kind of action performed by the speaker, which we call a speech or dialog act (Abbeduto, 1983). These acts may commit speakers to some course of action, like asking or acknowledging, along with a series of distinctive semantic notions involved in a task. Usually the system forms semantic frames by identifying intents and slots to express dialog acts. For instance, given a sample utterance, "Are there any accidents on my route to work at 10?", the intent detection task will first identify the intents, i.e., 'Get Info Traffic' and 'Get Location Work', and then the slot-filling task will predict a slot such as (time: 10). In this setting, an 'intent label' for an utterance is defined as a purpose or goal that clearly states the user's act.

Dominant SLU systems have adopted several techniques to predict single intents by treating the problem as multi-class classification (Gao et al., 2018; Goo et al., 2018; Qin et al., 2019). However, in real-world scenarios, many utterances have multiple intents (Li et al., 2018b; Rastogi et al., 2019), like the example above. Multi-intent SLU often requires more sophisticated reasoning over given utterances to disambiguate the different intent natures. Gangadharaiah and Narayanaswamy (2019) first explored the joint multi-intent and slot-filling task by treating multiple intents as a single context vector, which does not scale to a large number of intents. Qin et al.
(2020) further proposed a state-of-the-art model that considers each intent-slot interaction via adaptive graph attention. However, these approaches cannot successfully handle more complex multi-intent scenarios where sentences have no explicit conjunctions.

The second challenge in SLU intent detection is intent fluidity variation, by which we refer to the degree of naturalness as a dialogue progresses. Less stylized conversations usually contain a less bounded set of intents, which may change with the dialog context/states. Thus, some utterances' intents may not be seen during training, and this problem worsens in the multi-intent scenario (Xia et al., 2020). Moreover, there is no rigorous definition of an intent annotation format or of how many intents should be defined. Therefore, conventional models trained on one dataset with a fixed set of intent labels may fail to detect a new in-domain intent. We refer to this as the zero-shot problem. Larson et al. (2019) suggest a two-stage process that first classifies whether a query is in-scope and then assigns intents. However, it cannot easily scale to unseen intents in the multi-intent scenario.

To tackle the above two challenges, we find that leveraging the embedded semantics of intent labels is useful. In conventional intent classification, systems usually classify an utterance to a label represented by an indexed ID like 0 (i.e., one-hot encoding). However, representing intents with indexed IDs fails to consider the embedded semantics of the labels. For instance, we can use the words 'get' and 'direction' in an intent label 'get direction' to help identify semantically equivalent words in an utterance, e.g., I 'want direction' to SF. For a given set of intent labels within one domain, we can compare the semantic similarity between words in an utterance and words in these intents.
Similarly, in the zero-shot setting, even if some intents are not visible during training, we can still compare the word semantics of these intents with a new utterance.

In this paper, we propose our new framework, the Label-Aware BERT Attention Network (LABAN), shown in Figure 1. We first introduce BERT to capture the multi-intent nature of utterances that have no explicit conjunctions. Then, instead of treating intent labels only as indexed IDs, we use the words in each intent label in the training data to construct a label embedding space. After separately encoding an utterance and all intents in a given training set, a label-aware layer generates scores of how likely the utterance belongs to each intent. To accommodate the zero-shot case, we can additionally introduce unseen intents' embeddings to jointly construct the embedding space. In contrast with prior works, whose predictability is limited to seen intents, our model relaxes this constraint by considering the semantics in intent labels to deal with new unseen labels. The code and resources are released at https://github.com/waynewu6250/LABAN. The paper has the following contributions:

1. We present the first use of BERT in the multi-intent SLU scenario with a simple but powerful label-aware approach.
2. We demonstrate LABAN's effectiveness in dealing with unseen multiple intents and in quickly adapting to the intent detection task by training with few data for unseen intents.
3. We evaluate LABAN's performance on five extended and complex multi-intent datasets, showing significant improvement over previous methods and baselines by considering the contextualized information from BERT and label semantics.

# 2 Related Work

Multi-intent Detection. Intent detection aims to classify a given utterance with its intents from user inputs.
Different approaches, such as convolutional LSTMs and capsule networks, have been proposed to solve the problem (Qian, 2017; Liu et al., 2017; Xia et al., 2018). Since intents are highly associated with slot-filling, many joint models (Goo et al., 2018; Li et al., 2018a; Qin et al., 2019; E et al., 2019; Liu et al., 2019b) utilize intent information, such as gradients or cross-impact networks, to further reinforce slot-filling prediction. However, these methods do not consider multiple-intent cases. Rychalska et al. (2018) first adopted hierarchical structures to identify multiple user intents. Gangadharaiah and Narayanaswamy (2019) and Qin et al. (2020) further exploited interactive relations between intents and slots. Wu et al. (2021) leveraged the dialog context to better harness the joint tasks. Our model follows the paradigm of these models and focuses on more complex cases: 1) multiple intents no longer appear in separate parts of the sentence, where our introduction of BERT is beneficial, and 2) some testing intents are not available during training.

Zero-shot Learning. Zero-shot learning (ZSL) aims to recognize objects whose instances may not be seen during training (Lampert, 2014). Early works focused on computer vision (Lampert, 2014; Al-Halah et al., 2016; Norouzi et al., 2014). They adopted a two-stage approach that first identifies an object's attributes and then estimates class posteriors based on similarity, which often suffers from domain shift between intermediate and target tasks. Recent advances in ZSL directly learn a mapping between feature and semantic spaces (Palatucci et al., 2009; Akata et al., 2016; Frome et al., 2013) or build a common intermediate space (Zhang and Saligrama, 2015; Xian et al., 2017). Similar treatment can be applied in natural language. Chen et al. (2016) proposed CDSSM to consider the cosine similarity of deep semantics from utterances and intents. Xia et al. (2018) and Liu et al.
(2019a) extended ZSL to user intent detection with capsule neural networks. Si et al. (2021) proposed disentangled intent representations for multi-task training. We follow these works and extend them to multi-intent detection with intent semantics and pretrained models.

![](images/bd0740cc57cb21bbf1d58089c1c5ad6140aae90130b28a2d1cf13e512f5737c6.jpg)
Figure 1: This figure shows the overall LABAN framework. (a) During the training phase, two BERT encoders encode the utterance and all seen intent labels. The utterance embedding is then projected onto a constructed semantic embedding space $\mathcal{T}$ with the projection weights as scores. (b) During the testing phase, new unseen intents are also encoded and participate in constructing $\mathcal{T}'$ to generate scores for a new utterance.

# 3 Problem Formulation

In this section, we formally state the multi-intent detection problem in the normal and zero-shot cases.

Multi-Intent Detection. Given a labeled training dataset where each sample has the format $(x,y)$, $x$ is an utterance and $y = (y_{1},\dots,y_{K})\in \{0,1\}^{K}$ is a set of multiple binary intent labels. Each $y_{i}$ corresponds to an intent in a set $Y^s$ of $K$ seen intents. We aim to classify an utterance $x_{\text{seen}}$ into the seen intent classes $Y^s$.

Zero-shot Multi-Intent Detection. Given a labeled training dataset $(x,y)$ where $y\in Y^{s}$, at test time we aim to classify an utterance $x_{\text{unseen}}$ with its correct intent categories $y_{\text{unseen}} = (y_1,\dots,y_{K + L})\in \{0,1\}^{K + L}$ from the seen and unseen intent classes $Y = Y^{s}\cup Y^{u}$. $Y^{u}$ is a set of $L$ unseen intents, which is given along with $Y^{s}$ as domain ontology during testing, but is not visible in training.

# 4 Approach

# 4.1 Utterance encoder

BERT is a multi-layer transformer-based encoder containing multi-head self-attention layers (Devlin
Models fine-tuned on BERT have achieved several benchmark results in many natural language tasks (Sun et al., 2020). Therefore, we first adopt one BERT $BERT_{u}$ to encode an input utterance $x = (w_{1},\dots,w_{T_{u}})$ . Here, we will pad it up to a max sequence length $T_{u}$ . + +$$ +h ^ {u} = B E R T _ {u} (x) \tag {1} +$$ + +where $h^u \in \mathbb{R}^{T_u \times H}$ is the token-level representations of $x$ and $H$ is the hidden size of BERT. Then, we adopt two methods to further encode them into a sentence embedding $r^u \in \mathbb{R}^H$ . First, we could take the hidden state $h_1^u$ from the first time step of [CLS] as $r^u = h_1^u$ (BERT-finetune). Or to better consider the individual word importance to the overall sentence embedding, we follow the work in Lin et al. (2017) to use a self-attentive network. + +$$ +\bar {h} _ {t} ^ {u} = W h _ {t} ^ {u} + b _ {w} \tag {2} +$$ + +$$ +\alpha_ {t} = \frac {e ^ {\bar {h} _ {t} ^ {u T} u _ {w}}}{\sum_ {t ^ {\prime}} e ^ {\bar {h} _ {t ^ {\prime}} ^ {u T} u _ {w}}} \tag {3} +$$ + +$$ +r ^ {u} = \sum_ {t} \alpha_ {t} h _ {t} ^ {u} \tag {4} +$$ + +where each $h_t^u$ in $h^u$ are fed into an affine transformation $(W, b_w)$ and output $\bar{h}_t^u$ . Then $\{\alpha_t\}$ represents the similarity scores between each $h_t^u$ and $K$ heads of learnable context vectors $u_w$ as the global sentence views; for each head, we can get + +a sentence representation $r_h^u$ . Finally we will concatenate all heads for the final representation $r^u$ . + +# 4.2 Adaptive label-aware attentive layer + +Inspired by few-shot learning works (Snell et al., 2017; Reimers and Gurevych, 2019), instead of classifying utterance into a predefined set of intents, we instead leverage the linear approximation idea (del Pino and Galaz, 1995) to help us determine the intents of an utterance. 
The linear approximation problem states that let $S$ be a Hilbert space and $\mathcal{T}$ be a subspace of $S$ , given a vector $z \in S$ , we would like to find the closest point $\hat{z} \in \mathcal{T}$ to $z$ . It turns out that the solution of $\hat{z} = \sum_{k=1}^{N} \beta_k v_k$ will be a linear combination of a basis $v_1, \ldots, v_N$ for $\mathcal{T}$ of $N$ dimension. $\beta = \mathbf{G}^{-1}\mathbf{b}$ where an element in the Gram matrix $G_{k,n} = \langle v_n, v_k \rangle$ and $b_n = \langle z, v_n \rangle$ . + +To transform the above idea into a multi-intent detection setting, we first construct an intent embedding subspace $\mathcal{T}$ with a basis $\{r_1^l,\dots,r_K^l\}$ given a set of $K$ intents $Y^{s}$ . To obtain $\{r_1^l,\dots,r_K^l\}$ , we adopt another BERT $BERT_{l}$ to encode $K$ intents. Namely, for every intent $y_{i}$ in a given set $Y^{s}$ , which could be expressed as a word sequence $(w_{1},\dots,w_{T_{l}})$ , we similarly use another BERT $BERT_{l}$ with the self-attentive layer mentioned in section 4.1 to encode it into an intent embedding $r_i^l$ . The reason to use a different BERT from $BERT_{u}$ is that intents often have very different syntactic structures (i.e. no subjects) compared to the utterances. + +By such intent encoding, we will obtain $K$ intent embeddings as our basis $\{r_1^l,\dots,r_K^l\}$ to construct an intent embedding space $\mathcal{T}$ . Then shown in Figure. 1, for an utterance $r^u$ , we can project it onto $\mathcal{T}$ to obtain its linear approximation $\hat{r}^u = \sum_{i=1}^{K} w_i r_i^l$ , where $\mathbf{w} \in \mathbb{R}^K$ could be computed as $\mathbf{w} = \sqrt{H} \mathbf{G}^{-1} \mathbf{b}$ . 
The Gram matrix $\mathbf{G}$ and the vector $\mathbf{b}$ are the following:

$$
\mathbf{G} = \begin{bmatrix} \langle r_1^l, r_1^l \rangle & \dots & \langle r_K^l, r_1^l \rangle \\ \vdots & \ddots & \vdots \\ \langle r_1^l, r_K^l \rangle & \dots & \langle r_K^l, r_K^l \rangle \end{bmatrix} \tag{5}
$$

$$
\mathbf{b} = \begin{bmatrix} \langle r^u, r_1^l \rangle \\ \vdots \\ \langle r^u, r_K^l \rangle \end{bmatrix} \tag{6}
$$

Note that we assume $\{r_1^l,\dots,r_K^l\}$ are linearly independent, since each vector represents the concept of one intent and should not be a linear combination of the other intent vectors. Hence, $\mathbf{G}$ is guaranteed to be positive definite and has an inverse. We further multiply by a scaling factor $\sqrt{H}$ when computing $\mathbf{w}$, for empirical reasons, since $\mathbf{G}^{-1}$ tends to drive the overall product to small values.

After obtaining $\mathbf{w}$, the projection weights can be viewed as scores of how likely an utterance $x$ belongs to each intent $y_{i}$. Following Qin et al. (2020), we treat this as a multi-label classification task and generate the logits $\hat{y} = \sigma(\mathbf{w})$ by passing $\mathbf{w}$ through a sigmoid function $\sigma$. The intent detection objective is then a binary cross-entropy loss, where $N$ is the number of samples:

$$
\mathcal{L} := -\sum_{i=1}^{N} \sum_{j=1}^{K} \left[ y_j^{(i)} \log \hat{y}_j^{(i)} + \left(1 - y_j^{(i)}\right) \log \left(1 - \hat{y}_j^{(i)}\right) \right] \tag{7}
$$

During testing, after obtaining $\hat{y} \in \mathbb{R}^K$ as the probabilities that the utterance belongs to each intent, we set a threshold $t$ with $0 < t < 1.0$ as a hyperparameter to select the final predicted intents.
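The scoring-and-thresholding step can be sketched as follows, assuming the utterance embedding and intent-label embeddings are already available as arrays (the function name is ours; this is an illustration of Eqs. (5)-(6), not the released LABAN code):

```python
import numpy as np

def label_aware_scores(r_u, R_l, t=0.5):
    """Score an utterance embedding against K intent embeddings.

    r_u : (H,) utterance embedding r^u
    R_l : (K, H) intent-label embeddings, assumed linearly independent
    Returns the sigmoid scores y_hat and the indices of intents
    whose score exceeds the threshold t.
    """
    H = r_u.shape[0]
    G = R_l @ R_l.T                 # Gram matrix of inner products, (K, K)
    b = R_l @ r_u                   # inner products <r^u, r_i^l>, (K,)
    w = np.sqrt(H) * np.linalg.solve(G, b)   # w = sqrt(H) G^{-1} b
    y_hat = 1.0 / (1.0 + np.exp(-w))         # sigmoid scores
    predicted = np.where(y_hat > t)[0]       # thresholded intent set
    return y_hat, predicted
```

In the zero-shot case, the rows of `R_l` would simply be extended with the $L$ unseen intent embeddings before solving for the weights.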
For instance, if we have $\hat{y} = \{0.3, 0.6, 0.9, 0.1, 0.4\}$ and $t = 0.5$ , the intents are predicted as $\{2, 3\}$ . + +# 4.3 Zero-shot setting + +For normal multi-intent detection, after training, for a given $K$ seen intent set $Y^{s}$ , we could use the method in section 4.2 to calculate the scores of a new utterance $x_{seen}$ with respect to each intent. Similarly, we could easily extend it into the zero-shot setting. First we will train $BERT_{u}, BERT_{l}$ with the training data of a given $K$ seen intent set $Y^{s}$ . Then, during testing, given a new $L$ unseen intent set $Y^{u}$ , we could also encode these intents into intent embeddings $\{r_1^l, \dots, r_L^l\}$ with the trained $BERT_{l}$ too. Finally, plus the seen intent set $Y^{s}$ , we could construct an extended intent subspace $\mathcal{T}'$ with a basis of $\{r_1^l, \dots, r_K^l, r_{K+1}^l, \dots, r_{K+L}^l\}$ and similarly generate scores for each seen and unseen intents with a new utterance $x_{unseen}$ . + +# 5 Experimental Setting + +# 5.1 Datasets + +We use three widely used public multi-intent single-sentence datasets: MixATIS, MixSNIPS (Qin et al., 2020; Hemphill et al., 1990; Coucke et al., 2018) and Facebook Semantic Parsing System (FSPS) dataset (Gupta et al., 2018) and two multi-intent dialogue datasets: Microsoft dialogue challenge dataset (MDC) (Li et al., 2018b) and Schema-Guided Dialogue dataset (SGD) (Rastogi et al., + +
| Dataset | Data Type | train/val/test | Total Labels |
| --- | --- | --- | --- |
| MixATIS | single | 18k/1k/1k | 17 |
| MixSNIPS | single | 45k/2.5k/2.5k | 7 |
| FSPS | single | 31k/4.4k/9k | 24 |
| MDC | dialogue | 45k/15k/15k | 11 |
| SGD | dialogue | 198k/66k/66k | 18 |
Table 1: Dataset statistics

2019) for our experiments. For FSPS, we focus on predicting all intents, regardless of their positions, for each utterance. For MDC and SGD, we treat each utterance as an individual sample whose multiple user and system acts serve as intents. We use all datasets for normal and zero-shot multi-intent detection, and also include single-intent detection results on the ATIS (Hemphill et al., 1990) and SNIPS (Coucke et al., 2018) datasets. The detailed dataset statistics are shown in Table 1.

For the zero-shot task, we use the single-sentence datasets MixATIS, MixSNIPS and FSPS. We subsample each dataset 5 times with the same train/valid/test sizes and report the average results over the 5 random splits. In each split, we simulate the situation where the training data contain only a subset of the intent labels while the test set contains all intent labels. For instance, MixATIS has 17 labels in total; we keep $\mathrm{K} < 17$ possible intents seen in the training set, while the test set has all 17 intents. In the experiments, we set 4 possible values of $\mathrm{K}$ for each of the three datasets. For the few-shot task, we add $5\%$ and $10\%$ of the test data into the training data and evaluate on the remaining test data. We also replace BERT with two variants, ALBERT and TOD-BERT, as the utterance encoder for additional baselines.

# 5.2 Baselines

We compare the normal multi-intent detection results with three competitive baseline models:

1. Stack-Prop, which uses two stacked encoder-decoder structures for joint intent detection and slot filling (Qin et al., 2019).
2. Joint MID-SF, which first addresses the multi-intent detection task using BiLSTMs (Gangadharaiah and Narayanaswamy, 2019).
3. AGIF, which uses a graph-interactive framework to incorporate fine-grained intent information (Qin et al., 2020).

We also compare the zero-shot multi-intent detection results with seven competitive baselines:

1.
BERT-finetune, which uses BERT as the encoder and enlarges the output size of the final fully-connected layer on top of it (Devlin et al., 2019).
2. Zero-shot LSTM, which uses two LSTM encoders to encode utterances and intents, then computes scores with a dot product (Kumar et al., 2017).
3. CDSSM, which uses a convolutional deep structured semantic model to calculate cosine similarities between embeddings (Chen et al., 2016).
4. Zero-shot BERT, which uses BERT as the encoder for Zero-shot LSTM (Kumar et al., 2017).
5. CDSSM BERT, which uses BERT as the encoder for CDSSM (Chen et al., 2016).
6. ALBERT-LA, which uses ALBERT as the encoder along with our label-aware attentive layer (Lan et al., 2020).
7. TOD-BERT-LA, which uses TOD-BERT, a pretrained encoder for task-oriented dialogs, along with our label-aware attentive layer (Wu et al., 2020).

# 5.3 Experimental setting

We use the pretrained BERT with 12 hidden layers of 768 units and 12 self-attention heads. The model is trained for 50 epochs, and the checkpoint with the best validation performance is saved. For the zero/few-shot setting, we randomly pick a number of intents to be unseen in the training set, run experiments on 5 different splits, and report the average. We set the threshold $t$ to 0.5 for multi-label classification. We follow the metrics used in Qin et al. (2020) for intent accuracy and F1 score.

# 6 Main Results

# 6.1 Multi-intent detection

Table 2 shows the normal multi-intent detection results on all five datasets. We observe that LABAN outperforms the baselines substantially in multi-intent detection, especially on MixATIS and FSPS. This demonstrates the usefulness of fine-tuning BERT to capture more precise contextualized information for the downstream task. LABAN also considers the semantics of intent labels, and the improvement grows as the number of intents increases, i.e., a larger gain on MixATIS with 17 intents than on MixSNIPS with only 7.
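The two reported metrics, exact-match intent accuracy and F1 over individual intents, can be computed as in the following minimal sketch (our reading of the metric definitions following Qin et al. (2020); the helper names are ours):

```python
def exact_match_accuracy(pred_sets, gold_sets):
    """An utterance counts as correct only if its predicted intent set
    matches the gold intent set exactly."""
    hits = sum(p == g for p, g in zip(pred_sets, gold_sets))
    return hits / len(gold_sets)

def micro_f1(pred_sets, gold_sets):
    """Micro-averaged F1 over individual intent labels across all utterances."""
    tp = sum(len(p & g) for p, g in zip(pred_sets, gold_sets))   # matched intents
    n_pred = sum(len(p) for p in pred_sets)                      # predicted intents
    n_gold = sum(len(g) for g in gold_sets)                      # gold intents
    precision = tp / n_pred if n_pred else 0.0
    recall = tp / n_gold if n_gold else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0
```

Under these definitions, missing one intent of a two-intent utterance hurts F1 only partially but makes the whole utterance count as wrong for accuracy.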
For datasets without explicit conjunction words between sentences, such as FSPS, MDC and SGD, our model achieves a large increase in accuracy. Moreover, LABAN is not limited to multi-intent detection: as Table 4 shows, it also outperforms the other baselines when only a single intent is present.

# 6.2 Zero-shot Multi-intent detection

To further justify our model's main contribution in zero-shot cases, we compare LABAN with several competitive baselines. As shown in Table 3,
| Model | MixATIS F1 | MixATIS Acc | MixSNIPS F1 | MixSNIPS Acc | FSPS F1 | FSPS Acc | MDC F1 | MDC Acc | SGD F1 | SGD Acc |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Stack-Prop | 0.790 | 0.719 | 0.976 | 0.946 | 0.911 | 0.723 | 0.877 | 0.780 | 0.919 | 0.891 |
| Joint MID-SF | 0.806 | 0.731 | 0.980 | 0.951 | 0.877 | 0.780 | 0.855 | 0.754 | 0.907 | 0.850 |
| AGIF | 0.812 | 0.758 | 0.985 | 0.961 | 0.914 | 0.749 | 0.907 | 0.741 | 0.924 | 0.761 |
| LABAN | 0.958† | 0.889† | 0.985 | 0.963 | 0.948† | 0.913† | 0.898 | 0.814† | 0.950† | 0.928† |
Table 2: Normal multi-intent detection results on five datasets. We report accuracy (Acc) for exact match over all intents, and F1 scores computed over individual intents. $\dagger$ indicates a significant improvement ($p < 0.05$) over the previous state-of-the-art model AGIF.
| Model | FSPS F1-a | FSPS F1-s | FSPS F1-u | MixATIS F1-a | MixATIS F1-s | MixATIS F1-u | MixSNIPS F1-a | MixSNIPS F1-s | MixSNIPS F1-u |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BERT-finetune | 0.365 | 0.479 | 0.000 | 0.592 | 0.836 | 0.000 | 0.490 | 0.653 | 0.000 |
| Zero-shot LSTM | 0.341 | 0.494 | 0.029 | 0.533 | 0.728 | 0.055 | 0.475 | 0.546 | 0.264 |
| CDSSM | 0.496 | 0.440 | 0.394 | 0.592 | 0.827 | 0.060 | 0.591 | 0.659 | 0.432 |
| Zero-shot BERT | 0.517 | 0.461 | 0.373 | 0.463 | 0.576 | 0.162 | 0.472 | 0.464 | 0.370 |
| CDSSM BERT | 0.494 | 0.486 | 0.348 | 0.491 | 0.614 | 0.041 | 0.481 | 0.481 | 0.402 |
| ALBERT-LA | 0.391 | 0.425 | 0.228 | 0.595 | 0.739 | 0.362 | 0.567 | 0.574 | 0.466 |
| TOD-BERT-LA | 0.419 | 0.369 | 0.405 | 0.702 | 0.782 | 0.459 | 0.642 | 0.641 | 0.559† |
| BERT-LA (LABAN) | 0.544 | 0.471 | 0.451† | 0.696 | 0.808 | 0.518† | 0.640 | 0.622 | 0.526 |
Table 3: Performance on zero-shot multi-intent detection compared with several competitive baselines. Here we choose the train/test label ratios to be FSPS 17/24, MixATIS 14/17 and MixSNIPS 5/7. F1-a, F1-s and F1-u are F1 scores evaluated on data with all/seen/unseen intent labels. $\dagger$ indicates a significant improvement ($p < 0.05$) on the F1-u results compared with CDSSM.
| Model | ATIS | SNIPS |
| --- | --- | --- |
| Stack-Propagation | 0.969 | 0.980 |
| Joint MID-SF | 0.954 | 0.972 |
| AGIF | 0.971 | 0.981 |
| LABAN | 0.978 | 0.982 |
Table 4: Single-intent detection accuracy on two single-intent datasets compared with baseline models.

BERT-finetune, which simply enlarges the output layer to cover unseen intents, is not capable of predicting any unseen-intent utterances, resulting in 0.000 F1-u scores. Non-BERT approaches such as Zero-shot LSTM and CDSSM, using a dot product or cosine similarity, show improved but still limited ability to predict unseen intents. By leveraging the power of pretraining, Zero-shot BERT better associates unseen and seen intents and achieves a higher F1 score, while the performance of CDSSM BERT, with its more complex structure, degrades due to overfitting. Finally, we find that on all three datasets (FSPS, MixATIS, MixSNIPS), the three models equipped with our label-aware attentive layer and strong pretrained representations (ALBERT-LA, TOD-BERT-LA, LABAN) successfully outperform the baselines in predicting unseen labels by associating their relations with the input sequences, even though these intents are never seen in the training phase.

We also observe that ALBERT has relatively inferior performance among the BERT-based models, which possibly results from its being a lite version of BERT with a pretraining objective different from the conversation-oriented TOD-BERT. Note that the original BERT model has a slightly better F1 score for seen intents. This is reasonable, since searching over only the seen intents avoids mistakenly predicting utterances with unseen labels. However, without sacrificing much, models with the label-aware attentive layer significantly boost the overall F1 scores on all three datasets.

We then comprehensively evaluate LABAN's performance in the zero/few-shot setting with different seen/unseen intent ratios in Figure 2. We have four main findings. (1) LABAN predicts unseen intents correctly about half the time on average. (2) When the number of seen intents decreases, the F1 score drops for both seen and unseen intent labels, as the model has poorer knowledge of the seen intents.
(3) In utterances with both seen and unseen intents, the F1 score for seen intents is lower than in utterances with only seen intents; the fewer seen intents the model is trained on, the more inclined it is to predict the utterance as carrying unseen intents. (4) In the few-shot setting, with only a little data for the unseen intents added to training, both seen and unseen intent accuracy improve by a large margin, especially on MixSNIPS. This indicates that even with scarce training data for some previously unseen labels, LABAN can fully exploit pretrained linguistic knowledge of label semantics to match the most relevant intents.

![](images/a31653da714a92948bcba9896a4baffeee1e1c2686921754eabe3abdce1dc26f.jpg)

![](images/7ebfb6acac328ec82abc35b44e599bb09cfe570e21e7605bace166b959d6e03c.jpg)

![](images/d4afe4e4419bcc9c931754df3f730dbad0e1b1e23a06bf0499098e6265eec66b.jpg)

![](images/9d92f4c199b928f17d19b60ed52b35ef0af0ebc3e142370e52c6476b6ee8e9b9.jpg)

![](images/67493c88e739c53fada60b25c3e7782c76b414e6afe32a39035c814516459887.jpg)

![](images/233e2b09ab15978321c578061de5b418600b1748d99ae2040d937cecd3dcf55d.jpg)

![](images/3e907acb3308588070c8cf883bab528d2ba978811a6bc467d26b14e42783fea9.jpg)

![](images/6155fee7a2439382fcf0a77a137b1b88b711eaa5e33ee5013f10de65c08c701.jpg)

![](images/d4cd9af78fa3ab4a62ccee5da305ec6e8e7d03f3fd89b5ab4b7e6fae15bf9266.jpg)
Figure 2: Zero-shot/few-shot results of LABAN on the FSPS, MixATIS and MixSNIPS datasets with varying numbers of seen labels during training. FSPS, MixATIS and MixSNIPS have 24, 17 and 7 intents in total. F1-a, F1-s and F1-u are F1 scores evaluated on data with all/seen/unseen intent labels.
| Model | MixATIS F1 | MixATIS Acc | MixSNIPS F1 | MixSNIPS Acc | FSPS F1 | FSPS Acc | MDC F1 | MDC Acc | SGD F1 | SGD Acc |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BERT-finetune | 0.952 | 0.879 | 0.982 | 0.954 | 0.938 | 0.901 | 0.897 | 0.814 | 0.949 | 0.926 |
| BERT-attn | 0.963 | 0.893 | 0.984 | 0.961 | 0.942 | 0.903 | 0.897 | 0.816 | 0.950 | 0.927 |
| LABAN | 0.958 | 0.889 | 0.985 | 0.963 | 0.948 | 0.913 | 0.898 | 0.814 | 0.950 | 0.928 |

Table 5: Ablation analysis of the different components of LABAN for normal multi-intent detection on five datasets. We report accuracy (Acc) for exact match over all intents, and F1 scores computed over individual intents.

# 6.3 Ablation Analysis

To better understand the effectiveness of LABAN's components for multi-intent detection, we conduct an ablation analysis with two baseline variants of our model: BERT-finetune and BERT-attn. BERT-finetune uses the hidden state of the [CLS] head from BERT without the extra label-aware layer; BERT-attn adds a self-attentive layer to encode the sentence embeddings, still without the label-aware layer. Finally, LABAN is our full model: BERT with both the self-attentive layer and the adaptive label-aware attentive layer.

In the experimental results shown in Table 5, we first observe that BERT with the additional self-attentive layer improves performance on all five datasets, especially on MixATIS and FSPS.

When the total number of intents increases, the self-attentive layer is beneficial for understanding each word's importance to the overall intent prediction. After introducing the label-aware layer, we see a further increase, especially on FSPS, which contains the largest number of intents (24). It helps LABAN better match the utterance against the different intent semantics, particularly when the intent options are more complicated. Although the increase seems subtle when labeled data are abundant, the layer is of great help in tackling unseen labels, without sacrificing much performance in normal cases.

# 6.4 Error Analysis

We analyze some error cases of LABAN with the examples in Table 6. For simplicity, we abbreviate each dataset as MixATIS: MA, MixSNIPS:
+ +Table 5: Ablation analysis of different components in LABAN for normal multi-intent detection results on five datasets. We report accuracy (Acc) for all intents exact match and F1 scores based on individual intent calculation. + +# 6.3 Ablation Analysis + +To better understand the effectiveness of LABAN's components on multi-intent detection, we conduct the ablation analysis by reporting two different baseline variations of our model: BERT-finetune and BERT-attn. BERT-finetune refers to using the hidden state of [CLS] head from BERT without the extra label-aware layer; BERT-attn refers to adding a self-attentive layer to encode the sentence embeddings without the label-aware layer too. And finally, LABAN refers to our final model as the BERT with the self-attentive layer and adaptive label-aware attentive layer. + +In experimental results shown in Table. 5, we can first observe that BERT with the additional self-attentive layer has increased performances on all five datasets, especially in MixATIS and FSPS. + +When the number of total intents increases, the self-attentive layer is beneficial in understanding each word importance to the overall intent prediction. After introducing the label-aware layer, we could see a further increase, especially in FSPS which contains the maximum number of intents (24). It does help LABAN to better match the utterance and different intent semantics, particularly in the case when intent options are more complicated. Although the increase seems subtle when the label sources are abundant, it can cause huge assistance of tackling unseen labels, without sacrificing much performance in normal cases. + +# 6.4 Error Analysis + +We demonstrate a few cases in Table. 6 to analyze some error cases of LABAN. For simplicity, we abbreviate each dataset as MixATIS: MA, MixSNIPS: + +
Multi-intent (MI) errors:

| ID | Sentence | Predicted labels | Real labels |
| --- | --- | --- | --- |
| MA1 | At the charlotte airport, how many different types of aircraft are there for US air and St. Paul to Kansas city friday night. | atis_quantity (7) | atis_aircraft (4), atis_airport (12) |
| MS1 | Play the album Journeyman. | play_music (0) | search_creative_work (2) |
| FS1 | Is traffic always heavy at this stretch of highway? | get_location (3), get_info_social (6) | unsupported_navigation (2) |
| FS2 | How's the traffic ahead? | get_info_social (6) | get_info_social (6), get_location (3) |

Zero-shot (ZS) errors:

| ID | Sentence | Predicted labels | Real labels |
| --- | --- | --- | --- |
| MA2 | Show me the lowest priced fare from Dallas to Baltimore. | atis_airfare (16), atis_airport (9), atis_cheapest (14) | atis_airfare (16) |
| MS2 | Play music from 2015 and then I am giving this current novel 1 out of 6 stars. | rate_book (4), search_screening_event (5), book_restaurant (6) | rate_book (4), play_music (3) |
| FS3 | I want to be at my daughters by 8am what time should I leave? | get_location_home (4), get_estimated_arrival (2), get_directions (16), update_directions (19) | get_location_home (4), get_estimated_departure (15) |
Table 6: Examples of multi-intent (MI) and zero-shot (ZS) prediction errors. Each example has an ID indicating its dataset (MixATIS: MA, MixSNIPS: MS, FSPS: FS). An intent appearing in both the predicted and real label columns is predicted correctly, and the number after each intent is its corresponding label id.

MS, and FSPS: FS in the table.

First, we found that some words in the utterances can confuse LABAN's prediction. For instance, in case MA1, LABAN may predict 'atis quantity' based on the keywords 'how many' when comparing the sentence and label semantics. In case MS1, the keyword 'play' likewise induces the model to predict the intent 'play music', whereas the utterance actually means to search for and play an album. In this sense, 'creative work' may be less relevant to 'album' in our model's sentence-label pairing.

For FSPS, we found that most errors occur when the real labels are 'unsupported navigation', 'unsupported event' or 'unsupported', as in case FS1. Without an external ontology, it may be hard for the model to identify such unsupported (out-of-scope) events. Therefore, in most cases the model simply identifies 'get info traffic' and 'get location' as the closest intents. In case FS2, the model fails to predict 'get location' correctly. Without dialogue context, it may be hard for the model to associate 'ahead' with 'get location'.

We then examine the errors in the zero-shot setting. Here, the model sees only 12/17 intents in MixATIS, 3/7 intents in MixSNIPS and 14/24 intents in FSPS during training. We found two distinctive phenomena: (1) The model tends to predict more labels, as in case MA2, when it is uncertain about unseen intents, resulting in lower precision. (2) The model can predict seen intents well regardless of the presence of unseen intents in the same sentence.
For unseen intent errors, the model tends to categorize them into other unseen classes rather than seen classes, which indicates that the model has a basic knowledge of what the seen intents should be. The explicit semantic pairing mechanism may be one of the reasons, showing an ability to confidently separate known from unknown classes.

In case MA2, 'atis cheapest' and 'atis airfare' are not seen in the training phase. However, the model is still capable of predicting 'atis airfare' accurately. Moreover, the keyword 'lowest' is matched with the predicted label 'atis cheapest', benefiting from our label-aware attentive layer. In case MS2, all of the predicted and real labels are unseen during training. The model still predicts 'rate book' correctly based on the keyword 'stars', and it predicts 'search screening event' or 'search creative work' instead of 'play music', a confusion that actually occurs frequently in other predictions. In FSPS cases such as FS3, the model tends to predict many unseen intents without matching any of the true intents. In case FS3, the model has only seen the intent 'get estimated arrival' during training, which makes it erroneously map the sentence to 'arrival' rather than 'departure'. This could possibly be alleviated by introducing external knowledge embeddings linking the keyword 'leave' to 'departure', an association humans usually make.

# 6.5 Visualization

To better understand the classification results of LABAN, as shown in Figure 3, we perform t-SNE visualization (van der Maaten and Hinton, 2008) on the projected embeddings $\hat{r}^u = \sum_{i=1}^{K} w_i r_i^l$ of each utterance onto the intent subspace $\mathcal{T}$. We also plot each intent embedding $r_i^l$ with its intent number. We observe that numerous clusters form at close semantic distances, and most intent embeddings, such as ids 0, 6, 9 and 12, are close to their respective clusters.
This indicates that LABAN successfully constructs an intent embedding space that illustrates the semantic relations among intents and helps with the classification of a projected utterance embedding.

![](images/a480fca54d73fc73c4d4a005131935d5b7a499cfa304219b2a9424e5581df282.jpg)
Figure 3: t-SNE visualization of the utterance embeddings with their intent labels (colors) on the FSPS test set. Each number $i$ indicates the location of intent embedding $r_i^{l}$ and its intent class.

Note that since some utterances have more than one intent, to simplify the graph we randomly pick one intent of each such utterance for visualization. Therefore, some clusters, such as id 8, actually contain two dominant sub-clusters, and some utterances in the right sub-cluster carry other intents such as ids 3, 4, 12 and 17; hence they may be semantically close to those intent embeddings on the graph.

# 7 Conclusion

In this paper, we propose extending fine-tuned BERT with label-aware semantic interactions for the multi-intent detection task in SLU. This provides a solution to the zero/few-shot setting, where new utterances contain unseen labels: by considering label semantics, we can generate scores of how likely new utterances are to belong to these unseen intents. We compare the performance of our approach with previous methods and obtain significant improvements over the baselines. The results shed light on the finding that constructing a label semantic space can help the model better distinguish seen and unseen intents in utterances. This provides guidance for future work on improving SLU zero-shot multi-intent detection by considering dialogue contexts and external knowledge, as well as for the more challenging task of out-of-domain (OOD) detection, where unseen intents are not available at all.
# Acknowledgements

We are grateful for the insightful comments from the anonymous reviewers and for the computing resources provided by the Department of Electrical and Computer Engineering at the Georgia Institute of Technology.

# Ethical Consideration and Impact

This work aims to relax the limits on intent granularity defined in task-oriented dialogue training datasets, which is often ill-posed for modeling precise and multiple intents in many previous works (Qin et al., 2019; Goo et al., 2018). Multi-intent detection could be applied to a wide range of applications in many industries where the scenario requires a broader understanding of user requests. For example, customer service automation often requires clear intent identification at each utterance to support a flexible answering policy, and identifying only single intents may lead to redundant and ambiguous dialogue turns. Second, zero-shot learning has long been studied to relax deep learning models' demand for large amounts of data. It could be applied to the many domains where intent labels are scarce and labeling would be time-consuming. By transferring knowledge from existing labels, the model becomes more robust in dealing with unseen labels, much as humans approach new things, which is very beneficial for dialogue system design, where much of the data is unlabeled.

From an ethical perspective, the naturalness of the dialog structure largely defines the scope of intent detection and usually changes across dialog state transitions. Capturing adequate user intents is critical for SLU and downstream tasks such as dialog state tracking. Misinterpreting intents may offend users and produce unsatisfactory answers, and we should also avoid predicting sensitive labels that concern user privacy.
For this reason, we test our model only on publicly released datasets that have been widely vetted as unbiased across multiple domains and that do not reveal specific user information.

Overall, we see great opportunities for research applying LABAN to investigate the interactions between utterances and their latent intents. It gives good intuition about how the model understands the underlying human acts, and improves transparency in decision-critical applications. To mitigate the risks associated with our model, we aim to anonymize sensitive user information in training data and to focus on extracting domain-agnostic knowledge for better generalization and interpretability.

# References

Leonard Abbeduto. 1983. Linguistic communication and speech acts. Kent Bach, Robert M. Harnish. Cambridge: M.I.T. Press, 1979, pp. xvii + 327. Applied Psycholinguistics, 4(4):397-407.
Zeynep Akata, Florent Perronnin, Zaid Harchaoui, and Cordelia Schmid. 2016. Label-embedding for image classification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(7):1425-1438.
Ziad Al-Halah, Makarand Tapaswi, and Rainer Stiefelhagen. 2016. Recovering the missing link: Predicting class-attribute associations for unsupervised zero-shot learning. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pages 5975-5984. IEEE Computer Society.
Yun-Nung Chen, Dilek Hakkani-Tür, and Xiaodong He. 2016. Zero-shot learning of intent embeddings for expansion by convolutional deep structured semantic models. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2016, Shanghai, China, March 20-25, 2016, pages 6045-6049. IEEE.
Alice Coucke, Alaa Saade, Adrien Ball, Théodore Bluche, Alexandre Caulier, David Leroy, Clément Doumouro, Thibault Gisselbrecht, Francesco Caltagirone, Thibaut Lavril, Maël Primet, and Joseph Dureau. 2018.
Snips voice platform: an embedded spoken language understanding system for private-by-design voice interfaces. +Guido E. del Pino and Hector Galaz. 1995. Statistical applications of the inverse gram matrix: A revisitation. Brazilian Journal of Probability and Statistics, 9(2):177-196. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics. + +Haihong E, Peiqing Niu, Zhongfu Chen, and Meina Song. 2019. A novel bi-directional interrelated model for joint intent detection and slot filling. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5467-5471, Florence, Italy. Association for Computational Linguistics. +Andrea Frome, Gregory S. Corrado, Jonathon Shlens, Samy Bengio, Jeffrey Dean, Marc'Aurelio Ranzato, and Tomás Mikolov. 2013. Devise: A deep visual-semantic embedding model. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States, pages 2121-2129. +Rashmi Gangadharaiiah and Balakrishnan Narayanaswamy. 2019. Joint multiple intent detection and slot labeling for goal-oriented dialog. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 564-569, Minneapolis, Minnesota. Association for Computational Linguistics. +Jianfeng Gao, Michel Galley, and Lihong Li. 2018. Neural approaches to conversational AI. 
In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, SIGIR 2018, Ann Arbor, MI, USA, July 08-12, 2018, pages 1371-1374. ACM. +Chih-Wen Goo, Guang Gao, Yun-Kai Hsu, Chih-Li Huo, Tsung-Chieh Chen, Keng-Wei Hsu, and Yun-Nung Chen. 2018. Slot-gated modeling for joint slot filling and intent prediction. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 753–757, New Orleans, Louisiana. Association for Computational Linguistics. +Sonal Gupta, Rushin Shah, Mrinal Mohit, Anuj Kumar, and Mike Lewis. 2018. Semantic parsing for task oriented dialog using hierarchical representations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2787-2792, Brussels, Belgium. Association for Computational Linguistics. +Charles T. Hemphill, John J. Godfrey, and George R. Doddington. 1990. The ATIS spoken language systems pilot corpus. In Speech and Natural Language: Proceedings of a Workshop Held at Hidden Valley, Pennsylvania, June 24-27, 1990. +Anjishnu Kumar, Pavankumar Reddy Muddireddy, Markus Dreyer, and Bjorn Hoffmeister. 2017. Zero-shot learning across heterogeneous overlapping domains. In INTERSPEECH. +Christoph H Lampert. 2014. Attribute-based classification for zero-shot visual object categorization. + +Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: A lite BERT for self-supervised learning of language representations. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. +Stefan Larson, Anish Mahendran, Joseph J. Peper, Christopher Clarke, Andrew Lee, Parker Hill, Jonathan K. Kummerfeld, Kevin Leach, Michael A. Laurenzano, Lingjia Tang, and Jason Mars. 2019. 
An evaluation dataset for intent classification and out-of-scope prediction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1311-1316, Hong Kong, China. Association for Computational Linguistics. +Changliang Li, Liang Li, and Ji Qi. 2018a. A self-attentive model with gate mechanism for spoken language understanding. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3824–3833, Brussels, Belgium. Association for Computational Linguistics. +Xiujun Li, Sarah Panda, Jingjing Liu, and Jianfeng Gao. 2018b. Microsoft dialogue challenge: Building end-to-end task-completion dialogue systems. arXiv preprint arXiv:1807.11125. +Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A structured self-attentive sentence embedding. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net. +Han Liu, Xiaotong Zhang, Lu Fan, Xuandi Fu, Qimai Li, Xiao-Ming Wu, and Albert Y.S. Lam. 2019a. Reconstructing capsule networks for zero-shot intent classification. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4799-4809, Hong Kong, China. Association for Computational Linguistics. +Ting Liu, Xiao DING, Yue QIAN, and Yiheng CHEN. 2017. Identification method of user's travel consumption intention in chatting robot. SCIENTIA SINICA Informationis, 47:997. +Yijin Liu, Fandong Meng, Jinchao Zhang, Jie Zhou, Yufeng Chen, and Jinan Xu. 2019b. CM-net: A novel collaborative memory network for spoken language understanding. 
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1051-1060, Hong Kong, China. Association for Computational Linguistics. + +Mohammad Norouzi, Tomás Mikolov, Samy Bengio, Yoram Singer, Jonathon Shlens, Andrea Frome, Greg Corrado, and Jeffrey Dean. 2014. Zero-shot learning by convex combination of semantic embeddings. In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings. +Mark Palatucci, Dean Pomerleau, Geoffrey E. Hinton, and Tom M. Mitchell. 2009. Zero-shot learning with semantic output codes. In Advances in Neural Information Processing Systems 22: 23rd Annual Conference on Neural Information Processing Systems 2009. Proceedings of a meeting held 7-10 December 2009, Vancouver, British Columbia, Canada, pages 1410-1418. Curran Associates, Inc. +Yue Qian. 2017. Research on the identification method of users' travel consumption intent in chat robot. +Libo Qin, Wanxiang Che, Yangming Li, Haoyang Wen, and Ting Liu. 2019. A stack-propagation framework with token-level intent detection for spoken language understanding. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2078-2087, Hong Kong, China. Association for Computational Linguistics. +Libo Qin, Xiao Xu, Wanxiang Che, and Ting Liu. 2020. AGIF: An adaptive graph-interactive framework for joint multiple intent detection and slot filling. In *Findings of the Association for Computational Linguistics: EMNLP* 2020, pages 1807–1816, Online. Association for Computational Linguistics. +Abhinav Rastogi, Xiaoxue Zang, Srinivas Sunkara, Raghav Gupta, and Pranav Khaitan. 2019. Towards scalable multi-domain conversational agents: The schema-guided dialogue dataset. 
arXiv preprint arXiv:1909.05855. +Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using Siamese BERTnetworks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982-3992, Hong Kong, China. Association for Computational Linguistics. +B. Rychalska, H. Glabska, and A. Wroblewska. 2018. Multi-intent hierarchical natural language understanding for chatbots. In 2018 Fifth International Conference on Social Networks Analysis, Management and Security (SNAMS), pages 256-259. +Qingyi Si, Yuanxin Liu, Peng Fu, Zheng Lin, Jiangnan Li, and Weiping Wang. 2021. Learning class-transductive intent representations for zero-shot intent detection. In IJCAI. +Jake Snell, Kevin Swersky, and Richard S. Zemel. 2017. Prototypical networks for few-shot learning. + +In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 4077-4087. +Chi Sun, Xipeng Qiu, Yige Xu, and Xuanjing Huang. 2020. How to fine-tune bert for text classification? +Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. Journal of Machine Learning Research, 9(86):2579-2605. +Chien-Sheng Wu, Steven C.H. Hoi, Richard Socher, and Caiming Xiong. 2020. TOD-BERT: Pre-trained natural language understanding for task-oriented dialogue. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 917-929, Online. Association for Computational Linguistics. +Ting-Wei Wu, Ruolin Su, and Biing-Hwang Juang. 2021. A Context-Aware Hierarchical BERT Fusion Network for Multi-Turn Dialog Act Detection. In Proc. Interspeech 2021, pages 1239-1243. +Congying Xia, Caiming Xiong, Philip Yu, and Richard Socher. 2020. Composed variational natural language generation for few-shot intents. 
In *Findings of the Association for Computational Linguistics: EMNLP* 2020, pages 3379-3388, Online. Association for Computational Linguistics. +Congying Xia, Chenwei Zhang, Xiaohui Yan, Yi Chang, and Philip Yu. 2018. Zero-shot user intent detection via capsule neural networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3090-3099, Brussels, Belgium. Association for Computational Linguistics. +Yongqin Xian, Bernt Schiele, and Zeynep Akata. 2017. Zero-shot learning - the good, the bad and the ugly. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pages 3077-3086. IEEE Computer Society. +Ziming Zhang and Venkatesh Saligrama. 2015. Zero-shot learning via semantic similarity embedding. In 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015, pages 4166-4174. IEEE Computer Society. + +# A Appendix + +# A.1 Linear Approximation in a Hilbert Space + +Let $S$ be a Hilbert space with inner product $\langle \cdot ,\cdot \rangle$ and induced norm $||\cdot ||$ , and let $\mathcal{T}$ be a subspace of $S$ . Given a vector $z\in S$ , we would like to find the closest point $\hat{z}\in \mathcal{T}$ to $z$ . Namely, we would like to solve the following optimization program: + +$$ +\min _ {x \in \mathcal {T}} | | z - x | | \tag {8} +$$ + +Given an arbitrary $z \in S$ , we know there exists exactly one point $\hat{z} \in \mathcal{T}$ that obeys + +$$ +z - \hat {z} \perp \mathcal {T} \tag {9} +$$ + +meaning $\langle z - \hat{z},y\rangle = 0$ for all $y\in \mathcal{T}$ and this point $\hat{z}$ is the unique minimizer of Equation 8. We can further construct $\hat{z}$ as the following: + +$$ +\hat {z} = \sum_ {k = 1} ^ {N} \beta_ {k} v _ {k} \tag {10} +$$ + +where $N$ is the dimension of $\mathcal{T}$ and $v_{1},\ldots ,v_{N}$ is a basis for $\mathcal{T}$ . 
Then we can transform our problem into finding the coefficients $\beta_{1},\dots,\beta_{N}\in \mathbb{C}$ . + +From Equation 9, we know $\langle z - \hat{z}, v_n \rangle = 0$ for $n = 1, \dots, N$ . Plugging in Equation 10, $\beta_n$ must obey $\langle z - \sum_{k=1}^{N} \beta_k v_k, v_n \rangle = 0$ for $n = 1, \dots, N$ . We can then obtain the following equation: + +$$ +\langle z, v _ {n} \rangle = \sum_ {k = 1} ^ {N} \beta_ {k} \langle v _ {k}, v _ {n} \rangle \tag {11} +$$ + +Since $z$ and the $\{v_{n}\}$ are given, we know both the $\langle z,v_n\rangle$ and $\langle v_k,v_n\rangle$ . We can write down the matrix form: + +$$ +\mathbf {G} \boldsymbol {\beta} = \mathbf {b} \tag {12} +$$ + +where $\beta \in \mathbb{C}^N$ , $b_{n} = \langle z,v_{n}\rangle$ and $G_{k,n} = \langle v_n,v_k\rangle$ . Or in the complete form: + +$$ +\mathbf {G} = \left[ \begin{array}{c c c} \langle v _ {1}, v _ {1} \rangle & \dots & \langle v _ {N}, v _ {1} \rangle \\ \vdots & \ddots & \vdots \\ \langle v _ {1}, v _ {N} \rangle & \dots & \langle v _ {N}, v _ {N} \rangle \end{array} \right] \tag {13} +$$ + +$$ +\mathbf {b} = \left[ \begin{array}{c} \langle z, v _ {1} \rangle \\ \vdots \\ \langle z, v _ {N} \rangle \end{array} \right] \tag {14} +$$ + +We can then solve the problem by computing $\beta = \mathbf{G}^{-1}\mathbf{b}$ , where $\mathbf{G}$ is guaranteed to be invertible since $\{v_{n}\}$ is linearly independent. + +# A.2 Dataset + +Here are more detailed descriptions of the datasets we used: + +MixATIS (Qin et al., 2020; Hemphill et al., 1990) The ATIS (Airline Travel Information System) dataset is a standard benchmark dataset in the airline domain, widely used for intent classification. MixATIS is synthesized from ATIS by concatenating single-intent utterances with the conjunction word 'and'.
+ +MixSNIPS (Qin et al., 2020; Coucke et al., 2018) The MixSNIPS dataset is collected from the SNIPS personal voice assistant, with the ratio of sentences containing 1-3 intents set to [0.3, 0.5, 0.2]. It also concatenates SNIPS utterances with the conjunction word 'and'. +FSPS (Gupta et al., 2018) The Facebook Semantic Parsing System (FSPS) dataset is a large dataset of 44k requests annotated with a hierarchical semantic representation for task-oriented dialog systems. Intents are prefixed with 'IN:' and slots with 'SL:'. Each utterance may contain one or more embedded intents and slots. +MDC (Li et al., 2018b) The Microsoft Dialogue Challenge dataset (MDC) is a well-annotated dataset for three task-completion domains: movie-ticket booking, restaurant reservation, and taxi ordering. It was first released for the SLT 2018 special session and contains dialogue-act and slot information for each utterance. +SGD (Rastogi et al., 2019) The Schema-Guided Dialogue dataset (SGD) is a large dialogue dataset with over $20\mathrm{k}$ annotated multi-domain, task-oriented conversations between a human and a virtual assistant. These conversations involve interactions with services and APIs spanning 20 domains. It can be used for intent prediction, slot filling, dialogue state tracking, policy imitation learning, or language generation.
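The Gram-matrix construction of Appendix A.1 (Equations 11-14) is easy to sanity-check numerically. Below is a minimal pure-Python sketch for a two-dimensional subspace of R^3; the basis, the vector `z`, and the function names are illustrative choices, not from the paper.

```python
def inner(u, v):
    # Standard inner product on R^n.
    return sum(a * b for a, b in zip(u, v))

def project(z, basis):
    """Closest point in span(basis) to z, via the Gram system G beta = b
    (Equations 11-14); solved with Cramer's rule for a 2-dim subspace."""
    v1, v2 = basis
    # Row n of the system: <z, v_n> = sum_k beta_k <v_k, v_n>.
    g11, g12 = inner(v1, v1), inner(v2, v1)
    g21, g22 = inner(v1, v2), inner(v2, v2)
    b1, b2 = inner(z, v1), inner(z, v2)
    det = g11 * g22 - g12 * g21  # nonzero: the basis is linearly independent
    beta1 = (b1 * g22 - g12 * b2) / det
    beta2 = (g11 * b2 - g21 * b1) / det
    return [beta1 * a + beta2 * b for a, b in zip(v1, v2)]

z_hat = project([1, 2, 3], ([1, 1, 0], [0, 1, 1]))
# The residual z - z_hat is orthogonal to both basis vectors (Equation 9).
residual = [a - b for a, b in zip([1, 2, 3], z_hat)]
```

For this toy basis the residual comes out orthogonal to both `v1` and `v2`, matching the orthogonality condition of Equation 9.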
\ No newline at end of file diff --git a/alabelawarebertattentionnetworkforzeroshotmultiintentdetectioninspokenlanguageunderstanding/images.zip b/alabelawarebertattentionnetworkforzeroshotmultiintentdetectioninspokenlanguageunderstanding/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..370f57ca372130eaa27581280d2a53d8b474a28a --- /dev/null +++ b/alabelawarebertattentionnetworkforzeroshotmultiintentdetectioninspokenlanguageunderstanding/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0f9a06704c4155568de4b5e17e8058c88f11e5ccea8944b4e290dbf39fdea512 +size 594931 diff --git a/alabelawarebertattentionnetworkforzeroshotmultiintentdetectioninspokenlanguageunderstanding/layout.json b/alabelawarebertattentionnetworkforzeroshotmultiintentdetectioninspokenlanguageunderstanding/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..b7717b8dbb968b0200f6562aca7b6d88103f626e --- /dev/null +++ b/alabelawarebertattentionnetworkforzeroshotmultiintentdetectioninspokenlanguageunderstanding/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e99b118fb53960a76d52c5d6bdcf7e32e691bb4671f5812c914a36b29363bfff +size 472556 diff --git a/alanguagemodelbasedgenerativeclassifierforsentenceleveldiscourseparsing/e283716d-d1d4-4c2d-9557-d867a77a70e8_content_list.json b/alanguagemodelbasedgenerativeclassifierforsentenceleveldiscourseparsing/e283716d-d1d4-4c2d-9557-d867a77a70e8_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..97dd14293ca287d236283ff305ccbe1b47b0fb2f --- /dev/null +++ b/alanguagemodelbasedgenerativeclassifierforsentenceleveldiscourseparsing/e283716d-d1d4-4c2d-9557-d867a77a70e8_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fb6ff622ce857a7d3dbb87bb1e791ec5f0bb6776241f0edde2deb34952a37f95 +size 106728 diff --git 
a/alanguagemodelbasedgenerativeclassifierforsentenceleveldiscourseparsing/e283716d-d1d4-4c2d-9557-d867a77a70e8_model.json b/alanguagemodelbasedgenerativeclassifierforsentenceleveldiscourseparsing/e283716d-d1d4-4c2d-9557-d867a77a70e8_model.json new file mode 100644 index 0000000000000000000000000000000000000000..27680d1faa72236b7448371d71ec73421f559f98 --- /dev/null +++ b/alanguagemodelbasedgenerativeclassifierforsentenceleveldiscourseparsing/e283716d-d1d4-4c2d-9557-d867a77a70e8_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:91028adfdae9c7917e783e51767044d77fef46096a0b0d6903175b5a97fa9af3 +size 125681 diff --git a/alanguagemodelbasedgenerativeclassifierforsentenceleveldiscourseparsing/e283716d-d1d4-4c2d-9557-d867a77a70e8_origin.pdf b/alanguagemodelbasedgenerativeclassifierforsentenceleveldiscourseparsing/e283716d-d1d4-4c2d-9557-d867a77a70e8_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..97c94fd5fd8f8d960ec66c42143db7a4115b2c91 --- /dev/null +++ b/alanguagemodelbasedgenerativeclassifierforsentenceleveldiscourseparsing/e283716d-d1d4-4c2d-9557-d867a77a70e8_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8494a4047ebb021740ab3bf5eb20cbac8e9539b716b66375b6d01a3831340c4f +size 536720 diff --git a/alanguagemodelbasedgenerativeclassifierforsentenceleveldiscourseparsing/full.md b/alanguagemodelbasedgenerativeclassifierforsentenceleveldiscourseparsing/full.md new file mode 100644 index 0000000000000000000000000000000000000000..9b313c4bb796cf63a93bf345d35ef84c847f3c37 --- /dev/null +++ b/alanguagemodelbasedgenerativeclassifierforsentenceleveldiscourseparsing/full.md @@ -0,0 +1,434 @@ +# A Language Model-based Generative Classifier for Sentence-level Discourse Parsing + +Ying Zhang, Hidetaka Kamigaito and Manabu Okumura + +Tokyo Institute of Technology + +{zhang, kamigaito, oku}@lr.pi.titech.ac.jp + +# Abstract + +Discourse segmentation and sentence-level discourse parsing
play important roles in various NLP tasks that need to consider textual coherence. Despite recent achievements in both tasks, there is still room for improvement due to the scarcity of labeled data. To solve the problem, we propose a language model-based generative classifier (LMGC) that uses more information from labels by treating the labels as an input while enhancing label representations by embedding descriptions for each label. Moreover, since this enables LMGC to prepare representations for labels unseen in the pre-training step, we can effectively use a pre-trained language model in LMGC. Experimental results on the RST-DT dataset show that our LMGC achieved the state-of-the-art $\mathrm{F}_1$ score of 96.72 in discourse segmentation. It further achieved the state-of-the-art relation $\mathrm{F}_1$ scores of 84.69 with gold EDU boundaries and 81.18 with automatically segmented boundaries, respectively, in sentence-level discourse parsing. + +# 1 Introduction + +Textual coherence is essential for writing a natural language text that is comprehensible to readers. To recognize the coherent structure of a natural language text, Rhetorical Structure Theory (RST) is applied to describe an internal discourse structure for the text as a constituent tree (Mann and Thompson, 1988). A discourse tree in RST consists of elementary discourse units (EDUs), spans that describe recursive connections between EDUs, and nuclearity and relation labels that describe relationships for each connection. + +Figure 1 (a) shows an example RST discourse tree. A span including one or more EDUs is a node of the tree. Given two adjacent non-overlapping spans, their nuclearity can be either nucleus or satellite, denoted by N and S, where the nucleus represents a more salient or essential piece of information than the satellite.
Furthermore, a relation label, such as Attribution and Elaboration, is used to describe the relation between the given spans (Mann and Thompson, 1988; Carlson and Marcu, 2001). To build such trees, RST parsing consists of discourse segmentation, a task to detect EDU boundaries in a given text, and discourse parsing, a task to link spans for detected EDUs. + +![](images/66454131567c8ec2bf64f3f4fed25a71735c2ec971056a71ab9deadd4770918e.jpg) +Figure 1: An example discourse tree structure. Panel (b) shows the linearized discourse tree: (Attribution (Elaboration (N (N We've got a lot) (S to do, )S)N Elaboration (She acknowledged .)S)Attribution + +In this paper, we focus on discourse segmentation and sentence-level discourse parsing, which are indispensable in RST parsing (Joty et al., 2013; Feng and Hirst, 2014a; Joty et al., 2015; Wang et al., 2017; Kobayashi et al., 2020) and are applicable to many downstream tasks, such as machine translation (Guzmán et al., 2014; Joty et al., 2017) and sentence compression (Sporleder and Lapata, 2005). + +In discourse segmentation, Carlson et al. (2001) proposed a method for using lexical information and syntactic parsing results. Many researchers (Fisher and Roark, 2007; Xuan Bach et al., 2012; Feng and Hirst, 2014b) utilized these clues as features in a classifier, although automatic parsing errors degraded segmentation performance. To avoid this problem, Wang et al. (2018b) used BiLSTM-CRF (Huang et al., 2015) to handle an input without these clues in an end-to-end manner. Lin et al. (2019) jointly performed discourse segmentation and sentence-level discourse parsing in their pointer-network-based model. They also introduced multi-task learning for both tasks and reported the state-of-the-art results for discourse segmentation and sentence-level discourse parsing in terms of $\mathrm{F}_1$ scores. Despite these achievements, there is still room for improvement for both tasks due to the scarcity of labeled data.
It is important to extract more potential information from the current dataset for further performance improvement. + +Under this motivation, in this research, we propose a language model-based generative classifier (LMGC) as a reranker for both discourse segmentation and sentence-level discourse parsing. LMGC can jointly predict text and label probabilities by treating a text and its labels as a single sequence, as in Figure 1 (b). Therefore, different from conventional methods, LMGC can use more information from labels by treating the labels as an input. Furthermore, LMGC can enhance label representations by embedding descriptions of each label defined in the annotation manual (Carlson and Marcu, 2001), which allows us to use a pre-trained language model such as MPNet (Song et al., 2020) effectively, since representations are already available for labels that were unseen in the pre-training step. + +Experimental results on the RST-DT dataset (Carlson et al., 2002) show that LMGC can achieve the state-of-the-art scores in both discourse segmentation and sentence-level discourse parsing. LMGC utilizing our enhanced label embeddings achieves the best $\mathrm{F}_1$ score of 96.72 in discourse segmentation. Furthermore, in sentence-level discourse parsing, LMGC utilizing our enhanced relation label embeddings achieves the best relation $\mathrm{F}_1$ scores of 84.69 with gold EDU boundaries and 81.18 with automatically segmented boundaries, respectively. + +# 2 Related Work + +Discourse segmentation is a fundamental task for building an RST discourse tree from a text. Carlson et al. (2001) proposed a method for using lexical information and syntactic parsing results for detecting EDU boundaries in a sentence. Fisher and Roark (2007); Xuan Bach et al. (2012); Feng and Hirst (2014b) utilized these clues as features in a classifier, while Wang et al.
(2018b) utilized BiLSTM-CRF (Huang et al., 2015) in an end-to-end manner to avoid performance degradation caused by syntactic parsing errors. + +Sentence-level discourse parsing is also an important task for parsing an RST discourse tree, as used in many RST parsers (Joty et al., 2013; Feng and Hirst, 2014a; Joty et al., 2015; Wang et al., 2017; Kobayashi et al., 2020). Recently, Lin et al. (2019) tried to jointly perform discourse segmentation and sentence-level discourse parsing with pointer-networks and achieved the state-of-the-art $\mathrm{F}_1$ scores in both discourse segmentation and sentence-level discourse parsing. + +In spite of the performance improvement of these models, the restricted number of labeled RST discourse trees is still a problem. In the discourse segmentation and parsing tasks, most prior work is based on discriminative models, which learn a mapping from input texts to predicted labels. Thus, there still remains room for improving model performance by considering the mapping from predictable labels to input texts to exploit more label information. To consider such information in a model, Mabona et al. (2019) introduced a generative model-based parser, RNNG (Dyer et al., 2016), for document-level RST discourse parsing. Different from our LMGC, this model unidirectionally predicts action sequences. + +In this research, we model LMGC for the discourse segmentation and sentence-level discourse parsing tasks. LMGC utilizes a BERT-style bidirectional Transformer encoder (Devlin et al., 2019) to avoid the prediction bias caused by using different decoding directions. Since LMGC is based on generative models, it can jointly consider an input text and its predictable labels, and map the embeddings of both input tokens and labels onto the same space. Due to this characteristic, LMGC can effectively use the label information by constructing label embeddings from the description of a label definition (Carlson and Marcu, 2001).
Furthermore, recent strong pre-trained models such as MPNet (Song et al., 2020) are available for any input tokens in LMGC. + +# 3 Base Models + +Our LMGC reranks the results from a conventional discourse segmenter and parser, which can be constructed as discriminative models. In this section, we explain these base models and introduce our mathematical notations. + +# 3.1 Discourse Segmenter + +In discourse segmentation, given an input text $\pmb{x} = \{x_{1},\dots ,x_{n}\}$ , where $x_{i}$ is a word, a segmenter detects EDUs $\pmb {e} = \{e_1,\dots ,e_m\}$ from $\pmb{x}$ . Since there is no overlap or gap between EDUs, discourse segmentation can be considered as a kind of sequential labeling task, which assigns labels $l = \{l_1,\dots ,l_n\}$ , where each $l_{i}\in \{0,1\}$ indicates whether the word is the start of an EDU or not. + +![](images/c6bb3d60767558688e466d239af2dd1db8d4fa0ec22f0e6ebd2fca90416fc88f.jpg) +Figure 2: Overview of our Language Model-based Generative Classifier (LMGC). + +By using a discriminative model, such as BiLSTM-CRF (Wang et al., 2018b) and pointer-networks (Lin et al., 2019), the probability of predicting EDUs from $x$ can be $P(l|x)$ or $P(e|x)$ . Because of its simple structure and extensibility, we choose BiLSTM-CRF as our base model for discourse segmentation. In BiLSTM-CRF, $P(l|x)$ is formulated as follows: + +$$ +P (\boldsymbol {l} | \boldsymbol {x}) = \frac {\prod_ {t = 1} ^ {n} \psi_ {t} \left(l _ {t} , l _ {t - 1} , h\right)}{\sum_ {l ^ {\prime} \in Y} \prod_ {t = 1} ^ {n} \psi_ {t} \left(l _ {t} ^ {\prime} , l _ {t - 1} ^ {\prime} , h\right)}, \tag {1} +$$ + +where $\psi_t(l_t, l_{t-1}, h) = \exp(W^T h_t + b)$ is the potential function, $h_t$ is the hidden state at time step $t$ , $W$ is a weight matrix, $b$ is a bias term, and $Y$ is the set of possible label sequences. + +We pass the top- $k$ Viterbi results of BiLSTM-CRF, scored by Eq.(1), to our LMGC, as described in Section 4.
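To make the top-$k$ candidate generation concrete, here is a hedged sketch. It scores binary EDU-boundary label sequences with unnormalized emission-plus-transition scores in the spirit of Eq. (1) and enumerates the top $k$ by brute force; a real segmenter would use $k$-best Viterbi, and the emission/transition numbers below are made up for illustration, not the paper's trained model.

```python
import itertools

def sequence_score(labels, emissions, transitions):
    """Unnormalized score of one label sequence: per-step emission scores
    plus label-transition scores (cf. the numerator of Eq. 1)."""
    score = emissions[0][labels[0]]
    for t in range(1, len(labels)):
        score += transitions[labels[t - 1]][labels[t]] + emissions[t][labels[t]]
    return score

def top_k_sequences(emissions, transitions, k):
    """Brute-force top-k label sequences (1 = start of an EDU, 0 = inside).
    Exhaustive search is fine for a toy sentence; k-best Viterbi scales."""
    n = len(emissions)
    scored = [(sequence_score(seq, emissions, transitions), seq)
              for seq in itertools.product((0, 1), repeat=n)]
    scored.sort(key=lambda pair: -pair[0])
    return scored[:k]

# Toy 4-token sentence with hypothetical scores.
emissions = [[0.1, 2.0], [1.5, 0.2], [0.3, 1.8], [1.2, 0.4]]
transitions = [[0.5, 0.1], [0.2, -0.3]]
top3 = top_k_sequences(emissions, transitions, k=3)
```

The top-scoring candidates (here, `top3`) play the role of the segmentation hypotheses that LMGC reranks in Section 4.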
+ +# 3.2 Discourse Parser + +In discourse parsing, given an input text $\pmb{x}$ and its EDUs $e$ , we can build a binary tree $p = \{p_1, \dots, p_{2n-1}\}$ , where each node $p_i \in p$ has three kinds of labels: span $s_i$ , nuclearity $u_i$ , and relation $r_i$ . The sequences of span $s$ and nuclearity $u$ can be predicted simultaneously, as in 2-stage Parser (Wang et al., 2017), or span $s$ can be predicted in advance for labeling nuclearity $u$ and relation $r$ , as in pointer-networks (Lin et al., 2019) and span-based Parser (Kobayashi et al., 2020). Because of its better performance, we choose 2-stage Parser as our base model for sentence-level discourse parsing. 2-stage Parser extracts several features and does classification with SVMs in two stages. In the first stage, it identifies the span and nuclearity simultaneously to construct a tree based on the transition-based system with four types of actions: Shift, Reduce-NN, Reduce-NS, and Reduce-SN. In the second stage, for a given node $p_i$ , $r_i$ is + +predicted as the relation between the left and right children nodes of $p_i$ by using features extracted from $p_i$ and its children nodes. In spite of its limited features, it achieves the best results compared with pointer-networks and span-based Parser. Since 2-stage Parser utilizes SVMs, we normalize the action scores and inherit top- $k$ beam search results of 2-stage Parser for LMGC to perform discourse parsing. + +# 4 Language Model-based Generative Classifier (LMGC) + +In this section, we introduce our generative classifier, LMGC, that utilizes a masked and permuted language model to compute sequence probabilities in both discourse segmentation and sentence-level discourse parsing tasks. More specifically, as we mention in Section 5, we can utilize our LMGC in three tasks, (a) discourse segmentation, (b) sentence-level discourse parsing with gold segmentation, and (c) sentence-level discourse parsing with automatic segmentation. 
Figure 2 shows the overview of our LMGC for the whole task (c). As shown in the figure, the prediction process in LMGC is the following. We assume that, in task (c), discourse segmentation and sentence-level discourse parsing are performed in a pipeline manner with models trained for tasks (a) and (b). + +1. Predict top- $k_{s}$ EDU segmentations $\{e_1,\dots ,e_{k_s}\}$ from a given sentence $\pmb{x}$ with the base discourse segmenter, described in Section 3.1. +2. Compute joint probability $P(\pmb {x},\pmb {e}_i)$ and select the best segmentation $\pmb{e}$ from $\{\pmb {e}_1,\dots ,\pmb{e}_{k_s}\}$ with a language model, as we describe below. +3. Parse and rank top- $k_{p}$ trees $\{\pmb{p}_{1},\dots ,\pmb{p}_{k_{p}}\}$ from $x$ and best segmentation $\pmb{e}$ with the base discourse parser, described in Section 3.2. +4. Compute joint probability $P(\pmb{x}, \pmb{e}, \pmb{p}_j)$ to select the best tree from $\{\pmb{p}_1, \dots, \pmb{p}_{k_p}\}$ with a language model, as we describe below. + +In task (a), we apply Step 2 to predict the best segmentation after Step 1. In task (b), we skip Steps 1 and 2, and apply just Steps 3 and 4 for gold segmentation to yield the best parse tree. + +# 4.1 Tree Representations + +To calculate joint probabilities for a discourse tree with a language model, we need to represent a tree as a linear form, like Figure 1 (b). Since there are several predictable label sets in discourse segmentation and parsing tasks, as shown in Figure 3, we prepare linearized forms for each label set. + +In discourse segmentation, we can consider joint probability $P(\pmb{x}, \pmb{e})$ for a sequence with inserting a symbol, [EDU], at an EDU boundary (Figure 3 (a)). In discourse parsing, a discourse tree is represented as a sequence with several kinds of label sets: span labels $s$ , nuclearity labels $u$ including span labels, and relation labels $r$ including span and nuclearity labels (Figures 3 (b)-(d)). 
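As an illustration of this bracketing, the following sketch linearizes a toy nuclearity-labeled tree in the style of Figure 1 (b); the tuple encoding of spans is our own illustrative choice, not the paper's data structure.

```python
def linearize(span):
    """Turn a labeled discourse (sub)tree into the bracketed string of
    Section 4.1: each span opens with '(L' and closes with ')L'."""
    label, payload = span
    # A payload is either an EDU string (leaf) or a list of child spans.
    body = payload if isinstance(payload, str) else " ".join(
        linearize(child) for child in payload)
    return f"({label} {body} ){label}"

# Toy tree for "We've got a lot / to do, / she acknowledged." with
# nuclearity labels only (cf. Figure 1 (b)).
tree = [("N", [("N", "We've got a lot"), ("S", "to do,")]),
        ("S", "she acknowledged .")]
sequence = " ".join(linearize(s) for s in tree)
```

Running this yields the bracketed nuclearity sequence for the example sentence, mirroring the "(N ... )N (S ... )S" pattern described above.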
To investigate the effectiveness of each label set in the reranking step, we consider $P(\pmb{x}, \pmb{e}, s)$ , $P(\pmb{x}, \pmb{e}, \pmb{u})$ , and $P(\pmb{x}, \pmb{e}, r)$ for each label set to represent $P(\pmb{x}, \pmb{e}, \pmb{p})$ in this paper. To build a sequence, we combine each label in a tree with brackets to imply the boundary for the label. For example, "(N" and ")N" stand for the start and end of a nucleus EDU. For a node $p_i$ of the tree, $r_i$ describes the relation between its child nodes, leading to $r_i$ of leaf nodes being "Null". When the child nodes of $p_i$ are nucleus and satellite, we assign label "Span" to the nucleus child node of $p_i$ and label $r_i$ to the satellite child node of $p_i$ , respectively. When the child nodes of $p_i$ are both nucleus, we assign label $r_i$ to both child nodes of $p_i$ . + +For simpler illustration, in Figure 1 (b), we show the linearized discourse tree only with nuclearity and relation labels, since the nuclearity labels can also show span and EDU boundary labels. "Null" labels for leaf nodes are also omitted in the figure. + +# 4.2 Joint Probabilities + +To calculate the joint probabilities in the last subsection with a language model, we consider probability $P(\pmb{z})$ for a sequence $\pmb{z} = (z_{1},\dots ,z_{a})$ , which corresponds to the probabilities for the sequential representations $P(\pmb{x},\pmb{e})$ , $P(\pmb{x},\pmb{e},\pmb{s})$ , $P(\pmb{x},\pmb{e},\pmb{u})$ , and $P(\pmb{x},\pmb{e},\pmb{r})$ . + +According to Song et al. (2020), masked and permuted language modeling (MPNet) combines the advantages of masked language modeling and permuted language modeling while overcoming their issues. Compared with BERT (Devlin et al., 2019) and XLNet (Yang et al., 2019), MPNet considered more information about tokens and positions, and achieved better results for several downstream tasks (GLUE, SQuAD, etc.).
Taking into account its better performance, we choose pre-trained MPNet (Song et al., 2020) as our language model. Because considering all possible inter-dependences between the $z_{t}$ is intractable, we follow the decomposition of pseudo-log-likelihood scores (PLL) (Salazar et al., 2020) in the model. Thus, we decompose and calculate logarithmic $P(z)$ as follows: + +$$ +\log P (\boldsymbol {z}; \theta) \approx P L L (\boldsymbol {z}; \theta) = \sum_ {t = 1} ^ {a} \log P \left(z _ {t} \mid z _ {< t}, z _ {> t}, M _ {t}; \theta\right), \tag {2} +$$ + +where $z_{< t}$ is the first sub-sequence $(z_{1},\dots ,z_{t - 1})$ in $\pmb{z}$ and $z_{>t}$ is the latter sub-sequence $(z_{t + 1},\dots ,z_{a})$ in $\pmb{z}$ . $M_t$ denotes the mask token [MASK] at position $t$ . $P(z_{t} \mid z_{< t},z_{>t},M_{t};\theta)$ is computed by two-stream self-attention (Yang et al., 2019). At inference time, we select $\pmb{z}$ based on $\frac{1}{a} PLL(\pmb{z};\theta)$ . + +This model converts $\mathbf{z}$ into continuous vectors $\mathbf{w} = \{w_1, \dots, w_a\}$ through the embedding layer. Multi-head attention layers further transform the vectors to predict each $z_t$ in the softmax layer. + +Since pre-trained MPNet does not consider EDU, span, nuclearity, and relation labels in the pre-training step, we need to construct the vectors $\boldsymbol{w}$ for these labels from the pre-trained parameters to enhance the prediction performance. We describe the details of this method in the next subsection. + +# 4.3 Label Embeddings + +In LMGC, we embed input text tokens and labels in the same vector space (Wang et al., 2018a) of the embedding layer. Under this setting, to deal with labels unseen in the pre-trained model, we compute the label embeddings by utilizing token embeddings in the pre-trained model.
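Building an embedding for an unseen label from pre-trained token embeddings (the Average strategy described next) can be sketched as follows; the tiny embedding table and the label definition text are invented for illustration and are not MPNet's actual vocabulary or the RST-DT manual's wording.

```python
def average_label_embedding(definition, token_embeddings):
    """Average strategy: represent an unseen label by the mean of the
    pre-trained embeddings of the tokens in its manual definition."""
    vecs = [token_embeddings[t] for t in definition.lower().split()
            if t in token_embeddings]
    dim = len(vecs[0])
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

# Hypothetical 4-dimensional token embeddings.
toy_embeddings = {
    "attribution": [1.0, 0.0, 0.0, 0.0],
    "of": [0.0, 2.0, 0.0, 0.0],
    "speech": [1.0, 2.0, 0.0, 4.0],
}
label_vec = average_label_embedding("attribution of speech", toy_embeddings)
```

The resulting vector lives in the same space as the token embeddings, which is what lets the label tokens be scored by the language model alongside ordinary words.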
+ +We try to combine the input text with four kinds of labels, EDU, span, nuclearity, and relation labels, which were defined and clearly described in the annotation document (Carlson and Marcu, 2001) (see Appendix B for the descriptions). Taking into account the descriptions for the labels as additional information, we adopt two different methods, Average and Concatenate, for representing the label embeddings. + +(a) Sentence with EDU boundary labels + +$$ +e_1\_\text{[EDU]}\_e_2\_\text{[EDU]}\_e_3\_\text{[EDU]} +$$ + +(b) Sentence with span labels + +$$ +(\text{Span}\_(\text{Span}\_e_1\_)\text{Span}\_(\text{Span}\_e_2\_)\text{Span}\_)\text{Span}\_(\text{Span}\_e_3\_)\text{Span} +$$ + +(c) Sentence with nuclearity labels + +$$ +(\text{N}\_(\text{N}\_e_1\_)\text{N}\_(\text{S}\_e_2\_)\text{S}\_)\text{N}\_(\text{S}\_e_3\_)\text{S} +$$ + +(d) Sentence with relation labels + +$$ +(\text{Span}\_(\text{Span}\_e_1\_)\text{Span}\_(\text{Elaboration}\_e_2\_)\text{Elaboration}\_)\text{Span}\_(\text{Attribution}\_e_3\_)\text{Attribution} +$$ + +Figure 3: Example joint representations of an input text and labels for the sentence We've got a lot to do, she acknowledged. $e_i$ represents the corresponding EDU, and "_" is whitespace. + +Average: We average the embeddings of tokens that appear in the definition of a label and assign the averaged embedding to the label. + +Concatenate: We concatenate a label name with its definition and insert the concatenated text at the end of sequence $z$ ,2 so that the label embedding can be captured by self-attention mechanisms (Vaswani et al., 2017).
Note that we do not try it in the parsing task, because the length of the sequence increases in proportion to the number of labels, which causes a shortage of memory space. + +# 4.4 Objective Function + +Because the search space for sequences of a text and its labels is exponentially large, instead of considering all possible sequences $Z(\pmb{x})$ for $\pmb{x}$ , we assume $Z^{\prime}(\pmb{x})$ as a subset of sequences based on the top- $k$ results from the base model. We denote $z_{g} \in Z(\pmb{x})$ as the correct label sequence of $\pmb{x}$ . To keep the pre-trained information in MPNet, we continue masking and permutation for training the model parameter $\theta$ . Assuming that $O_{a}$ lists all permutations of the set $\{1, 2, \dots, a\}$ , the number of elements in $O_{a}$ satisfies $|O_{a}| = a!$ . For $z \in Z^{\prime}(\pmb{x}) \cup \{z_{g}\}$ , we train the model parameter $\theta$ in LMGC by maximizing the following expectation over all permutations: + +$$ +\mathbb {E} _ {\boldsymbol {o} \in O _ {a}} \sum_ {t = c + 1} ^ {a} \Big[ I _ {\boldsymbol {z}} \log P \left(z _ {o _ {t}} \mid z _ {o _ {< t}}, M _ {o _ {> c}}; \theta\right) + \left(1 - I _ {\boldsymbol {z}}\right) \log \left(1 - P \left(z _ {o _ {t}} \mid z _ {o _ {< t}}, M _ {o _ {> c}}; \theta\right)\right) \Big], \tag {3} +$$ + +where $I_{z}$ is the indicator function, defined as follows: + +$$ +I _ {z} := \begin{cases} 1 & \text {if } z = z _ {g} \\ 0 & \text {if } z \neq z _ {g} \end{cases}. \tag {4} +$$ + +$c$ , denoting the number of non-predicted tokens $z_{o_{\leq c}}$ , is set manually. $M_{o_{>c}}$ denotes the mask tokens [MASK] at positions $o_{>c}$ . $P(z_{o_t} \mid z_{o_{<t}}, M_{o_{>c}}; \theta)$ is computed by two-stream self-attention (Yang et al., 2019).
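Combining Eq. (2) with the inference rule, candidate reranking by length-normalized PLL can be sketched as follows. The `toy_masked_logprob` scorer below is a hypothetical stand-in for an actual MPNet forward pass with position $t$ masked, and the reference linearization is invented for the example.

```python
def pll_score(tokens, masked_logprob):
    """Pseudo-log-likelihood (cf. Eq. 2): mask each position in turn and sum
    the model's log-probability of the true token given all other tokens."""
    return sum(masked_logprob(tokens, t) for t in range(len(tokens)))

def rerank(candidates, masked_logprob):
    """Select the candidate sequence with the highest length-normalized PLL,
    as LMGC does at inference time."""
    return max(candidates, key=lambda z: pll_score(z, masked_logprob) / len(z))

# Stand-in scorer: rewards tokens that match a reference linearization.
REFERENCE = ("We've", "got", "a", "lot", "[EDU]", "to", "do,",
             "[EDU]", "she", "acknowledged")

def toy_masked_logprob(tokens, t):
    return 0.0 if t < len(REFERENCE) and tokens[t] == REFERENCE[t] else -1.0

candidates = [
    REFERENCE,
    ("We've", "got", "a", "lot", "to", "do,", "[EDU]", "she", "acknowledged"),
]
best = rerank(candidates, toy_masked_logprob)
```

With a real masked language model, `masked_logprob` would return the log-softmax probability of the gold token at the masked position; everything else in the loop stays the same.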
+ +# 5 Experiments + +In this section, we present our experiments on three tasks: (a) discourse segmentation, (b) sentence-level discourse parsing with gold segmentation, and (c) sentence-level discourse parsing with automatic segmentation. + +# 5.1 Experimental Settings + +# 5.1.1 Datasets + +Following previous studies (Wang et al., 2017, 2018b; Lin et al., 2019), we used the RST Discourse Treebank (RST-DT) corpus (Carlson et al., 2002) as our dataset. This corpus contains 347 and 38 documents for the training and test datasets, respectively. We divided the training dataset into two parts, following the module RSTFinder3 (Heilman and Sagae, 2015): 307 documents were used to train models, and the remaining 40 documents were used as the validation dataset. + +We split the documents into sentences while ignoring footnote sentences, as in Joty et al. (2012). There are two possible problematic cases for the split sentences: (1) The sentence consists of exactly one EDU, and so it has no tree structure. (2) The tree structure of the sentence crosses over into other sentences. Following the setting of Lin et al. (2019), we did not filter any sentences in task (a). In task (b), we filtered sentences of both cases. In task (c), we filtered sentences of case (2). Table 1 shows the number of available sentences for the three different tasks. + +3https://github.com/EducationalTestingService/rstfinder
| Task | Train | Valid | Test |
| --- | --- | --- | --- |
| (a) Segmentation | 6,768 | 905 | 991 |
| (b) Parsing w/ gold segmentation | 4,524 | 636 | 602 |
| (c) Parsing w/ auto segmentation | - | 861 | 951 |
+ +Table 1: The number of sentences for each task. + +# 5.1.2 Evaluation Metrics + +In task (a), we evaluated the segmentation in micro-averaged precision, recall, and $\mathrm{F_1}$ score with respect to the start position of each EDU. The position at the beginning of a sentence was ignored. In task (b), we evaluated the parsing in micro-averaged $\mathrm{F_1}$ score with respect to span, nuclearity, and relation. In task (c), for parsing with automatic segmentation, we evaluated both the segmentation and parsing in micro-averaged $\mathrm{F_1}$ score. + +We used paired bootstrap resampling (Koehn, 2004) for the significance test in all tasks when comparing two systems. + +# 5.1.3 Compared Methods + +As our proposed methods, we used $\mathrm{LMGC}_e$ , $\mathrm{LMGC}_s$ , $\mathrm{LMGC}_u$ , and $\mathrm{LMGC}_r$ , which respectively model the probabilities $P(\boldsymbol{x}, \boldsymbol{e})$ , $P(\boldsymbol{x}, \boldsymbol{e}, s)$ , $P(\boldsymbol{x}, \boldsymbol{e}, u)$ , and $P(\boldsymbol{x}, \boldsymbol{e}, r)$ with initialized label embeddings. We denote LMGC with Average and Concatenate label embeddings as Enhance and Extend, respectively. + +We used the base discourse segmenter and parser described in Section 3 as our baselines. We reproduced the base discourse segmenter BiLSTM-CRF $^4$ (Wang et al., 2018b). Because BiLSTM-CRF adopted the hidden states of ELMo (Peters et al., 2018) as word embeddings, for fairness we also tried the last hidden state of MPNet as the word embeddings for BiLSTM-CRF. We retrained the segmenter in five runs, and the experimental results are shown in Appendix C. The publicly shared BiLSTM-CRF by Wang et al. (2018b) is our base segmenter in the following experiments. + +As for the base parser, we retrained two models, 2-stage $\mathsf{Parser}^5$ (Wang et al., 2017) and span-based $\mathsf{Parser}^6$ (Kobayashi et al., 2020). Different from the setting of Lin et al.
(2019), we retrained 2-stage Parser at the sentence level rather than at the document level. Since the experimental results show that our retrained 2-stage Parser achieved the highest $\mathrm{F}_1$ scores among several parsers (see Appendix C), we selected it as our base parser in the following experiments.

Furthermore, to compare LMGC with a unidirectional generative model (Mabona et al., 2019), we constructed another baseline method which utilizes a GPT-2 (Radford et al., 2019) based reranker. This method follows a unidirectional language model-based generative parser (Choe and Charniak, 2016), and considers the top-$k$ results from the base model with an add-1 version of the infinilog loss (Ding et al., 2020) during training. We denote this baseline as GPT2LM hereafter. GPT2LM models $P(\boldsymbol{x}, \boldsymbol{e})$ for task (a) and $P(\boldsymbol{x}, \boldsymbol{e}, \boldsymbol{r})$ for tasks (b) and (c), respectively. Both LMGC and GPT2LM are ensembles of 5 models with different random seeds. See Appendix D for a complete list of hyperparameter settings.

# 5.1.4 Number of Candidates

As described in Section 4, LMGC requires the parameters $k_{s}$ and $k_{p}$ for the number of candidates in the steps for the different tasks. We tuned $k_{s}$ and $k_{p}$ based on the performance on the validation dataset.

In task (a), $k_{s}$ was set to 20 and 5 for training and prediction, respectively. In task (b), $k_{p}$ was set to 20 and 5 for training and prediction, respectively. In task (c), $k_{s}$ and $k_{p}$ were both set to 5 for prediction. The set of parameters was similarly tuned for GPT2LM on the validation dataset. We list all of them in Appendix E.

# 5.2 Results

# 5.2.1 Discourse Segmentation

Table 2 shows the experimental results for the discourse segmentation task. Oracle indicates the upper bound score that can be achieved with the candidates generated by the base model.
To compute the Oracle score, we assume a prediction is correct if the candidates generated by the base model include the correct answer.

$\mathrm{LMGC}_e$ significantly outperformed $\mathrm{GPT2LM}_e$. We think the reason is similar to what Zhu et al. (2020) reported: BERT-based bidirectional Transformer encoders encode more rhetorical features than GPT-2-based unidirectional Transformer encoders.
| Model | Precision | Recall | F1 |
| --- | --- | --- | --- |
| Oracle | 97.73 | 98.67 | 98.20 |
| Pointer-networks* | 93.34 | 97.88 | 95.55 |
| Base segmenter | 92.22 | 95.35 | 93.76 |
| GPT2LM$_e$ | 94.05 | 95.72 | 94.88 |
| LMGC$_e$ | 95.31 | 97.56 | 96.43† |
| Enhance$_e$ | **95.54** | **97.93** | **96.72**† |
| Extend$_e$ | 95.05 | 97.86 | 96.44† |
+ +Table 2: Results for the discourse segmentation task. * indicates the reported score by Lin et al. (2019). The best score in each metric among the models is indicated in bold. † indicates that the score is significantly superior to GPT2LM with a p-value < 0.01. + +
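The † significance markers in Table 2 come from the paired bootstrap resampling test (Koehn, 2004) described in Section 5.1.2. A minimal sketch of that test, simplified to per-sentence scores; the function name and sampling details are ours, not from the paper's code:

```python
import random

def paired_bootstrap(scores_a, scores_b, n_samples=1000, seed=0):
    """Paired bootstrap: resample sentences with replacement and count how
    often system A's mean score beats system B's on the same resample.

    scores_a / scores_b: per-sentence scores for the two systems, aligned
    by sentence index. Returns the fraction of resamples where A wins;
    1 minus this fraction approximates the p-value for "A > B".
    """
    assert len(scores_a) == len(scores_b)
    rng = random.Random(seed)
    n = len(scores_a)
    wins = 0
    for _ in range(n_samples):
        idx = [rng.randrange(n) for _ in range(n)]  # one bootstrap resample
        mean_a = sum(scores_a[i] for i in idx) / n
        mean_b = sum(scores_b[i] for i in idx) / n
        if mean_a > mean_b:
            wins += 1
    return wins / n_samples
```

In the paper's setting the resampled quantity would be the micro-averaged F1 recomputed per resample; the per-sentence mean above is a simplification of the same idea.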
| Model | Span | Nuclearity | Relation |
| --- | --- | --- | --- |
| Oracle | 98.67 | 95.88 | 90.07 |
| Pointer-networks* | 97.44 | 91.34 | 81.70 |
| Base parser | 97.92 | 92.07 | 82.06 |
| GPT2LM$_r$ | 96.35 | 88.11 | 77.86 |
| LMGC$_s$ | 98.23‡ | 92.31 | 82.22 |
| Enhance$_s$ | 98.27‡ | 92.39 | 82.42 |
| LMGC$_u$ | **98.31**‡ | **94.00**† | 83.63† |
| Enhance$_u$ | **98.31**† | 93.88† | 83.56† |
| LMGC$_r$ | 98.00 | 93.09† | 83.99† |
| Enhance$_r$ | 98.12 | 93.13† | **84.69**† |
Table 3: Results for the sentence-level discourse parsing task with gold segmentation. * indicates the reported score by Lin et al. (2019). The best score in each metric among the models is indicated in bold. † and ‡ indicate that the score is significantly superior to the base parser with a p-value < 0.01 and < 0.05, respectively.

Using Average label embeddings is more helpful than using Concatenate label embeddings for $\mathrm{LMGC}_e$. Enhance$_e$ achieved the state-of-the-art $\mathrm{F}_1$ score of 96.72, which outperformed both the base segmenter and the pointer-networks.

# 5.2.2 Sentence-level Discourse Parsing

Gold Segmentation: Table 3 and Figures 4 and 5 show the experimental results for the sentence-level discourse parsing task with gold segmentation. In Table 3, $\mathrm{LMGC}_u$ achieved the highest span and nuclearity $F_1$ scores of 98.31 and 94.00, respectively. Enhance$_r$ achieved the state-of-the-art relation $F_1$ score of 84.69, which is significantly superior to the base parser. Although using Average label embeddings improved $\mathrm{LMGC}_r$, it provides no or only limited improvement for $\mathrm{LMGC}_u$ and $\mathrm{LMGC}_s$. We

![](images/95de5f1e20ba662d8e92efb1c736a736419f1c943d1495dc0df4ac9b81087403.jpg)
Figure 4: Performance of 2-stage Parser and Enhance$_r$ in the sentence-level discourse parsing task with gold segmentation. The hollow bars denote the number of different gold labels in the training dataset. Blue and red lines indicate the $\mathrm{F}_1$ scores of Enhance$_r$ and 2-stage Parser, respectively, for each relation label.

![](images/6137451489277d8ccc7c52e26db61175a24d2693d839eab5fd39f9682cc92753.jpg)
Figure 5: Confusion matrix for Enhance$_r$ in the sentence-level discourse parsing task with gold segmentation. Each cell shows the ratio of the number of instances with the predicted label (column) to the number of instances with the gold label (row).
guess that this difference is caused by the number of different kinds of labels in span, nuclearity, and relation. The performance of $\mathrm{GPT2LM}_r$ is even worse than that of the base parser. We think this is because we added the relation labels to the vocabulary of GPT-2 and resized the pre-trained word embeddings.

Figure 4 shows the comparison between the base parser and Enhance$_r$ with respect to each relation label. For most relation labels, Enhance$_r$ outperformed 2-stage Parser, except for the labels Explanation, Evaluation, and Topic-Comment. 2-stage Parser achieved an $\mathrm{F_1}$ score of 17.14 for the label Temporal, while Enhance$_r$ achieved an $\mathrm{F_1}$ score of 44.44 by reranking the parsing results from 2-stage Parser. Similarly large improvements with Enhance$_r$ can also be found for labels such as Contrast, Background, and Cause. Evidently, Enhance$_r$ tends to improve the performance for labels whose training data is limited.

Figure 5 shows a confusion matrix of Enhance$_r$ for each relation label. It shows that the relation labels Comparison, Cause, and Temporal were often predicted wrongly as Contrast, Joint, and Joint or Background, respectively, by Enhance$_r$, even though these labels each have at least 100 training instances. We guess this might be due to some similarities between those labels.

![](images/f84060426c6517d5612a64d8ce70c95a9c31b06fed4c2c176e0c55170fed8d1e.jpg)
(a) $\mathrm{LMGC}_r$

![](images/98b251c0f7f361d14f37324dd63504a411333fd90ec2d79e066e02694d7ce1cd.jpg)
(b) Enhance$_r$
Figure 6: t-SNE plot of relation label embeddings trained in $\mathrm{LMGC}_r$ and Enhance$_r$.

By using the t-SNE plot (Van der Maaten and Hinton, 2008), we visualize the trained relation label embeddings of $\mathrm{LMGC}_r$ and Enhance$_r$. Figures 6a and 6b show the results. Figure 6a shows a clearer diagonal that divides labels with parenthesis
| Model | Seg Span | Parse Span | Nuclearity | Relation |
| --- | --- | --- | --- | --- |
| Pointer-networks* | - | 91.75 | 86.38 | 77.52 |
| $\mathrm{Oracle}_{seg}$ | 98.24 | - | - | - |
| Base segmenter | 93.92 | - | - | - |
| GPT2LM$_e$ | 95.03 | - | - | - |
| LMGC$_e$ | 96.51 | - | - | - |
| Enhance$_e$ | **96.79** | - | - | - |
| Extend$_e$ | 96.48 | - | - | - |
| Oracle | - | 93.95 | 91.25 | 85.93 |
| Base parser | - | 93.53 | 88.08 | 78.75 |
| GPT2LM$_r$ | - | 92.02 | 84.20 | 74.49 |
| LMGC$_s$ | - | 93.96‡ | 88.46 | 79.25 |
| Enhance$_s$ | - | **94.00**† | 88.50 | 79.33 |
| LMGC$_u$ | - | 93.96† | **89.90**† | 80.33† |
| Enhance$_u$ | - | 93.92‡ | 89.74† | 80.22† |
| LMGC$_r$ | - | 93.65 | 89.08† | 80.57† |
| Enhance$_r$ | - | 93.73 | 89.16† | **81.18**† |
Table 4: Results for the sentence-level discourse parsing task with automatic segmentation. * indicates the reported score by Lin et al. (2019). The best score in each metric among the models in each block is indicated in bold. For a fair comparison of sentence-level discourse parsing, we used the discourse segmentation results of Enhance$_e$ as the input of the discourse parsing stage for all models. † and ‡ indicate that the score is significantly superior to the base parser with a p-value < 0.01 and < 0.05, respectively.

"(" from the ones with ")", while Figure 6b shows more distinct divisions between the labels.

Automatic Segmentation: Table 4 shows the experimental results for the sentence-level discourse parsing task with automatic segmentation. The second and third blocks in the table show the results for the first and second stages, discourse segmentation and sentence-level discourse parsing, respectively.

Enhance$_r$ achieved the highest relation $\mathrm{F_1}$ score of 81.18, a significant improvement of 2.43 points over the base parser. Enhance$_s$ and $\mathrm{LMGC}_u$ achieved the highest span and nuclearity $\mathrm{F_1}$ scores of 94.00 and 89.90, respectively. Since $\mathrm{LMGC}_*$ and Enhance$_*$ are the models trained in task (b), and Enhance$_e$ achieved an $\mathrm{F_1}$ score of 96.79 in discourse segmentation, it is not surprising that the tendency of these results is similar to that of sentence-level discourse parsing with gold segmentation.

# 6 Conclusion

In this research, we proposed a language model-based generative classifier, LMGC. Given the top-$k$ discourse segmentations or parses from the base model, LMGC, acting as a reranker, achieved state-of-the-art performance in both discourse segmentation and sentence-level discourse parsing. The experimental results also showed the potential of constructing label embeddings from token embeddings by using the label descriptions in the annotation manual.
In the future, we plan to apply LMGC to other diverse classification tasks.

# References

Lynn Carlson and Daniel Marcu. 2001. Discourse tagging reference manual. ISI Technical Report ISI-TR-545.
Lynn Carlson, Daniel Marcu, and Mary Ellen Okurovsky. 2002. RST Discourse Treebank LDC2002T07. Philadelphia: Linguistic Data Consortium.
Lynn Carlson, Daniel Marcu, and Mary Ellen Okurovsky. 2001. Building a discourse-tagged corpus in the framework of Rhetorical Structure Theory. In Proceedings of the Second SIGdial Workshop on Discourse and Dialogue.
Do Kook Choe and Eugene Charniak. 2016. Parsing as language modeling. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2331-2336, Austin, Texas. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Xiaoan Ding, Tianyu Liu, Baobao Chang, Zhifang Sui, and Kevin Gimpel. 2020. Discriminatively-tuned generative classifiers for robust natural language inference. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8189-8202, Online. Association for Computational Linguistics.
Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Recurrent neural network grammars. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 199-209, San Diego, California. Association for Computational Linguistics.
Vanessa Wei Feng and Graeme Hirst. 2014a.
A linear-time bottom-up discourse parser with constraints and post-editing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 511-521, Baltimore, Maryland. Association for Computational Linguistics.
Vanessa Wei Feng and Graeme Hirst. 2014b. Two-pass discourse segmentation with pairing and global features.
Seeger Fisher and Brian Roark. 2007. The utility of parse-derived features for automatic discourse segmentation. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 488-495, Prague, Czech Republic. Association for Computational Linguistics.
Francisco Guzmán, Shafiq Joty, Lluís Màrquez, and Preslav Nakov. 2014. Using discourse structure improves machine translation evaluation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 687-698, Baltimore, Maryland. Association for Computational Linguistics.
Michael Heilman and Kenji Sagae. 2015. Fast rhetorical structure theory discourse parsing. arXiv preprint arXiv:1505.02425.
Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging. arXiv preprint arXiv:1508.01991.
Shafiq Joty, Giuseppe Carenini, and Raymond Ng. 2012. A novel discriminative framework for sentence-level discourse analysis. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 904-915, Jeju Island, Korea. Association for Computational Linguistics.
Shafiq Joty, Giuseppe Carenini, Raymond Ng, and Yashar Mehdad. 2013. Combining intra- and multi-sentential rhetorical parsing for document-level discourse analysis. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 486-496, Sofia, Bulgaria. Association for Computational Linguistics.
Shafiq Joty, Giuseppe Carenini, and Raymond T. Ng. 2015.
CODRA: A novel discriminative framework for rhetorical analysis. Computational Linguistics, 41(3):385-435.
Shafiq Joty, Francisco Guzmán, Lluís Màrquez, and Preslav Nakov. 2017. Discourse structure in machine translation evaluation. Computational Linguistics, 43(4):683-722.
Dan Jurafsky. 2000. Speech & Language Processing. Pearson Education India.
Naoki Kobayashi, Tsutomu Hirao, Hidetaka Kamigaito, Manabu Okumura, and Masaaki Nagata. 2020. Top-down RST parsing utilizing granularity levels in documents. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 8099-8106.
Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 388-395, Barcelona, Spain. Association for Computational Linguistics.
Xiang Lin, Shafiq Joty, Prathyusha Jwalapuram, and M Saiful Bari. 2019. A unified linear-time framework for sentence-level discourse parsing. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4190-4200, Florence, Italy. Association for Computational Linguistics.
Amandla Mabona, Laura Rimell, Stephen Clark, and Andreas Vlachos. 2019. Neural generative rhetorical structure parsing. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2284-2295, Hong Kong, China. Association for Computational Linguistics.
William C. Mann and Sandra A. Thompson. 1988. Rhetorical structure theory: Toward a functional theory of text organization. Text, 8(3):243-281.
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 48-53, Minneapolis, Minnesota.
Association for Computational Linguistics. +Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics. +Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. +Julian Salazar, Davis Liang, Toan Q. Nguyen, and Katrin Kirchhoff. 2020. Masked language model scoring. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2699-2712, Online. Association for Computational Linguistics. +Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 464-468, New Orleans, Louisiana. Association for Computational Linguistics. + +Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and TieYan Liu. 2020. Mpnet: Masked and permuted pretraining for language understanding. arXiv preprint arXiv:2004.09297. +Caroline Sporleder and Mirella Lapata. 2005. *Discourse chunking and its application to sentence compression.* In *Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing*, pages 257-264, Vancouver, British Columbia, Canada. Association for Computational Linguistics. +Laurens Van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. Journal of machine learning research, 9(11). +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. 
Attention is all you need. In Advances in Neural Information Processing Systems (NIPS).
Guoyin Wang, Chunyuan Li, Wenlin Wang, Yizhe Zhang, Dinghan Shen, Xinyuan Zhang, Ricardo Henao, and Lawrence Carin. 2018a. Joint embedding of words and labels for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2321-2331, Melbourne, Australia. Association for Computational Linguistics.
Yizhong Wang, Sujian Li, and Houfeng Wang. 2017. A two-stage parsing method for text-level discourse analysis. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 184-188, Vancouver, Canada. Association for Computational Linguistics.
Yizhong Wang, Sujian Li, and Jingfeng Yang. 2018b. Toward fast and accurate neural discourse segmentation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 962-967, Brussels, Belgium. Association for Computational Linguistics.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
Ngo Xuan Bach, Nguyen Le Minh, and Akira Shimazu. 2012. A reranking model for discourse segmentation using subtree features. In Proceedings of the 13th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 160-168, Seoul, South Korea. Association for Computational Linguistics.
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019.
XLNet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237.
Zining Zhu, Chuer Pan, Mohamed Abdalla, and Frank Rudzicz. 2020. Examining the rhetorical capacities of neural language models. In Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 16-32.

# A Experimental Results of LMGC with Tree

Since the raw s-expression-style tree is longer than our joint representations with span, nuclearity, and relation, we transformed the raw tree into the sequence shown in Figure 7, where the nuclearity and relation labels are connected by colons. To construct the label embedding for $P(\boldsymbol{x}, \boldsymbol{e}, \boldsymbol{p})$, we combined the descriptions of the nuclearity and the relation (see the descriptions in Appendix B) and assigned the combination to the corresponding node. For example, the description of "(Attribution:S" is the start of a supporting or background piece of information, attribution, attribution represents both direct and indirect instances of reported speech.

(Span:N_ (Span:N_ $e_1$_ )_Span:N_ (Elaboration:S_ $e_2$_ )_Elaboration:S_ )_Span:N_ (Attribution:S_ $e_3$_ )_Attribution:S

Figure 7: Example joint representation of an input text with all tree labels for the sentence "We've got a lot to do," he acknowledged. $e_i$ represents the corresponding EDU, and "_" is whitespace.

$\mathrm{LMGC}_p$ models the joint probability $P(\boldsymbol{x}, \boldsymbol{e}, \boldsymbol{p})$ with initialized label embeddings. The experimental results of $\mathrm{LMGC}_p$ and Enhance$_p$ for the sentence-level discourse parsing task with gold segmentation are shown in Table 5. $\mathrm{LMGC}_p$ and Enhance$_p$ are ensembles of 5 models with different random seeds, although the training loss of Enhance$_p$ did not decrease in 2 of the 5 models.
| Model | Span | Nuclearity | Relation |
| --- | --- | --- | --- |
| LMGC$_p$ | 97.84 | 92.90 | 84.11 |
| Enhance$_p$ | 98.04 | 92.74 | 84.18 |
Table 5: Performances of $\mathrm{LMGC}_p$ and Enhance$_p$ in the sentence-level discourse parsing task with gold segmentation.

# B Label Descriptions

We list the label descriptions extracted from Carlson and Marcu (2001) in Table 6. For parsing symbols with the brackets "(" and ")", such as "(N" and ")_N", we inserted the position phrases the start of and the end of at the beginning of their label definitions. So the description of ")_N" is the end of a more salient or essential piece of information.

# C Experimental Results of the Reproduced Base Model

Table 7 shows the experimental results of BiLSTM-CRF in discourse segmentation, where the results of our reproduced BiLSTM-CRF are averaged over five runs. Table 8 shows the experimental results of different parsers in the sentence-level discourse parsing task with gold segmentation.

# D Hyperparameters

For LMGC, we used the source code shared in the public GitHub repository$^{10}$ of Song et al. (2020). We used the uploaded pre-trained MPNet and the same setup as illustrated in Table 9. 15% of the tokens were selected as predicted tokens and masked with the 8:1:1 replacement strategy. The relative positional embedding mechanism (Shaw et al., 2018) was utilized. Since the vocab we used is the same as that of BERT (Devlin et al., 2019), we used the symbol [SEP] to represent [EDU], and the symbols [unused#] (starting from 0) to represent parsing labels such as "(N" and "Attribution".

For GPT2LM, we used the source code shared in the public GitHub repository$^{11}$ (Ott et al., 2019). Following the steps in Choe and Charniak (2016), we utilized Eq. (5) (Jurafsky, 2000) to compute the joint distribution,

$$
P (\boldsymbol {x}, \boldsymbol {y}) = P (\boldsymbol {z}) = P (z_{1}, \dots, z_{a}) = \prod_{t = 1}^{a} P (z_{t} \mid z_{1}, \dots, z_{t - 1}), \tag{5}
$$

where $P(z_{t} \mid z_{1},\ldots ,z_{t - 1})$ is computed by GPT-2 (Radford et al., 2019). During inference, we selected $\boldsymbol{z}$ based on $\frac{1}{a}\log P(\boldsymbol{z})$.
An add-1 version of the infinilog loss (Ding et al., 2020) was utilized for training GPT2LM as follows:

$$
- \log f (\boldsymbol{z}) + \log \Big[ 1 + \sum_ {\boldsymbol{z}' \in Z' (\boldsymbol{x}),\, \boldsymbol{z}' \neq \boldsymbol{z}} f (\boldsymbol{z}') \Big], \tag {6}
$$
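As a sketch, Eq. (6) together with the length-normalized candidate scores $f(\boldsymbol{z})$ of Eq. (7) can be computed from candidate log-probabilities as follows; the helper names and the toy values are ours, not from the released code:

```python
import math

def f_scores(logps, lengths):
    """Eq. (7): a softmax over length-normalized log-probabilities,
    f(z) = exp((1/a) log P(z)) / sum_z' exp((1/a') log P(z'))."""
    norm = [lp / a for lp, a in zip(logps, lengths)]
    total = sum(math.exp(v) for v in norm)
    return [math.exp(v) / total for v in norm]

def add1_infinilog_loss(logps, lengths, gold_idx):
    """Eq. (6), the add-1 infinilog loss:
    -log f(z) + log(1 + sum over z' != z of f(z'))."""
    f = f_scores(logps, lengths)
    others = sum(v for i, v in enumerate(f) if i != gold_idx)
    return -math.log(f[gold_idx]) + math.log(1.0 + others)
```

With two candidates of equal length, the loss is smaller when the gold candidate is the one with the higher log-probability, which is the behavior the reranker is trained toward.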
| Label | Definition |
| --- | --- |
| [EOS] | elementary discourse units are the minimal building blocks of a discourse tree |
| Span | span |
| Nucleus | a more salient or essential piece of information |
| Satellite | a supporting or background piece of information |
| Attribution | attribution, attribution represents both direct and indirect instances of reported speech |
| Background | background or circumstance |
| Cause | cause or result |
| Comparison | comparison, preference, analogy or proportion |
| Condition | condition, hypothetical, contingency or otherwise |
| Contrast | contrast relation, spans contrast with each other along some dimension. Typically, it includes a contrastive discourse cue, such as but, however, while. |
| Elaboration | elaboration, elaboration provides specific information or details to help define a very general concept |
| Enablement | enablement, enablement presents action to increase the chances of the unrealized situation being realized |
| Evaluation | evaluation, interpretation, conclusion or comment |
| Explanation | evidence, explanation or reason |
| Joint | list, list contains some sort of parallel structure or similar fashion between the units |
| Manner-Means | explaining or specifying a method, mechanism, instrument, channel or conduit for accomplishing some goal |
| Topic-Comment | problem solution, question answer, statement response, topic comment or rhetorical question |
| Summary | summary or restatement |
| Temporal | situations with temporal order, before, after or at the same time |
| Topic change | topic change |
| Textual-organization | links that are marked by schemata labels |
| Same-unit | links between two non-adjacent parts when separated by an intervening relative clause or parenthetical |
+ +Table 6: Extracted label definitions. + +
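The Enhance variant initializes each label embedding by averaging the token embeddings of the label's description in Table 6. A toy sketch of that construction; the 3-dimensional embedding table below is invented for illustration (the paper uses MPNet token embeddings), and the function name is ours:

```python
# Toy token embeddings; in the paper these come from the pre-trained MPNet.
EMB = {
    "a": [0.1, 0.0, 0.2], "more": [0.0, 0.3, 0.1],
    "salient": [0.4, 0.2, 0.0], "piece": [0.2, 0.1, 0.3],
    "of": [0.0, 0.0, 0.1], "information": [0.3, 0.4, 0.2],
}

def label_embedding(description):
    """Average the token embeddings of a label description ('Average'
    construction, i.e. Enhance); tokens missing from the toy table are
    skipped in this sketch."""
    vecs = [EMB[t] for t in description.lower().split() if t in EMB]
    dim = len(vecs[0])
    # component-wise mean over the description's token vectors
    return [sum(v[d] for v in vecs) / len(vecs) for d in range(dim)]
```

The Concatenate construction (Extend) would instead concatenate the token vectors rather than average them.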
| Model | Precision | Recall | F1 |
| --- | --- | --- | --- |
| Reported* | 92.04 | 94.41 | 93.21 |
| Shared | 92.22 | 95.35 | 93.76 |
| Reproduced (ELMo) | **93.16** | **96.26** | **94.68** |
| Reproduced (MPNet) | 92.84 | 95.63 | 94.21 |
+ +Table 7: Performances of BiLSTM-CRF (Wang et al., 2018b) in the discourse segmentation task. The best score in each metric among the models is indicated in bold. * indicates the reported score by Lin et al. (2019). Shared is the publicly shared model by Wang et al. (2018b). Reproduced (ELMo) and Reproduced (MPNet) are our reproduced models with different word embeddings. + +
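The micro-averaged segmentation precision, recall, and F1 reported in Table 7 are computed over EDU start positions, with the sentence-initial position ignored (Section 5.1.2). A minimal sketch, with a function name of our own:

```python
def segmentation_prf(gold_starts, pred_starts):
    """Micro-averaged P/R/F1 over EDU start positions, pooled across
    sentences. gold_starts / pred_starts: one set of boundary positions
    per sentence; position 0 (the sentence start) is ignored."""
    tp = fp = fn = 0
    for gold, pred in zip(gold_starts, pred_starts):
        gold = {pos for pos in gold if pos != 0}
        pred = {pos for pos in pred if pos != 0}
        tp += len(gold & pred)   # correctly predicted boundaries
        fp += len(pred - gold)   # spurious boundaries
        fn += len(gold - pred)   # missed boundaries
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1
```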
| Model | Span | Nuclearity | Relation |
| --- | --- | --- | --- |
| 2-Stage Parser* | 95.60 | 87.80 | 77.60 |
| Pointer-networks* | 97.44 | 91.34 | 81.70 |
| Span-based Parser | 96.67 | 90.23 | 74.76 |
| 2-Stage Parser | **97.92** | **92.07** | **82.06** |
Table 8: Performance of the retrained parsers in the sentence-level discourse parsing task with gold segmentation. The best score in each metric among the models is indicated in bold. * indicates the reported score by Lin et al. (2019).

where $f(\boldsymbol{z})$ in Eq. (6) is defined as

$$
f (\boldsymbol {z}) = \frac {\exp \left(\frac {1}{a} \log P (\boldsymbol {z})\right)}{\sum_ {\boldsymbol {z}' \in Z' (\boldsymbol {x})} \exp \left(\frac {1}{a'} \log P (\boldsymbol {z}')\right)}. \tag {7}
$$

We used the uploaded pre-trained "gpt2" model (Wolf et al., 2020) and the same setup as illustrated in Table 10. We used the symbol $\equiv\equiv\equiv\equiv$ in the vocab to represent the symbol [EDU]. Because the vocab of GPT-2 has no available symbol for representing an unseen symbol, we added our relation symbols to the vocab of GPT-2 and resized the pre-trained word embeddings.

# E Setting of Candidates

Table 11 shows the setting of candidates for the different tasks. As described in Section 4.4, we do data augmentation by using the additional top-$k$ results generated by a base method, so a larger $k$ during training is expected to bring more improvement for LMGC. However, a larger $k$ during the prediction step introduces more candidates and may make the prediction more difficult. Taking this into consideration, we tuned $k_{s}$ and $k_{p}$ for training and prediction separately, based on the performance on the validation dataset.
| Hyperparameter | Value |
| --- | --- |
| Optimizer | Adam |
| Adam $\beta_1$ | 0.9 |
| Adam $\beta_2$ | 0.98 |
| Adam $\epsilon$ | 1e-6 |
| Weight decay | 0.01 |
| Learning rate | 0.00009 |
| Batch size | 8192 tokens |
| Warm-up steps | 2.4 epochs |
| Epochs | 30 |
| Attention layers | 12 |
| Attention heads | 12 |
| Dropout | 0.1 |
| Attention dropout | 0.1 |
| Hidden size | 768 |
| Vocab size | 30527 |
| Tokenizer | Byte pair encoder |
| Max sentence length | 512 |
+ +Table 9: List of used hyperparameters for LMGC. + +
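The 15% prediction-token selection with the 8:1:1 replacement strategy described in Appendix D can be sketched as follows; the function name, mask symbol, and toy vocabulary are ours, not from the released code:

```python
import random

MASK = "[MASK]"
VOCAB = ["the", "a", "cat", "dog", "ran"]  # toy vocabulary for random replacement

def mask_tokens(tokens, mask_ratio=0.15, seed=0):
    """Select ~15% of positions as prediction targets and corrupt them
    8:1:1 -- 80% become [MASK], 10% a random token, 10% stay unchanged."""
    rng = random.Random(seed)
    out = list(tokens)
    n_pred = max(1, int(round(len(tokens) * mask_ratio)))
    targets = rng.sample(range(len(tokens)), n_pred)
    for i in targets:
        r = rng.random()
        if r < 0.8:
            out[i] = MASK                 # 80%: replace with the mask symbol
        elif r < 0.9:
            out[i] = rng.choice(VOCAB)    # 10%: replace with a random token
        # else 10%: keep the original token
    return out, sorted(targets)
```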
| Hyperparameter | Value |
| --- | --- |
| Optimizer | Adam |
| Adam $\beta_1$ | 0.9 |
| Adam $\beta_2$ | 0.98 |
| Adam $\epsilon$ | 1e-6 |
| Weight decay | 0.01 |
| Learning rate | 0.0001 |
| Batch size | 512 gold tokens + candidate tokens |
| Warm-up steps | 2.4 epochs |
| Epochs | 30 |
| Attention layers | 12 |
| Attention heads | 12 |
| Dropout | 0.1 |
| Attention dropout | 0.1 |
| Hidden size | 768 |
| Vocab size | 50257 + added tokens |
| Tokenizer | Byte pair encoder |
| Max sentence length | 512 |
Table 10: List of used hyperparameters for GPT2LM.

In task (a), we used the Viterbi top-$k$ algorithm for the base segmenter to select the top-$k_{s}$ segmentations. We tuned $k_{s} \in \{0, 10, 20\}$ for training, while $k_{s}$ for prediction was fixed at 5. Note that we used only the gold segmentations for training when $k_{s}$ was set to 0. Table 12 shows the experimental results, where both $\mathrm{LMGC}_{e}$ and $\mathrm{GPT2LM}_{e}$ are ensembles of 5 models. Then we tuned $k_{s} \in \{5, 10, 20\}$ for prediction by using the $\mathrm{LMGC}_{e}$ and $\mathrm{GPT2LM}_{e}$ trained with top-20 candidates; Table 13 shows the results.

In task (b), we utilized beam search in each stage
| Task | Data | Segmentation $k_s$ | Parsing 1st stage | Parsing 2nd stage | Parsing $k_p$ | # of data |
| --- | --- | --- | --- | --- | --- | --- |
| (a) | Training | 20 | - | - | - | 140924 |
| (a) | Prediction | 5 | - | - | - | - |
| (b) | Training w/ span or nuclearity | - | 20 | 1 | 20 | 60742 |
| (b) | Training w/ relation or all | - | 3 | 7 | 20 | 95004 |
| (b) | Prediction | - | 5 | 5 | 5 | - |
| (c) | Prediction | 5 | 5 | 5 | 5 | - |
of the base parser, and after the two stages we computed the perplexity to keep the top-$k_{p}$ parsings. We tuned $k_{p} \in \{0, 10, 20\}$ for training, while $k_{p}$ for prediction was fixed at 5. Note that we used only the gold parsings for training when $k_{p}$ was set to 0. Table 14 shows the experimental results, where both $\mathrm{LMGC}_r$ and $\mathrm{GPT2LM}_r$ are ensembles of 5 models. Then we tuned $k_{p} \in \{5, 10, 20\}$ for prediction by using the $\mathrm{LMGC}_r$ and $\mathrm{GPT2LM}_r$ trained with top-20 candidates; Table 15 shows the results.

In task (c), as in task (a), we tuned $k_{s} \in \{5, 10, 20\}$ for predicting the discourse segmentation by using the $\mathrm{LMGC}_e$ and $\mathrm{GPT2LM}_e$ trained with top-20 candidates for task (a); Table 16 shows the results. We utilized $\mathrm{LMGC}_e$ to select the best segmentation from the top-5 segmentations for the following discourse parsing. Then, as in task (b), we tuned $k_{p} \in \{5, 10, 20\}$ for predicting the discourse parsing by using the $\mathrm{LMGC}_r$ and $\mathrm{GPT2LM}_r$ trained with top-20 candidates for task (b); Table 17 shows the results.

In tasks (b) and (c), $\mathrm{LMGC}_s$ and Enhance$_s$ cannot distinguish candidates with the same span labels but different nuclearity or relation labels, and $\mathrm{LMGC}_u$ and Enhance$_u$ cannot distinguish candidates with the same nuclearity labels but different relation labels. In this case, the indistinguishable parses are ranked by the base parser. In task (b), for the training data with span or nuclearity labels, we used beam sizes of 20 and 1 in the first and second stages of the base parser, respectively.

Table 11: Setting of the top candidates for the different tasks. The Prediction data denotes the validation and test datasets.
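The perplexity-based pruning that keeps the top-$k_p$ parses after the two beam-search stages can be sketched as follows; the helper name is ours, and each candidate is assumed to be a (token sequence, total log-probability) pair:

```python
import math

def keep_topk(candidates, k):
    """Keep the k candidates with the lowest perplexity, i.e. the highest
    average log-probability: ppl = exp(-log P(z) / length)."""
    def perplexity(cand):
        seq, logp = cand
        return math.exp(-logp / len(seq))
    return sorted(candidates, key=perplexity)[:k]
```

Because perplexity normalizes by length, a longer candidate with a lower total log-probability can still outrank a shorter one, matching the $\frac{1}{a}\log P(\boldsymbol{z})$ selection used for GPT2LM.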
| Model | $k_s$ for training | Precision | Recall | F1 |
| --- | --- | --- | --- | --- |
| LMGC$_e$ | 0 | 87.76 | 95.72 | 91.57 |
| | 10 | 97.67 | 97.73 | 97.70 |
| | 20 | 97.99 | 97.86 | 97.92 |
| GPT2LM$_e$ | 0 | 81.72 | 96.18 | 88.36 |
| | 10 | 96.67 | 96.05 | 96.36 |
| | 20 | 96.93 | 96.05 | 96.48 |
+ +Table 12: Results of tuning $k_{s}$ for training in task (a). The best score in each metric among different $k_{s}$ for training is indicated in bold. + +
| Model | $k_s$ for prediction | Precision | Recall | F1 |
| --- | --- | --- | --- | --- |
| Oracle | 5 | 99.94 | 99.68 | 99.81 |
| | 10 | 99.94 | 99.68 | 99.81 |
| | 20 | 99.94 | 99.68 | 99.81 |
| LMGC$_e$ | 5 | 97.99 | 97.86 | 97.92 |
| | 10 | 97.47 | 97.54 | 97.51 |
| | 20 | 97.41 | 97.60 | 97.51 |
| GPT2LM$_e$ | 5 | 96.93 | 96.05 | 96.48 |
| | 10 | 96.47 | 95.59 | 96.03 |
| | 20 | 95.76 | 95.14 | 95.45 |
+ +Table 13: Results of tuning $k_{s}$ for prediction in task (a). The best score in each metric among different $k_{s}$ for prediction is indicated in bold. + +
| Model | $k_p$ for training | Span | Nuclearity | Relation |
| --- | --- | --- | --- | --- |
| LMGC$_r$ | 0 | 97.25 | 92.21 | 83.37 |
| | 10 | 97.46 | 92.71 | 83.23 |
| | 20 | 97.50 | 93.02 | 83.44 |
| GPT2LM$_r$ | 0 | 97.36 | 92.07 | 79.11 |
| | 10 | 96.93 | 90.80 | 80.76 |
| | 20 | 96.79 | 90.66 | 80.94 |
+ +Table 14: Results of tuning $k_{p}$ for training in task (b). The best score in each metric among different $k_{p}$ for training is indicated in bold. + +
| Model | $k_p$ for prediction | Span | Nuclearity | Relation |
| --- | --- | --- | --- | --- |
| Oracle | 5 | 98.66 | 96.41 | 92.11 |
| | 10 | 99.30 | 98.03 | 94.43 |
| | 20 | 99.47 | 98.48 | 95.42 |
| LMGC$_r$ | 5 | 97.50 | 93.02 | 83.44 |
| | 10 | 97.50 | 92.46 | 83.30 |
| | 20 | 97.29 | 92.25 | 83.30 |
| GPT2LM$_r$ | 5 | 96.79 | 90.66 | 80.94 |
| | 10 | 94.26 | 81.08 | 70.82 |
| | 20 | 93.27 | 77.20 | 66.67 |
+ +Table 15: Results of tuning $k_{p}$ for prediction in task (b). The best score in each metric among different $k_{p}$ for prediction is indicated in bold. + +
| Model | $k_s$ for prediction | Precision | Recall | F1 |
| --- | --- | --- | --- | --- |
| Oracle | 5 | 99.93 | 99.65 | 99.79 |
| Oracle | 10 | 99.93 | 99.65 | 99.79 |
| Oracle | 20 | 99.93 | 99.65 | 99.79 |
| $\mathrm{LMGC}_e$ | 5 | 97.96 | 97.74 | 97.85 |
| $\mathrm{LMGC}_e$ | 10 | 97.32 | 97.39 | 97.36 |
| $\mathrm{LMGC}_e$ | 20 | 97.33 | 97.53 | 97.43 |
| $\mathrm{GPT2LM}_e$ | 5 | 96.94 | 95.91 | 96.42 |
| $\mathrm{GPT2LM}_e$ | 10 | 96.45 | 95.63 | 96.04 |
| $\mathrm{GPT2LM}_e$ | 20 | 95.75 | 95.35 | 95.55 |
Table 16: Results of tuning $k_{s}$ for prediction in task (c). The best score in each metric among different $k_{s}$ for prediction is indicated in bold.
| Model | $k_p$ for prediction | Span | Nuclearity | Relation |
| --- | --- | --- | --- | --- |
| Oracle | 5 | 95.05 | 92.95 | 89.02 |
| Oracle | 10 | 95.93 | 94.73 | 91.25 |
| Oracle | 20 | 96.21 | 95.36 | 92.45 |
| $\mathrm{LMGC}_r$ | 5 | 94.39 | 90.12 | 80.88 |
| $\mathrm{LMGC}_r$ | 10 | 94.39 | 89.45 | 80.74 |
| $\mathrm{LMGC}_r$ | 20 | 94.18 | 89.24 | 80.63 |
| $\mathrm{GPT2LM}_r$ | 5 | 93.65 | 87.80 | 78.59 |
| $\mathrm{GPT2LM}_r$ | 10 | 91.18 | 78.55 | 68.99 |
| $\mathrm{GPT2LM}_r$ | 20 | 90.30 | 74.96 | 65.19 |
Table 17: Results of tuning $k_{p}$ for prediction in task (c). The best score in each metric among different $k_{p}$ for prediction is indicated in bold.

# A Large-Scale Dataset for Empathetic Response Generation

Anuradha Welivita, Yubo Xie and Pearl Pu
School of Computer and Communication Sciences
École Polytechnique Fédérale de Lausanne
Switzerland
{kalpani.welivita,yubo.xie,pearl.pu}@epfl.ch

# Abstract

Recent development in NLP shows a strong trend towards refining pre-trained models with a domain-specific dataset. This is especially the case for response generation, where emotion plays an important role.
However, existing empathetic datasets remain small, delaying research efforts in this area, for example, the development of emotion-aware chatbots. One main technical challenge has been the cost of manually annotating dialogues with the right emotion labels. In this paper, we describe a large-scale silver dataset consisting of 1M dialogues annotated with 32 fine-grained emotions, eight empathetic response intents, and the Neutral category. To achieve this goal, we have developed a novel data curation pipeline starting with a small seed of manually annotated data and eventually scaling it to a satisfactory size. We compare its quality against a state-of-the-art gold dataset using offline experiments and visual validation methods. The resultant procedure can be used to create similar datasets in the same domain as well as in other domains.

# 1 Introduction

Researchers are increasingly inclined towards refining pre-trained language models with domain-specific datasets to achieve certain tasks (Devlin et al., 2019; Liu et al., 2019; Rashkin et al., 2018). One such area is the development of empathetic conversational agents that can understand human emotions and respond appropriately. The aim of the empathetic response generation task is to generate syntactically correct, contextually relevant, and, more importantly, emotionally appropriate responses following previous dialogue turns. Such tasks require the creation and availability of large dialogue datasets, in which each utterance is annotated with the correct intents and emotions. Though many such datasets have been developed in the past (Busso et al., 2008; Poria et al., 2019; Li et al., 2017; Rashkin et al., 2018), due to the cost of manual labor, they are limited in size and thus insufficient to train robust conversational agents.
Since collecting and manually annotating such gold-standard data is expensive, replacing it with automatically annotated silver-standard data has become a rising interest (Filannino and Di Bari, 2015). We show how such a large-scale silver-standard dataset with sufficient quality can be curated and used to fine-tune pre-trained language models for the generation of empathetic responses.

Emotions revealed in social chitchat are rather complex. There are many categories of emotion to distinguish, owing to the subtle variations present in human emotion. For example, Sadness and Disappointment are pursued and dealt with differently in human conversations, even though both are negative emotions. Also, the listener's reaction to emotion is not always a straightforward mirroring of the speaker's emotion. Rather, it can be more neutral and convey a specific intent, as is evident from the dialogue example in Table 1.
| Role | Utterance (label) |
| --- | --- |
| Speaker | I've been hearing some strange noises around the house at night. (Afraid) |
| Listener | Oh no! That's scary! What do you think it is? (Neutral: Acknowledging; Questioning) |
| Speaker | I don't know, that's what's making me anxious. (Anxious) |
| Listener | I'm sorry to hear that. (Neutral: Sympathizing) |
Table 1: An example showing that the listener's reactions to emotions do not always mirror the speaker's emotions.

Welivita and Pu (2020) have analyzed listener responses in the EmpatheticDialogues dataset (Rashkin et al., 2018) and discovered eight listener-specific empathetic response intents contained in emotional dialogues: Questioning; Agreeing; Acknowledging; Sympathizing; Encouraging; Consoling; Suggesting; and Wishing. They have annotated the EmpatheticDialogues dataset with 32 fine-grained emotions, eight empathetic response intents, and the Neutral category, and discovered frequent emotion-intent exchange patterns in empathetic conversations. They observe that this type of dataset, tagged with fine-grained emotions and intents, can be used to train neural chatbots to generate empathetically appropriate responses. But for this purpose, a large-scale emotion- and intent-labeled dataset is even more desirable. Curating such a dataset is technically challenging since 1) annotating such a large-scale dataset requires costly human labor, and 2) given the fine granularity of the emotion and intent labels, the human labeling task is more difficult and error-prone compared to the more coarse-grained Angry-Happy-Sad emotion categories. As a result, existing manually labeled emotional dialogue datasets such as IEMOCAP (Busso et al., 2008), MELD (Poria et al., 2019), and DailyDialogue (Li et al., 2017) are smaller in scale and contain only a limited set of emotions (emotions derived from basic emotion models such as Ekman's). Most importantly, existing datasets fail to distinguish between Neutral and Questioning, or any of the other eight empathetic response intents. They combine everything into a big label Neutral or Other when the utterance is not emotional.

![](images/f47c08a73b3a8cf8d344f676567b96538b63fef543b7015473bf9766d1a4ff3e.jpg)
Figure 1: Steps for curating the EDOS dataset.
But Questioning, Agreeing, Acknowledging, Sympathizing, Encouraging, Consoling, Suggesting, and Wishing are important details in constructing empathetic dialogues. These eight response intents, which we call the plus categories, are novel in our work and contribute to the model's learning of important response patterns in the data. + +To fill the above gap, we curate a novel large-scale silver dialogue dataset, EDOS (Emotional Dialogues in OpenSubtitles), containing 1M emotional dialogues from movie subtitles, in which each dialogue turn is automatically annotated with 32 fine-grained emotions, eight plus categories as + +well as the Neutral category. Movie subtitles are extensively used for emotion analysis in text in earlier and recent research (Kayhani et al., 2020; Merdivan et al., 2020; Giannakopoulos et al., 2009). The Nature article "How movies mirror our mimicry" (Ball, 2011) states "screenwriters mine everyday discourse to make dialogues appear authentic" and "audiences use language devices in movies to shape their own discourse". Hence, it can be one of the major sources to train chatbots and learn emotional variations and corresponding response strategies in dialogues. To reduce the cost of human labeling and the complexity of labeling dialogues with fine-grained emotions and intents, we devised a semi-automated human computation task to collect fine-grained emotion and intent labels for a small set of movie dialogues (9K). We then followed automatic data augmentation techniques to expand the labeled data and trained a dialogue emotion classifier to automatically annotate 1M emotional dialogues. + +The process of curating the dataset involved several stages. First, we applied automatic turn and dialogue segmentation methods, data cleaning and removal of duplicates on movie subtitles in the OpenSubtitles (OS) corpus (Lison et al., 2019) and obtained close to 4M dialogues. 
Then, we applied a weak labeler (a BERT-based sentence-level classifier) trained on the EmpatheticDialogues dataset (Rashkin et al., 2018), to label utterances in OS dialogues and filtered 1M emotional dialogues (EDOS initial). Thereafter, we applied data augmentation techniques on a small set of human-annotated data and used the manually annotated and extended labels to train a strong labeler that is used to annotate dialogues in EDOS initial and obtained the final 1M EDOS dataset. We evaluated the quality of the resultant dataset by comparing it against the + +
| Dataset | Labels | No. of dialogues | No. of utterances | Publicly available |
| --- | --- | --- | --- | --- |
| IEMOCAP (Busso et al., 2008) | Joy, Sadness, Anger, Frustrated, Excited, and Neutral | 151 | 7,433 | |
| MELD (Poria et al., 2019) | Joy, Surprise, Sadness, Anger, Disgust, Fear, and Neutral | 1,433 | 13,708 | |
| DailyDialogue (Li et al., 2017) | Joy, Surprise, Sadness, Anger, Disgust, Fear, and Neutral | 12,218 | 103,607 | |
| EmotionLines (Hsu et al., 2018) | Joy, Surprise, Sadness, Anger, Disgust, Fear, and Neutral | 1,000 | 14,503 | |
| EmoContext (Chatterjee et al., 2019) | Joy, Sadness, Anger, and Other | 38,421 | 115,263 | |
| Twitter customer support (Herzig et al., 2016) | Customer emotions: Confusion; Frustration; Anger; Sadness; Happiness; Hopefulness; Disappointment; Gratitude; Politeness; and agent emotional techniques: Empathy; Gratitude; Apology; Cheerfulness | 2,413 | ≈14,078 | |
| EmpatheticDialogues (Rashkin et al., 2018; Welivita and Pu, 2020) | 32 fine-grained emotions (positive and negative), Neutral, and 8 empathetic response intents: Questioning; Agreeing; Acknowledging; Sympathizing; Encouraging; Consoling; Suggesting; and Wishing | 24,850 | 107,220 | |
| EDOS | 32 fine-grained emotions, 8 empathetic response intents, and Neutral | 1M | 3,488,300 | |
+ +Table 2: Comparison of emotion annotated dialogue datasets available in the literature against EDOS. + +EmpatheticDialogues dataset by means of offline experiments and visual validation methods. Figure 1 summarizes the process of creating EDOS. The data curation pipeline we followed substantially reduced the cost of human labor while ensuring quality annotations. + +Our contributions in this paper are three-fold. 1) We curate a large-scale dialogue dataset, EDOS, containing 1M emotional dialogues labeled with 32 fine-grained emotions, eight empathetic response intents (the plus categories), and Neutral. Compared to existing dialogue datasets tagged with emotions, EDOS is significantly larger ( $\approx$ 40 times larger than EmpatheticDialogues), and contains more fine-grained emotions and empathetic response strategies. 2) We outline the complex pipeline used to derive this dataset. 3) We evaluate the quality of the dataset compared to a state-of-the-art gold standard dataset using offline experiments and visual validation methods. + +# 2 Literature review + +IEMOCAP (Busso et al., 2008), MELD (Poria et al., 2019), DailyDialogue (Li et al., 2017), EmotionLines (Hsu et al., 2018), and EmoContext (Chatterjee et al., 2019) are some existing state-of-the-art dialogue datasets with emotion labels. However, these datasets are limited in size and are labeled with only a small set of emotions without any response strategies. Table 2 shows a summary of the size and the labels in these datasets. All the datasets compared here are in the English language. + +Herzig et al. (2016) detected customer emotions and agent emotional techniques (e.g., Apology, Empathy) in customer support dialogues. They curated + +a dialogue dataset from two customer support Twitter accounts and manually annotated the customer turns with one of 9 emotions and the agent turns with one of 4 emotional techniques. 
But emotions expressed by customers in social media service dialogues are mainly negative (e.g. anger, frustration), and the customer service agents also respond in a restricted manner, which limits the utility of this dataset, in addition to its small size. + +The EmpatheticDialogues dataset (Rashkin et al., 2018) contains 25K open-domain dialogues grounded on 32 emotions. The 32 emotions range from basic emotions derived from biological responses (Ekman, 1992; Plutchik, 1984) to larger sets of subtle emotions derived from contextual situations (Skerry and Saxe, 2015). Welivita and Pu (2020) manually analyzed a subset of the listener turns in EmpatheticDialogues and identified eight listener-specific response intents. They developed a sentence-level weak labeler using which they annotated the entire dataset with 32 emotions, eight empathetic response intents, and the Neutral category. However, due to the limited size of EmpatheticDialogues, it is difficult to be used for data-intensive applications. To address the above limitations, we curate EDOS containing 1M movie dialogues. We label each dialogue turn with 32 emotions, eight empathetic response intents, and Neutral using our own dialogue emotion and intent classifier. Table 2 compares EDOS to state-of-the-art emotion annotated dialogue datasets. + +# 3 Methodology + +This section describes the dialogue selection process, the design of the human annotation task, + +the data augmentation techniques used to expand human-labeled dialogues, and the development of a strong labeler to annotate the dataset. + +# 3.1 Dialogue curation from movie subtitles + +The OpenSubtitles 2018 corpus consists of 3.7M movie and TV subtitles. It comprises 3.4B sentences and 22.2B tokens. It is an excellent source to learn emotional variations in dialogue and corresponding response mechanisms. 
But due to the absence of speaker markers, movie subtitles do not contain an explicit dialogue turn structure (who speaks what) or specific indicators of where one dialogue ends and the next begins. To overcome the first issue, we reproduced the work of Lison and Meena (2016) to build an SVM-based classifier that determines whether two consecutive sentences are part of the same dialogue turn. Our classifier achieved a segmentation accuracy of $76.69\%$ , which is close to the accuracy of $78\%$ that the authors report. The set of features that gave the best turn segmentation accuracy are: 1) unigram and bigram features of adjacent sentences after lemmatization; 2) first and final tokens of adjacent sentences; 3) first and final bigrams of adjacent sentences; 4) whether the two sentences belong to the same subtitle block (boolean); 5) genre of the movie (Drama, Crime, Musical, etc.); 6) sentence density of the subtitles file (no. of sentences/subtitle duration); and 7) quadratic combinations of the above features with themselves and with each other.

After performing turn segmentation on the OpenSubtitles corpus, we divided the turns into separate dialogues based on a simple heuristic. If the difference between the end time of the previous turn and the start time of the current turn is more than 5 seconds, we take these two turns as belonging to 2 different dialogues. An exception occurs if this timestamp information is missing in at least one of the turns. In this case, we assume that the two turns appear in the same subtitle block and consider them as belonging to the same dialogue. This way, we formed 9M dialogues from the OpenSubtitles corpus altogether. The choice of 5 seconds to separate dialogues is explained in Appendix C.

To further clean the dialogues, we removed character names, repetitive dialogue turns, turns that start with "previously on..." (the monologue at the beginning of TV episodes), turns with a character length less than 2 or greater than 100, turns with
(monologue at the beginning of TV episodes), turns with character length less than 2 or greater than 100, turns with + +an alphabetic proportion less than $60\%$ , and turns with a lot of repetitive tokens. When a dialogue turn was removed, all the turns following that turn were also removed from the dialogue to maintain consistency. After that, all the dialogues left with only one turn were removed from the corpus. We removed dialogues from movies of the genre 'Documentary' since they do not correspond to actual dialogues. This resulted in a cleaned OS dialogue dataset consisting of 4M dialogues. + +To filter out dialogues containing emotional statements and empathetic responses from the cleaned OS dialogues dataset, we employed a weak labeler, (a BERT transformer-based sentence level classifier) trained on 25K situation descriptions from EmpatheticDialogues (Rashkin et al., 2018) tagged with 32 emotion classes, and 7K listener utterances tagged with eight empathetic response intents and the Neutral category (Welivita and Pu, 2020). The classifier had a high top-1 classification accuracy of $65.88\%$ . We call it a weak labeler since it predicts emotion or intent only at the sentence level and is trained on a different dataset other than OS. We filtered the top 1M dialogues having the highest label confidence as predicted by this classifier to form the 1M EDOS (initial) dataset. The statistics of the EDOS dataset are given in Table 3. More detailed statistics including the number of dialogues per emotion are included in Appendix D. + +
| Criteria | Statistics |
| --- | --- |
| Total no. of dialogues | 1,000,000 |
| Total no. of turns | 2,829,426 |
| Total no. of tokens | 39,469,825 |
| Avg. no. of turns per dialogue | 2.83 |
| Avg. no. of tokens per dialogue | 39.47 |
| Avg. no. of tokens per turn | 13.95 |
+ +Table 3: Statistics of the EDOS dataset. + +# 3.2 Human computation + +To train a dialogue emotion classifier that can identify both fine-grained emotions and empathetic response intents, we devised an Amazon Mechanical Turk (AMT) experiment to collect an initial set of ground truth labels for OS dialogues. But annotating dialogue turns with one of 41 labels is a daunting task. To make the task less exhaustive, we devised a semi-automated approach using our weak labeler. By applying the weak labeler on each turn of the cleaned OS dialogue dataset, we filtered out the turns having prediction confidence $\geq 0.9$ along with their dialogue history. Next, we ranked these dialogues according to their readability and selected the highest readable dialogues from each + +class to be labeled. This is to reduce the time spent by the workers in having to read long and complicated dialogues. The steps followed in computing dialogues' readability are included in Appendix A. Workers had to select a label from the top-3 predictions made by the weak labeler. If none of the top-3 predictions matched, they could manually specify the correct class. The main purpose of incorporating a weak labeler here was to make the task less daunting for the crowd worker. Otherwise, having to choose a label out of 41 labels may lead to even worse results due to the complicated nature of the task. The risk of reduced data reliability is avoided by taking only the labels with the majority vote. The AMT task's user interface design is included in Appendix B. + +After ranking the dialogues according to readability, we selected the top 250 dialogues in each category for the AMT task. We bundled 15 dialogues in a HIT with 5 quiz questions that served as checkpoints to evaluate the crowd workers' quality. Situation descriptions from the Empathetic-Dialogues dataset for which we already knew the emotion labels were used to formulate the quiz questions. 
Finally, we kept the dialogues for which at least 2 of the 3 workers agreed on the label, which resulted in 8,913 dialogues altogether. Table 4 shows the results of the AMT task.
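The 2-out-of-3 majority-vote filtering can be sketched as follows (a minimal illustration; the function name is hypothetical):

```python
from collections import Counter

def majority_label(worker_labels, min_votes=2):
    """Return the label at least min_votes workers agreed on, else None."""
    label, count = Counter(worker_labels).most_common(1)[0]
    return label if count >= min_votes else None
```

Dialogues whose three worker labels produce `None` here would be discarded, which is how the 10,250 annotated dialogues reduce to the 8,913 retained ones.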
| Description | Statistics |
| --- | --- |
| Total no. of dialogues | 10,250 |
| No. of dialogues labeled with majority vote | 8,913 (86.96%) |
| Inter-annotator agreement (Fleiss' Kappa) | 0.46 (moderate agreement) |
| % of times workers got 3/5 quiz questions correct | 77.75% |
| No. of dialogues in which the workers manually specified the label | 425 |
Table 4: AMT task results.

# 3.3 Data augmentation and annotation

To scale up the training data obtained from the AMT task, we utilized a distant learning technique using dialogue embeddings (Reimers and Gurevych, 2019) and self-labeling (Triguero et al., 2015), a semi-supervised learning technique. The first approach uses Sentence-BERT (SBERT), proposed by Reimers and Gurevych (2019), which uses siamese and triplet network structures to derive semantically meaningful sentence embeddings that can be compared using cosine similarity. Using this approach, we obtained dialogues semantically similar to those annotated by crowd workers and tagged them with the same class label. Among the several models the authors propose, we used the roberta-base-nli-stsb-mean-tokens model, fine-tuned on the NLI (Bowman et al., 2015) and STS benchmark (STSb) (Cer et al., 2017) datasets, since it reports a high Spearman's rank correlation of $84.79 \pm 0.38$ between the cosine similarity of the sentence embeddings and the gold labels on the STS benchmark test set, outperforming the existing state-of-the-art. It is also more efficient to use than roberta-large. Before proceeding, we left out $20\%$ of the crowd-annotated dialogues, balanced across all class labels, as testing data. We then took the following steps to extend the rest of the dialogues using SBERT.

1) Using the SBERT model, we first computed dialogue turn embeddings (each a 768-dimensional vector) for all the turns $(\approx 19\mathrm{M})$ in the cleaned OS dataset. 2) Then, we calculated dialogue embeddings for the human-annotated and unlabeled dialogues from the cleaned OS dialogues dataset. For this, we applied a decaying weight starting from the last turn and took the weighted average of the turn embeddings of each dialogue.
We used half-decaying weights, i.e., if we have a dialogue with turn embeddings $v_{1}, v_{2}$ , and $v_{3}$ , the final dialogue embedding is $(4/7)v_{3} + (2/7)v_{2} + (1/7)v_{1}$ . 3) Next, we calculated the cosine similarity between annotated and unlabeled dialogue embeddings and ranked the results. 4) Finally, we applied a similarity threshold, obtained all the unlabeled dialogues whose cosine similarity exceeds this threshold, and tagged them with the same crowd-annotated class label. Here, we used a threshold of 0.92 after manually inspecting a random subset of the results obtained for a range of thresholds (examples from this stage are given in Appendix C).

We extended the original crowd-annotated dialogue dataset by 3,196 more dialogues with distantly annotated class labels using the above method. Thereafter, using the crowd-annotated and extended labels, we trained an initial classifier that we used to annotate the rest of the dialogues, adding to our dataset the labels with annotation confidence over 0.9. This method is termed self-labeling (Triguero et al., 2015), a semi-supervised learning technique that can be used to grow labeled data. With this, we were able to extend the labeled data by 4,100 more dialogues. Next, we again applied SBERT over the self-labeled data and extended them by 2,118 more dialogues. Finally, we had $\approx 14K$ labeled dialogues altogether. We used this data to train a final dialogue emotion classifier to annotate the rest of the unlabeled data. This resulted in a classifier with precision $64.11\%$ , recall $64.59\%$ , macro F1-score $63.86\%$ , and accuracy $65.00\%$ , which is comparable with state-of-the-art dialogue emotion classifiers (as shown in Table 5). The design of the dialogue emotion classifier we utilized to annotate the dataset is explained in Section 3.3.1.
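The half-decaying weighted average described in step 2) can be sketched as below, using plain Python lists in place of the 768-dimensional SBERT vectors:

```python
def dialogue_embedding(turn_embeddings):
    """Half-decaying weighted average of turn embeddings.

    The last turn gets the largest weight and each earlier turn half of
    the next one's weight, e.g. for three turns v1, v2, v3 the result is
    (4/7)*v3 + (2/7)*v2 + (1/7)*v1.
    """
    weights = [2 ** i for i in range(len(turn_embeddings))]  # 1, 2, 4, ...
    total = sum(weights)
    dim = len(turn_embeddings[0])
    return [sum(w * v[j] for w, v in zip(weights, turn_embeddings)) / total
            for j in range(dim)]
```

For a three-turn dialogue this reproduces exactly the $(4/7, 2/7, 1/7)$ weighting given above.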
+ +# 3.3.1 Design of the dialogue emotion classifier + +Our dialogue emotion classifier consists of a representation network that uses the BERT architecture, an attention layer that aggregates all hidden states at each time step, a hidden layer, and a softmax layer. We used the BERT-base architecture with 12 layers, 768 dimensions, 12 heads, and 110M parameters as the representation network. It was initialized with weights from RoBERTa (Liu et al., 2019). We fed in a dialogue turn along with the preceding context in the reverse order as input to the representation network. To give more importance to the dialogue turn for which prediction has to be made and the turns that immediately precede it, we multiplied the token embeddings belonging to each turn by a decreasing weight factor. Its input representation is constructed by summing the corresponding token embedding multiplied by the weighting factor and its position embedding. More details including the hyper-parameters used are included in the Appendix C. + +# 4 EDOS quality analysis and comparison with the state-of-the-art gold standard + +Table 6 shows some example dialogues taken from the EDOS dataset along with annotations and confidence scores. By observing the examples, it could be noticed that even for less confident predictions, the label quite accurately describes the emotion or intent of the corresponding dialogue turn. + +We also conducted a qualitative comparison of the annotations in the EDOS dataset with EmpatheticDialogues (Rashkin et al., 2018; Welivita and Pu, 2020), a state-of-the-art gold standard dataset for empathetic conversations. Figure 2 compares the distributions of emotions and intents in the two datasets. It is observed that in both datasets, intent categories take prominence over individual emotion classes. 
This is in line with the observations of Welivita and Pu (2020), who notice that one or more intents from the taxonomy of empathetic intents are mostly utilized when responding to emotions in dialogue, rather than similar or opposite emotions. In particular, the intent Questioning takes the highest percentage among the annotations in both EmpatheticDialogues and EDOS. We also computed the KL-divergence $(\geq 0)$ of the emotion and intent distribution of EDOS with respect to that of EmpatheticDialogues, which measures how one probability distribution differs from a second, reference probability distribution (Kullback and Leibler, 1951). It resulted in a KL-divergence value of 0.2447, which indicates a considerable similarity between the two distributions (the lower the KL divergence, the more similar the distributions).

Figure 3 compares the emotion-intent flow patterns in EmpatheticDialogues and EDOS. In the visualization corresponding to EmpatheticDialogues, the $1^{\text{st}}$ and $3^{\text{rd}}$ dialogue turns correspond to the speaker and the $2^{\text{nd}}$ and $4^{\text{th}}$ dialogue turns correspond to the listener. In EDOS, however, we cannot distinguish the dialogue turns as speaker and listener turns due to the absence of speaker annotations. Even so, we can observe that some conversational dynamics present in EmpatheticDialogues are preserved in EDOS. For example, in both datasets, the speaker mostly starts the conversation with some emotional statement, and in the subsequent turn the response tends to be of the intent Questioning. In both datasets, the intents Agreeing and Acknowledging follow the emotions seen in the first turn, irrespective of whether they are positive or negative. As the dialogues proceed, in both datasets the emotions de-escalate as more empathetic response intents emerge.
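The KL-divergence between the two label distributions can be computed as in the following sketch; the distributions in the usage example are toy values, not the actual label frequencies:

```python
import math

def kl_divergence(p, q):
    """KL(p || q) for two discrete distributions over the same label set.

    Terms with p_i = 0 contribute nothing; q is assumed to be strictly
    positive wherever p is (true for smoothed label frequencies).
    """
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

Applied to the 41-way label frequencies of EDOS (as `p`) and EmpatheticDialogues (as `q`), this is the quantity reported as 0.2447 above.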
# 5 Experimental baselines

We propose some experimental baselines using the curated dataset for empathetic response generation and compare their performance against a dialogue model trained on the EmpatheticDialogues dataset. For this purpose, we trained a transformer (Vaswani et al., 2017) model with various training settings. Specifically, the following datasets were involved: 1) OS dialogues (as described in Section 3.1, these dialogues were obtained by segmenting the movie subtitles; note that for the purpose of pre-training, we excluded the EDOS dialogues, resulting in around 3M dialogues); 2) EDOS (1M dialogues); and 3) EmpatheticDialogues (25K dialogues). All

![](images/d35e959515d56f1044d48646f94520e2abbb9e4b011d0e05718ace2b88ed7d72.jpg)
(a) EmpatheticDialogues

![](images/889443d39bdca8fc8a9452890eef02b6e7b34e979327633fd3eb5855263f35ee.jpg)
(b) EDOS

![](images/53027c55cec209c034159f2ba43683a9e1a4fd50adcb4183cb7c863ce52ca508.jpg)
Figure 2: Comparison of the distribution of emotions and intents in the EmpatheticDialogues and EDOS datasets.

![](images/4e556ded24f9190d25faa2c526f57e85544f6c209b3da174c67a8f410419b45e.jpg)
(a) EmpatheticDialogues dataset
(b) EDOS dataset
Figure 3: Comparison of emotion-intent flow patterns in the EmpatheticDialogues and EDOS datasets. For simplicity, only the first four dialogue turns are visualized.
| Classifier | Dataset | No. of labels | F1 | Acc. |
| --- | --- | --- | --- | --- |
| AR (Khosla, 2018) | EmotionLines dataset (Hsu et al., 2018) | 4 Emotion labels | - | Friends: 62.50; EmotionPush: 62.48 |
| CMN (Hazarika et al., 2018b) | IEMOCAP dataset (Busso et al., 2008) | 6 Emotion labels | 56.13 | 56.56 |
| ICON (Hazarika et al., 2018a) | IEMOCAP dataset (Busso et al., 2008) | 6 Emotion labels | 57.90 | 58.30 |
| IAAN (Yeh et al., 2019) | IEMOCAP dataset (Busso et al., 2008) | 6 Emotion labels | - | 64.70 |
| Dialog-RNN (Majumder et al., 2019) | IEMOCAP (Busso et al., 2008) and AVEC (Schuller et al., 2012) datasets | IEMOCAP: 4 Emotion labels; AVEC: 4-dimensional emotion labels | 62.75 | 63.40 |
| Dialog-GCN (Ghosal et al., 2019) | IEMOCAP (Busso et al., 2008), AVEC (Schuller et al., 2012), and MELD (Poria et al., 2019) datasets | IEMOCAP: 4 Emotion labels; AVEC: 4-dimensional emotion labels; MELD: 7 Emotion labels | 64.18 | 65.25 |
| Ours | OS dialogue dataset | 32 Emotions + 8 Intents + Neutral | 63.86 | 65.00 |
+ +Table 5: Comparison of the performance of the dialogue emotion classifier used for annotation with performance of the state-of-the-art dialogue emotion classifiers. F1-score reported here is the macro-F1 score. + +
Dialogue #1:
Turn 1(Excited, 0.98) The concert will start soon.
Turn 2(Questioning, 0.01) Are you excited?
Turn 3(Proud, 0.99) I am. Because one of my friends made his efforts to make the concert happen. He wanted to fulfill a promise he made to his first love.
Turn 4(Sentimental, 0.99) I like their story very much. I want to dedicate this concert to everyone who has truly loved someone.
Dialogue #2:
Turn 1(Apprehensive, 0.89) Staying here might not be safe.
Turn 2(Questioning, 0.41) Take the earliest flight tomorrow?
Turn 3(Caring, 0.94) Take Josie to mother. My home is where you are.
Turn 4(Faithful, 0.86) We're not leaving.
+ +Table 6: Example dialogues from the EADOS dataset along with annotations and confidence scores. + +three datasets were split into a training $(80\%)$ , validation $(10\%)$ , and test $(10\%)$ sets. Based on the training strategies, we have the following models: 1) Pre-trained—to take advantage of transfer learning, we pre-trained the transformer model on the 3M OS dialogues. The large scale of this training set is expected to provide a good starting point for fine-tuning; 2) Fine-tuned—we took the pre-trained transformer and then fine-tuned it on EDOS and EmpatheticDialogues datasets respectively. All the models have 4 layers, 6 multi-heads, and a hidden size of 300, and were trained until the minimum validation loss was reached. For inference, we used beam search with beam size 32 and 4-gram repeats blocking. + +To evaluate the performance of the dialogue models, we adopted the following metrics: 1) perplexity; 2) distinct-1 and -2 metrics (Li et al., 2016), which measure the diversity of the generated responses; 3) sentence embedding similarity—we used SBERT (Reimers and Gurevych, 2019) to obtain an embedding for the generated response as well as the ground-truth and then calculated the cosine similarity between the two embeddings. The performance of the dialogue models was tested in held-out and zero-shot settings. The evaluation results are shown in Table 7. + +In the held-out setting, where the model is evaluated on data from the same domain as the training data, all three models achieved good performance, and the perplexity values are much lower compared with the zero-shot setting, where the model is evaluated on data from a different domain. We also observe that the model fine-tuned on OS and EDOS dialogues achieves much higher Distinct-1 and -2 scores, even in the zero-shot setting when evaluated on EmpatheticDialogues. This indicates that by training on our curated OpenSubtitles dialogues, the model gains more diversity in the generated responses. 
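The Distinct-1 and Distinct-2 metrics referenced above are the ratios of unique unigrams and bigrams to the total number of n-grams across the generated responses (Li et al., 2016). A minimal sketch (the function name and toy data are ours):

```python
from collections import Counter

def distinct_n(responses, n):
    """Ratio of unique n-grams to total n-grams over a set of
    tokenized responses (Li et al., 2016)."""
    ngrams = Counter()
    for tokens in responses:
        for i in range(len(tokens) - n + 1):
            ngrams[tuple(tokens[i:i + n])] += 1
    total = sum(ngrams.values())
    return len(ngrams) / total if total > 0 else 0.0

responses = [["i", "am", "happy"], ["i", "am", "so", "happy"]]
d1 = distinct_n(responses, 1)  # 4 unique unigrams out of 7 tokens
d2 = distinct_n(responses, 2)  # 4 unique bigrams out of 5 bigrams
```

Higher values indicate a more varied output vocabulary, which is why these scores are used below as a proxy for response diversity.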
This might be due to the larger size of these datasets, which contain many diverse responses. Of the two, the model fine-tuned on EDOS performs best in terms of diversity, which reflects the quality of the dialogues filtered from OpenSubtitles.

| Model | Evaluated on | PPL | D1 | D2 | SES |
|---|---|---|---|---|---|
| Pre-trained (OS) | OS | 24.8 | .046 | .159 | .172 |
| Pre-trained (OS) | EDOS | 37.8 | .046 | .154 | .126 |
| Pre-trained (OS) | ED | 564.6 | .044 | .167 | .178 |
| Fine-tuned (EDOS) | OS | 26.9 | .044 | .139 | .162 |
| Fine-tuned (EDOS) | EDOS | 32.3 | .056 | .165 | .137 |
| Fine-tuned (EDOS) | ED | 452.6 | .031 | .107 | .176 |
| Fine-tuned (ED) | OS | 88.9 | .030 | .109 | .174 |
| Fine-tuned (ED) | EDOS | 140.8 | .028 | .096 | .130 |
| Fine-tuned (ED) | ED | 19.3 | .026 | .091 | .316 |

Table 7: Dialogue model evaluation results. Here PPL denotes perplexity, D1 and D2 denote Distinct-1 and -2, SES denotes the sentence embedding similarity, and ED denotes EmpatheticDialogues. Held-out refers to evaluating on data from the same domain as the training data; zero-shot refers to evaluating on data from a different domain.

# 6 Discussion and conclusion

In this work, we curated a large-scale dialogue dataset, EDOS, comprising 1M emotional dialogues from movie subtitles. This dataset is significantly larger than existing emotional dialogue datasets and contains more fine-grained emotion categories and empathetic response intents. To facilitate annotation, we utilized data augmentation techniques to extend a small set of manually annotated data and trained a dialogue emotion classifier with accuracy comparable to the state of the art. The data augmentation and automatic annotation procedure we employed significantly reduced the manual annotation cost and time.

Obtaining a large dataset is important only if its quality can be assured. The qualitative comparison conducted between EDOS and the state-of-the-art EmpatheticDialogues dataset by means of visual validation was one way to confirm this. The results of the comparison confirmed that most of the conversational dynamics present in EmpatheticDialogues were observed in EDOS. We also proposed some experimental baselines by training a transformer model for empathetic response generation on the OS, EDOS, and EmpatheticDialogues datasets and tested them in held-out and zero-shot settings. The results showed that the model fine-tuned on EDOS scored best in terms of the diversity metrics. This dataset can be readily utilized to develop empathetic conversational agents and for fine-grained emotion analysis in dialogues. The pipeline we present can be used to create similar large-scale datasets in similar or even different domains.

As future work, we plan to utilize this dataset to conduct further experiments on empathetic response generation. Since it is annotated with emotions and intents, we will use it for experiments involving controllable and interpretable response generation. In particular, the emotion and intent categories present in the dataset can be utilized to condition the chatbot's response generation process, making it possible to control and interpret the generated responses. The dataset can also be used to train state-of-the-art dialogue emotion classifiers.

# 7 Ethical considerations

EDOS contains dialogues derived from the OpenSubtitles corpus (Lison et al., 2019), which is publicly available. It is part of OPUS (the Open Parallel corpUS), which is based on open-source products and is delivered as an open content package. The workers annotating the dataset were compensated with $0.40 per HIT, which takes 4.12 minutes on average to complete (excluding the time taken by workers who took an unusually long time to complete the task), plus a bonus of $0.10 if they answered at least 3 out of 5 quiz questions correctly. Fair compensation was determined based on the US minimum wage of $7.12 per hour.
Since the dataset is in English, the annotators recruited from AMT were restricted to majority native English-speaking countries: the US, UK, Canada, Australia, and New Zealand. The fact that the dataset is English-only potentially perpetuates an English bias in NLP systems.

Using this dataset to directly train end-to-end chatbot models can involve certain risks. Though we have taken steps to remove profanity from the responses in the dataset, due to the lack of controllability and interpretability in end-to-end neural response generation models, there exists the risk of generating inappropriate or biased responses to certain emotional prompts. A recent example is Microsoft's Tay bot, which started producing unintended and offensive tweets denying the Holocaust as a result of learning from offensive information on Twitter (Lee, 2016). To mitigate this, researchers have recently focused on inducing controllability in these end-to-end response generation models by jointly modeling dialogue intent selection and response generation (Wu et al., 2018; Sankar and Ravi, 2019; Hedayatnia et al., 2020; Santhanam et al., 2020; Ke et al., 2018; Lee et al., 2020). We encourage readers to look into these approaches when developing conversational agents using this dataset.

Though human-like chatbots with emotion recognition and empathetic responding abilities can be beneficial in a number of situations, such as in the medical domain, crisis management, customer service, and elderly care, it should not be underestimated that they involve some potential harms. For example, a chatbot can be used to impersonate a real human being and be used for cybercrimes such as scamming and phishing. It is also important to note that one could get emotionally attached to a bot, or even become codependent, distracting themselves from relationships with humans and causing distress if the chatbot becomes dysfunctional.
Users may tend to reveal their private and confidential information such as certain health conditions and private attributes during such interaction, which could be misused when in the hands of the wrong people. Developers should take these risks into account when deploying such chatbots in the real world to ensure safe and ethical use. + +# References + +Philip Ball. 2011. How movies mirror our mimicry. +Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632-642, Lisbon, Portugal. Association for Computational Linguistics. +Carlos Busso, Murtaza Bulut, Chi-Chun Lee, Abe Kazemzadeh, Emily Mower, Samuel Kim, Jeannette N Chang, Sungbok Lee, and Shrikanth S Narayanan. 2008. Iemocap: Interactive emotional dyadic motion capture database. Language resources and evaluation, 42(4):335. +Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez-Gazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1-14, Vancouver, Canada. Association for Computational Linguistics. +Ankush Chatterjee, Umang Gupta, Manoj Kumar Chinnakotla, Radhakrishnan Srikanth, Michel Galley, and Puneet Agrawal. 2019. Understanding emotions in text using deep learning and big data. Computers in Human Behavior, 93:309-317. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics. +Paul Ekman. 1992. 
An argument for basic emotions. Cognition & emotion, 6(3-4):169-200. +Michele Filannino and Marilena Di Bari. 2015. Gold standard vs. silver standard: the case of dependency parsing for Italian. *CLiC it*, page 141. +Theodoros Giannakopoulos, Aggelos Pikrakis, and Sergios Theodoridis. 2009. A dimensional approach to emotion recognition of speech from movies. In 2009 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 65-68. IEEE. +Behnam Hedayatnia, Karthik Gopalakrishnan, Seokhwan Kim, Yang Liu, Mihail Eric, and Dilek Hakkani-Tur. 2020. Policy-driven neural response generation for knowledge-grounded dialog systems. In Proceedings of the 13th International Conference on Natural Language Generation, pages 412-421, Dublin, Ireland. Association for Computational Linguistics. + +Jonathan Herzig, Guy Feigenblat, Michal Shmueli-Scheuer, David Konopnicki, Anat Rafaeli, Daniel Altman, and David Spivak. 2016. Classifying emotions in customer support dialogues in social media. In Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 64-73. +Chao-Chun Hsu, Sheng-Yeh Chen, Chuan-Chun Kuo, Ting-Hao Huang, and Lun-Wei Ku. 2018. Emotion-Lines: An emotion corpus of multi-party conversations. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA). +Amir Kazem Kayhani, Farid Meziane, and Raja Chiky. 2020. Movies emotional analysis using textual contents. In International Conference on Applications of Natural Language to Information Systems, pages 205-212. Springer. +Pei Ke, Jian Guan, Minlie Huang, and Xiaoyan Zhu. 2018. Generating informative responses with controlled sentence function. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1499-1508, Melbourne, Australia. Association for Computational Linguistics. 
+Solomon Kullback and Richard A Leibler. 1951. On information and sufficiency. The annals of mathematical statistics, 22(1):79-86. +Dave Lee. 2016. Tay: Microsoft issues apology over racist chatbot fiasco. +Hung-yi Lee, Cheng-Hao Ho, Chien-Fu Lin, Chiung-Chih Chang, Chih-Wei Lee, Yau-Shian Wang, Tsung-Yuan Hsu, and Kuan-Yu Chen. 2020. Investigation of sentiment controllable chatbot. arXiv preprint arXiv:2007.07196. +Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In Proceedings of NAACL-HLT 2016, pages 110-119. +Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. 2017. DailyDialog: A manually labelled multi-turn dialogue dataset. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 986-995, Taipei, Taiwan. Asian Federation of Natural Language Processing. +Pierre Lison and Raveesh Meena. 2016. Automatic turn segmentation for movie & tv subtitles. In 2016 IEEE Spoken Language Technology Workshop (SLT), pages 245-252. IEEE. +Pierre Lison, Jörg Tiedemann, Milen Kouylekov, et al. 2019. Open subtitles 2018: Statistical rescoring of sentence alignments in large, noisy parallel corpora. In LREC 2018, Eleventh International Conference on Language Resources and Evaluation. European Language Resources Association (ELRA). + +Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. +Erinc Merdivan, Deepika Singh, Sten Hanke, Johannes Kropf, Andreas Holzinger, and Matthieu Geist. 2020. Human annotated dialogues dataset for natural conversational agents. Applied Sciences, 10(3):762. +Robert Plutchik. 1984. Emotions: A general psychoevolutionary theory. Approaches to emotion, 1984:197-219. 
+Soujanya Poria, Devamanyu Hazarika, Navonil Majumder, Gautam Naik, Erik Cambria, and Rada Mihalcea. 2019. MELD: A multimodal multi-party dataset for emotion recognition in conversations. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 527-536, Florence, Italy. Association for Computational Linguistics. +Hannah Rashkin, Eric Michael Smith, Margaret Li, and Y-Lan Boureau. 2018. Towards empathetic open-domain conversation models: A new benchmark and dataset. arXiv preprint arXiv:1811.00207. +Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using Siamese BERTnetworks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982-3992, Hong Kong, China. Association for Computational Linguistics. +Chinnadhurai Sankar and Sujith Ravi. 2019. Deep reinforcement learning for modeling chit-chat dialog with discrete attributes. In Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue, pages 1-10, Stockholm, Sweden. Association for Computational Linguistics. +Sashank Santhanam, Zhuo Cheng, Brodie Mather, Bonnie Dorr, Archna Bhatia, Bryanna Hebenstreit, Alan Zemel, Adam Dalton, Tomek Strzalkowski, and Samira Shaikh. 2020. Learning to plan and realize separately for open-ended dialogue systems. In *Findings of the Association for Computational Linguistics: EMNLP* 2020, pages 2736-2750, Online. Association for Computational Linguistics. +Amy E Skerry and Rebecca Saxe. 2015. Neural representations of emotion are organized around abstract event features. Current biology, 25(15):1945-1954. +Isaac Triguero, Salvador Garcia, and Francisco Herrera. 2015. Self-labeled techniques for semi-supervised learning: taxonomy, software and empirical study. Knowledge and Information systems, 42(2):245-284. 
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of NeurIPS 2017, pages 5998-6008.
Anuradha Welivita and Pearl Pu. 2020. A taxonomy of empathetic response intents in human social conversations. In Proceedings of the 28th International Conference on Computational Linguistics, pages 4886-4899.
Wei Wu, Can Xu, Yu Wu, and Zhoujun Li. 2018. Towards interpretable chit-chat: Open domain dialogue generation with dialogue acts.

# A Computing the readability of OS dialogues

We used the following steps to calculate the readability of the dialogues. Dialogues that scored high in readability were preferred for the crowd-annotation task, since this avoids the overhead of having to read long and complex dialogues that may exhaust the crowd-worker.

1. Build a frequency vocabulary by calculating the token count over all the dialogues in the cleaned OS dataset.

2. For each dialogue, aggregate the frequencies of all tokens and take the average using the following formula, in which $f_{sum}$ is the sum of frequencies of all tokens, $n_{tokens}$ is the total number of tokens in the dialogue, and $\alpha$ is a constant (set to 87 in our case). The idea behind this is that difficult-to-read dialogues contain less frequent words and should therefore receive a lower readability score.

$$
f = f_{sum} / (\alpha + n_{tokens})
$$

3. For each dialogue, also calculate the percentage of distinct words, say $d$.

4. Finally, compute the readability score for each dialogue by taking a weighted sum of $f$ and $d$. Experimental results showed that the combination $f + 0.04d$ gave the best results. We take the combination of both $f$ and $d$ because, if only $f$ were considered, dialogues that contain a lot of repetitive tokens could score high in readability, which is undesirable.
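The steps above can be sketched as follows ($\alpha = 87$ and the weight 0.04 come from the text; the function and variable names are ours):

```python
from collections import Counter

ALPHA = 87            # damping constant from step 2
DISTINCT_WEIGHT = 0.04  # weight on d found to work best in step 4

def readability_scores(dialogues):
    """dialogues: list of token lists, one list per dialogue."""
    # Step 1: frequency vocabulary over the whole corpus.
    vocab = Counter(tok for dlg in dialogues for tok in dlg)
    scores = []
    for dlg in dialogues:
        # Step 2: averaged token frequency, damped by alpha.
        f = sum(vocab[tok] for tok in dlg) / (ALPHA + len(dlg))
        # Step 3: percentage of distinct words in the dialogue.
        d = 100 * len(set(dlg)) / len(dlg)
        # Step 4: weighted sum of the two components.
        scores.append(f + DISTINCT_WEIGHT * d)
    return scores
```

Under this scoring, dialogues built from frequent words with little token repetition come out on top, which is the intended selection criterion for the annotation task.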
# B AMT task interfaces

The user interface used to collect labels from the AMT workers is shown in Figure 4.

![](images/6052d2638933f44bf6beefa66e15d5ece695b3f6431ca5ce17414e6c91e7dca6.jpg)
Figure 4: The user interface of the AMT crowd-annotation task.

# C Choice of hyper-parameters and additional training details regarding the dialogue emotion classifier used for annotation

The choice of 5 seconds to separate dialogues is based on a histogram of time intervals between adjacent subtitle blocks in the OpenSubtitles corpus, shown in Figure 5. As can be observed in the histogram, most of the time gaps fall below 3 seconds, and a clear drop in count was observed between 3-5 seconds. Therefore, we chose 5 seconds as the time interval to separate dialogues.

![](images/c46e53090cdaa7bd64ebb9ec88df1de73d49748e3b60db583d770259e2cde925.jpg)
Figure 5: Histogram of time intervals between adjacent subtitle blocks in the OpenSubtitles corpus.

The choice of a threshold of 0.92 to select dialogues similar to those that were already annotated was based on manually inspecting a random subset of the results obtained using a range of similarity thresholds. Table 8 shows some example dialogues discovered at this threshold.

Using decreasing weights for context utterances is based on the intuition that in human dialogues, more attention is paid to the most recent utterances in the dialogue history. This idea is backed up by the time-decay functions used in neural dialogue understanding approaches (See et al., 2019). We conducted an ablation study without decreasing weights in the model: the performance of the unweighted model was lower than that of the weighted model, with final F1 scores of 63.44 and 64.86 for the unweighted and weighted models, respectively.

We used the same hyper-parameter setting as in RoBERTa (Liu et al., 2019) when training the dialogue emotion classifier used for annotation.
We used the Adam optimizer with $\beta_{1}$ of 0.9, $\beta_{2}$ of 0.98, an $\epsilon$ value of $1\times 10^{-6}$, and a learning rate of $2\times 10^{-5}$. A dropout of 0.1 was used on all layers and attention weights, along with a GELU activation function (Hendrycks and Gimpel, 2016). We limited the maximum number of input tokens to 100 and used a batch size of 256. All the experiments were conducted on a machine with 2x12 cores @ 2.5 GHz, 256 GB RAM, 2x240 GB SSD, and 2x NVIDIA Titan X (Maxwell) GPUs. Training the final emotion classifier took 546.84 seconds in total. The optimal model was selected based on the average cross-entropy loss calculated between the ground-truth and predicted labels of the validation set.

| Manually annotated dialogue | Dialogue discovered using similarity matching (similarity ≥ 0.92) |
|---|---|
| - That's beautiful! (Acknowledging) | - Now, let's take a look at this beautiful piece of work<br>- Oh, my God. It's beautiful.<br>- Oh. That's beautiful. |
| - I thought the coils were closer to me.<br>- Oh, well... It was a good one nonetheless.<br>- I'm so happy! (Joyful) | - Actually, I just wanted to say I love you. And I'm sorry if I'm a bit edgy about my book, but all that counts for me is you. You becoming my wife.<br>- That's what really matters.<br>- I'm very happy. |
| - Hey! Don't eat at my house anymore.<br>- You're disgusting. (Disgusted) | - I thought I told you to stay the fuck away from me if you were back on that shit.<br>- You're disgusting. |
| - Was the team mad, then?<br>- I wasn't happy!<br>- That's pretty bad. (Acknowledging) | - It's starting to hurt so bad.<br>- Really? That bad?<br>- Really bad. |

Table 8: Examples of similar dialogues discovered above a cosine similarity threshold of 0.92. The last turn in each dialogue discovered through similarity matching was labeled with the emotion or intent of the last turn of the manually labeled dialogue.

# D EDOS statistics

Table 9 shows more descriptive statistics of the EDOS dataset: the number of dialogues and the number of dialogue turns per emotion and intent category. A dialogue is counted under an emotion or an intent if the beginning dialogue prompt is annotated with that emotion or intent.

| Emotion or Intent | No. of dialogues | No. of turns |
|---|---|---|
| Prepared | 21,178 | 48,883 |
| Anticipating | 27,256 | 100,433 |
| Hopeful | 21,328 | 54,012 |
| Proud | 13,910 | 33,365 |
| Excited | 22,118 | 53,756 |
| Joyful | 6,586 | 24,282 |
| Content | 20,688 | 64,569 |
| Caring | 13,599 | 42,806 |
| Grateful | 15,416 | 42,222 |
| Trusting | 41,650 | 134,197 |
| Confident | 26,199 | 84,918 |
| Faithful | 8,095 | 25,029 |
| Impressed | 12,867 | 25,045 |
| Surprised | 16,658 | 46,022 |
| Terrified | 9,449 | 28,730 |
| Afraid | 15,964 | 49,285 |
| Apprehensive | 8,634 | 46,727 |
| Anxious | 2,376 | 8,578 |
| Embarrassed | 11,541 | 32,338 |
| Ashamed | 3,401 | 14,797 |
| Devastated | 6,245 | 17,539 |
| Sad | 23,023 | 66,262 |
| Disappointed | 5,234 | 18,298 |
| Lonely | 3,662 | 16,396 |
| Sentimental | 7,104 | 20,715 |
| Nostalgic | 7,880 | 20,461 |
| Guilty | 9,632 | 30,043 |
| Disgusted | 5,546 | 15,070 |
| Furious | 54,647 | 169,917 |
| Angry | 13,228 | 34,924 |
| Annoyed | 6,637 | 30,072 |
| Jealous | 5,766 | 20,902 |
| Agreeing | 20,173 | 96,562 |
| Acknowledging | 39,781 | 138,165 |
| Encouraging | 3,024 | 10,329 |
| Consoling | 3,785 | 17,256 |
| Sympathizing | 15,557 | 38,774 |
| Suggesting | 42,470 | 101,591 |
| Questioning | 357,255 | 841,556 |
| Wishing | 42,789 | 108,668 |
| Neutral | 7,649 | 55,932 |
| Total | 1,000,000 | 2,829,426 |

Table 9: Descriptive statistics of the EDOS dataset pertaining to each emotion and intent category.

# E Additional training details about the experimental baselines

Here we summarize some of the parameters of the model implementation. We used the RoBERTa tokenizer to tokenize the input utterances; the vocabulary size is 50,265. We allow a maximum of 100 tokens as input to the model. We used 4 sub-layers in the encoder and decoder, with 6 heads in the multi-head attention. The dimension of the hidden units is 300, and the dimension of the pointwise feed-forward layers is 1,200. We use a dropout rate of 0.1 and the GELU (Hendrycks and Gimpel, 2016) activation function for the hidden layers. The loss function was optimized with the Adam optimizer (Kingma and Ba, 2015) with an initial learning rate of $5 \times 10^{-5}$. All the models were trained with a batch size of 512, on machines with 4 NVIDIA Titan X Pascal GPUs, 2 Intel Xeon E5-2680 v3 CPUs, and 256 GB RAM; Table 10 lists the training details as well as the validation performance for all the models. For inference, we use beam search with a beam size of 32. To prevent the models from generating repetitive tokens or n-grams, we modified the beam search algorithm so that at each time step, if any branch contains a repeated 4-gram, we set the log probability of that branch to negative infinity to stop it from being further expanded.
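The 4-gram repeat-blocking rule described above amounts to a predicate applied to each candidate branch during beam search. A simplified sketch (function names are ours; the actual implementation operates inside the beam search loop):

```python
NEG_INF = float("-inf")

def has_repeated_ngram(tokens, n=4):
    """True if any n-gram occurs more than once in the token sequence."""
    seen = set()
    for i in range(len(tokens) - n + 1):
        ngram = tuple(tokens[i:i + n])
        if ngram in seen:
            return True
        seen.add(ngram)
    return False

def branch_score(tokens, log_prob, n=4):
    """Score a candidate beam branch: a branch whose token sequence
    repeats an n-gram is pruned by assigning it -inf log probability."""
    return NEG_INF if has_repeated_ngram(tokens, n) else log_prob
```

Because a branch scored at negative infinity can never outrank any other candidate, it is effectively dropped from further expansion.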
| Model | # Parameters | Training epochs | Training time | Validation PPL |
|---|---|---|---|---|
| Pre-trained (OS) | 121M | 50 | 171.00 hr | 24.51 |
| Fine-tuned (EDOS) | 121M | 5 | 4.23 hr | 31.78 |
| Fine-tuned (ED) | 121M | 9 | 19.50 min | 21.04 |
+ +Table 10: Training details and validation performance of each model configuration. \ No newline at end of file diff --git a/alargescaledatasetforempatheticresponsegeneration/images.zip b/alargescaledatasetforempatheticresponsegeneration/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..1a45189c0567f495ce8a2f76340cb334ba2b3d07 --- /dev/null +++ b/alargescaledatasetforempatheticresponsegeneration/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:39e76f524f4e599cca46c5983902ed682110632f6c10c03fa0a77bf892c676ed +size 784449 diff --git a/alargescaledatasetforempatheticresponsegeneration/layout.json b/alargescaledatasetforempatheticresponsegeneration/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..0e52c9acc03e0ec1bfd165794ce7b6af15dbf7b0 --- /dev/null +++ b/alargescaledatasetforempatheticresponsegeneration/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3497fea378128b5e4ecd0134a3fbfbbab98a853259df75cc5252404eecca7bf2 +size 316319 diff --git a/alargescalestudyofmachinetranslationinturkiclanguages/ce0f2bee-b8cd-4af7-b517-b2986dc3dafe_content_list.json b/alargescalestudyofmachinetranslationinturkiclanguages/ce0f2bee-b8cd-4af7-b517-b2986dc3dafe_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..125ceb205ceb6bd2397d7723c456ec61f2efd645 --- /dev/null +++ b/alargescalestudyofmachinetranslationinturkiclanguages/ce0f2bee-b8cd-4af7-b517-b2986dc3dafe_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5ee2e99a0f62297c7dca6b4b80ef3bea40a800fd52411b106be9126694d9ff3a +size 102286 diff --git a/alargescalestudyofmachinetranslationinturkiclanguages/ce0f2bee-b8cd-4af7-b517-b2986dc3dafe_model.json b/alargescalestudyofmachinetranslationinturkiclanguages/ce0f2bee-b8cd-4af7-b517-b2986dc3dafe_model.json new file mode 100644 index 
0000000000000000000000000000000000000000..3bcc6f607d8600ea2ea1fe98161dfca013500405 --- /dev/null +++ b/alargescalestudyofmachinetranslationinturkiclanguages/ce0f2bee-b8cd-4af7-b517-b2986dc3dafe_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:17f57c100d699fdedd4025b2df39fed15dabc1f0a7751c1982bb8adba6a398a4 +size 127759 diff --git a/alargescalestudyofmachinetranslationinturkiclanguages/ce0f2bee-b8cd-4af7-b517-b2986dc3dafe_origin.pdf b/alargescalestudyofmachinetranslationinturkiclanguages/ce0f2bee-b8cd-4af7-b517-b2986dc3dafe_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..2c4f7987d3601eac42dd231a28319dc2ae448485 --- /dev/null +++ b/alargescalestudyofmachinetranslationinturkiclanguages/ce0f2bee-b8cd-4af7-b517-b2986dc3dafe_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:00d96de8a6477bcbe9a32028c5fc902b8623b83770cbf58ebe7303288d0865d9 +size 334468 diff --git a/alargescalestudyofmachinetranslationinturkiclanguages/full.md b/alargescalestudyofmachinetranslationinturkiclanguages/full.md new file mode 100644 index 0000000000000000000000000000000000000000..dd4941a7bccfa8c9afbe5ed7878641a578690d6e --- /dev/null +++ b/alargescalestudyofmachinetranslationinturkiclanguages/full.md @@ -0,0 +1,288 @@ +# A Large-Scale Study of Machine Translation in the Turkic Languages + +Jamshidbek Mirzakhalov $^{a,b}$ , Anoop Babu $^{a,b}$ , Duygu Ataman $^{a,c}$ , Sherzod Kariev $^{a,b}$ , Francis Tyers $^{a,d}$ , Otabek Abduraufov $^{a,b}$ , Mammad Hajili $^{a,e}$ , Sardana Ivanova $^{a,f}$ , Abror Khaytbaev $^{a,b}$ , Antonio Laverghetta Jr. 
$^{a,b}$ , Behzodbek Moydinboyev $^{a,b}$ , Esra Onal $^{a,d}$ , Shaxnoza Pulatova $^{a,g}$ , Ahsan Wahab $^{a}$ , Orhan Firat $^{a,h}$ , Sriram Chellappan $^{a,b}$ + +$^{a}$ Turkic Interlingua, $^{b}$ University of South Florida, $^{c}$ NYU, $^{d}$ Indiana University, $^{e}$ EPFL, $^{f}$ University of Helsinki, $^{g}$ Namangan State University, $^{h}$ Google Research + +# Abstract + +Recent advances in neural machine translation (NMT) have pushed the quality of machine translation systems to the point where they are becoming widely adopted for building competitive systems. However, there is still a large number of languages that are yet to reap the benefits of NMT. In this paper, we provide the first large-scale case study of the practical application of MT in the Turkic language family in order to realize the gains of NMT for Turkic languages under high-resource to extremely low-resource scenarios. In addition to presenting an extensive analysis that identifies the bottlenecks towards building competitive systems to ameliorate data scarcity, our study has several key contributions, including, i) a large parallel corpus covering 22 Turkic languages consisting of common public datasets in combination with new datasets of approximately 2 million parallel sentences, ii) bilingual baselines for 26 language pairs, iii) novel high-quality test sets in three different translation domains and iv) human evaluation scores. All of our data, software and models are publicly available. $^{1}$ + +# 1 Introduction + +Having been studied widely over the last few decades, machine translation (MT) evaluation has traditionally focused on European languages, due to limitations of the available technology as well as resources. 
Although low-resource MT has recently started to gain more attention and new evaluation benchmarks are becoming available (Guzmán et al., 2019; Ojha et al., 2020; Fraser, 2020; Ansari et al., 2020), there is still a large number of underrepresented languages excluded from MT evaluation. In addition to the cost of preparing such labor-intensive annotations, the lack of training resources also limits the evaluation of MT models in terms of their applicability across a wide range of world languages. On the other hand, many studies have pointed to the limited applicability of prominent methods in MT research, including models and evaluation metrics (Birch et al., 2008; Stanojevic et al., 2015; Bugliarello et al., 2020), in translating languages with varying linguistic typology.

In order to extend the evaluation of the state-of-the-art methods in MT (Joshi et al., 2019) and ultimately aid in designing methods with a wider range of applicability, in this paper we present a large-scale case study of MT methods in the very challenging case of the Turkic language family. The Turkic language family consists of around 35 languages spoken by around 200 million people in communities across Eurasia. Of this number, around 20 are official languages of a state or sub-national entity, with the remaining being minority languages. The languages are distinct in their highly complex use of morphology, which creates extremely sparse vocabularies and presents a challenging case for the evaluation of statistical models, in particular MT systems (Tantug et al., 2008) and n-gram language models (Bender, 2011; Tsarfaty et al., 2020). Table 1 presents the amount of resources and the number of speakers in the Turkic languages$^{2}$, which aids our analysis of the feasibility of crowdsourcing, based on the approach of Moshagen et al. (2014).

| Name | Codes | Articles | Speakers | MT? |
|---|---|---|---|---|
| English | en, eng | 6,237,470 | 400M | |
| Russian | ru, rus | 1,694,280 | 258M | |
| Turkish | tr, tur | 388,641 | 85.0M | |
| Uzbek | uz, uzb | 139,635 | 27.0M | |
| Azerbaijani | az, aze | 177,536 | 23.0M | |
| Kazakh | kk, kaz | 228,123 | 13.2M | |
| Uyghur | ug, uig | 4,898 | 10.0M | |
| Turkmen | tk, tuk | 5,876 | 6.70M | |
| Tatar | tt, tat | 237,332 | 5.20M | |
| Kyrgyz | ky, kir | 80,738 | 4.30M | |
| Bashkir | ba, bak | 55,477 | 1.40M | |
| Chuvash | cv, chv | 45,275 | 1.04M | |
| Karakalpak | kaa | 1,882 | 583K | |
| Crimean Tatar | crh | 8,633 | 540K | |
| Sakha (Yakut) | sah | 13,027 | 450K | |
| Kumyk | kum | | 450K | |
| Karachay-Balkar | krc | 2,049 | 310K | |
| Tuvan | tyv | 3,164 | 280K | |
| Urum | uum | | 190K | |
| Gagauz | gag | 2,737 | 148K | |
| Salar | slr | | 70K | |
| Altai | alt | | 56K | |
| Khakas | kjh | | 43K | |
| Shor | cjs | | 3K | |

Table 1: Number of Wikipedia articles for Turkic languages compared to English and Russian, along with the number of L1 speakers and two- and three-letter language codes. The column MT? indicates whether there are currently available online machine translation systems for the language. (K: thousand, M: million.)

Our study includes the preparation of novel public resources covering many languages in the Turkic family, most of which are included for the first time in parallel corpora.
We also present new benchmarks for MT which can be used to assess different factors determining the limits of MT methods in various languages, such as data size, evaluation metrics, translation domain, linguistic typology, relatedness, and the writing system. We test the use of our resources in MT and present the first evaluation results for many Turkic languages. Our novel resources consist of $i)$ a large-scale multi-centric parallel corpus of $75\mathrm{M}+$ sentence pairs in 22 Turkic languages and their translations into English, Russian, as well as in-family languages, covering over 400 translation directions, $ii)$ 3 new test sets for each translation direction curated from our corpus in 3 different translation domains, and $iii)$ bilingual baselines in 26 different language pairs. Our baselines are evaluated using automatic metrics as well as human assessments against commercial or open-source systems where applicable. We release our parallel corpora, test sets, and baseline systems publicly to encourage future research in Turkic languages.

# 2 Turkic Languages & MT

This section gives a brief overview of the Turkic languages from a linguistic perspective and reviews previous work on MT of these languages. In our study, we include 22 Turkic languages: Altai, Azerbaijani, Bashkir, Crimean Tatar, Chuvash, Gagauz, Karachay-Balkar, Karakalpak, Khakas, Kazakh, Kumyk, Kyrgyz, Sakha, Salar, Shor, Turkmen, Turkish, Tatar, Tuvan, Uyghur, Urum, and Uzbek. Several other widely spoken languages, such as Nogai, Khorasani Turkic, Qashqai, and Khalaj, were left out of our study due to the lack of any available parallel corpora. Future work will focus on extending the corpus to these languages as well.
The languages are of the agglutinative morphological type and uniformly have Subject-Object-Verb main constituent order.

Nominal morphology is highly similar across the languages, with all of them exhibiting inflection for number, possession, and case. The number of cases varies, but the six core cases of nominative, genitive, accusative, dative, locative, and ablative are extant in the vast majority of languages. As part of the nominal inflectional system, the languages also have a derivational process whereby locatives and genitives can be pronominalized and constitute full noun phrases in their own right. Verbal inflection, on the other hand, is more heterogeneous, with each language having a variety of strategies for encoding tense, aspect, voice, modality, and evidentiality. One common feature, however, is that each of the languages has an extensive system of non-finite forms: verbal adjectives, verbal nouns, and verbal adverbs. These are full clauses that can be used as either modifiers (in the case of verbal adjectives and verbal adverbs) or heads (in the case of verbal nouns). Many of the languages also have constructions consisting of a non-finite verbal form and an auxiliary verb which constitute a single predicate, with the auxiliary verb giving extra information about tense or mood (Johanson and Johanson, 2015).

The modern Turkic languages are written in a variety of scripts, with Latin, Cyrillic, and Perso-Arabic being the most common. Many of the languages have been written in several writing systems over the past century, making text collection more problematic. For example, we can find instances where the same language has texts written in Perso-Arabic before the 1920s, in Latin until the 1930s, in Cyrillic until the 1990s, and then in Latin again (Róna-Tas, 2015).
In addition, many languages have gone through several orthographic norms based on the same script, and some languages are currently written in different scripts depending on which country the speakers are in. This orthographic diversity makes collecting and collating text resources difficult, as many texts may be available only in a previously used orthography, and conversion between orthographic systems is never deterministic owing to the large number of loan words in many texts.

# 2.2 MT of Turkic Languages

The need for more comprehensive and diverse multilingual parallel corpora has sped up the creation of such large-scale resources for many language families and linguistic regions (Koehn, 2005; Choudhary and Jha, 2011; Post et al., 2012; Nomoto et al., 2018; Esplà-Gomis et al., 2019; $\forall$ et al., 2020). Tiedemann (2020) released a large-scale corpus for over 500 languages covering thousands of translation directions; it includes 14 Turkic languages and provides bilingual baselines for all translation directions present in the corpus. However, the varying and limited size of its test sets does not allow for extensive analysis and comparison across model artifacts, linguistic features, and translation domains. Khusainov et al. (2020) collected a large-scale Russian-Turkic parallel corpus for 6 language pairs and report bilingual baselines using a number of NMT-based approaches, although the dataset, test sets, and models are not publicly released, which limits its use as a comparable benchmark. Alküm and Çebi (2019) introduce a rule-based MT framework for Turkic languages and demonstrate its performance on 4 language pairs. Washington et al. (2019) present several rule-based MT systems built for Turkic languages, which are available through the Apertium$^{3}$ website.
For the individual languages in our corpus, there are several proposed MT systems and linguistic resources: Azerbaijani (Hamzaoglu, 1993; Fatullayev et al., 2008), Bashkir (Tyers et al., 2012), Crimean Tatar (Gokirmak et al., 2019; Altintas, 2001), Karakalpak (Kadirov, 2015), Kazakh (Assylbekov and Nurkas, 2014; Sundetova et al., 2015; Littell et al., 2019; Briakou and Carpuat, 2019; Tukeyev et al., 2019), Kyrgyz (Cetin and Ismailova), Sakha (Ivanova et al., 2019), Turkmen (Tantug et al., 2007), Turkish (Turhan, 1997; El-Kahlout and Oflazer, 2006; Bisazza and Federico, 2009; Tantug et al., 2011; Ataman et al., 2017), Tatar (Salimzyanov et al., 2013; Khusainov et al., 2018; Valeev et al., 2019; Gokirmak et al., 2019), Tuvan (Killackey, 2013), Uyghur (Mahsut et al., 2004; Nimaiti and Izumi, 2014; Song and Dai, 2015; Wang et al., 2020), and Uzbek (Axmedova et al., 2019). Yet, to our knowledge, there has not been a study that covers the Turkic languages to such a large extent as ours, both in terms of multilingual parallel corpora and in terms of benchmarks that include multi-way comparable test sets in all languages.

# 3 TIL Corpus

Our parallel corpus is collected by unifying publicly available datasets with additional parallel data we prepare by crawling public domain resources. Table 2 shows, for each language, the total number of sentences across the corpus along with the number of newly introduced (previously unavailable) sentences. This section describes the details of our data collection process.

# 3.1 Public Datasets

In our corpus we include the following public datasets:

- The Tatoeba corpus (Tiedemann, 2020) provides training and test sets for over 500 languages and thousands of translation pairs. It uses the latest version of OPUS$^{4}$ (Tiedemann and Nygaard, 2004) as training sets and parallel sentences from the Tatoeba project for testing. Tatoeba consists of 58 language pairs of interest.
For the purposes of our corpus, we merge the training, development, and test sets into a single set for all available languages.
- JW300 (Agić and Vulić, 2019) is a public dataset available for download through OPUS. Although most of the parallel data in JW300 is also provided through the Tatoeba corpus, we identified several pairs that were missing in Tatoeba but present in JW300. To avoid data loss, we obtained the JW300 dataset directly from OPUS and deduplicated it against the Tatoeba corpus. This dataset provided data for 59 language pairs of interest and resulted in 5.2 million parallel sentences.
- GoURMET is another dataset available through OPUS; it provides parallel sentences for 7 language pairs including English-Turkish and English-Kyrgyz, which are not available in Tatoeba due to its more recent release. English-Kyrgyz consists of 14.5 thousand sentence pairs while English-Turkish contains 1.3 million.

In addition, with the permission of the owners, we include privately owned corpora for English-Azerbaijani$^{6}$ (news articles), English-Uzbek$^{7}$ (Khan Academy website localization), and Bashkir-Russian$^{8}$ (a mix of news articles and literary works).

| Language | Data | Script | Category | New Data |
|---|---|---|---|---|
| Turkish | 52.6M | Latin | The Underdogs (4) | 755.9K |
| Kazakh | 5.3M | Arabic, Cyrillic, Latin | The Rising Star (3) | 201.9K |
| Uzbek | 2.9M | Arabic, Cyrillic, Latin | The Rising Star (3) | 1.7M |
| Azerbaijani | 2.2M | Arabic, Cyrillic, Latin | The Scraping-Bys (1) | 284.8K |
| Tatar | 1.8M | Arabic, Cyrillic | The Scraping-Bys (1) | 192.0K |
| Kyrgyz | 1.8M | Arabic, Cyrillic | The Scraping-Bys (1) | 188.6K |
| Chuvash | 1.5M | Cyrillic | The Scraping-Bys (1) | 191.0K |
| Turkmen | 921.0K | Arabic, Cyrillic, Latin | The Scraping-Bys (1) | 191.7K |
| Bashkir | 893.1K | Cyrillic | The Scraping-Bys (1) | 713.9K |
| Uyghur | 343.0K | Arabic, Cyrillic, Latin | The Scraping-Bys (1) | 187.0K |
| Karakalpak | 253.8K | Cyrillic, Latin | The Scraping-Bys (1) | 274.3K |
| Khakas | 219.0K | Cyrillic | The Left-Behinds (0) | 242.8K |
| Altai | 192.6K | Cyrillic | The Left-Behinds (0) | 190.0K |
| Crimean Tatar | 185.3K | Cyrillic, Latin | The Scraping-Bys (1) | 197.6K |
| Kumyk | 165.6K | Cyrillic | The Left-Behinds (0) | 192.4K |
| Karachay-Balkar | 162.8K | Cyrillic, Latin | The Scraping-Bys (1) | 182.6K |
| Gagauz | 157.4K | Cyrillic, Latin | The Scraping-Bys (1) | 177.1K |
| Sakha | 157.1K | Cyrillic | The Scraping-Bys (1) | 174.8K |
| Tuvinian | 103.2K | Cyrillic | The Scraping-Bys (1) | 148.3K |
| Shor | 2.3K | Cyrillic | The Left-Behinds (0) | 6.9K |
| Salar | 766 | Latin | The Left-Behinds (0) | 1.5K |
| Urum | 491 | Greek, Cyrillic, Latin | The Left-Behinds (0) | 491 |

Table 2: Corpus details for each Turkic language. Data shows the aggregated number of sentences across the corpus. Category refers to the language classes defined by Joshi et al. (2020) based on available data resources.

# 3.2 Data Crawling

We obtained additional parallel data from several public domain websites that contain large amounts of text translated into many languages. One of these is TED Talks, which hosts talks across various domains translated by volunteers. Qi et al. (2018) compiled a dataset for 60 languages; however, only a few Turkic languages were available at their time of curation. We compiled an updated version of this dataset and obtained sentence pairs for 8 Turkic languages. *Bible.is* is another website that contains an extensive list of languages into which religious texts and books are translated. 19 out of 22 Turkic languages are covered in this source, with an average of approximately 8,000 sentence pairs per translation direction. Additionally, we crawled other public websites, online dictionaries, and resources with parallel data that were identified by native speakers of these languages. The full list of online resources used in our crawling is given in the Appendices.
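The merge-and-deduplicate step used when combining sources (e.g. dropping JW300 pairs already present via Tatoeba, Section 3.1) amounts to keeping a set of already-seen sentence pairs. A minimal sketch, with made-up sentences; the function name and data are illustrative, not the project's actual tooling:

```python
def merge_dedup(existing_pairs, new_pairs):
    """Merge two parallel corpora, keeping only the first occurrence of each
    (source, target) sentence pair. Inputs are iterables of 2-tuples."""
    seen = set()
    merged = []
    for s, t in list(existing_pairs) + list(new_pairs):
        key = (s.strip(), t.strip())
        if key not in seen:          # keep only previously unseen pairs
            seen.add(key)
            merged.append(key)
    return merged

# Illustrative en-uz pairs, not real corpus data.
tatoeba = [("Hello.", "Salom."), ("Thanks.", "Rahmat.")]
jw300 = [("Hello.", "Salom."), ("Good night.", "Xayrli tun.")]
corpus = merge_dedup(tatoeba, jw300)  # 3 pairs: the duplicate is dropped
```

The same pass is applied per language pair once the entire corpus is combined (Section 3.4).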
# 3.3 Data Alignment

All crawled documents are aligned using Hunalign (Varga et al., 2005), with a threshold of either 0.2 or 0.4 depending on the availability of a native speaker for the language. When crawling pre-aligned sources such as TED Talks, we noticed serious alignment issues with certain Turkic languages, especially when the source and target differ greatly in size. In these cases, we split both sides into sentences using the NLTK sentence tokenizer$^{11}$ and realign them with Hunalign. Specifically for the Bible dataset, all data is first aligned at the verse level and then split into sentence-level bitexts whenever possible. This results in parallel texts that are relatively longer while ensuring higher-quality alignments.

# 3.4 Data Preprocessing

Many of the languages in our dataset are written in multiple scripts, which creates consistency problems for building MT systems. Therefore, we transliterate three of the languages in our dataset that have a high mix of multiple scripts. Namely, we transliterate Uzbek into the Latin script, while all Karakalpak text is converted into Cyrillic. Although the performance of the transliteration tools (Uzbek$^{12}$ and Karakalpak$^{13}$) was not strictly evaluated, the tools we used are recommended and widely adopted by native speakers of the languages. Once the entire corpus is combined, we deduplicate the sentences in each language pair.

# 4 Bilingual Baselines

We train bilingual baselines for 26 language pairs in three resource categories: high ($>$5M), medium (100K-5M), and low ($<$100K). The choice of pairs was based on multiple factors such as the availability of test sets, of native speakers (for human evaluation), and of other comparable MT systems.

# 4.1 Model Details

All models are Transformers (Vaswani et al., 2017) (transformer-base) whose exact configuration depends on the amount of data available for training.
Models for low-resource pairs use 256-dimensional embeddings and hidden layers; models for mid-resource pairs use 512-dimensional embeddings and hidden layers. The models for high-resource pairs use the same 512-dimensional embedding and hidden-layer sizes for the encoder, but for the decoder both dimensions are increased to 1024. All models are trained with the Adam optimizer (Kingma and Ba, 2015) over cross-entropy loss, with a learning rate that warms up over the first 4800 training steps to a maximum of $3 \times 10^{-4}$ and then decays to a minimum of $1 \times 10^{-8}$. We use a training batch size of 4096, and perplexity as the early-stopping metric with a patience of 5 epochs. We set a dropout (Srivastava et al., 2014) probability of 0.3 in both the encoder and the decoder, and apply byte pair encoding (BPE) (Sennrich et al., 2015; Dong et al., 2015) with a joint vocabulary size of 4K for low-resource and 32K for mid/high-resource scenarios.

All models use the Joey NMT (Kreutzer et al., 2019) implementation and apex$^{14}$ where possible to speed up training. Models were trained on preemptible GPUs freely available on Google Colab.$^{15}$

# 4.2 Test Sets

High-quality and diverse test sets are essential for evaluating the strengths and weaknesses of MT systems. We curate 3 test sets covering 3 translation domains: religious (Bible), conversational (TED Talks), and news (X-WMT).

The Bible dataset is the main source available across almost all of the 24 language pairs included in our corpus. From this dataset, the 400 to 800 most commonly present sentences for every language pair were held out to create a test set. This yields a test set that is comparable across all language pairs, which we find essential for controlled evaluation and believe will be a useful resource for future studies involving multilingual models.
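The "most commonly present sentences" selection just described can be sketched as follows: count how often each shared-side (e.g. English) sentence occurs across the per-pair corpora and keep the most frequent ones. The data structures and sentence strings are illustrative, not the project's actual pipeline:

```python
from collections import Counter

def select_comparable_testset(corpora, k):
    """corpora: dict mapping a language pair to the set of shared-side
    sentences available for that pair. Returns the k sentences present in
    the most pairs, i.e. a test set comparable across language pairs."""
    counts = Counter()
    for sentences in corpora.values():
        counts.update(set(sentences))   # each pair votes once per sentence
    return [s for s, _ in counts.most_common(k)]

# Hypothetical per-pair Bible coverage.
corpora = {
    "en-uz": {"In the beginning ...", "And it came to pass ...", "rare verse"},
    "en-kk": {"In the beginning ...", "And it came to pass ..."},
    "en-ba": {"In the beginning ..."},
}
testset = select_comparable_testset(corpora, 2)
# keeps the sentences shared by 3 and 2 pairs; drops the rare one
```

Holding out the same sentences for every pair is what makes the per-pair scores in Table 4 directly comparable.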
TED Talks is another resource we use for collecting sentences across multiple languages to create a language-wise comparable test set in the conversational domain, which also makes our evaluation comparable across domains. After deduplication, 3,000-5,000 sentences per language pair are selected for our TED Talks test set.

X-WMT is our test set in the news domain, based on the professionally translated English-Russian test sets from the WMT 2020 Shared Task (Mathur et al., 2020). This set contains approximately 1,000 sentences curated from both English- and Russian-centric news sources. Through the engagement of native speakers and professional translators$^{16}$, we partially translate this test set into 8 Turkic languages (Bashkir, Uzbek, Turkish, Kazakh, Kyrgyz, Azerbaijani, Karakalpak, and Sakha).
|  | en | ru | ba | tr | uz | ky | kk | az | sah | kaa |
|---|---|---|---|---|---|---|---|---|---|---|
| en |  |  |  |  |  |  |  |  |  |  |
| ru | 1000 |  |  |  |  |  |  |  |  |  |
| ba | 1000 | 1000 |  |  |  |  |  |  |  |  |
| tr | 800 | 800 | 800 |  |  |  |  |  |  |  |
| uz | 900 | 900 | 900 | 600 |  |  |  |  |  |  |
| ky | 500 | 500 | 500 | 400 | 500 |  |  |  |  |  |
| kk | 700 | 700 | 700 | 500 | 700 | 500 |  |  |  |  |
| az | 600 | 600 | 600 | 500 | 600 | 500 | 500 |  |  |  |
| sah | 300 | 300 | 300 | 300 | 300 | 300 | 300 | 300 |  |  |
| kaa | 300 | 300 | 300 | 300 | 300 | 300 | 300 | 300 | 300 |  |
Table 3: X-WMT test sets. Bolded entries indicate the original translation direction.

Table 3 highlights the currently available test-set directions. While Bashkir and Sakha were translated by professional translators, the other languages were translated and validated (by another person) by proficient bilingual speakers of both the source and target language. The curation of this test set is an ongoing and growing effort, currently covering 88 language directions.

# 5 Evaluation

Automatic evaluation metrics are commonplace in MT research, and a recent line of work explores metrics that capture translation quality beyond syntactic and lexical features (Zhang et al., 2019; Sellam et al., 2020; Rei et al., 2020). However, methods relying on contextual embeddings to capture the semantic similarity between the hypothesis and references fall short in terms of language coverage, largely because pretraining these evaluation models requires a significant amount of monolingual data, which most low-resource languages lack. In this study, we evaluate our systems using both automatic metrics and human evaluation of translations.

# 5.1 Automatic Metrics for MT

We employ two widely adopted metrics: BLEU (Papineni et al., 2002) and ChrF (Popovic, 2015). BLEU utilizes modified $n$-gram precision, where the consecutive $n$-grams of the system translation are compared with the consecutive $n$-grams of the reference translation; we use the standard SacreBLEU implementation (Post, 2018). ChrF applies the same idea at the level of character $n$-grams, and we use the original implementation from the paper as provided through the NLTK library.$^{17}$

# 5.2 Human Evaluation

To perform a more holistic analysis of MT systems, it is critical to involve native speakers in the evaluation process.
We conducted a human evaluation campaign using a randomly sampled subset of 250 sentences from X-WMT, or Bible whenever X-WMT was not available, to evaluate the outputs of 14 bilingual baseline models. Our assessment is based on the Direct Assessment (DA) protocol (Nießen et al., 2000; Papineni et al., 2002; Doddington, 2002), where annotators rate a translation for adequacy and fluency on a 5-point Likert scale. All participants of the study were bilingual speakers of the source and target language. To better understand the importance of directionality (e.g. English-X vs. X-English) and to avoid variance in scores, we ensure that both directions of the same pair are evaluated by the same annotator whenever possible. When reporting, we average the scores for each pair but report adequacy and fluency separately. Adequacy is defined as how much information is preserved in the translation: a score of 1 means the translation is meaningless and has no correlation with the target sentence, while a score of 5 means the translation retains all of the information. Fluency is defined as how grammatically, syntactically, and stylistically correct the translation is: a score of 1 means the sentence makes no sense grammatically or syntactically, while a score of 5 means the sentence is perfectly correct.

# 6 Results & Discussion

The upper section of Table 4 shows the bilingual baselines for the high-resource pairs and their evaluation scores in the three domains. Despite the large training size, both models perform relatively modestly on Bible and TED Talks, with the en-tr model slightly better than ru-tr. Our hypothesis is that the domain of the Bible test set is far from the rest of the training data for both pairs, as most of the training data for Turkish comes from OpenSubtitles.$^{18}$ Another likely bottleneck is the suboptimal model size and hyperparameters, which were not tuned due to limited computational resources.
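The per-pair aggregation of DA ratings described in Section 5.2 (average per pair; adequacy and fluency reported separately) is a simple group-by mean. A sketch with made-up ratings, not real annotation data:

```python
from collections import defaultdict

def average_da_scores(ratings):
    """ratings: list of (pair, adequacy, fluency) tuples on a 1-5 Likert
    scale. Returns {pair: (mean_adequacy, mean_fluency)}."""
    buckets = defaultdict(list)
    for pair, adq, flu in ratings:
        buckets[pair].append((adq, flu))
    return {
        pair: (sum(a for a, _ in rs) / len(rs), sum(f for _, f in rs) / len(rs))
        for pair, rs in buckets.items()
    }

# Illustrative ratings from two annotations of en-uz and one of uz-en.
ratings = [("en-uz", 4, 3), ("en-uz", 2, 5), ("uz-en", 5, 4)]
print(average_da_scores(ratings))  # {'en-uz': (3.0, 4.0), 'uz-en': (5.0, 4.0)}
```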
Baseline results for the mid- and low-resource pairs are given in the lower part of Table 4. While the results fluctuate considerably, it is important to note the large disparities in BLEU scores
| Pair | Train size | Bible: Test size | Bible: BLEU | Bible: ChrF | TED Talks: Test size | TED Talks: BLEU | TED Talks: ChrF | X-WMT: Test size | X-WMT: BLEU | X-WMT: ChrF |
|---|---|---|---|---|---|---|---|---|---|---|
| en-tr | 39.9M | 416 | 7.15 | 0.30 | 5.2K | 12.32 | 0.43 | 800 | 19.87 | 0.51 |
| ru-tr | 16.8M | 455 | 7.44 | 0.33 | 5.1K | 8.64 | 0.38 | 800 | 8.81 | 0.41 |
| ru-uz | 1.22M | 684 | 6.01 | 0.41 | 2.7K | 4.51 | 0.76 | 800 | 5.95 | 0.39 |
| uz-ru | 1.22M | 684 | 9.84 | 0.51 | 2.7K | 7.57 | 0.73 | 800 | 7.45 | 0.37 |
| en-az | 784K | 455 | 10.56 | 0.24 | 3.3K | 10.58 | 0.29 | 600 | 8.88 | 0.41 |
| az-en | 784K | 455 | 21.17 | 0.45 | 3.3K | 17.01 | 0.17 | 600 | 12.14 | 0.42 |
| en-ky | 733K | 451 | 6.47 | 0.32 | - | - | - | 500 | 3.18 | 0.19 |
| ky-en | 733K | 451 | 13.08 | 0.43 | - | - | - | 500 | 4.30 | 0.40 |
| tr-az | 634K | 606 | 13.78 | 0.65 | 3.6K | 20.50 | 0.40 | 500 | 9.68 | 0.33 |
| az-tr | 634K | 606 | 11.66 | 0.71 | 3.6K | 24.20 | 0.95 | 500 | 11.53 | 0.49 |
| en-kk | 601K | 453 | 3.62 | 0.61 | 3.6K | 6.31 | 0.29 | 700 | 6.99 | 0.38 |
| kk-en | 601K | 453 | 11.22 | 0.27 | 3.6K | 9.78 | 0.30 | 700 | 9.75 | 0.46 |
| en-uz | 555K | 465 | 5.23 | 0.40 | 3.2K | 5.89 | 0.20 | 800 | 6.60 | 0.42 |
| uz-en | 555K | 465 | 16.20 | 0.63 | 3.2K | 11.61 | 0.18 | 800 | 12.32 | 0.48 |
| tr-uz | 161K | 486 | 6.50 | 0.14 | 2.9K | 4.28 | 0.20 | 700 | 1.58 | 0.23 |
| uz-tr | 161K | 486 | 7.40 | 0.32 | 2.9K | 3.92 | 0.26 | 700 | 1.73 | 0.22 |
| kk-ky | 6.4K | 696 | 2.39 | 0.33 | - | - | - | 500 | 0.14 | 0.09 |
| ky-kk | 6.4K | 696 | 2.53 | 0.24 | - | - | - | 500 | 0.11 | 0.13 |
| en-krc | 6.5K | 374 | 5.57 | 0.25 | - | - | - | - | - | - |
| krc-en | 6.5K | 374 | 11.57 | 0.22 | - | - | - | - | - | - |
| kk-tt | 7.7K | 678 | 4.13 | 0.22 | - | - | - | - | - | - |
| tt-kk | 7.7K | 678 | 3.75 | 0.17 | - | - | - | - | - | - |
| ru-sah | 8K | 759 | 2.48 | 0.27 | - | - | - | 300 | 0.08 | 0.20 |
| sah-ru | 8K | 759 | 2.44 | 0.23 | - | - | - | 300 | 0.31 | 0.16 |
| uz-kaa | 8.9K | 772 | 9.90 | 0.71 | - | - | - | 300 | 5.39 | 0.41 |
| kaa-uz | 8.9K | 772 | 9.58 | 0.60 | - | - | - | 300 | 5.24 | 0.44 |
Table 4: Bilingual baselines separated into high-, mid-, and low-resource pairs (K: thousand, M: million).

between models when translated in and out of non-Turkic languages. However, these differences are not as prominent when evaluated using ChrF, which is a character-level metric. This can partially be attributed to the complex morphology of Turkic languages, which penalizes lexical mispredictions at a much higher rate than in, for example, English (Tantug et al., 2008), and in turn leads to lower BLEU scores. To examine this phenomenon in more detail, we compare the X-WMT results against human evaluations of the translations these models produced in Section 6.1.

Another notable aspect is the importance of scripts in the performance of the models. Language pairs with more than one script consistently underperform (both in automatic and human evaluations) the ones where both the source and target language use the same script. In fact, the best 6 models on the X-WMT test sets all have Latin scripts on both the source and target side. Suboptimal performance in the face of a script disparity is a known phenomenon (Anastasopoulos and Neubig, 2019; Murikinati et al., 2020; Aji et al., 2020; Amrhein and Sennrich, 2020), where techniques such as transliteration have been shown to improve performance. This is mostly attributable to the model's inability to represent both languages effectively in a shared space when they do not share the same script, which can be damaging for downstream performance.

# 6.1 Comparing Human Evaluations to BLEU

Using the Direct Assessment (DA) surveys described in Section 5.2, we obtain average adequacy and fluency scores for almost all baseline models. Figures 1a and 1b show the BLEU/ChrF scores and the adequacy/fluency scores, respectively.
Comparing the scores from native speakers of these languages, it is evident that the disparities in BLEU scores between the two translation directions are exaggerated and even misleading (e.g. en-az vs. az-en). The human-evaluation results for mid-resource pairs are much more closely clustered than in the BLEU/ChrF figure. These results further expose the pitfalls of automatic MT evaluation metrics and underscore the role of native speakers in the MT process.
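The morphology effect behind these BLEU/ChrF gaps can be illustrated with a toy example: a simplified character $n$-gram F-score in the spirit of ChrF (not the official implementation) next to word-level unigram precision (the $n=1$ ingredient of BLEU). A single wrong suffix zeroes out the whole word for the word-level metric, while most character $n$-grams survive. The example sentence is a hypothetical agglutinative pair, chosen only for illustration:

```python
from collections import Counter

def char_ngrams(text, n):
    s = text.replace(" ", "")
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def unigram_precision(hyp, ref):
    """Word-level modified unigram precision."""
    h, r = Counter(hyp.split()), Counter(ref.split())
    return sum((h & r).values()) / max(sum(h.values()), 1)

def chrf_like(hyp, ref, max_n=6, beta=2.0):
    """Toy character n-gram F-beta; simplified relative to real ChrF."""
    precs, recs = [], []
    for n in range(1, max_n + 1):
        h, r = char_ngrams(hyp, n), char_ngrams(ref, n)
        overlap = sum((h & r).values())
        if h:
            precs.append(overlap / sum(h.values()))
        if r:
            recs.append(overlap / sum(r.values()))
    p, r = sum(precs) / len(precs), sum(recs) / len(recs)
    return (1 + beta ** 2) * p * r / (beta ** 2 * p + r) if p + r else 0.0

ref = "evlerimizden geldik"
hyp = "evlerimizde geldik"            # one case suffix wrong
print(unigram_precision(hyp, ref))    # 0.5: the whole word counts as wrong
print(round(chrf_like(hyp, ref), 2))  # 0.77: most character n-grams match
```

This is why the ChrF columns in Table 4 are far less dispersed than the BLEU columns when the target is a morphologically rich Turkic language.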
| Pair | Test size | Baseline: BLEU | Baseline: ChrF | Google Translate: BLEU | Google Translate: ChrF | Yandex Translate: BLEU | Yandex Translate: ChrF | Apertium: BLEU | Apertium: ChrF |
|---|---|---|---|---|---|---|---|---|---|
| en-tr | 800 | 19.87 | 0.51 | 69.24 | 0.83 | 40.03 | 0.69 | - | - |
| ru-tr | 800 | 8.81 | 0.41 | 24.79 | 0.54 | 16.64 | 0.44 | - | - |
| tr-uz | 700 | 1.58 | 0.23 | 27.25 | 0.60 | 6.58 | 0.42 | - | - |
| uz-tr | 700 | 1.73 | 0.22 | 28.03 | 0.58 | 5.58 | 0.38 | 4.31 | 0.33 |
| en-uz | 800 | 6.60 | 0.42 | 48.50 | 0.72 | 15.66 | 0.51 | - | - |
| uz-en | 800 | 12.32 | 0.48 | 32.35 | 0.39 | 6.93 | 0.41 | - | - |
| en-kk | 700 | 6.99 | 0.38 | 26.60 | 0.55 | 5.51 | 0.39 | - | - |
| kk-en | 700 | 9.75 | 0.46 | 22.50 | 0.47 | 23.2 | 0.50 | - | - |
| tr-az | 500 | 9.68 | 0.33 | 36.78 | 0.65 | 5.53 | 0.38 | - | - |
| az-tr | 500 | 11.53 | 0.49 | 32.67 | 0.62 | 11.75 | 0.44 | - | - |
| en-ky | 500 | 3.18 | 0.19 | 26.97 | 0.56 | 5.21 | 0.36 | - | - |
| ky-en | 500 | 4.30 | 0.40 | 21.66 | 0.50 | 3.89 | 0.20 | - | - |
| en-az | 600 | 8.88 | 0.41 | 78.54 | 0.89 | 6.59 | 0.40 | - | - |
| az-en | 600 | 12.14 | 0.42 | 39.42 | 0.65 | 12.54 | 0.46 | - | - |
| ru-uz | 800 | 5.95 | 0.39 | 22.26 | 0.56 | 13.19 | 0.50 | - | - |
| uz-ru | 800 | 7.45 | 0.37 | 19.00 | 0.48 | 10.87 | 0.43 | - | - |
| kk-tt* | 678 | 4.13 | 0.22 | 5.45 | 0.35 | 1.58 | 0.24 | 2.77 | 0.28 |
| tt-kk* | 678 | 3.75 | 0.17 | 5.44 | 0.35 | 1.41 | 0.22 | - | - |
| ru-sah | 300 | 0.08 | 0.20 | - | - | 8.27 | 0.40 | - | - |
| sah-ru | 300 | 0.31 | 0.16 | - | - | 24.93 | 0.54 | - | - |
| uz-kaa | 300 | 5.39 | 0.41 | - | - | - | - | 11.71 | 0.42 |
| kaa-uz | 300 | 5.24 | 0.44 | - | - | - | - | 5.22 | 0.30 |
| kk-ky | 500 | 0.14 | 0.09 | 20.56 | 0.51 | 4.78 | 0.35 | 9.12 | 0.35 |
| ky-kk | 500 | 0.11 | 0.13 | 20.57 | 0.52 | 3.52 | 0.34 | 6.55 | 0.34 |
Table 5: Bilingual baselines compared to online MT systems on X-WMT (pairs marked with * use Bible data).
| Target | Adequacy: BLEU | Adequacy: ChrF | Fluency: BLEU | Fluency: ChrF |
|---|---|---|---|---|
| Turkic | 0.62 | 0.71 | 0.75 | 0.67 |
| Non-Turkic | 0.75 | 0.68 | 0.83 | 0.86 |
Table 6: Correlation between scores from human evaluation and automatic metrics when translating into Turkic and non-Turkic languages. Correlation is measured using Pearson's $r$.

# 6.2 Turkic Languages on the Target Side

Even though BLEU scores do not offer a holistic way to compare two MT systems, they are effective in indicating which system performs better. As clearly seen from the results in Table 4, baseline performance as measured by BLEU when translating from English into a Turkic language is substantially worse than when translating from a Turkic language into English; translating into the Turkic language is typically twice as bad in terms of BLEU. The reliability of the BLEU score also decreases when translating into morphologically rich languages, and it has indeed been shown to correlate poorly with human judgments in Turkic languages (Ma et al., 2018, 2019). Table 6 shows the correlation between BLEU/ChrF and adequacy/fluency scores. BLEU correlates with adequacy and fluency considerably better when the target side is a non-Turkic language, which reinforces our earlier points regarding morphology. ChrF's correlation with adequacy scores is about the same regardless of the target language.

# 6.3 Comparison to Existing Systems

Table 5 compares our baselines to three commercial or open-source MT systems: Google Translate,$^{19}$ Yandex Translate,$^{20}$ and Apertium (Forcada et al., 2011). Google Translate results are significantly higher than our baselines and the other MT systems. There are several reasons for the score disparities. First, commercial systems have access to more training data, possibly including the public data that we hold out for our test sets.
Moreover, several test-set translators used Google Translate to produce initial translations and performed post-edits afterwards (e.g. en-uz), which creates a bias favoring sentences generated by Google's service. A safer comparison of the baselines is achieved with Yandex Translate, which, despite its lower performance, supports more Turkic languages (8 in Google vs. 9 in Yandex). However, it is important to note that their API yielded worse results than their web interface. Apertium is a rule-based MT framework that supports several Turkic-Turkic pairs, and we include its results wherever one is available. For those pairs, the results are comparable with our baselines and Yandex Translate.

![](images/ade5e3068254daedb302cf70ec7aee8902a09bc7de273df1da6ee5ba04777bfc.jpg)
(a) BLEU and ChrF scores for select pairs. Note: ChrF scores were multiplied by 20 for better visibility.

![](images/6dce2c84bc361966bd43c92e4b734e8a527491fa3d740e36752f1171a468cf69.jpg)
(b) Adequacy and Fluency scores (1-5) obtained from human evaluations.

Figure 1: Comparison between BLEU/ChrF scores and Adequacy/Fluency scores. Best viewed in color.

# 7 Conclusion & Future Work

In this paper, we introduce a large parallel corpus covering 22 Turkic languages along with in-domain and out-of-domain evaluation sets. We also train the first baseline models for several language pairs and take initial steps toward addressing the challenges of machine translation for the Turkic languages. This study was carried out in a participatory research setting by a diverse community of researchers, engineers, language specialists, and native speakers of Turkic languages. Future work will focus on methods for effective cross-lingual transfer, extending the coverage of the corpus to more languages and domains, and increasing the size of the test sets to provide more comprehensive benchmarks.
# Acknowledgements

This project received support from the Google AI Academic Research Awards and the Swiss National Science Foundation (MUTAMUR; no. 176727). We thank all of the members and partners of the Turkic Interlingua (TIL) community for their contributions to the project. Namely, we would like to thank our dedicated translators and annotators: Nurlan Maharramli, Leyla Baghirli, Ipek Baris, Aigiz Kunafin, Aydos Muxammadiyarov, Ziyodabonu Qobiljon qizi, Alperen Cantez, Doniyorbek Rafikjonov, Mukhammadbektosh Khaydarov, Medina Zokirjonova, Erkinbek Vokhabov, Mohiyaxon Uzoqova, Petr Popov, Abilxayr Zholdybai, and Akylbek Khamitov. We also acknowledge and appreciate significant dataset contributions from Rasul Karimov, Iskandar Mamasoliev, Khan Academy O'zbek, and the Foundation for the Preservation and Development of the Bashkir Language. Furthermore, we would like to thank Dr. John Licato, Dr. Jonathan Washington, and Animesh Nighojkar for their valuable feedback throughout the project.

# References

Željko Agić and Ivan Vulić. 2019. JW300: A wide-coverage parallel corpus for low-resource languages. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3204-3210, Florence, Italy. Association for Computational Linguistics.
Alham Fikri Aji, Nikolay Bogoychev, Kenneth Heafield, and Rico Sennrich. 2020. In neural machine translation, what does transfer learning transfer? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7701-7710.
Emel Alküm and Yalçın Cebi. 2019. Machine translation infrastructure for Turkic languages (MT-Turk). The International Arab Journal of Information Technology, 16(3):380-388.
Kemal Altıntas. 2001. Turkish to Crimean Tatar machine translation system. Ph.D. thesis, Bilkent University.
Chantal Amrhein and Rico Sennrich. 2020. On romanization for model transfer between scripts in neural machine translation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, pages 2461-2469.
Antonios Anastasopoulos and Graham Neubig. 2019. Pushing the limits of low-resource morphological inflection. arXiv preprint arXiv:1908.05838.
Ebrahim Ansari, Nguyen Bach, Ondřej Bojar, Roldano Cattoni, Fahim Dalvi, Nadir Durrani, Marcello Federico, Christian Federmann, Jiatao Gu, Fei Huang, et al. 2020. Findings of the IWSLT 2020 evaluation campaign. In Proceedings of the 17th International Conference on Spoken Language Translation, pages 1-34.
Zhenisbek Assylbekov and Assulan Nurkas. 2014. Initial explorations in Kazakh to English statistical machine translation. In The First Italian Conference on Computational Linguistics CLiC-it 2014, page 12.
Duygu Ataman, Matteo Negri, Marco Turchi, and Marcello Federico. 2017. Linguistically motivated vocabulary reduction for neural machine translation from Turkish to English. The Prague Bulletin of Mathematical Linguistics, 108(1):331-342.
Xolisa Axmedova, Guzal Abdujalilova, and Umida Abdurahmonova. 2019. Algorithm based on linguistic models in machine translation between Russian and Uzbek. ACADEMICIA: An International Multidisciplinary Research Journal, 9(12):16-21.
Emily M. Bender. 2011. On achieving and evaluating language-independence in NLP. Linguistic Issues in Language Technology, 6(3):1-26.
Alexandra Birch, Miles Osborne, and Philipp Koehn. 2008. Predicting success in machine translation. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 745-754, Honolulu, Hawaii. Association for Computational Linguistics.
Arianna Bisazza and Marcello Federico. 2009. Morphological pre-processing for Turkish to English statistical machine translation. In Proceedings of the International Workshop on Spoken Language Translation (IWSLT).
Eleftheria Briakou and Marine Carpuat. 2019. The University of Maryland's Kazakh-English neural machine translation system at WMT19. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 134-140.
Emanuele Bugliarello, Sabrina J. Mielke, Antonios Anastasopoulos, Ryan Cotterell, and Naoaki Okazaki. 2020. It's easier to translate out of English than into it: Measuring neural translation difficulty by cross-mutual information. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1640-1649, Online. Association for Computational Linguistics.
Mustafa Alp Cetin and Rita Ismailova. Assisting tool for essay grading for Turkish language instructors. MANAS Journal of Engineering, 7(2):141-146.
Narayan Choudhary and Girish Nath Jha. 2011. Creating multilingual parallel corpora in Indian languages. In Language and Technology Conference, pages 527-537. Springer.
George Doddington. 2002. Automatic evaluation of machine translation quality using n-gram co-occurrence statistics. In Proceedings of the Second International Conference on Human Language Technology Research, pages 138-145.
Daxiang Dong, Hua Wu, Wei He, Dianhai Yu, and Haifeng Wang. 2015. Multi-task learning for multiple language translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1723-1732, Beijing, China. Association for Computational Linguistics.
İlknur Durgar El-Kahlout and Kemal Oflazer. 2006. Initial explorations in English to Turkish statistical machine translation. In Proceedings on the Workshop on Statistical Machine Translation, pages 7-14.
Miquel Esplà-Gomis, Mikel L. Forcada, Gema Ramírez-Sánchez, and Hieu Hoang. 2019. ParaCrawl: Web-scale parallel corpora for the languages of the EU. In Proceedings of Machine Translation Summit XVII Volume 2: Translator, Project and User Tracks, pages 118-119.
Rauf Fatullayev, Ali Abbasov, and Abulfat Fatullayev. 2008. Dilmanc is the 1st MT system for Azerbaijani. Proc. of SLTC-08, Stockholm, Sweden, pages 63-64.
$\forall$, Wilhelmina Nekoto, Vukosi Marivate, Tshinondiwa Matsila, Timi Fasubaa, Tajudeen Kolawole, Taiwo Fagbohungbe, Solomon Oluwole Akinola, Shamsuddee Hassan Muhammad, Salomon Kabongo, Salomey Osei, et al. 2020. Participatory research for low-resourced machine translation: A case study in African languages. Findings of EMNLP.
Mikel L. Forcada, Mireia Ginestí-Rosell, Jacob Nordfalk, Jim O'Regan, Sergio Ortiz-Rojas, Juan Antonio Pérez-Ortiz, Felipe Sánchez-Martínez, Gema Ramírez-Sánchez, and Francis M. Tyers. 2011. Apertium: a free/open-source platform for rule-based machine translation. Machine Translation, 25(2):127-144.
Alexander Fraser. 2020. Findings of the WMT 2020 shared tasks in unsupervised MT and very low resource supervised MT. In Proceedings of the Fifth Conference on Machine Translation, pages 765-771, Online. Association for Computational Linguistics.
Memduh Gokirmak, Francis Tyers, and Jonathan Washington. 2019. Machine translation for Crimean Tatar to Turkish. In Proceedings of the 2nd Workshop on Technologies for MT of Low Resource Languages, pages 24-31.
Francisco Guzmán, Peng-Jen Chen, Myle Ott, Juan Pino, Guillaume Lample, Philipp Koehn, Vishrav Chaudhary, and Marc'Aurelio Ranzato. 2019. Two new evaluation datasets for low-resource machine translation: Nepali-English and Sinhala-English. CoRR, abs/1902.01382.
Ilker Hamzaoglu. 1993. Machine translation from Turkish to other Turkic languages and an implementation for the Azeri language. M.Sc. thesis, Bogazici University, Istanbul.
Sardana Ivanova, Anisia Katinskaia, and Roman Yangarber. 2019. Tools for supporting language learning for Sakha. In Proceedings of the 22nd Nordic Conference on Computational Linguistics, pages 155-163, Turku, Finland. Linköping University Electronic Press.
Lars Johanson and Éva Ágnes Csató Johanson. 2015. The Turkic Languages. Routledge.
+Pratik Joshi, Christain Barnes, Sebastin Santy, Simran Khanuja, Sanket Shah, Anirudh Srinivasan, Satwik Bhattachamishra, Sunayana Sitaram, Monjit Choudhury, and Kalika Bali. 2019. Unsung challenges of building and deploying language technologies for low resource language communities. arXiv preprint arXiv:1912.03457. +Pratik Joshi, Sebastin Santy, Amar Budhiraja, Kalika Bali, and Monojit Choudhury. 2020. The state and fate of linguistic diversity and inclusion in the nlp world. arXiv preprint arXiv:2004.09095. +Azizbek Kadirov. 2015. The algorithm of machine translation from uzbek to karakalpak. TurkLang-2015, page 24. +Aidar Khusainov, Dzhavdet Suleymanov, Rinat Gilmullin, and Ajrat Gatiatullin. 2018. Building the Tatar-Russian NMT system based on re-translation of multilingual data. In International Conference on Text, Speech, and Dialogue, pages 163-170. Springer. +Aidar Khusainov, Dzhavdet Suleymanov, Rinat Gilmullin, Alina Minsafina, Lenara Kubedinova, and Nilufar Abdurakhmonova. 2020. First Results of the "TurkLang-7" Project: Creating Russian-Turkic Parallel Corpora and MT Systems. +Rachel Killackey. 2013. Statistical Machine Translation from English to Tuvan. +Diederik P. Kingma and Jimmy Lei Ba. 2015. Adam: A Method for Stochastic Optimization. In *ICLR 2015: International Conference on Learning Representations* 2015. + +Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In $MT$ summit, volume 5, pages 79-86. CiteSeer. +Julia Kreutzer, Jasmijn Bastings, and Stefan Riezler. 2019. Joey NMT: A minimalist NMT toolkit for novices. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System Demonstrations, pages 109-114, Hong Kong, China. Association for Computational Linguistics. +Patrick Littell, Chi-kiu Lo, Samuel Larkin, and Darlene Stewart. 2019. 
Multi-source transformer for Kazakh-Russian-English neural machine translation. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 267-274. +Qingsong Ma, Ondrej Bojar, and Yvette Graham. 2018. Results of the wmt18 metrics shared task: Both characters and embeddings achieve good performance. In Proceedings of the third conference on machine translation: shared task papers, pages 671-688. +Qingsong Ma, Johnny Wei, Ondrej Bojar, and Yvette Graham. 2019. Results of the wmt19 metrics shared task: Segment-level and strong mt systems pose big challenges. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 62-90. +Muhtar Mahsut, Yasuhiro Ogawa, Kazue Sugino, Katsuhiko Toyama, and Yasuyoshi Inagaki. 2004. An experiment on Japanese-Uighur machine translation and its evaluation. In Conference of the Association for Machine Translation in the Americas, pages 208-216. Springer. +Nitika Mathur, Johnny Wei, Markus Freitag, Qingsong Ma, and Ondrej Bojar. 2020. Results of the WMT20 metrics shared task. In Proceedings of the Fifth Conference on Machine Translation, pages 688-725, Online. Association for Computational Linguistics. +Sjur Moshagen, Trond Trosterud, Jack Rueter, Francis M. Tyers, and Tommi A. Pirinen. 2014. Open-source infrastructures for collaborative work on under-resourced languages. In Proceedings of CCURL workshop 2014. +Nikitha Murikinati, Antonios Anastasopoulos, and Graham Neubig. 2020. Transliteration for cross-lingual morphological inflection. In Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 189–197. +Sonja Nießen, Franz Josef Och, Gregor Leusch, Hermann Ney, et al. 2000. An Evaluation Tool for Machine Translation: Fast Evaluation for MT Research. In LREC. + +Maimitili Nimititi and Yamamoto Izumi. 2014. A Rule Based Approach for Japanese-Uyghur Machine Translation System. 
International Journal of Software Science and Computational Intelligence (IJSSCI), 6(1):56-69. +Hiroki Nomoto, Kenji Okano, David Moeljadi, and Hideo Sawada. 2018. Tufs asian language parallel corpus (talpco). In Proceedings of the Twenty-fourth Annual Meeting of the Association for Natural Language Processing, pages 436-439. +Atul Kr. Ojha, Valentin Malykh, Alina Karakanta, and Chao-Hong Liu. 2020. Findings of the LoResMT 2020 shared task on zero-shot for low-resource languages. In Proceedings of the 3rd Workshop on Technologies for MT of Low Resource Languages, pages 33-37, Suzhou, China. Association for Computational Linguistics. +Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a Method for Automatic Evaluation of Machine Translation. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318. +Maja Popovic. 2015. chrF: character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 392–395, Lisbon, Portugal. Association for Computational Linguistics. +Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186-191, Brussels, Belgium. Association for Computational Linguistics. +Matt Post, Chris Callison-Burch, and Miles Osborne. 2012. Constructing parallel corpora for six indian languages via crowdsourcing. In Proceedings of the Seventh Workshop on Statistical Machine Translation, pages 401-409. +Ye Qi, Devendra Singh Sachan, Matthieu Felix, Sarguna Janani Padmanabhan, and Graham Neubig. 2018. When and Why are Pre-trained Word Embeddings Useful for Neural Machine Translation? +Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020. COMET: A neural framework for MT evaluation. arXiv preprint arXiv:2009.09025. +András Róna-Tas. 2015. Turkic Writing Systems. 
In Lars Johanson and Éva Ágnes Csató Johanson, editors, The Turkic Languages, chapter 6, pages 126-137. Routledge. +Ilnar Salimzyanov, J Washington, and F Tyers. 2013. A free/open-source Kazakh-Tatar machine translation system. Machine Translation Summit XIV, pages 175-182. +Thibault Sellam, Dipanjan Das, and Ankur P Parikh. 2020. BLEURT: Learning robust metrics for text generation. arXiv preprint arXiv:2004.04696. + +Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909. +JL Song and L Dai. 2015. Construction of Uighur-Chinese parallel corpus. In Multimedia, Communication and Computing Application: Proceedings of the 2014 International Conference on Multimedia, Communication and Computing Application (MCCA 2014), Xiamen, China, October 16-17, 2014, page 353. CRC Press. +Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929-1958. +Miloš Stanojevic, Amir Kamran, Philipp Koehn, and Ondrej Bojar. 2015. Results of the WMT15 metrics shared task. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 256-273, Lisbon, Portugal. Association for Computational Linguistics. +Aida Sundetova, Mikel Forcada, and Francis Tyers. 2015. A free/open-source machine translation system from english to kazakh. In Proceedings OF THE INTERNATIONAL CONFERENCE TURKIC LANGUAGE PROCESSING" TurkLang-2015, pages 78-90. +A. C. Tantug, E. Adali, and Kemal Offlazer. 2007. Machine translation between turkic languages. In ACL. +Ahmet Cüneyd Tantug, Eşref ADALI, and Kemal OFLAZER. 2011. Türkmenceden Türkiye bilgisiyarı metin cevirisi. ITÜDERGISİ/d, 7(4). +A Cüneyd Tantug, Kemal Oflazer, and Ilknur Durgar El-Kahlout. 2008. BLEU+: a Tool for Fine-Grained BLEU Computation. In LREC. +Jörg Tiedemann. 2020. 
The Tatoeba Translation Challenge-Realistic Data Sets for Low Resource and Multilingual MT. arXiv preprint arXiv:2010.06354. +Jörg Tiedemann and Lars Nygaard. 2004. The OPUS Corpus-Parallel and Free: http://logos.uio.no/opus. In LREC. Citeseer. +Reut Tsarfaty, Dan Bareket, Stav Klein, and Amit Seker. 2020. From SPMRL to NMRL: What did we learn (and unlearn) in a decade of parsing morphologically-rich languages (MRLs)? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7396-7408, Online. Association for Computational Linguistics. +Ualsher Tukeyev, Aidana Karibayeva, and Balzhan Abduali. 2019. Neural machine translation system for the kazakh language based on synthetic corpora. In MATEC Web of Conferences, volume 252, page 03006. EDP Sciences. + +Cigdem Keyder Turhan. 1997. An English to Turkish machine translation system using structural mapping. In Fifth Conference on Applied Natural Language Processing, pages 320-323. +Francis M Tyers, Jonathan North Washington, Ilnar Salimzyanov, and Rustam Batalov. 2012. A prototype machine translation system for Tatar and Bashkir based on free/open-source components. In First Workshop on Language Resources and Technologies for Turkic Languages, page 11. +Aidar Valeev, Ilshat Gibadullin, Albina Khusainova, and Adil Khan. 2019. Application of Low-resource Machine Translation Techniques to Russian-Tatar Language Pair. arXiv preprint arXiv:1910.00368. +D. Varga, L. Nemeth, P. Halácsy, A. Kornai, V. Trón, and V. Nagy. 2005. Parallel corpora for medium density languages. In Proceedings of the RANLP 2005, pages 590-596. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is All You Need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, volume 30, pages 5998-6008. +Dongqi Wang, Zihan Liu, Qingnan Jiang, Zewei Sun, Shujian Huang, and Jiajun Chen. 2020. 
NJUNLP's Machine Translation System for CCMT-2020 Uighur - Chinese Translation Task. In China Conference on Machine Translation, pages 76-82. Springer. +Jonathan North Washington, Ilnar Salimzianov, Francis M. Tyers, Memduh Gokirmak, Sardana Ivanova, and Oğuz Kuyrukcu. 2019. Free/open-source technologies for Turkic languages developed in the Aperium project. In Proceedings of the International Conference on Turkic Language Processing (TURK-LANG 2019). +Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. BERTscore: Evaluating text generation with bert. arXiv preprint arXiv:1904.09675. +A Overall corpus statistics +Table 7 lists the training size (in sentences) for each language direction. It is important to note that the corpus is continuously growing and improving, so this version of the corpus was used for the bilingual baselines and human evaluations in this paper. + +# B Dataset Sources + +Our parallel corpus is a combination of public resources and individual/group contributions. We list the sources for all the resources and websites used + +in curating our corpus in Table 8. More recent information on the licences and reuse of the corpus can be found in the Github repository $^{21}$ . + +
[Table 7: a 24×24 matrix of pairwise parallel corpus sizes, in sentence pairs, over the languages alt, az, ba, cjs, crh, cv, en, gag, kaa, kjh, kk, krc, kum, ky, ru, sah, slr, tk, tr, tt, tyv, ug, uum, and uz; the flattened cell values could not be reliably reconstructed in this extraction. Sizes range from under a hundred pairs for the smallest languages (e.g., cjs, slr, uum) to tens of millions for the largest pairs (e.g., en-ru 47.1M, en-tr 39.9M).]
+ +Table 7: Parallel corpora size for each language pair. + +
| Source | Link | Languages | Size |
| --- | --- | --- | --- |
| Tatoeba Challenge (OPUS+Tatoeba+Gourmet+JW300) | https://github.com/Helsinki-NLP/Tatoeba-Challenge | az, ba, crh, cv, gag, kjh, kk, krc, kum, ky, sah, tk, tr, tt, tyv, ug, uz, ru, en | ~40m |
| UDHR | https://www.ohchr.org/EN/UDHR/Pages/SearchByLang.aspx | alt, ba, az, cv, cjs, crh, gag, kaa, kjh, kk, ky, sah, slr, tk, tt, tr, ug, uz, ru, en | ~100 per direction |
| Bible | https://www.faithcomesbyhearing.com/audio-bible-resources/recordings-database | alt, ba, az, cjs, cv, crh, en, gag, kaa, kjh, kk, ky, sah, tk, tt, ug, uz, tr | ~9k per direction |
| Ted Talks | https://www.ted.com/participate/translate/our-languages | az, en, kk, ky, ru, tt, tr, uz, ug | ~600k |
| Mozilla | | az, ba, cv, en, kk, ky, sah, tk, tt, ug, uz, tr, ru | ~300 per direction |
| Azerbaijani News | https://github.com/derintelligence/en-az-parallel-corpus | az, en | ~68k |
| Uzbek/English News | https://data.gov.uz, https://president.uz, https://uz.usembassy.gov, https://www.gov.uz | uz, en | ~60k |
| Uzbekistan Legislative Dataset (Law) | https://lex.uz/ | uz, ru, en | ~1.5m |
| KhanAcademy Project Translations (Math/Science) | https://uz.khanacademy.org/ | uz, en | ~200k |
| Karakalpak News | https://kknews.uz, https://www.gov.uz, http://karakalpakstan.uz, https://www.qrstat.uz/kk | kaa, uz, ru, en | ~60k |
| Bashkir-Russian Corpus | https://github.com/AigizK/bashkort-parallel-corpora | ba, ru | ~600k |
| Salar Language Materials | http://www.sino-platonic.org/complete/spp043_salar_language.pdf | slr, en | ~700 |
| Urum Language Materials | https://web.archive.org/web/20180919233848/http://projects.turkmas.uoa.gr/urum/ | uum, en | ~500 |
| Russian-Shor Online Dictionary | http://tili.tadarlar.ru/tadar/rus-shor.html | ru, cjs | ~300 |
+ +Table 8: Sources and links for resources and websites used. \ No newline at end of file diff --git a/alargescalestudyofmachinetranslationinturkiclanguages/images.zip b/alargescalestudyofmachinetranslationinturkiclanguages/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..9e6528c06a732a1c320553fdef55675a2589f10e --- /dev/null +++ b/alargescalestudyofmachinetranslationinturkiclanguages/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7276b8cfd7524a05bbcb66b3945c1674e75495bea480149246b252803f38708b +size 961314 diff --git a/alargescalestudyofmachinetranslationinturkiclanguages/layout.json b/alargescalestudyofmachinetranslationinturkiclanguages/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..89de373cd072f86d5c1333b9ac531fbd8b3899f7 --- /dev/null +++ b/alargescalestudyofmachinetranslationinturkiclanguages/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8652cf8c2a583d9bb7e84bf22d88c84ce4ebd5e413f7ebeb5137cc1a510fdd2b +size 406839 diff --git a/alignartnonautoregressiveneuralmachinetranslationbyjointlylearningtoestimatealignmentandtranslate/84a60cc1-25ad-40ea-aec1-dc1afe71c120_content_list.json b/alignartnonautoregressiveneuralmachinetranslationbyjointlylearningtoestimatealignmentandtranslate/84a60cc1-25ad-40ea-aec1-dc1afe71c120_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..46ee6c8cfbe53693fe37ef3daf229186ac5c1d71 --- /dev/null +++ b/alignartnonautoregressiveneuralmachinetranslationbyjointlylearningtoestimatealignmentandtranslate/84a60cc1-25ad-40ea-aec1-dc1afe71c120_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a19365b8f09f7838dfa4d165c4cf122a1da77d6454bafb2c7558ca25eb01cf9e +size 100402 diff --git a/alignartnonautoregressiveneuralmachinetranslationbyjointlylearningtoestimatealignmentandtranslate/84a60cc1-25ad-40ea-aec1-dc1afe71c120_model.json 
b/alignartnonautoregressiveneuralmachinetranslationbyjointlylearningtoestimatealignmentandtranslate/84a60cc1-25ad-40ea-aec1-dc1afe71c120_model.json new file mode 100644 index 0000000000000000000000000000000000000000..aeb427c7831fa295f7c8cdedc9a6fd8fad6afcfc --- /dev/null +++ b/alignartnonautoregressiveneuralmachinetranslationbyjointlylearningtoestimatealignmentandtranslate/84a60cc1-25ad-40ea-aec1-dc1afe71c120_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:be38b364e6eaed46bc75c84c7af0eb7dfd310de496faaff60ad574fb23980007 +size 118902 diff --git a/alignartnonautoregressiveneuralmachinetranslationbyjointlylearningtoestimatealignmentandtranslate/84a60cc1-25ad-40ea-aec1-dc1afe71c120_origin.pdf b/alignartnonautoregressiveneuralmachinetranslationbyjointlylearningtoestimatealignmentandtranslate/84a60cc1-25ad-40ea-aec1-dc1afe71c120_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..07e2069d75b84afd4ae47308caf5ad8653b9770f --- /dev/null +++ b/alignartnonautoregressiveneuralmachinetranslationbyjointlylearningtoestimatealignmentandtranslate/84a60cc1-25ad-40ea-aec1-dc1afe71c120_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e7e83c729ab74e3497c336340f5a2b89c3c4720e32c9070ac0ef973031b01e22 +size 614744 diff --git a/alignartnonautoregressiveneuralmachinetranslationbyjointlylearningtoestimatealignmentandtranslate/full.md b/alignartnonautoregressiveneuralmachinetranslationbyjointlylearningtoestimatealignmentandtranslate/full.md new file mode 100644 index 0000000000000000000000000000000000000000..374775814116c186e2b44e364949cc4393773bde --- /dev/null +++ b/alignartnonautoregressiveneuralmachinetranslationbyjointlylearningtoestimatealignmentandtranslate/full.md @@ -0,0 +1,458 @@ +# AlignNART: Non-autoregressive Neural Machine Translation by Jointly Learning to Estimate Alignment and Translate + +Jongyoon Song $^{1,2*}$ + +Sungwon Kim + +Sungroh Yoon $^{1,3\dagger}$ + +1Data Science and 
AI Laboratory, Seoul National University, South Korea + +$^{2}$ Kakao Enterprise, South Korea + +$^{3}$ Interdisciplinary Program in Artificial Intelligence, Seoul National University, South Korea + +{coms1580, ksw0306, sryoon}@snu.ac.kr + +# Abstract + +Non-autoregressive neural machine translation (NART) models suffer from the multi-modality problem, which causes translation inconsistencies such as token repetition. Most recent approaches have attempted to solve this problem by implicitly modeling dependencies between outputs. In this paper, we introduce AligNART, which leverages full alignment information to explicitly reduce the modality of the target distribution. AligNART divides the machine translation task into $(i)$ alignment estimation and $(ii)$ translation with aligned decoder inputs, guiding the decoder to focus on simplified one-to-one translation. To alleviate the alignment estimation problem, we further propose a novel alignment decomposition method. Our experiments show that AligNART outperforms previous non-iterative NART models that focus on explicit modality reduction on WMT14 En $\leftrightarrow$ De and WMT16 Ro $\rightarrow$ En. Furthermore, AligNART achieves BLEU scores comparable to those of the state-of-the-art connectionist temporal classification based models on WMT14 En $\leftrightarrow$ De. We also observe that AligNART effectively addresses the token repetition problem even without sequence-level knowledge distillation. + +# 1 Introduction + +In the neural machine translation (NMT) domain, non-autoregressive NMT (NART) models (Gu et al., 2018) have been proposed to alleviate the low translation speed of autoregressive NMT (ART) models. However, these models suffer from degraded translation quality (Gu et al., 2018; Sun et al., 2019).
To improve the translation quality of NART, several studies iteratively refine decoded outputs with a minimal number of iterations (Ghazvininejad et al., 2019; Kasai et al., 2020a; Lee et al., 2020; Guo et al., 2020; Saharia et al., 2020); other recent works aim to improve NART without iteration (Qian et al., 2021; Gu and Kong, 2021). + +One of the significant limitations of non-iterative NART models is the multi-modality problem. This problem originates from the fact that the models must maximize the probabilities of multiple targets without considering conditional dependencies between target tokens. For example, in English-to-German translation, a source sentence "Thank you very much." can be translated as "Danke wollen" or "Vielen Dank." Under the conditional independence assumption, non-iterative NART models are likely to generate improper translations such as "Danke Dank." or "Vielen wollen." (Gu et al., 2018). For the same reason, other inconsistency problems such as token repetition or omission occur frequently in non-iterative NART (Gu and Kong, 2021). + +There are two main approaches for non-iterative NART to address the multi-modality problem. Some works focus on implicit modeling of the dependencies between the target tokens (Gu and Kong, 2021). For example, Ghazvininejad et al. (2020), Saharia et al. (2020), and Gu and Kong (2021) modify the objective function based on dynamic programming, whereas Qian et al. (2021) provide target tokens to the decoder during training. + +On the other hand, other works focus on an explicit reduction of the modality of the target distribution by utilizing external source or target sentence information rather than modifying the objective function. For example, Akoury et al. (2019) and Liu et al. (2021) use syntactic or semantic information; Gu et al. (2018), Zhou et al. (2020b), and Ran et al. (2021) use the alignment information between source and target tokens.
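The multi-modality failure described above can be made concrete with a toy sketch. The per-position marginals and the independent decoding rule below are illustrative assumptions, not any specific model from the literature; the example phrases are the ones used in the text.

```python
import itertools
from collections import Counter

# Two equally likely reference translations for "Thank you very much."
# (the example pair from the text):
refs = [["Danke", "wollen"], ["Vielen", "Dank"]]

# A conditionally independent model can only learn per-position marginals:
marginals = [Counter(ref[pos] for ref in refs) for pos in range(2)]

# Decoding each position independently allows any combination of
# per-position candidates, mixing the two modes:
outputs = {" ".join(words)
           for words in itertools.product(*(m.keys() for m in marginals))}

# Besides the two consistent translations, the inconsistent mixtures
# "Danke Dank" and "Vielen wollen" are equally probable under this model.
```

Under this factorization all four combinations receive the same probability, which is exactly the inconsistency (and, for repeated marginals, the token repetition) that non-iterative NART methods try to suppress.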
However, previous explicit modality reduction methods show suboptimal performance. + +Zhou et al. (2020b) and Ran et al. (2021) extract fertility (Brown et al., 1993) and ordering information from word alignments, which enables the modeling of several types of mappings but not the many-to-one and many-to-many cases. We hypothesize that leveraging the entire set of mappings significantly reduces the modality and is the key to performance improvement. + +In this work, we propose AligNART, a non-iterative NART model that mitigates the multi-modality problem by utilizing the complete information in word alignments. AligNART divides the machine translation task into $(i)$ alignment estimation and $(ii)$ non-autoregressive translation under the given alignments. Modeling all types of mappings guides $(ii)$ closer to one-to-one translation. In AligNART, a module called Aligner, which estimates alignments to generate aligned decoder inputs, is simply added to NAT (Gu et al., 2018). + +However, it is challenging to estimate the complex alignment information using only the source sentence during inference. Specifically, Aligner must simultaneously predict the number of target tokens corresponding to each source token and their mapping. To overcome this problem, we further propose alignment decomposition, which factorizes the alignment process into three sub-processes: duplication, permutation, and grouping. Each sub-process corresponds to a more feasible sub-problem: one-to-many mapping, ordering, and many-to-one mapping, respectively. + +Our experimental results show that AligNART outperforms previous non-iterative NART models based on explicit modality reduction on WMT14 $\mathrm{En} \leftrightarrow \mathrm{De}$ and WMT16 $\mathrm{Ro} \rightarrow \mathrm{En}$ . AligNART achieves performance comparable to that of the recent state-of-the-art non-iterative NART model on WMT14 $\mathrm{En} \leftrightarrow \mathrm{De}$ .
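To make the three sub-processes described above (duplication, permutation, grouping) concrete, the following sketch decomposes a toy word alignment, given as a set of (source, target) index links, into duplication counts, a permutation, and groups. The link encoding and all function names are our illustrative assumptions, not AligNART's actual implementation.

```python
def decompose(links, src_len):
    """Factorize an alignment into duplication, permutation, and grouping.

    links: set of (src_pos, tgt_pos) alignment links (illustrative encoding).
    """
    links = sorted(links)  # group links by source position, then by target
    # (i) duplication: how many target tokens each source token maps to
    duplication = [sum(1 for s, _ in links if s == i) for i in range(src_len)]
    # After duplication, the k-th copy of a source token carries the k-th
    # target position it links to.
    tgts = [t for _, t in links]
    # (ii) permutation: reorder the duplicated copies into target order
    order = sorted(range(len(tgts)), key=lambda k: tgts[k])
    permuted = [(links[k][0], tgts[k]) for k in order]
    # (iii) grouping: adjacent copies aligned to one target token are merged
    groups = []
    for s, t in permuted:
        if groups and groups[-1][1] == t:
            groups[-1][0].append(s)
        else:
            groups.append(([s], t))
    return duplication, order, [g for g, _ in groups]

# One-to-many plus reordering: source 0 -> targets 1 and 2, source 1 -> target 0
dup, perm, grp = decompose({(0, 1), (0, 2), (1, 0)}, src_len=2)
# dup == [2, 1]; perm == [2, 0, 1]; grp == [[1], [0], [0]]

# Many-to-one: sources 0 and 1 both map to target 0 and are grouped together
dup2, perm2, grp2 = decompose({(0, 0), (1, 0), (2, 1)}, src_len=3)
# grp2 == [[0, 1], [2]]
```

Each returned component corresponds to one of the simpler sub-problems named in the text, so a model can predict them separately instead of estimating the full alignment in one shot.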
We observe that the modality reduction in AligNART addresses the token repetition issue even without sequence-level knowledge distillation (Kim and Rush, 2016). We also conduct quantitative and qualitative analyses of the effectiveness of alignment decomposition. + +# 2 Background + +Given a source sentence $x = \{x_{1},x_{2},\dots,x_{M}\}$ and its translation $y = \{y_{1},y_{2},\dots,y_{N}\}$ , ART models with an encoder-decoder architecture are trained with chained target distributions and infer the target sentence autoregressively: + +$$
p(y \mid x) = \prod_{n=1}^{N} p\left(y_{n} \mid y_{<n}, x\right). \tag{1}
$$ + +At each decoding position $n$ , the decoder of the model is conditioned on previous target tokens $y_{