diff --git a/abstractrationalestanceajointmodelforscientificclaimverification/e1bac4cb-6fd1-408c-9ad9-35a1e88700eb_content_list.json b/abstractrationalestanceajointmodelforscientificclaimverification/e1bac4cb-6fd1-408c-9ad9-35a1e88700eb_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..41361b57126ed48eb8daef32a7d71425f2d0dddd
--- /dev/null
+++ b/abstractrationalestanceajointmodelforscientificclaimverification/e1bac4cb-6fd1-408c-9ad9-35a1e88700eb_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7320209a3db6c5c40b15cc5a1201833fbb5f9f5615a6fc7371eb762b5548593a
+size 47901
diff --git a/abstractrationalestanceajointmodelforscientificclaimverification/e1bac4cb-6fd1-408c-9ad9-35a1e88700eb_model.json b/abstractrationalestanceajointmodelforscientificclaimverification/e1bac4cb-6fd1-408c-9ad9-35a1e88700eb_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..6efd13dde3213e3052e6b869a4236012259071df
--- /dev/null
+++ b/abstractrationalestanceajointmodelforscientificclaimverification/e1bac4cb-6fd1-408c-9ad9-35a1e88700eb_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e8329d1931c4bc65e82c037375125a463443f2f783b123fda121bcfe0f327594
+size 56602
diff --git a/abstractrationalestanceajointmodelforscientificclaimverification/e1bac4cb-6fd1-408c-9ad9-35a1e88700eb_origin.pdf b/abstractrationalestanceajointmodelforscientificclaimverification/e1bac4cb-6fd1-408c-9ad9-35a1e88700eb_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..fe5607a624d345399a5c3fe77a428b5fa88ec0d2
--- /dev/null
+++ b/abstractrationalestanceajointmodelforscientificclaimverification/e1bac4cb-6fd1-408c-9ad9-35a1e88700eb_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:269dc5f6cedcd7af70b19a8c319620dbdf06c6f212ab7b05d1715b66ac271114
+size 874521
diff --git a/abstractrationalestanceajointmodelforscientificclaimverification/full.md b/abstractrationalestanceajointmodelforscientificclaimverification/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..393496f42050d019412de18a7b6964b132e53f8e
--- /dev/null
+++ b/abstractrationalestanceajointmodelforscientificclaimverification/full.md
@@ -0,0 +1,169 @@
+# Abstract, Rationale, Stance: A Joint Model for Scientific Claim Verification
+
+Zhiwei Zhang $^{1,2}$ , Jiyi Li $^{2*}$ , Fumiyo Fukumoto $^{2}$ and Yanming Ye $^{1}$
+
+$^1$Hangzhou Dianzi University, Hangzhou, China
+
+$^2$University of Yamanashi, Kofu, Japan
+
+hdluxiaozhi97@gmail.com, {jyli, fukumoto}@yamanashi.ac.jp
+yeym@hdu.edu.cn
+
+# Abstract
+
+Scientific claim verification helps researchers quickly locate, in a large corpus, the target scientific papers and the sentence-level evidence for a given claim. Existing works propose pipeline models for the three tasks of abstract retrieval, rationale selection and stance prediction. Such pipelines suffer from error propagation among the modules and cannot share valuable information across modules. We therefore propose an approach, named ARSJOINT, that jointly learns the modules for the three tasks in a machine reading comprehension framework that includes the claim information. In addition, we enhance the information exchange and constraints among the tasks by proposing a regularization term between the sentence attention scores of abstract retrieval and the estimated outputs of rationale selection. Experimental results on the benchmark dataset SCIFACT show that our approach outperforms the existing works.
+
+# 1 Introduction
+
+A scientific claim verification system helps researchers quickly locate, in a large corpus, the target scientific papers and the sentence-level evidence for a given claim. To address this issue, Wadden et al. (2020) introduced scientific claim verification, which consists of three tasks. As illustrated in Figure 1, for a given claim, the system finds the abstracts that are related to the claim in a scholarly document corpus (abstract retrieval); it selects the sentences in the abstract that serve as evidence for the claim (rationale selection); and it classifies whether the abstract/sentences support or refute the claim (stance prediction). Wadden et al. (2020) also provided a dataset called SCIFACT.
+
+Most of the existing works of general claim verification are based on pipeline models (Soleimani et al., 2020; Alonso-Reina et al., 2019; Liu et al.,
+
+
+Figure 1: An example of scientific claim verification.
+
+2020; Zhou et al., 2019; Nie et al., 2019; Lee et al., 2020b); some works utilize joint optimization strategies (Lu and Li, 2020; Yin and Roth, 2018; Hidey et al., 2020). These models attempted to jointly optimize rationale selection and stance prediction, but did not directly link the two modules (Li et al., 2020). For scientific claim verification, Wadden et al. (2020) proposed a baseline model, VERISCI, based on a pipeline of three components for the three tasks. Pradeep et al. (2021) proposed a pipeline model called VERT5ERINI which adapts the pre-trained sequence-to-sequence language model T5 (Raffel et al., 2020). Li et al. (2020) jointly trained the two tasks of rationale selection and stance prediction, with a pipeline between the abstract retrieval module and the joint module.
+
+The above existing works on scientific claim verification are fully or partially pipeline solutions. One problem is error propagation among the modules in the pipeline. Another is that modules trained independently in a pipeline cannot share and leverage valuable information with each other. We therefore propose an approach, named ARSJOINT, which jointly learns the three modules for the three tasks. It adopts a Machine Reading Comprehension (MRC) framework that uses the claim content as the query to learn additional information. In addition, we assume that the abstract retrieval module should
+
+have good interpretability and tend to assign high sentence-level attention scores to the evidence sentences that influence the retrieval results; this is consistent with the goal of the rationale selection module. We thus enhance the information exchange and constraints among the tasks by proposing a regularization term based on a symmetric divergence that bridges these two modules.
+
+The experimental results on the benchmark dataset SCIFACT show that the proposed approach outperforms the existing works. The main contributions of this paper can be summarized as follows. (1) We propose a scientific claim verification approach which jointly trains on the three tasks in an MRC framework. (2) We propose a regularization based on the divergence between the sentence attention scores of abstract retrieval and the outputs of rationale selection.
+
+# 2 Our Approach
+
+# 2.1 Notation and Definitions
+
+We denote the query claim as $q$ and an abstract of a scientific paper as $a \in \mathcal{A}$. We denote the set of sentences in abstract $a$ as $S = \{s_i\}_{i=1}^{l}$, where the word sequence of $s_i$ is $[s_{i1}, \ldots, s_{i n_i}]$. The title of the paper $t \in \mathcal{T}$ is used as auxiliary information; the word sequence of $t$ is $[t_1, \ldots, t_{n_t}]$. Here, $S$, $s_i$ and $t$ refer to $a$ by default, and we omit the subscript $a$ in the notation. The purpose of the abstract retrieval task is to detect the set of abstracts related to $q$; it assigns a relevance label $y^b \in \{0,1\}$ to a candidate abstract $a$. The rationale selection task detects the decisive rationale sentences $S^r \subseteq S$ of $a$ relevant to the claim $q$; it assigns an evidence label $y_i^r \in \{0,1\}$ to each sentence $s_i \in S$. The stance prediction task classifies $a$ into a stance label $y^e \in$ {SUPPORTS=0, REFUTES=1, NOINFO=2}. All sentences in $a$ share the same stance label value.
+
+# 2.2 Pre-processing
+
+As there is a huge number of papers in the corpus, applying all of them to the proposed model is time-consuming. Therefore, similar to the existing works on this topic (Wadden et al., 2020; Pradeep et al., 2021; Li et al., 2020), we first use a lightweight method to roughly select a set of candidate papers. We use BioSentVec (Chen et al., 2019; Pagliardini et al., 2018) to obtain the embeddings of the claim and of each scientific paper (based on its title and abstract), and compute the cosine similarity between the claim and the paper. The papers with the top-$k$ similarities are used as the candidates.
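The candidate filtering described above can be sketched as follows. This is a minimal illustration: the random vectors stand in for BioSentVec embeddings, and `top_k_candidates` is a hypothetical helper, not the authors' code.

```python
import numpy as np

def top_k_candidates(claim_emb, paper_embs, k=30):
    """Rank papers by cosine similarity to the claim and keep the top-k.

    claim_emb: (d,) embedding of the claim.
    paper_embs: (n, d) embeddings of papers (title + abstract).
    Returns the indices of the k most similar papers, best first.
    """
    claim = claim_emb / np.linalg.norm(claim_emb)
    papers = paper_embs / np.linalg.norm(paper_embs, axis=1, keepdims=True)
    sims = papers @ claim              # cosine similarities, shape (n,)
    return np.argsort(-sims)[:k]

# Toy example with random vectors standing in for BioSentVec embeddings.
rng = np.random.default_rng(0)
papers = rng.normal(size=(100, 8))
claim = papers[42] + 0.01 * rng.normal(size=8)  # claim close to paper 42
cands = top_k_candidates(claim, papers, k=5)
print(cands[0])  # paper 42 ranks first
```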
+
+# 2.3 Joint Abstract, Rationale, Stance Model
+
+The input sequence of our model is defined as $seq = [[\mathrm{CLS}]q[\mathrm{SEP}]t \cdots [\mathrm{SEP}]s_i[\mathrm{SEP}] \cdots]$, obtained by concatenating the claim $q$, title $t$ and abstract $a$. We compute the list of word representations $\mathbf{H}_{seq}$ of the input sequence with a pre-trained language model (e.g., BioBERT (Lee et al., 2020a)). We obtain the word representations of the claim $\mathbf{H}_q = [\mathbf{h}_{q_1}, \dots, \mathbf{h}_{q_{n_q}}]$, the title $\mathbf{H}_t = [\mathbf{h}_{t_1}, \dots, \mathbf{h}_{t_{n_t}}]$, each sentence $\mathbf{H}_{s_i} = [\mathbf{h}_{s_{i1}}, \dots, \mathbf{h}_{s_{i n_i}}]$, and the abstract $\mathbf{H}_S = \mathbf{H}_a = [\dots, \mathbf{H}_{s_i}, \dots]$ from $\mathbf{H}_{seq}$, and use them in our ARSJOINT model. Figure 2 shows the framework of our model with three modules for the three tasks.
+
+In all three modules, we use an attention layer (denoted as $g(\cdot)$) over word (sentence) representations to compute a sentence (document) representation. A document can be a claim, a title, an abstract, or a combination of them. The computation is as follows (cf. (Li et al., 2020)), where the $*$ in $\mathbf{H}_{*}$ denotes any type of sentence (claim $q$, title $t$, or a sentence $s$ in an abstract), the $\star$ in $\mathbf{H}_{\star}$ denotes any type of document, and $\mathbf{W}$ and $\mathbf{b}$ are trainable parameters.
+
+$$
+g(\mathbf{H}_{*}) = \sum_{i} \mathbf{u}_{*i} \boldsymbol{\alpha}_{*i}, \quad \boldsymbol{\alpha}_{*i} = \frac{\exp(\mathbf{W}_{w_2} \mathbf{u}_{*i} + \mathbf{b}_{w_2})}{\sum_{j} \exp(\mathbf{W}_{w_2} \mathbf{u}_{*j} + \mathbf{b}_{w_2})}, \quad \mathbf{u}_{*j} = \tanh(\mathbf{W}_{w_1} \mathbf{h}_{*j} + \mathbf{b}_{w_1}) \quad \text{for word-level attention},
+$$
+
+$$
+g(\mathbf{H}_{\star}) = \sum_{i} \mathbf{U}_{\star i} \boldsymbol{\alpha}_{\star i}, \quad \boldsymbol{\alpha}_{\star i} = \frac{\exp(\mathbf{W}_{c_2} \mathbf{U}_{\star i} + \mathbf{b}_{c_2})}{\sum_{j} \exp(\mathbf{W}_{c_2} \mathbf{U}_{\star j} + \mathbf{b}_{c_2})}, \quad \mathbf{U}_{\star j} = \tanh(\mathbf{W}_{c_1} \mathbf{H}_{\star j} + \mathbf{b}_{c_1}) \quad \text{for sentence-level attention}. \tag{1}
+$$
+
+Abstract Retrieval: In this task, the title can be regarded as an auxiliary sentence that may contain information relating the claim to the abstract, so we use the title together with the sentences in the abstract. We build a document $ta = [t, a]$ and concatenate the word representations of $t$ and $a$ into $\mathbf{H}_{ta} = [\mathbf{H}_t, \mathbf{H}_a]$ as the input to this module. We use a hierarchical attention network (HAN) (Yang et al., 2016) to compute the document representation $\mathbf{h}_{ta} \in \mathbb{R}^d$, $\mathbf{h}_{ta} = \mathrm{HAN}(\mathbf{H}_{ta})$. HAN is well-suited to document classification because it considers the hierarchical document structure (a document consists of sentences, a sentence consists of words). We also compute the sentence representation of the claim $\mathbf{h}_q \in \mathbb{R}^d$ with a word-level attention layer (denoted as $g(\cdot)$), $\mathbf{h}_q = g(\mathbf{H}_q)$. To compute the relevance between $\mathbf{h}_{ta}$ and $\mathbf{h}_q$, we apply a Hadamard product to them, followed by a Multi-Layer Perceptron (MLP, denoted as $f(\cdot)$) with Softmax (denoted as $\sigma(\cdot)$); the outputs
+
+
+Figure 2: Framework of our ARSJOINT model which jointly learns three modules and has rationale regularization.
+
+are the probabilities of whether the abstract is relevant to the claim, $[p_0^b, p_1^b] = \sigma(f(\mathbf{h}_q \circ \mathbf{h}_{ta}))$. A cross-entropy loss $\mathcal{L}_{ret}$ is used for training.
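The relevance head can be sketched as below. The paper specifies a two-layer MLP with Softmax over the Hadamard product; the hidden size and the tanh activation here are assumptions for illustration.

```python
import numpy as np

def relevance_probs(h_q, h_ta, W1, b1, W2, b2):
    """Relevance head sketch: Hadamard product of the claim and
    title+abstract representations, a two-layer MLP (tanh hidden
    layer assumed), then Softmax over {not relevant, relevant}."""
    x = h_q * h_ta                 # Hadamard (element-wise) product
    hidden = np.tanh(W1 @ x + b1)  # hidden layer (activation assumed)
    logits = W2 @ hidden + b2      # two logits
    e = np.exp(logits - logits.max())
    return e / e.sum()             # [p0_b, p1_b]

rng = np.random.default_rng(2)
d = 6
p = relevance_probs(rng.normal(size=d), rng.normal(size=d),
                    rng.normal(size=(4, d)), rng.normal(size=4),
                    rng.normal(size=(2, 4)), rng.normal(size=2))
print(p)
```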
+
+Rationale Selection: This task judges whether each sentence in the abstract is a rationale or not. The multiple sentences in an abstract share the same title information but have different rationale labels, so using the title when judging each sentence may not positively influence performance. We thus use the word representations $\mathbf{H}_a$ of the abstract as input. We compute the sentence representation $\mathbf{h}_{s_i}$ with a word-level attention layer, and use an MLP with Softmax to estimate the probabilities $p_{i1}^r$ and $p_{i0}^r$ of whether $s_i$ is evidence for the abstract or not. The cross-entropy loss is $\mathcal{L}_{rat}$.
+
+Stance Prediction: This module first computes the sentence representations $\mathbf{h}_{s_i}$ in the same way as the rationale selection module. After that, it selects only the sentences $S^r$ with the true evidence label $\hat{y}_i^r = 1$ or the estimated evidence probability $p_{i1}^r > p_{i0}^r$; whether the true or the estimated labels are used is decided by a scheduled sampling strategy introduced below. We then compute the estimated stance label with a sentence-level attention layer and an MLP with Softmax, $\mathbf{h}_{S^r} = g(\mathbf{H}_{S^r})$ and $[p_0^e, p_1^e, p_2^e] = \sigma(f(\mathbf{h}_q \circ \mathbf{h}_{S^r}))$, where $S^r = \{s_i \in S \mid \hat{y}_i^r = 1 \text{ or } p_{i1}^r > p_{i0}^r\}$. The cross-entropy loss is $\mathcal{L}_{sta}$.
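The construction of $S^r$ can be sketched as follows; `select_rationales` is a hypothetical helper illustrating the two selection modes (gold labels vs. estimated probabilities).

```python
def select_rationales(p_r, gold=None, use_gold=False):
    """Build S^r: take sentences with gold evidence label 1 when
    use_gold is True, otherwise those with p(evidence) > p(not)."""
    if use_gold:
        return [i for i, y in enumerate(gold) if y == 1]
    return [i for i, (p0, p1) in enumerate(p_r) if p1 > p0]

# p_r[i] = (p_i0^r, p_i1^r) for each sentence of an abstract.
p_r = [(0.8, 0.2), (0.3, 0.7), (0.4, 0.6), (0.9, 0.1)]
gold = [0, 1, 1, 0]
print(select_rationales(p_r))                       # estimated: [1, 2]
print(select_rationales(p_r, gold, use_gold=True))  # gold: [1, 2]
```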
+
+Scheduled Sampling: Since rationale sentences $S^r$ are used in stance prediction, the error of the rationale selection module will be propagated to the stance prediction module. To alleviate this problem, following (Li et al., 2020), we also use a scheduled sampling method (Bengio et al., 2015), which is to
+
+
| | SUPPORT | NOINFO | REFUTES | ALL |
| --- | --- | --- | --- | --- |
| Train | 332 / 370 | 304 / 220 | 173 / 194 | 809 |
| Dev. | 124 / 138 | 112 / 114 | 64 / 71 | 300 |
| ALL | 456 / 508 | 416 / 444 | 237 / 265 | 1109 |
+
+Table 1: Statistics of SCIFACT dataset. The numbers are "number of claims / number of relevant abstracts".
+
+feed the sentences with the true evidence label $\hat{y}_i^r = 1$ to the stance prediction module at the beginning, and then gradually increase the proportion of sentences selected by the estimated evidence probability $p_{i1}^r > p_{i0}^r$, until eventually all sentences in $S^r$ are based on the estimated evidence. We set the sampling probability of using the estimated evidence to $p_{\text{sample}} = \sin \left( \frac{\pi}{2} \times \frac{\text{current\_epoch} - 1}{\text{total\_epoch} - 1} \right)$.
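The schedule above can be sketched directly from the formula; `pick_rationales` is a hypothetical per-abstract sampling helper.

```python
import math
import random

def p_sample(epoch, total_epochs):
    """Probability of using the model's estimated rationales instead
    of the gold ones; sin schedule from 0 (epoch 1) to 1 (last epoch)."""
    return math.sin(math.pi / 2 * (epoch - 1) / (total_epochs - 1))

def pick_rationales(gold, est, epoch, total_epochs, rng=random):
    """With probability p_sample use the estimated rationale set,
    otherwise the gold one (sampled once per abstract)."""
    return est if rng.random() < p_sample(epoch, total_epochs) else gold

print(p_sample(1, 20), p_sample(20, 20))  # → 0.0 1.0
```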
+
+Rationale Regularization (RR): Attention scores have been used for interpretability in NLP tasks (Serrano and Smith, 2019; Wiegreffe and Pinter, 2019; Sun and Lu, 2020). We assume that the abstract retrieval module should have good interpretability and tend to assign high sentence-level attention scores to the evidence sentences that influence the retrieval results; this is consistent with the goal of the rationale selection module. We thus enhance the information exchange and constraints among the tasks by proposing a regularization term based on a symmetric divergence between the sentence attention scores $\boldsymbol{\alpha}$ of abstract retrieval and the estimated outputs $\mathbf{y}^r$ of rationale selection, which bridges these two modules. The detailed formula is as follows, where $\mathbf{p}$ and $\mathbf{q}$ stand for $\boldsymbol{\alpha}$ or $\mathbf{y}^r$.
+
+$$
+\mathcal{D}(\mathbf{p} \| \mathbf{q}) = - \sum_{i=1}^{l} \left( \mathbf{p}_i \log(\mathbf{q}_i) + (1 - \mathbf{p}_i) \log(1 - \mathbf{q}_i) \right),
+$$
+
+$$
+\mathcal{L}_{RR} = \mathcal{D}(\boldsymbol{\alpha} \| \mathbf{y}^r) + \mathcal{D}(\mathbf{y}^r \| \boldsymbol{\alpha}). \tag{2}
+$$
+
+Joint Training: We jointly train our model on abstract retrieval, rationale selection and stance prediction. The joint loss with our RR is as follows, $\mathcal{L} = \lambda_1\mathcal{L}_{ret} + \lambda_2\mathcal{L}_{rat} + \lambda_3\mathcal{L}_{sta} + \gamma \mathcal{L}_{RR}$ , where $\lambda_{1},\lambda_{2},\lambda_{3}$ and $\gamma$ are hyperparameters.
+
+# 3 Experiments
+
+# 3.1 Experimental Settings
+
+Dataset: We utilize the benchmark dataset SCIFACT. It consists of 5,183 scientific papers with titles and abstracts, and 1,109 claims in the training and development sets. Table 1 presents the statistics of the dataset.
+
+Experimental Settings: For our ARSJOINT model, we use Optuna (Akiba et al., 2019) to tune the hyperparameters $\lambda_1, \lambda_2, \lambda_3$ and $\gamma$ of the loss $\mathcal{L}$, training on 20% of the training set and selecting by the performance on another 20% of the training set. We choose the optimal hyperparameters by the average F1-score of the abstract-level and sentence-level evaluations. The search range of each of these four hyperparameters is set to [0.1, 12], and the number of search trials is set to 100. Table 2 lists the selected weight hyperparameters of our model. The other hyperparameters, such as the learning rate, follow the ones used in existing work (Li et al., 2020) to make a fair comparison; they are listed in Table 3.
+
+We implement our ARSJOINT model in PyTorch. Since the length of the input sequence $seq$ is often greater than the maximum input length of a BERT-based model, we perform a tail-truncation operation on the sentences of $seq$ when it exceeds the maximum input length. For the pre-trained language model, we verify our approach using RoBERTa-large (Liu et al., 2019) and BioBERT-large (Lee et al., 2020a), the latter trained on a biomedical corpus; both are fine-tuned on the SCIFACT dataset. In addition, the MLP in our model has two layers.
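The tail-truncation step can be sketched as follows. The paper only states that sentence tails are truncated; trimming the longest sentence first is an assumption of this sketch.

```python
def tail_truncate(sentences, max_len, min_keep=1):
    """Trim tokens from the tail of each sentence, longest first,
    until the total length fits max_len (assumed strategy; the paper
    only states that sentence tails are truncated)."""
    sents = [list(s) for s in sentences]
    while sum(len(s) for s in sents) > max_len:
        longest = max(sents, key=len)
        if len(longest) <= min_keep:
            break  # cannot shrink any sentence further
        longest.pop()  # drop one token from the tail
    return sents

seq = [["a"] * 10, ["b"] * 4, ["c"] * 7]  # 21 tokens total
out = tail_truncate(seq, max_len=15)
print([len(s) for s in out], sum(len(s) for s in out))  # → [5, 4, 6] 15
```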
+
+We compare our ARSJOINT approach with Paragraph-Joint (Li et al., 2020), VERISCI (Wadden et al., 2020) and VERT5ERINI (Pradeep et al., 2021), using their publicly available code. The "Paragraph-Joint Pre-training" model is pre-trained on the FEVER dataset (Thorne et al., 2018) and then fine-tuned on the SCIFACT dataset. The "Paragraph-Joint SCIFACT-only" model is not pre-trained
+
| Model | λ1 | λ2 | λ3 | γ |
| --- | --- | --- | --- | --- |
| ARSJOINT w/o RR (RoBERTa) | 2.7 | 11.7 | 2.2 | - |
| ARSJOINT (RoBERTa) | 0.9 | 11.1 | 2.6 | 2.2 |
| ARSJOINT w/o RR (BioBERT) | 0.1 | 10.8 | 4.7 | - |
| ARSJOINT (BioBERT) | 0.2 | 12.0 | 1.1 | 1.9 |
+
+Table 2: Hyperparameters selected by Optuna for different variants of our model. The "w/o RR" means the model does not utilize rationale regularization.
+
| Name | Value | Name | Value | Name | Value |
| --- | --- | --- | --- | --- | --- |
| $k_{tra}$ | 12 | $lr_1$ | $1 \times 10^{-5}$ | Batch size | 1 |
| $k_{ret}$ | 30 | $lr_2$ | $5 \times 10^{-6}$ | Dropout | 0 |
+
+Table 3: Hyperparameter settings following the existing work. $k_{tra}$ and $k_{ret}$ are the number of candidate abstracts for each claim in the training and testing stages. $lr_1$ and $lr_2$ are the learning rates of the BERT-based model and other modules of the proposed model.
+
+on other datasets.
+
+Evaluation: We evaluate the methods using the abstract-level and sentence-level evaluation criteria defined for SCIFACT. Abstract-level evaluation measures how well a model detects the abstracts which support or refute the claims. For the "Label-Only" evaluation, given a claim $q$, the classification result of an abstract $a$ is correct if both the estimated relevance label $\hat{y}^b$ and the estimated stance label $\hat{y}^e$ are correct. For the "Label+Rationale" evaluation, the abstract is correctly rationalized if, in addition, the estimated rationale sentences contain a gold rationale. Sentence-level evaluation measures how well a model detects rationale sentences. For the "Selection-Only" evaluation, an estimated rationale sentence $s_i$ of an abstract $a$ is correctly selected if the estimated rationale label $\hat{y}_i^r$ is correct and the estimated stance label $\hat{y}^e$ is not "NOINFO". In particular, if multiple consecutive sentences are gold rationales, all of them must be estimated as rationales. For the "Selection+Label" evaluation, the estimated rationale sentences are correctly labeled if, in addition, the estimated stance label $\hat{y}^e$ of the abstract is correct. F1-score (F1), Precision (P), and Recall (R) are used as evaluation metrics. We train the model using all training data, and since Wadden et al. (2020) do not publish the labels of the test set, we evaluate the approaches on the development set following (Li et al., 2020).
+
+# 3.2 Experimental Results
+
+Table 4 shows the main experimental results. First, the proposed method ARSJOINT (BioBERT) out
+
| Models | Sentence-level Selection-Only (P / R / F1) | Sentence-level Selection+Label (P / R / F1) | Abstract-level Label-Only (P / R / F1) | Abstract-level Label+Rationale (P / R / F1) |
| --- | --- | --- | --- | --- |
| VERISCI | 54.3 / 43.4 / 48.3 | 48.5 / 38.8 / 43.1 | 56.4 / 48.3 / 52.1 | 54.2 / 46.4 / 50.0 |
| Paragraph-Joint SCIFACT-only | 69.3 / 50.0 / 58.1 | 59.8 / 43.2 / 50.2 | 69.9 / 52.1 / 59.7 | 64.7 / 48.3 / 55.3 |
| Paragraph-Joint Pre-training | 74.2 / 57.4 / 64.7 | 63.3 / 48.9 / 55.2 | 71.4 / 59.8 / 65.1 | 65.7 / 55.0 / 59.9 |
| VERT5ERINI (BM25) | 67.7 / 53.8 / 60.0 | 63.9 / 50.8 / 56.6 | 70.9 / 61.7 / 66.0 | 67.0 / 58.4 / 62.4 |
| VERT5ERINI (T5) | 64.8 / 57.4 / 60.9 | 60.8 / 53.8 / 57.1 | 65.1 / 65.1 / 65.1 | 61.7 / 61.7 / 61.7 |
| ARSJOINT w/o RR (RoBERTa) | 70.9 / 56.6 / 62.9 | 56.8 / 45.4 / 50.5 | 66.1 / 56.0 / 60.6 | 61.0 / 51.7 / 56.0 |
| ARSJOINT (RoBERTa) | 67.9 / 57.1 / 62.0 | 55.5 / 46.7 / 50.7 | 64.5 / 57.4 / 60.8 | 59.1 / 52.6 / 55.7 |
| ARSJOINT w/o RR (BioBERT) | 75.4 / 57.7 / 65.3 | 63.6 / 48.6 / 55.1 | 72.7 / 57.4 / 64.2 | 67.9 / 53.6 / 59.9 |
| ARSJOINT (BioBERT) | 76.2 / 58.5 / 66.2 | 66.5 / 51.1 / 57.8 | 75.3 / 59.8 / 66.7 | 70.5 / 56.0 / 62.4 |
+
+Table 4: Main experimental results.
+
| Claim: Ly6C hi monocytes have a lower inflammatory capacity than Ly6C lo monocytes. | $\alpha_i$ | $\hat{y}_i^r$ | $y_i^r$ |
| --- | --- | --- | --- |
| Blood monocytes are well-characterized precursors for macrophages and dendritic cells. | 0.0745 | 0 | 0 |
| ...... | | | |
| Under inflammatory conditions elicited either by acute infection with Listeria monocytogenes or chronic infection with Leishmania major, there was a significant increase in immature Ly-6C(high) monocytes, resembling the inflammatory left shift of granulocytes. | 0.0936 | 1 | 1 |
| In addition, acute peritoneal inflammation recruited preferentially Ly-6C(med-high) monocytes. | 0.1613 | 1 | 1 |
| Taken together, these data identify distinct subpopulations of mouse blood monocytes that differ in maturation stage and capacity to become recruited to inflammatory sites. | 0.0745 | 0 | 0 |
+
+Table 5: Result example of Rationale Regularization. Given a claim, it lists the sentences from an abstract. $\alpha_i$ is the sentence attention score in the abstract retrieval task; $\hat{y}_i^r$ is the estimated rationale label; $y_i^r$ is the true rationale label.
+
+performs the existing fully or partially pipelined works. VERISCI and VERT5ERINI are pipeline models, and Paragraph-Joint is a partially pipelined model with a joint module for two tasks. This shows that jointly learning the three tasks effectively improves performance.
+
+Second, when using the same pre-trained model RoBERTa-large, both ARSJOINT (RoBERTa) and ARSJOINT w/o RR (RoBERTa) outperform "Paragraph-Joint SCIFACT-only", especially on Recall. This shows that jointly learning with the abstract retrieval task improves performance. For the Paragraph-Joint method, "Paragraph-Joint Pre-training", which is pre-trained on the FEVER dataset, performs much better than "Paragraph-Joint SCIFACT-only" without such pre-training. Similarly, when we replace RoBERTa-large with BioBERT-large, which contains biological knowledge, ARSJOINT (BioBERT) achieves better performance than "Paragraph-Joint Pre-training".
+
+Third, as an ablation study of the proposed RR, in the case of using BioBERT-large there is a significant difference between the models with and without RR. Although the difference is small in the case of RoBERTa-large, there is still an improvement on Recall. This indicates that rationale regularization can effectively improve the performance of the model. Table 5 shows an example of the results with RR. Given a claim, it lists the sentences from an abstract. The attention scores of the sentences in the abstract retrieval task are consistent with the true rationale labels (as well as the estimated rationale labels). The abstract retrieval module thus has good interpretability.
+
+# 4 Conclusion
+
+In this paper, we propose a joint model named ARSJOINT for the three tasks of abstract retrieval, rationale selection and stance prediction in scientific claim verification, formulated in an MRC framework that includes the claim information. We also propose a regularization based on the divergence between the sentence attention scores of the abstract retrieval task and the outputs of the rationale selection task. The experimental results show that our method achieves better results on the benchmark dataset SCIFACT. In future work, we will pre-train the model on other general claim verification datasets such as FEVER (Thorne et al., 2018) to further improve performance.
+
+# Acknowledgments
+
+This work was partially supported by KDDI Foundation Research Grant Program.
+
+# References
+
+Takuya Akiba, Shotaro Sano, Toshihiko Yanase, Takeru Ohta, and Masanori Koyama. 2019. Optuna: A next-generation hyperparameter optimization framework. In Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery & data mining, pages 2623-2631.
+Aimée Alonso-Reina, Robert Sepulveda-Torres, Estela Saquete, and Manuel Palomar. 2019. Team gplsi: approach for automated fact checking. In Proceedings of the Second Workshop on Fact Extraction and VERIFICATION (FEVER), pages 110-114.
+Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled sampling for sequence prediction with recurrent neural networks. In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 1, NIPS'15, page 1171-1179, Cambridge, MA, USA. MIT Press.
+Qingyu Chen, Yifan Peng, and Zhiyong Lu. 2019. Biosentvec: creating sentence embeddings for biomedical texts. In 2019 IEEE International Conference on Healthcare Informatics (ICHI), pages 1-5. IEEE.
+Christopher Hidey, Tuhin Chakrabarty, Tariq Alhindi, Siddharth Varia, Kriste Krstovski, Mona Diab, and Smaranda Muresan. 2020. DeSePtion: Dual sequence prediction and adversarial examples for improved fact-checking. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8593-8606, Online. Association for Computational Linguistics.
+Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020a. Biobert: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240.
+Nayeon Lee, Yejin Bang, Andrea Madotto, and Pascale Fung. 2020b. Misinformation has high perplexity. arXiv preprint arXiv:2006.04666.
+Xiangci Li, Gully Burns, and Nanyun Peng. 2020. A paragraph-level multi-task learning model for scientific fact-verification. arXiv preprint arXiv:2012.14500.
+Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.
+Zhenghao Liu, Chenyan Xiong, Maosong Sun, and Zhiyuan Liu. 2020. Fine-grained fact verification with kernel graph attention network. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7342-7351.
+
+Yi-Ju Lu and Cheng-Te Li. 2020. GCAN: Graph-aware co-attention networks for explainable fake news detection on social media. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 505-514, Online. Association for Computational Linguistics.
+Yixin Nie, Haonan Chen, and Mohit Bansal. 2019. Combining fact extraction and verification with neural semantic matching networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6859-6866.
+Matteo Pagliardini, Prakhar Gupta, and Martin Jaggi. 2018. Unsupervised learning of sentence embeddings using compositional n-gram features. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 528-540, New Orleans, Louisiana. Association for Computational Linguistics.
+Ronak Pradeep, Xueguang Ma, Rodrigo Nogueira, and Jimmy Lin. 2021. Scientific claim verification with VerT5erini. In Proceedings of the 12th International Workshop on Health Text Mining and Information Analysis, pages 94-103, online. Association for Computational Linguistics.
+Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67.
+Sofia Serrano and Noah A. Smith. 2019. Is attention interpretable? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2931-2951, Florence, Italy. Association for Computational Linguistics.
+Amir Soleimani, Christof Monz, and Marcel Worring. 2020. Bert for evidence retrieval and claim verification. In European Conference on Information Retrieval, pages 359-366. Springer.
+Xiaobing Sun and Wei Lu. 2020. Understanding attention for text classification. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3418-3428, Online. Association for Computational Linguistics.
+James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. FEVER: a large-scale dataset for fact extraction and VERIFICATION. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 809-819, New Orleans, Louisiana. Association for Computational Linguistics.
+
+David Wadden, Shanchuan Lin, Kyle Lo, Lucy Lu Wang, Madeleine van Zuylen, Arman Cohan, and Hannaneh Hajishirzi. 2020. Fact or fiction: Verifying scientific claims. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7534-7550, Online. Association for Computational Linguistics.
+Sarah Wiegreffe and Yuval Pinter. 2019. Attention is not not explanation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 11-20, Hong Kong, China. Association for Computational Linguistics.
+Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the 2016 conference of the North American chapter of the association for computational linguistics: human language technologies, pages 1480-1489.
+Wenpeng Yin and Dan Roth. 2018. TwoWingOS: A two-wing optimization strategy for evidential claim verification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 105-114, Brussels, Belgium. Association for Computational Linguistics.
+Jie Zhou, Xu Han, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, and Maosong Sun. 2019. GEAR: Graph-based evidence aggregating and reasoning for fact verification. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics.
\ No newline at end of file
diff --git a/abstractrationalestanceajointmodelforscientificclaimverification/images.zip b/abstractrationalestanceajointmodelforscientificclaimverification/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..ea8c2c4b2aeac9393b13a65a17625018d9ded9c8
--- /dev/null
+++ b/abstractrationalestanceajointmodelforscientificclaimverification/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:257ef4207f60c4a2cf0e0082c6012f811d690295fcc51b88c70711de8f7eb253
+size 385685
diff --git a/abstractrationalestanceajointmodelforscientificclaimverification/layout.json b/abstractrationalestanceajointmodelforscientificclaimverification/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..ecc73947a8e1bacf7f8f228228ea84af6e67aa2b
--- /dev/null
+++ b/abstractrationalestanceajointmodelforscientificclaimverification/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8ae082651477493561a5be5e73b0bd01013f6e05039b13a6b36866cc57e19692
+size 261604
diff --git a/achievingmodelrobustnessthroughdiscreteadversarialtraining/fe2e693d-68c9-4f6e-979a-12bdc97cf1ca_content_list.json b/achievingmodelrobustnessthroughdiscreteadversarialtraining/fe2e693d-68c9-4f6e-979a-12bdc97cf1ca_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..5dbf7b90106891a3537a1ce0be9d499137cdbf9b
--- /dev/null
+++ b/achievingmodelrobustnessthroughdiscreteadversarialtraining/fe2e693d-68c9-4f6e-979a-12bdc97cf1ca_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5615793074297e89d47da8480d03ca6b2d1e28510f832a8492666036cb8935e8
+size 105954
diff --git a/achievingmodelrobustnessthroughdiscreteadversarialtraining/fe2e693d-68c9-4f6e-979a-12bdc97cf1ca_model.json b/achievingmodelrobustnessthroughdiscreteadversarialtraining/fe2e693d-68c9-4f6e-979a-12bdc97cf1ca_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..4254c7a40a5b1afffa9a0b13697a5034d0189142
--- /dev/null
+++ b/achievingmodelrobustnessthroughdiscreteadversarialtraining/fe2e693d-68c9-4f6e-979a-12bdc97cf1ca_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:907ff0eee1a28f48d2efaa38a570a7b5cbcab88ef2f6bfd1a5b61c68a0f39d4e
+size 127606
diff --git a/achievingmodelrobustnessthroughdiscreteadversarialtraining/fe2e693d-68c9-4f6e-979a-12bdc97cf1ca_origin.pdf b/achievingmodelrobustnessthroughdiscreteadversarialtraining/fe2e693d-68c9-4f6e-979a-12bdc97cf1ca_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..54093a0de3487039f2aeebb81d66f0fb05700e9f
--- /dev/null
+++ b/achievingmodelrobustnessthroughdiscreteadversarialtraining/fe2e693d-68c9-4f6e-979a-12bdc97cf1ca_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4bb71154ac32dc9180e6c3acdf16e648231096c72af7b6e7928cf7edb152618a
+size 1532404
diff --git a/achievingmodelrobustnessthroughdiscreteadversarialtraining/full.md b/achievingmodelrobustnessthroughdiscreteadversarialtraining/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..12e1fd76a5aaae40f00f4ce80765ec45aecfb653
--- /dev/null
+++ b/achievingmodelrobustnessthroughdiscreteadversarialtraining/full.md
@@ -0,0 +1,382 @@
+# Achieving Model Robustness through Discrete Adversarial Training
+
+Maor Ivgi
+
+Tel-Aviv University
+
+maorivgi@mail.tau.ac.il
+
+Jonathan Berant
+
+Tel-Aviv University
+
+The Allen Institute for AI
+
+joberant@cs.tau.ac.il
+
+# Abstract
+
+Discrete adversarial attacks are symbolic perturbations to a language input that preserve the output label but lead to a prediction error. While such attacks have been extensively explored for the purpose of evaluating model robustness, their utility for improving robustness has been limited to offline augmentation only. Concretely, given a trained model, attacks are used to generate perturbed (adversarial) examples, and the model is re-trained exactly once. In this work, we address this gap and leverage discrete attacks for online augmentation, where adversarial examples are generated at every training step, adapting to the changing nature of the model. We propose (i) a new discrete attack, based on best-first search, and (ii) random sampling attacks that unlike prior work are not based on expensive search-based procedures. Surprisingly, we find that random sampling leads to impressive gains in robustness, outperforming the commonly-used offline augmentation, while leading to a speedup at training time of $\sim 10\mathrm{x}$ . Furthermore, online augmentation with search-based attacks justifies the higher training cost, significantly improving robustness on three datasets. Last, we show that our new attack substantially improves robustness compared to prior methods.
+
+# 1 Introduction
+
+Adversarial examples are inputs that are slightly, but intentionally, perturbed to create a new example that is misclassified by a model (Szegedy et al., 2014). Adversarial examples have attracted immense attention in machine learning (Goodfellow et al., 2015; Carlini and Wagner, 2017; Papernot et al., 2017) for two important, but separate, reasons. First, they are useful for evaluating model robustness, and have revealed that current models are over-sensitive to minor perturbations. Second, adversarial examples can improve robustness: training on adversarial examples reduces the brittleness and over-sensitivity of deep learning models to
+
+
+
+
+
+
+Figure 1: Robust accuracy vs. slowdown in training time, comparing different methods to BASELINE (purple pentagon); x-axis in logarithmic scale. The popular ADVOFF (blue squares, offline augmentation with adversarial examples) is $10\mathrm{x}$ slower than our simple augmentation with 4 (8) random samples (triangles, RANDOFF-4, RANDOFF-8) and achieves similar or worse robust accuracy. Our online augmentation with adversarial examples (ADVON, yellow circles) significantly improves robust accuracy, but is expensive to train.
+
+such perturbations (Alzantot et al., 2018; Jin et al., 2020; Li et al., 2020; Lei et al., 2019; Wallace et al., 2019; Zhang et al., 2020; Garg and Ramakrishnan, 2020; Si et al., 2020a; Goel et al., 2021).
+
+Training and evaluating models with adversarial examples has had considerable success in computer vision, with gradient-based techniques like FGSM (Goodfellow et al., 2015) and PGD (Madry et al., 2018). In computer vision, adversarial examples can be constructed by considering a continuous space of imperceptible perturbations around image pixels. Conversely, language is discrete, and any perturbation is perceptible. Thus, robust models must be invariant to input modifications that preserve semantics, such as synonym substitutions (Alzantot et al., 2018; Jin et al., 2020), paraphrasing (Tan et al., 2020), or typos (Huang et al., 2019).
+
+Due to this property of language, ample work has been dedicated to developing discrete attacks that generate adversarial examples through combinatorial optimization (Alzantot et al., 2018; Ren et al., 2019; Jin et al., 2020; Zhou et al., 2020; Zang et al., 2020). For example, in sentiment analysis, it is common to consider the space of all synonym substitutions, where an adversarial example for an input "Such an amazing movie!" might be "Such an extraordinary film" (Fig. 2). This body of work has mostly focused on evaluating robustness, rather than improving it, which naturally led to the development of complex combinatorial search algorithms, whose goal is to find adversarial examples in the exponential space of perturbations.
+
+In this work, we address a major research gap in the current literature around improving robustness with discrete attacks. Specifically, past work (Alzantot et al., 2018; Ren et al., 2019; Jin et al., 2020) only considered offline augmentation, where a discrete attack is used to generate adversarial examples and the model is re-trained exactly once with those examples. This ignores online augmentation, which has had success in computer vision (Kurakin et al., 2017; Perez and Wang, 2017; Madry et al., 2018), where adversarial examples are generated in each training step, adapting to the changing model. Moreover, simple data augmentation techniques, such as randomly sampling from the space of synonym substitutions and adding the generated samples to the training data, have not been investigated and compared to offline adversarial augmentation. We address this lacuna and systematically compare online augmentation to offline augmentation, as well as to simple random sampling techniques. To our knowledge, we are the first to evaluate online augmentation with discrete attacks on a wide range of NLP tasks. Our results show that online augmentation leads to significant improvement in robustness compared to prior work and that simple random augmentation achieves comparable results to the common offline augmentation at a fraction of the complexity and training time.
+
+Moreover, we present a new search algorithm for finding adversarial examples, Best-First search over a Factorized graph (BFF), which alleviates the greedy nature of previously-proposed algorithms. BFF improves search by incorporating backtracking, and allowing to re-visit previously-discarded search paths, once the current one is revealed to be sub-optimal.
+
+
+Figure 2: Given a movie review $x$ , the model $A$ is robust to a set of perturbations, while $A'$ is not.
+
+We evaluate model robustness on three datasets: BoolQ (Clark et al., 2019), IMDB (Maas et al., 2011), and SST-2 (Socher et al., 2013), which vary in terms of the target task (question answering and sentiment analysis) and input length. Surprisingly, we find across different tasks (Fig. 1) that augmenting each training example with 4-8 random samples from the synonym substitution space performs as well as (or better than) the commonly used offline augmentation, while being simpler and $10\mathrm{x}$ faster to train. Conversely, online augmentation makes better use of the extra computational cost, and substantially improves robust accuracy compared to offline augmentation. Additionally, our proposed discrete attack algorithm, BFF, outperforms prior work by a wide margin. Our data and code are available at https://github.com/Mivg/robust_transformers.
+
+# 2 Problem Setup and Background
+
+Problem setup We focus in this work on the supervised classification setup, where given a training set $\{(x_{j},y_{j})\}_{j = 1}^{N}$ sampled from $\mathcal{X}\times \mathcal{Y}$, our goal is to learn a mapping $A:\mathcal{X}\to \mathcal{Y}$ that achieves high accuracy on held-out data sampled from the same distribution. Moreover, we want the model $A$ to be robust, i.e., invariant to a set of pre-defined label-preserving perturbations to $x$, such as synonym substitutions. Formally, for any natural language input $x$, a discrete attack space of label-preserving perturbations $S(x)\subset \mathcal{X}$ is defined. Given a labeled example $(x,y)$, a model $A$ is robust w.r.t. $x$ if $A(x) = y$ and for any $\bar{x}\in S(x)$, the output $A(\bar{x}) = A(x)$. An example $\bar{x}\in S(x)$ such that $A(\bar{x})\neq A(x)$ is called an adversarial example. We assume $A$ provides not only a prediction but a distribution $p_A(x)\in \Delta^{|\mathcal{Y}|}$ over the possible classes, where $\Delta^{|\mathcal{Y}|}$ is the probability simplex over $\mathcal{Y}$, and denote the probability $A$ assigns to the gold label by $[p_A(x)]_y$. Fig. 2 shows an example from sentiment analysis,
+
+where a model $A$ is robust w.r.t. $x$, while $A^{\prime}$ is not.
+
+Robustness is evaluated with robust accuracy (Tsipras et al., 2019), i.e., the fraction of examples a model is robust to over some held-out data. Typically, the size of the attack space $S(x)$ is exponential in the size of $x$, and it is not feasible to enumerate all perturbations. Instead, an upper bound is estimated by searching for a set of adversarial attacks, i.e., "hard" examples in $S(x)$ for every $x$, and estimating robust accuracy w.r.t. that set.
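As a concrete (toy) illustration of this metric, the sketch below counts an example as robust only if the model is correct on the clean input and on every perturbation an attack has found; the keyword-based "model" and the attack sets are hypothetical stand-ins, not the paper's setup.

```python
# Toy sketch of robust-accuracy estimation (hypothetical model and attack
# sets). An example counts as robust only if the model is correct on it AND
# on every perturbation the attack found for it.

def robust_accuracy(model, examples, attack_sets):
    """examples: list of (x, y); attack_sets[i]: perturbations found for x_i."""
    robust = 0
    for (x, y), perturbed in zip(examples, attack_sets):
        if model(x) != y:
            continue  # not even correct on the clean input
        if all(model(x_bar) == y for x_bar in perturbed):
            robust += 1
    return robust / len(examples)

# Toy sentiment "model": positive iff a positive keyword appears.
toy = lambda x: int(any(w in x.split() for w in ("amazing", "extraordinary")))

examples = [("such an amazing movie", 1), ("a dull film", 0)]
attacks = [["such an extraordinary film"], ["a boring film"]]
print(robust_accuracy(toy, examples, attacks))  # both examples robust -> 1.0
```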
+
+Improving robustness with discrete attacks Since language is discrete, a typical approach for evaluating robustness is to use combinatorial optimization methods to search for adversarial examples in the attack space $S(x)$ . This has been repeatedly shown to be an effective attack method on pre-trained models (Alzantot et al., 2018; Lei et al., 2019; Ren et al., 2019; Li et al., 2020; Jin et al., 2020; Zang et al., 2020). However, in terms of improving robustness, discrete attacks have thus far been mostly used with offline augmentation (defined below) and have led to limited robustness gains. In this work, we examine the more costly but potentially more beneficial online augmentation.
+
+Offline vs. online augmentation Data augmentation is a common approach for improving generalization and robustness, where variants of training examples are automatically generated and added to the training data (Simard et al., 1998). Here, discrete attacks can be used to generate these examples. We consider both offline and online data augmentation and focus on improving robustness with adversarial examples.
+
+Given a training set $\{(x_j, y_j)\}_{j=1}^N$, offline data augmentation involves (a) training a model $A$ over the training data, (b) for each training example $(x_j, y_j)$, generating a perturbation w.r.t. $A$ (using some discrete attack) and labeling it with $y_j$, and (c) training a new model over the union of the original training set and the generated examples. This is termed offline augmentation because examples are generated with respect to a fixed model $A$.
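The three steps above can be sketched as a short pipeline; `train` and `attack` here are hypothetical placeholders for a training routine and a discrete attack, not the paper's actual API:

```python
# Sketch of the offline-augmentation recipe: train once, attack once, retrain
# once. `train` and `attack` are placeholder callables (illustrative only).

def offline_augmentation(train, attack, data):
    model = train(data)                                  # (a) initial model
    extra = [(attack(model, x, y), y) for x, y in data]  # (b) one attack pass
    return train(data + extra)                           # (c) retrain on union

# Minimal stand-ins that only record the control flow:
calls = []
train = lambda d: calls.append(len(d)) or len(d)         # "model" = data size
attack = lambda model, x, y: x + "!"                     # placeholder perturbation
m = offline_augmentation(train, attack, [("good", 1), ("bad", 0)])
print(calls)  # [2, 4] -- trained on 2 originals, then on 2 + 2 perturbed
```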
+
+In online data augmentation, examples are generated at training time w.r.t. the current model $A$. This is more computationally expensive, as examples must be generated during training rather than as pre-processing, but the examples can adapt to the model over time. In each step, half the batch contains examples from the training set, and half are adversarial examples generated by some discrete attack w.r.t. the model's current state.
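A minimal sketch of one online-augmentation epoch, again with placeholder `attack` and `train_step` routines (not the paper's code): the attack is re-run against the current model at every step, so the adversarial half of each batch tracks the model as it changes.

```python
# Hypothetical sketch of one online-augmentation epoch; `attack` and
# `train_step` are placeholder callables. The attack is re-run against the
# *current* model in every step, so the adversarial half of the batch adapts
# as the model changes.
import random

def online_augmentation_epoch(model, data, attack, train_step, batch_size=8):
    random.shuffle(data)
    half = batch_size // 2
    for i in range(0, len(data), half):
        clean = data[i:i + half]
        # adversarial twins w.r.t. the model's current parameters
        adversarial = [(attack(model, x, y), y) for x, y in clean]
        train_step(model, clean + adversarial)

# Minimal stand-ins that only record the control flow:
log = []
dummy_attack = lambda model, x, y: x.upper()      # placeholder "perturbation"
record_step = lambda model, batch: log.append(len(batch))
online_augmentation_epoch(None, [("good movie", 1)] * 4, dummy_attack, record_step)
print(log)  # [8] -- one batch: 4 clean + 4 adversarial examples
```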
+
+Online augmentation has been used to improve robustness in NLP with gradient-based approaches (Jia et al., 2019; Shi et al., 2020; Zhou et al., 2020), but to the best of our knowledge has been overlooked in the context of discrete attacks. In this work, we are the first to propose model-agnostic online augmentation training, which uses automatically generated discrete adversarial attacks to boost overall robustness in NLP models.
+
+# 3 The Attack Space
+
+An attack space for an input with respect to a classification task can be intuitively defined as the set of label-preserving perturbations over the input. A popular attack space $S(x)$ , which we adopt, is the space of synonym substitutions (Alzantot et al., 2018; Ren et al., 2019). Given a synonym dictionary that provides a set of synonyms $\operatorname{Syn}(w)$ for any word $w$ , the attack space $S_{\operatorname{syn}}(x)$ for an utterance $x = (w_1, \dots, w_n)$ contains all utterances that can be obtained by replacing a word $w_i$ (and possibly multiple words) with one of their synonyms. Typically, the number of words from $x$ allowed to be substituted is limited to be no more than $D = \lceil d \cdot |x| \rceil$ , where $d \in \{0.1, 0.2\}$ is a common choice.
+
+Synonym substitutions are context-sensitive, i.e., substitutions might only be appropriate in certain contexts. For example, in Fig. 3, replacing the word "like" with its synonym "similar" (red box) is invalid, since "like" is a verb in this context. Consequently, past work (Ren et al., 2019; Jin et al., 2020) filtered $S_{\text{syn}}(x)$ using a context-sensitive filtering function $\Phi_x(w_i, \bar{w}_i) \in \{0, 1\}$, which determines whether substituting a word $w_i$ from the original utterance $x$ with its synonym $\bar{w}_i$ is valid in a particular context. For instance, an external model can check whether the substitution maintains the part-of-speech, and whether the overall semantics is maintained. We define the filtered synonym-substitution space $S_{\Phi}(x)$ as the set that includes all utterances $\bar{x}$ that can be generated through a sequence of no more than $D$ single-word substitutions from the original utterance that are valid according to $\Phi(\cdot, \cdot)$. In §5.2, we describe the details of the synonym dictionary and function $\Phi$.
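A toy sketch of building $S_{\Phi}(x)$: the synonym dictionary and the filter `phi` below are illustrative stand-ins (the paper's $\Phi$ combines POS, perplexity, and semantic-similarity checks), but the bound $D = \lceil d \cdot |x| \rceil$ on the number of substituted words is as defined above.

```python
# Toy construction of the filtered synonym-substitution space S_Phi(x).
# SYN and phi are stand-ins; the real Phi uses POS / perplexity / USE checks.
import math
from itertools import combinations, product

SYN = {"amazing": ["extraordinary"], "movie": ["film", "flick"]}

def phi(words, i, syn):
    # toy context filter: pretend "flick" is never an acceptable substitution
    return syn != "flick"

def attack_space(x, d=0.5):
    words = x.split()
    D = math.ceil(d * len(words))  # at most D = ceil(d * |x|) substitutions
    valid = {i: [s for s in SYN.get(w, []) if phi(words, i, s)]
             for i, w in enumerate(words)}
    valid = {i: s for i, s in valid.items() if s}
    out = set()
    for k in range(1, D + 1):                      # 1..D substituted positions
        for positions in combinations(valid, k):
            for choice in product(*(valid[i] for i in positions)):
                new = list(words)
                for i, s in zip(positions, choice):
                    new[i] = s
                out.add(" ".join(new))
    return out

space = attack_space("such an amazing movie")
print(sorted(space))  # 3 utterances; "flick" variants are filtered out
```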
+
+
+Figure 3: Example of an attack space, and the paths taken by a greedy algorithm and best-first search. An adversarial example has a probability $p < 0.5$ for the gold positive label.
+
+# 4 Best-first Search Over a Factorized Graph
+
+Searching over the attack space $S_{\Phi}(x)$ can be naturally viewed as a search problem over a directed acyclic graph (DAG), $G = (\mathcal{U}, \mathcal{E})$ , where each node $u_{\bar{x}} \in \mathcal{U}$ is labeled by an utterance $\bar{x}$ , and edges $\mathcal{E}$ correspond to single-word substitutions, valid according to $\Phi(\cdot)$ . The graph is directed and acyclic, since only substitutions of words from the original utterance $x$ are allowed (see Fig. 3). Because there is a one-to-one mapping from the node $u_{\bar{x}}$ to the utterance $\bar{x}$ , we will use the latter to denote both the node and the utterance.
+
+Discrete attacks use search algorithms to find an adversarial example in $S(x)$ . The search is guided by a heuristic scoring function $s_A(x) \coloneqq [p_A(x)]_y$ , where the underlying assumption is that utterances that give lower probability to the gold label are closer to an adversarial example. A popular choice for a search algorithm in NLP is greedy search, illustrated in Fig. 3. Specifically, one holds in step $t$ the current node $x_t$ , where $t$ words have been substituted in the source node $x_0 = x$ . Then, the model $A(\cdot)$ is run on the frontier, that is, all out-neighbor nodes $\mathcal{N}(x_t) = \{\hat{x}_{t+1} \mid (x_t, \hat{x}_{t+1}) \in \mathcal{E}\}$ , and the one that minimizes the heuristic scoring function is selected: $x_{t+1} \coloneqq \operatorname*{argmin}_{\hat{x} \in \mathcal{N}(x_t)} s_A(\hat{x})$ .
+
+While greedy search has been used for character-flipping (Ebrahimi et al., 2018), it is ill-suited to the space of synonym substitutions. The out-degree of nodes is high: assuming $n_{\mathrm{rep}}$ words can be replaced in the text, each with $K$ possible synonyms, the out-degree is $O(n_{\mathrm{rep}} \cdot K)$. This results in an infeasible number of forward passes through the attacked model, even for a small number of search iterations.
+
+To enable effective search through the search space, we (a) factorize the graph such that the out-degree of nodes is lower, and (b) use a best-first search algorithm. We describe those next.
+
+Graph factorization To reduce the out-degree of a node in the search space and thus improve its efficiency, we can split each step into two. First, choose a position to substitute in the utterance; Second, choose a substitution for that position. This reduces the number of evaluations of $A$ per step from $O(n_{\mathrm{rep}} \cdot K)$ to $O(n_{\mathrm{rep}} + K)$ . To estimate the score of a position $i$ , one can mask the word $w_i$ with a mask token $\tau$ and measure $s_A(x_{w_i \rightarrow \tau})$ where $x_{w_i \rightarrow \tau}$ is the utterance $x$ where the word in position $i$ is replaced by the mask $\tau$ .
+
+We can describe this approach as search over a bi-partite DAG $\hat{G} = (\mathcal{U}\cup \mathcal{W},\hat{\mathcal{E}})$ . The nodes $\mathcal{U}$ are utterances like in $G$ , and the new nodes are utterances with a single mask token $\mathcal{W} = \{\bar{x}_{w_i\rightarrow \tau}\mid \bar{x}\in S(x)\wedge w_i$ is a word in $x\}$ . The edges comprise two types: $\hat{\mathcal{E}} = \mathcal{E}_1\cup \mathcal{E}_2$ . The edges $\mathcal{E}_1$ are from utterances to masked utterances: $\mathcal{E}_1 = \{(\bar{x},\bar{x}_{w_i\rightarrow \tau})\} \subset \mathcal{U}\times \mathcal{W}$ , and $\mathcal{E}_2 = \{(\bar{x}_{w_i\rightarrow \tau},\bar{x}_{w_i\rightarrow w_{syn}})\} \subset \mathcal{W}\times \mathcal{U}$ where $w_{syn}\in \operatorname {Syn}(w_i)$ . In Figure 3, the two rightmost nodes in each row would be factorized together as they substitute the same word, and the algorithm will evaluate only one of them to estimate the potential benefit of substituting "movie".
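The factorized two-step expansion can be sketched as follows; `toy_s` is a stand-in for the heuristic score $s_A$, and the synonym map is hypothetical. Note the call count: one model call per maskable position plus one per synonym of the chosen position, i.e., $O(n_{\mathrm{rep}} + K)$ rather than $O(n_{\mathrm{rep}} \cdot K)$.

```python
# Sketch of one factorized expansion step (illustrative; toy_s stands in for
# the model's gold-label probability s_A). Step 1 scores positions by masking
# them; step 2 scores only the synonyms of the most promising position.

MASK = "[MASK]"

def factorized_step(s_A, words, synonyms):
    # 1) score each replaceable position via masking (O(n_rep) model calls)
    def masked_score(i):
        return s_A(" ".join(MASK if j == i else w for j, w in enumerate(words)))
    pos = min(synonyms, key=masked_score)
    # 2) score only this position's synonyms (O(K) model calls)
    candidates = []
    for syn in synonyms[pos]:
        cand = list(words)
        cand[pos] = syn
        candidates.append(" ".join(cand))
    return min(candidates, key=s_A)

def toy_s(x):
    # toy heuristic score (lower = more adversarial)
    if "film" in x:
        return 0.2
    if x.endswith(MASK):
        return 0.4   # masking the last word looks most promising
    if MASK in x:
        return 0.7
    return 0.9

best = factorized_step(toy_s, "an amazing movie".split(),
                       {1: ["extraordinary"], 2: ["film"]})
print(best)  # an amazing film
```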
+
+Best-first search A factorized graph makes search possible by reducing the out-degree of nodes. However, greedy search is still sub-optimal. This is because it relies on the heuristic scoring function being a good estimate of the distance to an adversarial example, an assumption that often does not hold. Consider the example in Fig. 3. The two adversarial examples (with $p = 0.4$ or $p = 0.45$ ) are not reachable from the best node after the first step ( $p = 0.6$ ), only from the second-best ( $p = 0.65$ ).
+
+Best-first search (Pearl, 1984) overcomes this at a negligible cost by holding a min-heap over the nodes of the frontier of the search space (Alg. 1). In each step, we pop the next utterance, i.e., the one that assigns the lowest probability to the gold label, and push all of its neighbors into the heap. When a promising branch turns out to be sub-optimal, search can resume from an earlier node to find a better solution, as shown in the blue path in Figure 3. To bound the cost of finding a single adversarial example, we bound the number of forward passes through the model $A$ with a budget parameter $B$. To further reduce "greediness", search can use a beam, popping more than one node in each step, expanding all of their neighbors, and pushing the results back into the heap. Our final approach uses Best-First search over a Factorized graph, and is termed BFF.
+
+Algorithm 1: BFF
+```txt
+input : model A, scoring function s_A, factorized graph G, utterance x,
+        gold label y, budget B
+heap <- min-heap with {(x, s_A(x))}
+x* <- x
+while |heap| > 0 and budget B not exhausted:
+    x_bar <- heap.pop()
+    x* <- argmin over {x_bar, x*} of s_A(.)
+    if A(x*) != y: break          // adversarial example found
+    for x_hat in N(x_bar):
+        heap.push((x_hat, s_A(x_hat)))
+return x*
+```
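A runnable sketch of Alg. 1; the scores and graph below are toy stand-ins arranged to mirror Fig. 3 (an adversarial example has gold-label probability below 0.5), not the paper's implementation.

```python
# Runnable sketch of Alg. 1 (BFF). s_A, predict, and the neighborhood
# function are toy stand-ins; s_A(x) plays the role of p_A(y | x).
import heapq

def bff(s_A, predict, y, x0, neighbors, budget=100):
    heap = [(s_A(x0), x0)]
    best = x0
    calls = 0
    while heap and calls < budget:
        score, x_bar = heapq.heappop(heap)
        if score < s_A(best):
            best = x_bar
        if predict(best) != y:          # adversarial example found
            break
        for x_hat in neighbors(x_bar) or []:
            calls += 1
            heapq.heappush(heap, (s_A(x_hat), x_hat))
    return best

# Toy instance: lower s_A = lower gold-label probability; the prediction
# flips once s_A drops below 0.5, so "e" is the adversarial example.
scores = {"a": 0.9, "b": 0.6, "c": 0.65, "d": 0.8, "e": 0.4}
graph = {"a": ["b", "c"], "b": ["d"], "c": ["e"], "d": [], "e": []}
adv = bff(scores.get, lambda x: int(scores[x] >= 0.5), 1,
          "a", graph.get, budget=100)
print(adv, scores[adv])  # e 0.4
```

Note how the best-looking branch ("b", score 0.6) is expanded first but abandoned once the second-best node "c" leads to a lower-scoring neighbor; this backtracking is exactly what greedy search lacks.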
+
+# 5 Experiments
+
+We conduct a thorough empirical evaluation of model robustness across a wide range of attacks and training procedures.
+
+# 5.1 Experimental Setup
+
+To evaluate our approach over diverse settings, we consider three different tasks: text classification, sentiment analysis and question answering, two of which contain long passages that result in a large attack space (see Table 1).
+
+1. SST-2: Based on the Stanford sentiment treebank (Socher et al., 2013), SST-2 is a binary (positive/negative) classification task containing 11,855 sentences describing movie reviews. SST-2 has been frequently used for evaluating robustness.
+2. IMDB (Maas et al., 2011): A binary (positive/negative) text classification task, containing 50K reviews from IMDB. Here, passages are long and thus the attack space is large (Table 1).
+3. BoolQ (Clark et al., 2019): contains 16,000 yes/no questions over Wikipedia paragraphs. This task is perhaps the most interesting, because the attack space is large and answering requires global passage understanding. We allow word substitutions in the paragraph only and do not substitute nouns, verbs, or adjectives that appear in the question to avoid non-label-preserving perturbations. Further details can be found in App. A.2.
+
+Models We consider a wide array of models and evaluate both their downstream accuracy and robustness. In all models, we define a budget of $B = 1000$, which specifies the maximal number of allowed forward passes through the model for finding an adversarial example. All results are an average of 3 runs.
+
+To demonstrate the effectiveness of BFF for both robustness evaluation as well as adversarial training, we compare it to a recent state-of-the-art discrete attack, TEXTFOOLER (Jin et al., 2020), which we denote in model names below by the prefix TxF. The models compared are:
+
+- BASELINE: we fine-tune a pretrained language model on the training set. We use BERT-BASE (Devlin et al., 2019) for IMDB/SST-2 and ROBERTA-LARGE (Liu et al., 2019) for BoolQ. These baselines are on par with the current state of the art.
+- BFFOFF/TxFOFF: Offline augmentation with the BFF or TEXTFOOLER attack, respectively.
+- BFFON/TxFON: Online augmentation with the BFF or TEXTFOOLER attack, respectively.
+- RANDOFF- $L$ : We compare search-based algorithms to a simple and efficient approach that does not require any forward passes through the model $A$ . Specifically, we randomly sample $L$ utterances from the attack space for each example (without executing $A$ ) and add them to the training data.
+- RANDOM: A random sampling approach that does use the model $A$ . Here, we sample $B$ random utterances, pass them through $A$ , and return the attack that resulted in lowest model probability.
+- FREELB: For completeness, we also consider FREELB (Zhu et al., 2020), a popular gradient-based approach for improving robustness, which employs virtual adversarial training (see §6). This approach uses online augmentation, where examples are created by taking gradient steps w.r.t the input embeddings to maximize the model's loss. Other gradient-based approaches (e.g., certified robustness) are not suitable when using pre-trained transformers, which we further discuss in §6.
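For concreteness, the RANDOM attack from the list above can be sketched as follows; the sampler and scoring function are toy stand-ins (the real attack samples from $S_{\Phi}(x)$ and scores with the model's gold-label probability):

```python
# Sketch of the RANDOM attack: draw B utterances from the attack space and
# keep the one the model likes least. Sampler and scorer are toy stand-ins.
import random

def random_attack(s_A, sample_perturbation, x, B=1000, seed=0):
    rng = random.Random(seed)
    candidates = [sample_perturbation(x, rng) for _ in range(B)]
    return min(candidates, key=s_A)   # lowest gold-label probability

# Toy attack space: independently swap listed words with probability 0.5.
SYN = {"amazing": "extraordinary", "movie": "film"}

def sample_perturbation(x, rng):
    return " ".join(SYN[w] if w in SYN and rng.random() < 0.5 else w
                    for w in x.split())

def toy_s(x):
    # toy gold-label probability: lower when the "attack words" appear
    return 0.9 - 0.3 * ("film" in x) - 0.1 * ("extraordinary" in x)

print(random_attack(toy_s, sample_perturbation, "such an amazing movie", B=64))
```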
+
+In a parallel line of work, Garg and Ramakrishnan (2020) and Li et al. (2020) used pre-trained language models to both define an attack space and to generate high-fidelity attacks in that space. While successful, these approaches are not suitable for our setting, due to the strong coupling between the attack strategy and the attack space itself. We further discuss this in §6.
+
+Evaluation We evaluate models on their downstream accuracy, as well as on robust accuracy, i.e., the fraction of examples against which the model is robust. Since exact robust accuracy is intractable to compute due to the exponential size of the attack space, we compute an upper bound by attacking each example with both BFF and TEXTFOOLER (TxF) with a budget of $B = 2000$. An example counts as robust if we cannot find an utterance for which the prediction differs from the gold label. We evaluate robust accuracy on 1000/1000/872 samples from the development sets of BoolQ/IMDB/SST-2.
+
+# 5.2 Attack Space
+
+Despite the myriad of works on discrete attacks, an attack space for synonym substitutions has not been standardized. While all past work employed a synonym dictionary combined with a $\Phi(\cdot, \cdot)$ filtering function (see §3), the particular filtering functions vary. When examining the attack space proposed in TxF, we observed that attacks result in examples that are difficult to understand or are not label-preserving. Table 6 in App. A.4 shows several examples. For instance, in sentiment classification, the attack replaced "compelling" with "unconvincing", yielding "it proves quite unconvincing as an intense, brooding character study", which alters both the meaning and the sentiment of the sentence. Therefore, we use a stricter definition of the filtering function and conduct a user study to verify that it is label-preserving.
+
+Concretely, we use the synonym dictionary from Alzantot et al. (2018). We determine whether a word substitution is context-appropriate by computing all single-word substitutions $(n_{\mathrm{rep}} \cdot K)$ and disallowing those that change the POS tag according to spaCy (Honnibal et al.) or increase perplexity according to GPT-2 (Radford et al., 2019) by more than $25\%$. Similar to Jin et al. (2020), we also filter out synonyms that are not semantics-preserving according to the USE (Cer et al., 2018) model. The attack space includes any combination of allowed single-word substitutions, where the fraction of allowed substitutions is $d = 0.1$. Implementation details are in App. A.2. We find that this ensemble of models reduces the number of substitutions that the filtering function allows but that do not preserve semantics.
+
+We check the validity of our more restrictive attack space with a user study, where we verify that our attack space is indeed label-preserving. The
+
+| | \|x\| | $n_{\mathrm{rep}}$ | \|Syn(w)\| | \|$S_{\Phi}(x)$\| |
+| --- | --- | --- | --- | --- |
+| SST-2 | 8.9 | 2.7 | 2.4 | 27.7 |
+| IMDB | 242.4 | 97.3 | 3.6 | $2.27 \times 10^{64}$ |
+| BoolQ† | 97.7 | 38.7 | 3.6 | $3.64 \times 10^{25}$ |
+
+Table 1: Statistics on datasets and the size of attack space. We show the average number of words per utterance $|x|$ , the average number of words with substitutions $n_{\mathrm{rep}}$ , average number of synonyms per replaceable word, and an estimation of the attack space size.
+
+details of the user study are in $\S 5.6$.
+
+# 5.3 Robustness Results
+
+Table 2 shows accuracy on the development set, robust accuracy, and slowdown compared to BASELINE for all models and datasets. For downstream accuracy, training for robustness either maintains or slightly increases downstream accuracy. This is not the focus of this work, but is indeed a nice side-effect. For robust accuracy, discrete attacks substantially improve robustness: $80.5 \rightarrow 85.3$ on SST-2, $41.2 \rightarrow 78.9$ on IMDB, and $50.0 \rightarrow 68.7$ on BoolQ, closing roughly half the gap from downstream accuracy.
+
+Comparing different attacks, online augmentation (BFFON), which has been overlooked in the context of discrete attacks, leads to dramatic robustness gains compared to other methods, but is slow to train, 20-270x slower than BASELINE. This shows the importance of continuous adaptation to the current vulnerabilities of the model.
+
+Interestingly, adding offline random samples (RANDOFF-$L$) consistently improves robust accuracy, and using $L = 12$ leads to impressive robustness gains without executing $A$ at all, outperforming BFFOFF in robust accuracy, and being $\sim 5\mathrm{x}$ faster on IMDB and BoolQ. Moreover, random sampling is trivial to implement and independent of the attack strategy. Hence, the common practice of using offline augmentation with search-based attacks, such as BFFOFF, seems misguided, and a better solution is to use random sampling. Online random augmentation obtains impressive results, not far from BFFON, without applying any search procedure, but is very slow, since it uses the entire budget $B$ on every example.
+
+Comparing BFF to TxF, we observe that BFF, which uses best-first search, outperforms TxF in both the online and offline settings. Last, FREELB, which is based on virtual adversarial training, improves robust accuracy at a low computational cost, but is dramatically outperformed by discrete search-based attacks, including BFF.
+
+| Model | Acc. SST-2 | Acc. IMDB | Acc. BoolQ | Robust SST-2 | Robust IMDB | Robust BoolQ | Slowdown SST-2 | Slowdown IMDB | Slowdown BoolQ |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| BASELINE | 91.9 | 93.4 | 84.5 | 80.5 | 41.2 | 50.0 | ×1 | ×1 | ×1 |
+| FREELB | 92.5 | 93.9 | 85.5 | 82.1 | 62.5 | 55.8 | ×1.8 | ×1.8 | ×3.9 |
+| RANDOFF-1 | 91.9 | 93.5 | 85.6 | 83.5 | 50.3 | 52.2 | ×1.9 | ×1.5 | ×2.1 |
+| RANDOFF-4 | 91.6 | 93.7 | 85.5 | 83.6 | 57.0 | 58.4 | ×3.8 | ×4.5 | ×5.1 |
+| RANDOFF-8 | 91.1 | 93.8 | 86.1 | 83.3 | 60.9 | 61.3 | ×5.4 | ×8.0 | ×9.3 |
+| RANDOFF-12 | 91.5 | 93.7 | 85.8 | 84.2 | 60.1 | 63.0 | ×6.3 | ×11.5 | ×13.2 |
+| TxFOFF | 91.2 | 93.4 | 86.5 | 83.5 | 49.0 | 61.5 | ×3.0 | ×56.1 | ×8.6 |
+| BFFOFF | 91.8 | 93.7 | 85.8 | 84.6 | 54.3 | 62.3 | ×5.4 | ×60.0 | ×63.2 |
+| RANDOM | 91.7 | 94.1 | 85.6 | 84.9 | 68.5 | 66.0 | ×14.8 | ×249.3 | ×280.4 |
+| TxFON | 91.3 | 93.8 | 86.0 | 84.0 | 67.4 | 65.3 | ×3.9 | ×58.0 | ×28.1 |
+| BFFON | 91.7 | 94.2 | 86.5 | 85.3 | 78.9 | 68.7 | ×21.1 | ×270.7 | ×215.9 |
+
+Table 2: Accuracy on the evaluation set, robust accuracy, and slowdown in model training for all datasets.
+
+| Model | IMDB: Rand | IMDB: TxF | IMDB: BFF | IMDB: Gen | BoolQ: Rand | BoolQ: TxF | BoolQ: BFF | BoolQ: Gen |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| BASELINE | 73.1 | 70.2 | 49.9 | 54.1 | 62.1 | 67.7 | 50.2 | 52.0 |
+| RND-OA | 74.8 | 74.7 | 52.9 | 59.1 | 70.9 | 72.0 | 59.4 | 62.0 |
+| TxFOFF | 67.7 | 77.5 | 52.5 | 56.7 | 71.0 | 75.0 | 61.5 | 63.4 |
+| BFFOFF | 75.4 | 76.9 | 58.6 | 64.1 | 70.9 | 74.8 | 64.7 | 65.2 |
+| RANDOM | 87.0 | 76.4 | 68.5 | 79.6 | 71.5 | 72.6 | 60.1 | 67.5 |
+| TxFON | 81.1 | 84.2 | 69.7 | 73.7 | 73.4 | 74.8 | 65.3 | 67.4 |
+| BFFON | 87.0 | 84.9 | 79.0 | 81.9 | 75.1 | 76.1 | 69.0 | 70.3 |
+
+Table 3: Robust accuracy of different robust models w.r.t. particular discrete attacks. RND-OA is offline augmentation with a random attack and $B = 1000$. Gen is our implementation of the Genetic Attack by Alzantot et al. (2018).
+
+To summarize, random sampling leads to significant robustness gains at a small cost, outperforming the commonly used offline augmentation. Online augmentation leads to the best robustness, but is more expensive to train.
+
+# 5.4 Robustness across Attack Strategies
+
+A natural question is whether a model trained for robustness with one attack (e.g., BFF) is robust to examples generated by other attacks, which are potentially uncorrelated with it. To answer this, we evaluate the robustness of our models to attacks generated by BFF, TxF, and random sampling. Moreover, we evaluate robustness to a genetic attack, which should not be correlated with BFF and TxF: we re-implement the genetic attack algorithm from Alzantot et al. (2018) (details in A.3), and examine the robustness of our models to this attack. All attacks use a budget of $B = 2000$.
+
+Table 3 shows the results of this evaluation. We observe that BFFON obtains the highest robust accuracy w.r.t. all attacks: BFF, TxF, random sampling, and the genetic attack. In offline augmentation, we again observe that BFFOFF obtains good robust accuracy, higher than or comparable to all other offline models for any attack strategy. This result highlights the generality of BFF for improving model robustness.
+
+# 5.5 Success Rate Results
+
+To compare the different attacks proposed in §4, we analyze the success rate against BASELINE, i.e., the proportion of examples for which an attack finds an adversarial example as a function of the budget $B$ .
+
+Fig. 4 compares the success rate of different attacks. We observe that BFF-based attacks have the highest success rate after a few hundred executions. TEXTFOOLER performs well at first, finding adversarial examples for many inputs, but then its success plateaus. Similarly, a random approach, which ignores the graph structure, starts with a relatively high success rate, as it explores distant regions of the graph, but fails to properly utilize its budget and eventually falls behind.
+
+BFF combines backtracking with graph factorization. When backtracking is removed, i.e., with greedy search over the factorized graph, the success rate decreases, especially on BoolQ. Greedy search without graph factorization leads to a low success rate due to the large number of neighbors of each node, which quickly exhausts the budget. Moreover, BFF with beam size 2 (popping 2 items from the heap in each step) performs worse when the budget is $B \leq 2000$, as executions are expended on less promising utterances, but it could improve the success rate given a larger budget.
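To make the search concrete, here is a minimal sketch of best-first search over the factorized substitution graph. The names are assumptions, not our released code: `score` stands for the attacked model's confidence in the gold label (queried once per candidate), and an attack succeeds when that confidence drops below 0.5. The global heap is what provides backtracking: the search can return to an earlier, more promising utterance at any point.

```python
import heapq

def best_first_attack(tokens, synonyms, score, budget=1000):
    """Best-first search over single-substitution neighbors.

    At every step we expand the lowest-scoring utterance found so far
    (not merely the last one), which gives implicit backtracking.
    """
    start = tuple(tokens)
    heap = [(score(start), start)]
    seen = {start}
    queries = 1
    while heap and queries < budget:
        s, cur = heapq.heappop(heap)
        if s < 0.5:                  # gold-label confidence flipped: success
            return list(cur), s
        for i, w in enumerate(cur):
            # factorized expansion: neighbors differ in exactly one position
            for syn in synonyms.get(w, []):
                cand = cur[:i] + (syn,) + cur[i + 1:]
                if cand in seen:
                    continue
                seen.add(cand)
                heapq.heappush(heap, (score(cand), cand))
                queries += 1
    return None, None                # no adversarial example within budget
```

A beam-size-2 variant would pop two items per step instead of one; removing the heap entirely and always expanding the last node recovers plain greedy search.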
+
+Lastly, due to our stricter definition of the attack space, described in §5.2, the success rates of BFF and TxF are lower than those reported by Jin et al. (2020). To verify the correctness of our attacks, we run BFF and TxF in their attack space, which uses a larger synonym dictionary, a more permissive function $\Phi$, and does not limit the number of substitutions $D$ or the budget $B$. We obtain a similar success rate, close to $100\%$. Nevertheless, we argue that our attack space, validated by users to be label-preserving, is preferable, and leave standardization of attack spaces through a broad user study to future work.
+
+Figure 4: Success rate of different attacks against BoolQ/IMDB BASELINE as a function of the budget.
+
+| Dataset | Original | Random | BFF |
| --- | --- | --- | --- |
| IMDB | 98.0 | 98.0 | 96.0 |
| BoolQ | 89.0 | 91.5 | 83.5 |
| SST-2 | 97.0 | 96.0 | 94.4 |
+
+Table 4: Evaluating attack space validity. We show human performance on original examples, random examples, and examples generated with BFF.
+
+# 5.6 User Study
+
+Since a model is considered non-robust if even a single adversarial example flips its output label, the validity of adversarial examples in the attack space is crucial. When we examined attacks generated according to prior work, we found many label-flipping attacks. This was especially noticeable when using BFF attacks on tasks not evaluated in prior work (see examples in Appendix A.4). In this work, our focus was on evaluating different methods for increasing model robustness, and thus over-constraining the attack space to guarantee its validity was acceptable. We stress that our attack search space is more conservative than prior work, and is a strict subset of prior attack spaces (see Appendix A.2), leading to higher validity of adversarial examples.
+
+We evaluate the validity of our attack space and of the generated adversarial samples with a user study. We sample 100/100/50 examples from SST-2/BoolQ/IMDB respectively, and for each example create two adversarial examples: (a) by random sampling, and (b) using a BFF attack. We ask 25 NLP graduate students to annotate both the original example and the two adversarial ones. Each example is annotated by two annotators, and each annotator sees only one version of an example. If human performance on random and adversarial examples is similar to performance on the original task, this indicates the attack space is label-preserving.
+
+Table 4 shows the results. Human performance on random examples is similar to the original utterances. Human performance on examples generated with BFF is only mildly lower than the performance on the original utterances, overall confirming that the attack space is label-preserving.
+
+Ideally, the validity of adversarial examples should be as high as that of the original examples. However, a small degradation on random vs. original examples is expected, since the search space is not perfect, and similarly for BFF, since it is targeted at finding adversarial examples. Nevertheless, the observed drops are small, showing the advantage in validity compared to prior work. The minor irregularity on BoolQ between random and original examples is indicative of noise in the dataset.
+
+# 6 Related Work
+
+Adversarial attacks and robustness have attracted tremendous attention. We discuss work beyond improving robustness through adversarial attacks.
+
+Certified Robustness is a class of methods that provide a mathematical certificate for robustness (Dvijotham et al., 2018; Gowal et al., 2018; Jia et al., 2019; Huang et al., 2019; Shi et al., 2020). The model is trained to minimize an upper bound on the loss of the worst-case attack. When this upper bound is low, we get a certificate for robustness against all attacks. While this approach has had success, it struggles when applied to transformers, since upper bounds are propagated through many layers, and become too loose to be practical.
+
+Gradient-based methods In a white-box setting, adversarial examples can be generated by performing gradient ascent with respect to the input representation. Gradient-based methods (Goodfellow et al., 2015; Madry et al., 2018) have been empirically successful (Gowal et al., 2018; Ebrahimi et al., 2018), but suffer from a few shortcomings: (a) they assume access to gradients, (b) they lose their effectiveness when combined with sub-word tokenization, since one cannot substitute words that have a different number of sub-words, and (c) they can generate noisy examples that do not preserve the output label. In parallel to our work, Guo et al. (2021) proposed a gradient-based approach that finds a distribution over the attack space at the token level, resulting in an efficient attack.
+
+Virtual adversarial training In this approach, one does not generate explicit adversarial examples (Zhu et al., 2020; Jiang et al., 2020; Li and Qiu, 2020; Pereira et al., 2021). Instead, embeddings in an $\epsilon$ -sphere around the input (that do not correspond to words) are sampled, and continuous optimization approaches are used to train for robustness. These works were shown to improve downstream accuracy, but did not result in better robust accuracy. Recently, Zhou et al. (2020) proposed a method that does improve robustness, but like other gradient-based methods, it is white-box, does not work well with transformers over subwords, and leads to noisy samples. A similar approach has been taken by Si et al. (2020b) to generate virtual attacks during training by interpolating offline-generated attacks.
+
+Defense layers This approach adds normalization layers to the input before propagating it to the model, so that different input variations are mapped to the same representation (Wang et al., 2019; Mozes et al., 2020; Jones et al., 2020). While successful, this approach requires manual engineering and reduces model expressivity, as the input space is significantly reduced. A similar approach (Zhou et al., 2019) identifies adversarial inputs and predicts the original unperturbed input.
+
+Pretrained language-models as attacks In this work, we decouple the definition of the attack space from the attack strategy itself, which is cast as a search algorithm. This allows us to systematically compare different attack strategies and methods for improving robustness in the same setting. An orthogonal approach to ours was proposed by Garg and Ramakrishnan (2020) and Li et al. (2020), who used the fact that BERT was trained with the masked language modeling objective to predict possible semantics-preserving adversarial perturbations of the input tokens, thereby coupling the definition of the attack space with the attack strategy. While this approach shows great promise in efficiently generating valid adversarial examples, it does not permit any external constraint on the attack space and is thus not comparable to the attacks in this work. Future work can test whether robustness transfers across attack spaces and attack strategies by either (a) evaluating the robustness of models trained in this work against the aforementioned attacks (in their attack space), or (b) combining such attacks with online augmentation to train robust models and comparing them to the attacks proposed in our work.
+
+# 7 Conclusions
+
+We examine achieving robustness through discrete adversarial attacks. We find that the popular approach of offline augmentation is sub-optimal in both speed and accuracy compared to random sampling, and that online augmentation leads to impressive gains. Furthermore, we propose BFF, a new discrete attack based on best-first search, and show that it outperforms past work both in terms of robustness improvement and in terms of attack success rate.
+
+Together, our contributions highlight the key factors for success in achieving robustness through adversarial attacks, and open the door to future work on better and more efficient methods for achieving robustness in natural language understanding.
+
+# Acknowledgements
+
+We thank Mor Geva, Tomer Wolfson, Jonathan Herzig, Inbar Oren, Yuval Kirstain, Uri Shaham and Omer Levy for their useful comments. This research was partially supported by The Yandex Initiative for Machine Learning, and the European Research Council (ERC) under the European Union Horizons 2020 research and innovation programme (grant ERC DELPHI 802800).
+
+# References
+
+Moustafa Alzantot, Yash Sharma, Supriyo Chakraborty, Huan Zhang, Cho-Jui Hsieh, and Mani B. Srivastava. 2019. GenAttack: Practical black-box attacks with gradient-free optimization. In Proceedings of the Genetic and Evolutionary Computation Conference, pages 1111-1119.
+Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang. 2018. Generating natural language adversarial examples. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2890–2896, Brussels, Belgium. Association for Computational Linguistics.
+Nicholas Carlini and D. Wagner. 2017. Towards evaluating the robustness of neural networks. 2017 IEEE Symposium on Security and Privacy (SP), pages 39-57.
+Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, et al. 2018. Universal sentence encoder. arXiv preprint arXiv:1803.11175.
+Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. 2019. BoolQ: Exploring the surprising difficulty of natural yes/no questions. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2924-2936, Minneapolis, Minnesota. Association for Computational Linguistics.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
+Krishnamurthy Dvijotham, Sven Gowal, Robert Stanforth, Relja Arandjelovic, Brendan O'Donoghue, Jonathan Uesato, and Pushmeet Kohli. 2018. Training verified learners with learned verifiers. arXiv preprint arXiv:1805.10265.
+Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2018. HotFlip: White-box adversarial examples for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 31-36, Melbourne, Australia. Association for Computational Linguistics.
+Siddhant Garg and Goutham Ramakrishnan. 2020. BAE: BERT-based adversarial examples for text classification. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6174-6181, Online. Association for Computational Linguistics.
+
+Karan Goel, Nazneen Rajani, Jesse Vig, Samson Tan, Jason Wu, Stephan Zheng, Caiming Xiong, Mohit Bansal, and Christopher Re. 2021. Robustness gym: Unifying the nlp evaluation landscape. arXiv preprint arXiv:2101.04840.
+Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversarial examples. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
+Sven Gowal, Krishnamurthy Dvijotham, Robert Stanforth, Rudy Bunel, Chongli Qin, Jonathan Uesato, Relja Arandjelovic, Timothy Mann, and Pushmeet Kohli. 2018. On the effectiveness of interval bound propagation for training verifiably robust models. arXiv preprint arXiv:1810.12715.
+Chuan Guo, Alexandre Sablayrolles, Hervé Jégou, and Douwe Kiela. 2021. Gradient-based adversarial attacks against text transformers. arXiv preprint arXiv:2104.13733.
+Matthew Honnibal, Ines Montani, Sofie Van Landeghem, and Adriane Boyd. spaCy: Industrial-strength Natural Language Processing in Python.
+Po-Sen Huang, Robert Stanforth, Johannes Welbl, Chris Dyer, Dani Yogatama, Sven Gowal, Krishnamurthy Dvijotham, and Pushmeet Kohli. 2019. Achieving verified robustness to symbol substitutions via interval bound propagation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4083-4093, Hong Kong, China. Association for Computational Linguistics.
+Robin Jia, Aditi Raghunathan, Kerem Goksel, and Percy Liang. 2019. Certified robustness to adversarial word substitutions. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4129-4142, Hong Kong, China. Association for Computational Linguistics.
+Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Tuo Zhao. 2020. SMART: Robust and efficient fine-tuning for pretrained natural language models through principled regularized optimization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2177-2190, Online. Association for Computational Linguistics.
+Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2020. Is BERT really robust? A strong baseline for natural language attack on text classification and entailment. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 8018-8025. AAAI Press.
+Erik Jones, Robin Jia, Aditi Raghunathan, and Percy Liang. 2020. Robust encodings: A framework for combating adversarial typos. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2752-2765, Online. Association for Computational Linguistics.
+Alexey Kurakin, Ian J. Goodfellow, and Samy Bengio. 2017. Adversarial machine learning at scale. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.
+Qi Lei, Lingfei Wu, Pin-Yu Chen, Alex Dimakis, Inderjit S. Dhillon, and Michael J. Witbrock. 2019. Discrete adversarial attacks and submodular optimization with applications to text classification. In Proceedings of Machine Learning and Systems 2019, MLSys 2019, Stanford, CA, USA, March 31 - April 2, 2019. mlsys.org.
+Linyang Li, Ruotian Ma, Qipeng Guo, Xiangyang Xue, and Xipeng Qiu. 2020. BERT-ATTACK: Adversarial attack against BERT using BERT. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6193–6202, Online. Association for Computational Linguistics.
+Linyang Li and Xipeng Qiu. 2020. Tavat: Token-aware virtual adversarial training for language understanding. arXiv: Computation and Language.
+Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.
+Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142-150, Portland, Oregon, USA. Association for Computational Linguistics.
+Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2018. Towards deep learning models resistant to adversarial attacks. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net.
+Maximilian Mozes, Pontus Stenetorp, Bennett Kleinberg, and Lewis D Griffin. 2020. Frequency-guided word substitutions for detecting textual adversarial examples. arXiv preprint arXiv:2004.05887.
+
+Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z Berkay Celik, and Ananthram Swami. 2017. Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia conference on computer and communications security, pages 506-519.
+Judea Pearl. 1984. Heuristics: Intelligent Search Strategies for Computer Problem Solving, page 48. Addison-Wesley Longman Publishing Co., Inc., USA.
+Lis Pereira, Xiaodong Liu, Hao Cheng, Hoifung Poon, Jianfeng Gao, and Ichiro Kobayashi. 2021. Targeted adversarial training for natural language understanding. arXiv preprint arXiv:2104.05847.
+Luis Perez and Jason Wang. 2017. The effectiveness of data augmentation in image classification using deep learning. arXiv preprint arXiv:1712.04621.
+Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.
+Shuhuai Ren, Yihe Deng, Kun He, and Wanxiang Che. 2019. Generating natural language adversarial examples through probability weighted word saliency. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1085-1097, Florence, Italy. Association for Computational Linguistics.
+Zhouxing Shi, Huan Zhang, Kai-Wei Chang, Minlie Huang, and Cho-Jui Hsieh. 2020. Robustness verification for transformers. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
+Chenglei Si, Ziqing Yang, Yiming Cui, Wentao Ma, Ting Liu, and Shijin Wang. 2020a. Benchmarking robustness of machine reading comprehension models. arXiv preprint arXiv:2004.14004.
+Chenglei Si, Zhengyan Zhang, Fanchao Qi, Zhiyuan Liu, Yasheng Wang, Qun Liu, and Maosong Sun. 2020b. Better robustness by more coverage: Adversarial training with mixup augmentation for robust fine-tuning. arXiv preprint arXiv:2012.15699.
+Patrice Y Simard, Yann A LeCun, John S Denker, and Bernard Victorri. 1998. Transformation invariance in pattern recognition—tangent distance and tangent propagation. In Neural networks: tricks of the trade, pages 239–274. Springer.
+Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642, Seattle, Washington, USA. Association for Computational Linguistics.
+
+Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus. 2014. Intriguing properties of neural networks. In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings.
+Samson Tan, Shafiq Joty, Min-Yen Kan, and Richard Socher. 2020. It's morphin' time! Combating linguistic discrimination with inflectional perturbations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2920-2935, Online. Association for Computational Linguistics.
+Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, and Aleksander Madry. 2019. Robustness may be at odds with accuracy. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.
+Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. 2019. Universal adversarial triggers for attacking and analyzing NLP. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2153-2162, Hong Kong, China. Association for Computational Linguistics.
+Xiaosen Wang, Hao Jin, and Kun He. 2019. Natural language adversarial attacks and defenses in word level. arXiv preprint arXiv:1909.06723.
+Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
+Yuan Zang, Fanchao Qi, Chenghao Yang, Zhiyuan Liu, Meng Zhang, Qun Liu, and Maosong Sun. 2020. Word-level textual adversarial attacking as combinatorial optimization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6066-6080, Online. Association for Computational Linguistics.
+Jingfeng Zhang, Xilie Xu, Bo Han, Gang Niu, Lizhen Cui, Masashi Sugiyama, and Mohan Kankanhalli. 2020. Attacks which do not kill training make adversarial learning stronger. arXiv preprint arXiv:2002.11242.
+Yi Zhou, Xiaoqing Zheng, Cho-Jui Hsieh, Kai-wei Chang, and Xuanjing Huang. 2020. Defense against
+
+adversarial attacks in nlp via dirichlet neighborhood ensemble. arXiv preprint arXiv:2006.11627.
+Yichao Zhou, Jyun-Yu Jiang, Kai-Wei Chang, and Wei Wang. 2019. Learning to discriminate perturbations for blocking adversarial attacks in text classification. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4904-4913, Hong Kong, China. Association for Computational Linguistics.
+Chen Zhu, Yu Cheng, Zhe Gan, Siqi Sun, Tom Goldstein, and Jingjing Liu. 2020. Freelb: Enhanced adversarial training for natural language understanding. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
+
+# A Appendix
+
+# A.1 Experimental Details
+
+All of the code was written in Python and is available at https://github.com/Mivg/robust_transformers. The models are trained with the transformers library (Wolf et al., 2020). Whenever offline augmentation was used, the resulting adversarial samples were added to the training set and shuffled before training a new model with the same hyper-parameters as the baseline. Thus, the model is trained on $N \times L$ samples, where $N$ is the original number of samples and $L$ is the number of augmentations added per sample. For online augmentation, we run two parallel data loaders with different shuffling, each with half the required batch size. We then attack the samples in one half-batch and concatenate the most successful attacks to the other half-batch. The model is fed the newly constructed batch, with both halves weighted identically. Here, we consider a full epoch to be one in which every sample has passed through the model both as a perturbed and as an unperturbed sample. As such, the model is trained on $2N$ samples. For each dataset, we use the default train-dev split as described in the paper, and report accuracy on the development set. We train with the hyper-parameters described below:
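The online augmentation batching described above can be sketched as follows. This is illustrative only: plain Python lists stand in for the two data loaders, and `attack` is a hypothetical callable returning the most successful adversarial perturbation of an input.

```python
import random

def online_augmented_batches(dataset, batch_size, attack, seed=0):
    """Yield batches built from two independently shuffled streams:
    one half-batch is used as-is, the other half-batch is replaced by
    its adversarial perturbations. Both halves are weighted equally,
    so one pass sees every sample once clean and once perturbed."""
    rng = random.Random(seed)
    clean = list(dataset)
    to_attack = list(dataset)
    rng.shuffle(clean)
    rng.shuffle(to_attack)
    half = batch_size // 2
    for i in range(0, len(dataset) - half + 1, half):
        clean_half = clean[i:i + half]
        attacked_half = [(attack(x), y) for x, y in to_attack[i:i + half]]
        yield clean_half + attacked_half
```

Because each stream is consumed once per epoch, the model is trained on $2N$ samples per epoch, matching the accounting above.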
+
+SST-2: We fine-tuned a pre-trained cased BERT-BASE (Devlin et al., 2019) with max seq length $= 128$ over Nvidia Titan XP GPU for three epochs with batch size of 32 and learning rate of $2e - 5$ .
+
+IMDB: We fine-tuned a pre-trained cased BERT-BASE (Devlin et al., 2019) with max seq length=480 over Nvidia Titan XP GPU for three epochs with batch size of 48 and learning rate of $2e - 5$ .
+
+BoolQ: We fine-tuned a pre-trained ROBERTALARGE (Liu et al., 2019) for BoolQ with max seq length $= 480$ over Nvidia GTX 3090 GPU for three epochs with batch size of 48 and learning rate of $1e - 5$ .
+
+For each parameter choice reported in Table 2, we ran three different experiments with different random initialization, and reported the mean results. The respective standard deviations are given in Table 5. To finetune the models using the FreeLB (Zhu et al., 2020) method, we adapted the implementation from https://github.com/zhuchen03/FreeLB and used the following parameters:
+
+SST-2: init-magnitude $= 0.6$ , adversarial-steps $= 12$ , adversarial-learning-rate $= 0.1$ and $l_{2}$ norm with no limit on the norm.
+
+| Model | Accuracy (SST-2) | Accuracy (IMDB) | Accuracy (BoolQ) | Robust Acc. (SST-2) | Robust Acc. (IMDB) | Robust Acc. (BoolQ) |
| --- | --- | --- | --- | --- | --- | --- |
| Baseline | ±0.1 | ±0.1 | ±1.3 | ±0.4 | ±0.6 | ±0.9 |
| FREELB | ±0.2 | ±0.1 | ±0.4 | ±0.5 | ±1.0 | ±1.1 |
| RANDOFF-1 | ±0.3 | ±0.1 | ±1.8 | ±0.5 | ±1.4 | ±1.8 |
| RANDOFF-4 | ±0.7 | ±0.1 | ±0.5 | ±0.6 | ±1.9 | ±0.5 |
| RANDOFF-8 | ±0.2 | ±0.1 | ±0.8 | ±0.7 | ±2.1 | ±0.8 |
| RANDOFF-12 | ±0.6 | ±0.1 | ±1.0 | ±0.5 | ±1.4 | ±1.0 |
| TxFOFF | ±0.6 | - | - | ±0.3 | - | - |
| BFFOFF | ±0.3 | - | ±0.3 | ±0.3 | - | ±1.8 |
| RANDOM | ±0.1 | - | - | ±0.3 | - | - |
| TxFON | ±0.0 | - | - | ±0.3 | - | - |
| BFFON | ±0.5 | - | - | ±0.6 | - | - |
+
+Table 5: Standard deviation on the experiments reported in Table 2. Missing cells indicate a single run was used due to the long training time.
+
+Figure 5: Success rate of different attacks against BoolQ/IMDB BASELINE as a function of the budget.
+
+IMDB: init-magnitude $= 0.2$ , adversarial-steps $= 4$ , adversarial-learning-rate $= 0.2$ and $l_{2}$ norm with no limit on the norm.
+
+BoolQ: init-magnitude $= 0.2$ , adversarial-steps $= 4$ , adversarial-learning-rate $= 0.2$ and $l_{2}$ norm with no limit on the norm.
+
+BFF implementation For the factorization phase of BFF, we use $\tau \sim \operatorname{Syn}(w)$ with uniform sampling. We find that while using an out-of-vocabulary masking token is useful for computing word salience, it is less suitable here, as we are interested in the model's over-sensitivity to perturbations in the exact phrasing of the word. Also, in contrast to TxF, which is optimistic and factorizes the attack space only once, BFF factorizes the space after every step. Namely, optimistic greedy search plans the entire search path by evaluating all permissible single-word substitutions. Let $x_{w_i \to w}$ denote the utterance $x$ where the word $w_i$ is replaced with a synonym $w \in \operatorname{Syn}(w_i)$ . The optimistic greedy algorithm scores each word $w_i$ in the utterance with $s(w_i) := \min_{w \in \operatorname{Syn}(w_i)} s_A(x_{w_i \to w})$ ; that is, the score of a word is the score of its best substitution, and this substitution is also stored. Then, the algorithm sorts utterance positions by $s(w_i)$ in ascending order, which defines the entire search path: in each step, the algorithm moves to the next position in the sorted list and uses the best substitution stored for that position. Fig. 5 shows the benefit from each of these modifications.
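For contrast with BFF, the optimistic greedy planning described above can be sketched as below. This is an illustrative sketch, not our released code; `score` plays the role of $s_A$ and lower scores are better for the attacker.

```python
def optimistic_greedy_path(tokens, synonyms, score):
    """Plan the entire search path up front (the 'optimistic' strategy):
    score every position by its best single substitution, sort positions
    ascending, then apply the stored substitutions in that fixed order."""
    best = {}  # position -> (score of best substitution, substitution)
    for i, w in enumerate(tokens):
        cands = synonyms.get(w, [])
        if not cands:
            continue
        scored = [(score(tokens[:i] + [c] + tokens[i + 1:]), c) for c in cands]
        best[i] = min(scored)
    # the fixed search path: positions ordered by their optimistic score
    path = sorted(best, key=lambda i: best[i][0])
    cur = list(tokens)
    for i in path:
        cur[i] = best[i][1]        # apply the substitution stored for i
        yield list(cur), score(cur)
```

The optimism is the assumption that per-position scores, measured on the unperturbed utterance, remain informative after other positions have already been substituted; BFF drops this assumption by re-factorizing after every step.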
+
+Budget Effect Intuitively, higher budgets better approximate an exhaustive search, and thus the robustness evaluation, being an upper bound, should approach the true value. However, due to the lack of backtracking in some of the attack strategies, they may plateau early on. In this work, we used $B = 1000$ for all training phases and $B = 2000$ for the robustness evaluation. Empirically, this gives a good estimate of the upper bound on the model's robust accuracy, while constraining the computational power needed for the experiments. A natural question is how much tighter the bounds would be given a larger budget. Fig. 6 depicts an evaluation of the strategies' success rates over the same models as in Fig. 4 with a larger budget. As can be seen, while the RANDOM attack and TxF plateau, BFF variants as well as GENATTACK are able to exploit the larger budget to fool the model in more cases. This is especially true on IMDB, where the search space is considerably larger. We expect this trend of tighter bounds to continue with ever larger budgets, though we note that the rate of improvement decreases with budget and that the ranking between strategies remains unchanged. Therefore, we conclude that a budget of 2,000 suffices for comparing strategies and for measuring robustness improvements.
+
+# A.2 Attack Space Implementation Details
+
+As described in §5.2, we use the synonyms dictionary defined by Alzantot et al. (2018). In particular, we use the pre-computed set of those synonyms given by Jia et al. (2019) as our bases for $\operatorname{Syn}(w)$ . We pre-process the entire development and training
+
+
+Figure 6: Success rate of different attacks against BoolQ/IMDB BASELINE as a function of the budget.
+
+data and store, for each utterance, the set $\text{Syn}_{\Phi}(w)$ , avoiding the need to employ large language models during training and robustness evaluation. For every word $w_i \in x$ in an utterance, and for every $\bar{w}_i \in \text{Syn}(w_i)$ , we evaluate $\Phi(w_i, \bar{w}_i)$ as follows:
+
+1. With the same sequences as above, we validate that $POS(w_{i}) \equiv POS(\bar{w}_{i})$ according to spaCy's (Honnibal et al.) POS tagger.
+2. With a window of size 101, we validate that $\mathrm{PPL}(x) / \mathrm{PPL}(\bar{x})\geq 0.8$ where $\mathrm{PPL}(\cdot)$ is the perplexity of the sequence as given by a pre-trained GPT-2 model (Radford et al., 2019).
+3. For BoolQ only, we also use spaCy's POS tagger to tag all content words (namely NOUN, PROPN, ADV, and ADJ) in the question. We then restrict all those words from being perturbed in the passage.
+4. Following Jin et al. (2020), we take a window of size 15 around the word, and validate with USE (Cer et al., 2018) that the semantic similarity between the unperturbed sequence $(w_{i-7},\ldots,w_i,\ldots,w_{i+7})$ and the perturbed sequence $(w_{i-7},\ldots,\bar{w}_i,\ldots,w_{i+7})$ is at least 0.7.
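Putting the four checks together, a sketch of the filter $\Phi$ might look like the following. All names are illustrative assumptions: `pos_tag`, `ppl` and `sim` are placeholders for the spaCy POS tagger, GPT-2 perplexity and USE similarity; the real pipeline is not reproduced here.

```python
def passes_filter(x, i, w_bar, pos_tag, ppl, sim, content_words=frozenset()):
    """Return True iff replacing x[i] with w_bar passes all four checks.

    pos_tag, ppl and sim are injected callables standing in for spaCy,
    a GPT-2 perplexity model, and USE similarity (illustrative sketch).
    """
    w = x[i]
    if w in content_words:                 # step 3: protected content words
        return False
    if pos_tag(w) != pos_tag(w_bar):       # step 1: POS must be preserved
        return False
    lo, hi = max(0, i - 50), i + 51        # step 2: window of size 101
    orig, pert = x[lo:hi], x[lo:i] + [w_bar] + x[i + 1:hi]
    if ppl(orig) / ppl(pert) < 0.8:        # perplexity ratio threshold
        return False
    lo, hi = max(0, i - 7), i + 8          # step 4: window of size 15
    if sim(x[lo:hi], x[lo:i] + [w_bar] + x[i + 1:hi]) < 0.7:
        return False
    return True
```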
+
+# A.3 Genetic Attack Implementation Details
+
+Our implementation of GENATTACK, presented by Alzantot et al. (2018), is based on https://github.com/nesl/nlp_adversarial_examples/blob/master/attacks.py, but uses our attack space rather than the original one presented there. For evaluation we used the hyperparameters as defined by the paper, namely population size $p := 20$ , maximum generations $g := 100$ , and softmax temperature $= 0.3$ . Note that we did not need to limit the number of candidate synonyms considered, as this was already done in the attack space construction. However, we made the following modifications to the original algorithm in order to adapt it to our setting.
+
+Maximal modification constraints While the original algorithm presented by Alzantot et al. (2019) contained a clipping phase, where mutated samples were clipped to match a maximal norm constraint, the adapted version for discrete attacks presented in Alzantot et al. (2018) did not. As we wish to limit the allowed number of perturbations for any single input utterance, and the crossover phase followed by the perturb sub-routine can easily overstep this limit, we added a post-perturb phase. Namely, in every generation, after the crossover and mutation (i.e., perturb) sub-routines create a candidate child, if the total number of perturbed words exceeds the limit, we revert perturbations of uniformly randomly chosen words until the limit is met. This step introduces another level of randomness into the process. We experimented with reverting based on the replacement probability used in the perturb sub-routine, but this led to sub-par results.
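The reversion step can be sketched as follows; `enforce_limit` and its argument names are illustrative, not the original implementation:

```python
import random

def enforce_limit(original, candidate, max_mods, rng=random):
    """Revert uniformly random modified positions of a candidate child
    until at most `max_mods` words differ from the original utterance
    (illustrative sketch of the post-perturb phase)."""
    child = list(candidate)
    modified = [i for i, (a, b) in enumerate(zip(original, child)) if a != b]
    while len(modified) > max_mods:
        i = modified.pop(rng.randrange(len(modified)))  # pick uniformly
        child[i] = original[i]                          # revert this word
    return child
```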
+
+Improved Efficiency In addition to estimating the fitness function of each child in a generation, which requires a forward pass through the attacked model, Alzantot et al. (2018) also used a greedy step in the perturb sub-routine to estimate the fitness of each synonym mutation for a chosen position. This results in an extremely high number of forward passes through the model, specifically $\mathcal{O}(g\cdot p\cdot (k + 1))$ , which is orders of magnitude larger than our allowed budget of 2000. However, many of the passes are redundant, so by caching previous results, the attack strategy can better utilize its allocated budget, resulting in a significantly better success rate with better efficiency.
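The caching idea can be sketched with a thin wrapper around the attacked model; the class, `model_fn` and the budget handling are illustrative assumptions, not the paper's implementation:

```python
class CachedAttackModel:
    """Wrap the attacked model so that repeated queries on the same
    utterance do not consume budget twice (illustrative sketch)."""

    def __init__(self, model_fn, budget):
        self.model_fn, self.budget = model_fn, budget
        self.cache, self.calls = {}, 0

    def score(self, utterance):
        key = tuple(utterance)
        if key not in self.cache:
            if self.calls >= self.budget:
                raise RuntimeError("attack budget exhausted")
            self.calls += 1                      # one real forward pass
            self.cache[key] = self.model_fn(utterance)
        return self.cache[key]                   # cached passes are free
```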
+
+# A.4 Attack Space in Prior Work
+
+Examining the attack space proposed in Jin et al. (2020), which includes a larger synonym dictionary and a different filtering function $\Phi(\cdot)$ , we observe that many adversarial examples are difficult to understand or are not label-preserving. Table 6 shows examples from an implementation of the attack space of the recent TEXTFOOLER (Jin et al., 2020). We observe that while in IMDB the labels remain mostly unchanged, many passages are difficult to understand. Moreover, we observe frequent label flips in other datasets, such as in the SST-2 examples, as well as perturbations in BoolQ that leave the question unanswerable.
+
+| Passage: Table of prime factors – The number 1 is called a unit. It has no **incipient** [prime] factors and is neither **first** [prime] nor composite.
+Question: is 1 a prime factor of every number
+Answer: False |
+| Passage: Panama Canal – The **nouvelle** [new] locks **commences** [opened] for commercial **vehicular** [traffic] on 26 June 2016, and the first **yacht** [ship] to **intersecting** [cross] the canal using the third set of locks was a modern New Panamax vessel, the Chinese-owned container **warships** [ship] Cosco Shipping Panama. The original locks, now over 100 **centuries** [years] old, **give** [allow] **engineer** [engineers] **best** [greater] access for maintenance, and are **hoped** [projected] to continue **workplace** [operating] indefinitely.
+Question: is the old panama canal still in use
+Answer: True |
+| Passage: Chevrolet Avalanche – The Chevrolet Avalanche is a four-door, five or **eight** [six] **commuter** [passenger] **harvest** [pickup] **trucking** [truck] **stocks** [sharing] GM's long-wheelbase **frame** [chassis] used on the Chevrolet Suburban and Cadillac Escalade ESV. Breaking with a long-standing tradition, the Avalanche was not **affordable** [available] as a GMC, but only as a Chevrolet.
+Question: is there a gmc version of the avalanche
+Answer: False |
+| Sentence: I've been waiting for this movie for SO many years! The best part is that it **decedent** [lives] up to my visions! This is a MUST SEE for any Tenacious D or true Jack Black fan. It's just **once** [so] great to see JB, KG and Lee on the big screen! It's not a **authentic** [true] story, but who cares. The D is the greatest band on earth! I had the soundtrack to the movie last week and **heeded** [listed] to it non-stop. To see the movie was **unadulterated** [pure] bliss for me and my hubby. We've both met Jack and Kyle after 2 different Tenacious D concerts and also saw them when they toured with Weezer. We left that concert after the D was done playing. Nobody can top their show! Long live the D!!! :D
+Answer: True |
+| Sentence: Sweet, **kidding** [entertaining] tale of a young 17 1/2 year old boy, controlled by an overbearing religious mother and withdrawn father, and how he finds himself through his work with a retired, eccentric and tragic actress. Very **better** [well] acted, especially by Julie Walters. Rupert Grint plays the role of the teenage boy well, showing his talent will last longer than the Harry Potter series of films. Laura Linney plays his ruthlessly strict mother without a hint of redemption, so there's no room to like her at all. But the film is a **awfully** [very] **antics** [entertaining] film, made well by the British in the style of the likes of Keeping Mum and Calendar Girls.
+Answer: True |
+| Sentence: Enormous **adjourned** [suspension] of disbelief is required where Will's "genius" is concerned. Not just in math-he is also very well **reads** [read] in economic history, able to out-shrink several shrinks, etc etc. No, no, no. I don't buy it. While they're at it, they might as well have him wearing a big "S" on his chest, flying faster than a jet plane and stopping bullets.<br /><br />Among other problems...real genius (shelving for the moment the problem of what it really is, and whether it deserves such mindless homage) doesn't simply appear *ex nihilo*. It isn't ever so multi-faceted. And it is very **virtually** [rarely] **appreciates** [appreciated] by contemporaries.<br />Better to have made Will a basketball prodigy. Except that Damon's too short.
+Answer: False |
+| Sentence: it proves quite **unconvincing** [compelling] as an intense , brooding character study .
+Answer: True |
+| Sentence: an **sensible** [unwise] amalgam of broadcast news and vibes .
+Answer: False |
+| Sentence: if you dig on david mamet 's mind tricks ... rent this movie and **iike** [enjoy] !
+Answer: True |
+
+Table 6: Adversarial examples, which are difficult to understand or not label-preserving, found for BoolQ/IMDB/SST-2 with the attack space from Jin et al. (2020). In **bold** are the substituting words and in brackets the original word.
\ No newline at end of file
diff --git a/achievingmodelrobustnessthroughdiscreteadversarialtraining/images.zip b/achievingmodelrobustnessthroughdiscreteadversarialtraining/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..bc15ac54b81b80fcf7aed299312bea13266aff0e
--- /dev/null
+++ b/achievingmodelrobustnessthroughdiscreteadversarialtraining/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:91633a02bc0b41581d4c4ae4ce8724d75371d4ff39d9ca5f451df4ed8e034ef7
+size 782682
diff --git a/achievingmodelrobustnessthroughdiscreteadversarialtraining/layout.json b/achievingmodelrobustnessthroughdiscreteadversarialtraining/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..ca9271aa407139f9a17684be73597afc571be215
--- /dev/null
+++ b/achievingmodelrobustnessthroughdiscreteadversarialtraining/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a6d1f969d58f11f530737771f1d9311f9f0d1b3904916f33dbc1fb1bc27e91bd
+size 543331
diff --git a/activeeaactivelearningforneuralentityalignment/6c276802-f88b-4712-a9bd-2af9ec2f28f3_content_list.json b/activeeaactivelearningforneuralentityalignment/6c276802-f88b-4712-a9bd-2af9ec2f28f3_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..eef4485426f7ae59ca7b9535d3eb766b30c9eac9
--- /dev/null
+++ b/activeeaactivelearningforneuralentityalignment/6c276802-f88b-4712-a9bd-2af9ec2f28f3_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cbebeeb3a46bd05e2c51ba901ae4a98c45c5cea22c278299368616e9869e22ab
+size 85239
diff --git a/activeeaactivelearningforneuralentityalignment/6c276802-f88b-4712-a9bd-2af9ec2f28f3_model.json b/activeeaactivelearningforneuralentityalignment/6c276802-f88b-4712-a9bd-2af9ec2f28f3_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..1b15fe2680f779410e8381c1938362e142ce745e
--- /dev/null
+++ b/activeeaactivelearningforneuralentityalignment/6c276802-f88b-4712-a9bd-2af9ec2f28f3_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:61cd3704b5abe2b3b6303d5f15c75364f2adc2e574d3b5f345e3ec4ff82b929e
+size 102449
diff --git a/activeeaactivelearningforneuralentityalignment/6c276802-f88b-4712-a9bd-2af9ec2f28f3_origin.pdf b/activeeaactivelearningforneuralentityalignment/6c276802-f88b-4712-a9bd-2af9ec2f28f3_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..8991055940dc72ff91ed037c37f4b67b9a652250
--- /dev/null
+++ b/activeeaactivelearningforneuralentityalignment/6c276802-f88b-4712-a9bd-2af9ec2f28f3_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b77bf05440b0d21ec56e3611a2f320f6e7c86483f464ec429ee862f58918065b
+size 586182
diff --git a/activeeaactivelearningforneuralentityalignment/full.md b/activeeaactivelearningforneuralentityalignment/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..f11f9e80d38c0493a88811bf848edcb02e2bf057
--- /dev/null
+++ b/activeeaactivelearningforneuralentityalignment/full.md
@@ -0,0 +1,388 @@
+# ActiveEA: Active Learning for Neural Entity Alignment
+
+Bing Liu$^{1,2}$, Harrison Scells$^{1}$, Guido Zuccon$^{1}$, Wen Hua$^{1}$, Genghong Zhao$^{2}$
+
+$^{1}$The University of Queensland, Australia
+
+$^{2}$Neusoft Research of Intelligent Healthcare Technology, Co. Ltd., China
+
+{bing.liu, h.scells, g.zuccon, w.hua}@uq.edu.au
+
+zhaogenghong@neusoft.com
+
+# Abstract
+
+Entity Alignment (EA) aims to match equivalent entities across different Knowledge Graphs (KGs) and is an essential step of KG fusion. Current mainstream methods – neural EA models – rely on training with seed alignment, i.e., a set of pre-aligned entity pairs which are very costly to annotate. In this paper, we devise a novel Active Learning (AL) framework for neural EA, aiming to create highly informative seed alignment to obtain more effective EA models with less annotation cost. Our framework tackles two main challenges encountered when applying AL to EA:
+
+(1) How to exploit dependencies between entities within the AL strategy. Most AL strategies assume that the data instances to sample are independent and identically distributed. However, entities in KGs are related. To address this challenge, we propose a structure-aware uncertainty sampling strategy that can measure the uncertainty of each entity as well as its impact on its neighbour entities in the KG.
+(2) How to recognise entities that appear in one KG but not in the other KG (i.e., bachelors). Identifying bachelors would likely save annotation budget. To address this challenge, we devise a bachelor recognizer designed to alleviate the effect of sampling bias.
+
+Empirical results show that our proposed AL strategy can significantly improve sampling quality, with good generality across different datasets, EA models and amounts of bachelors.
+
+# 1 Introduction
+
+Knowledge Graphs (KGs) store entities and their relationships with a graph structure and are used as knowledge drivers in many applications (Ji et al., 2020). Existing KGs are often incomplete but complementary to each other. A popular approach used to tackle this problem is KG fusion, which attempts to combine several KGs into a single, comprehensive one. Entity Alignment (EA) is an essential
+
+
+Figure 1: An example of Entity Alignment.
+
+step for KG fusion: it identifies equivalent entities across different KGs, supporting the unification of their complementary knowledge. For example, in Fig. 1 Donald Trump and US in the first KG correspond to D.J. Trump and America respectively in the second KG. By aligning them, the political and business knowledge about Donald Trump can be integrated within one KG.
+
+Neural models (Chen et al., 2017, 2018; Wang et al., 2018; Cao et al., 2019) are the current state-of-the-art in EA and are capable of matching entities in an end-to-end manner. Typically, these neural EA models rely on a seed alignment as training data which is very labour-intensive to annotate. However, previous EA research has assumed the availability of such seed alignment and ignored the cost involved with their annotation. In this paper, we seek to reduce the cost of annotating seed alignment data, by investigating methods capable of selecting the most informative entities for labelling so as to obtain the best EA model with the least annotation cost: we do so using Active Learning. Active Learning (AL) (Aggarwal et al., 2014) is a Machine Learning (ML) paradigm where the annotation of data and the training of a model are performed iteratively so that the sampled data is highly informative for training the model. Though many general AL strategies have been proposed (Settles, 2012; Ren et al., 2020), there are some unique challenges in applying AL to EA.
+
+The first challenge is how to exploit the dependencies between entities. In the EA task, neighbouring entities (context) in the KGs naturally affect each other. For example, in the two KGs of Fig. 1, we can infer US corresponds to America if we already know that Donald Trump and D.J. Trump refer to the same person: this is because a single person can only be the president of one country. Therefore, when we estimate the value of annotating an entity, we should consider its impact on its context in the KG. Most AL strategies assume data instances are independent and identically distributed, and cannot capture dependencies between entities (Aggarwal et al., 2014). In addition, neural EA models exploit the structure of KGs in different and implicit ways (Sun et al., 2020b). It is not easy to find a general way of measuring the effect of entities on others.
+
+The second challenge is how to recognize the entities in a KG that do not have a counterpart in the other KG (i.e., bachelors). In the first KG of Fig. 1, Donald Trump and US are matchable entities while New York City and Republican Party are bachelors. Selecting bachelors to annotate will not lead to any aligned entity pair. The impacts of recognizing bachelors are twofold:
+
+1. From the perspective of data annotation, recognizing bachelors would automatically save annotation budget (because annotators will try to seek a corresponding entity for some time before giving up) and allow annotators to put their effort in labelling matchable entities. This is particularly important for the existing neural EA models, which only consider matchable entities for training: thus selecting bachelors in these cases is a waste of annotation budget.
+2. From the perspective of EA, bachelor recognition remedies the limitation of existing EA models that assume all entities to align are matchable, and would enable them to be better used in practice (i.e., real-life KGs where bachelors are popular).
+
+To address these challenges, we propose a novel AL framework for EA. Our framework follows the typical AL process: entities are sampled iteratively, and in each iteration a batch of entities with the highest acquisition scores are selected. Our novel acquisition function consists of two components: a structure-aware uncertainty measurement module and a bachelor recognizer. The structure-aware uncertainty can reflect the uncertainty of a single
+
+entity as well as the influence of that entity in the context of the KG, i.e., how much uncertainty it can help its neighbours eliminate. In addition, we design a bachelor recognizer based on Graph Convolutional Networks (GCNs). Because the bachelor recognizer is trained with the sampled data and used to predict the remaining data, it may suffer from bias (w.r.t. the preference of the sampling strategy) between these two groups of data. We apply model ensembling to alleviate this problem.
+
+Our major contributions in this paper are:
+
+1. A novel AL framework for neural EA, which can produce more informative data for training EA models while reducing the labour cost involved in annotation. To our knowledge, this is the first AL framework for neural EA.
+2. A structure-aware uncertainty sampling strategy, which models uncertainty sampling and the relation between entities in a single AL strategy.
+3. An investigation of bachelor recognition, which can reduce the cost of data annotation and remedy the defect of existing EA models.
+4. Extensive experimental results that show our proposed AL strategy can significantly improve the quality of data sampling and has good generality across different datasets, EA models, and bachelor quantities.
+
+# 2 Background
+
+# 2.1 Entity Alignment
+
+Entity alignment is typically performed between two KGs $\mathcal{G}^1$ and $\mathcal{G}^2$ , whose entity sets are denoted as $\mathcal{E}^1$ and $\mathcal{E}^2$ respectively. The goal of EA is to find the equivalent entity pairs $\mathcal{A} = \{(e^1, e^2) \in \mathcal{E}^1 \times \mathcal{E}^2 | e^1 \sim e^2\}$ , where $\sim$ denotes an equivalence relationship and is usually assumed to be a one-to-one mapping. In supervised and semi-supervised models, a subset of the alignment $\mathcal{A}^{seed} \subset \mathcal{A}$ , called seed alignment, is annotated manually beforehand and used as training data. The remaining alignments form the test set $\mathcal{A}^{test} = \mathcal{A} \setminus \mathcal{A}^{seed}$ . The core of an EA model $F$ is a scoring function $F(e^1, e^2)$ , which takes two entities as input and returns a score for how likely they are to match. The effectiveness of an EA model is essentially determined by $\mathcal{A}^{seed}$ and we thus denote it as $m(\mathcal{A}^{seed})$ .
+
+# 2.2 Active Learning
+
+An AL framework consists of two components: (1) an oracle (annotation expert), which provides labels for the queries (data instances to label), and
+
+
+Figure 2: Overview of ActiveEA.
+
+(2) a query system, which selects the most informative data instances as queries. In the pool-based scenario, there is a pool of unlabelled data $\mathcal{U}$ . Given a budget $B$ , some instances $\mathcal{U}_{\pi,B}$ are selected from the pool following a strategy $\pi$ and sent to the experts for annotation, producing a training set $\mathcal{L}_{\pi,B}$ . We train the model on $\mathcal{L}_{\pi,B}$ , and the effectiveness $m(\mathcal{L}_{\pi,B})$ of the obtained model reflects how good the strategy $\pi$ is. The goal is to design an optimal strategy $\pi_*$ such that $\pi_* = \operatorname{argmax}_{\pi} m(\mathcal{L}_{\pi,B})$ .
+
+# 3 ActiveEA: Active Entity Alignment
+
+# 3.1 Problem Definition
+
+Given two KGs $\mathcal{G}^1, \mathcal{G}^2$ with entity sets $\mathcal{E}^1, \mathcal{E}^2$ , an EA model $F$ , a budget $B$ , the AL strategy $\pi$ is applied to select a set of entities $\mathcal{U}_{\pi,B}$ so that the annotators label the counterpart entities to obtain the labelled data $\mathcal{L}_{\pi,B}$ . $\mathcal{L}_{\pi,B}$ consists of annotations of matchable entities $\mathcal{L}_{\pi,B}^{+}$ , which form the seed alignment $\mathcal{A}_{\pi,B}^{seed}$ , and bachelors $\mathcal{L}_{\pi,B}^{-}$ . We measure the effectiveness $m(\mathcal{A}_{\pi,B}^{seed})$ of the AL strategy $\pi$ by training the EA model on $\mathcal{A}_{\pi,B}^{seed}$ and then evaluating it with $\mathcal{A}_{\pi,B}^{test} = \mathcal{A} \setminus \mathcal{A}_{\pi,B}^{seed}$ . Our goal is to design an optimal entity sampling strategy $\pi_*$ so that $\pi_* = \operatorname{argmax}_{\pi} m(\mathcal{A}_{\pi,B}^{seed})$ .
+
+In our annotation setting, we select entities from one KG and then let the annotators identify their counterparts from the other KG. Under this setting, we assume the pool of unlabelled entities is initialized with $\mathcal{U} = \mathcal{E}^1$ . The labelled data will be like $\mathcal{L}_{\pi,B}^{+} = \{(e^{1} \in \mathcal{E}^{1}, e^{2} \in \mathcal{E}^{2})\}$ and $\mathcal{L}_{\pi,B}^{-} = \{(e^{1} \in \mathcal{E}^{1}, null)\}$ .
+
+# 3.2 Framework Overview
+
+The whole annotation process, as shown in Fig. 2, is carried out iteratively. In each iteration, the query system selects $N$ entities from $\mathcal{U}$ and sends them to the annotators. The query system includes (1) a structure-aware uncertainty measurement module $f^{su}$ , which combines uncertainty sampling with the structure information of the KGs, and (2) a
+
+bachelor recognizer $f^b$ , which helps avoid selecting bachelor entities. The final acquisition $f^{\pi}$ used to select which entities to annotate is obtained by combining the outputs of these two modules. After the annotators assign the ground-truth counterparts to the selected entities, the new annotations are added to the labelled data $\mathcal{L}$ . With the updated $\mathcal{L}$ , the query system updates the EA model and the bachelor recognizer. This process repeats until no budget remains. To simplify the presentation, we omit the sampling iteration when explaining the details.
+
+# 3.3 Structure-aware Uncertainty Sampling
+
+We define the influence of an entity on its context as the amount of uncertainty it can help its neighbours remove. As such, we formulate the structure-aware uncertainty $f^{su}$ as
+
+$$
+f^{su}\left(e_i^1\right) = \alpha \sum_{e_i^1 \rightarrow e_j^1,\, e_j^1 \in \mathcal{N}_i^{out}} w_{ij}\, f^{su}\left(e_j^1\right) + (1 - \alpha)\, \frac{f^u(e_i^1)}{\sum_{e^1 \in \mathcal{E}^1} f^u(e^1)}, \tag{1}
+$$
+
+where $\mathcal{N}_i^{out}$ is the outbound neighbours of entity $e_i^1$ (i.e. the entities referred to by $e_i^1$ ) and $w_{ij}$ measures the extent to which $e_i^1$ can help $e_j^1$ eliminate uncertainty. The parameter $\alpha$ controls the trade-off between the impact of entity $e_i^1$ on its context (first term in the equation) and the normalized uncertainty (second item). Function $f^u(e^1)$ refers to the margin-based uncertainty of an entity. For each entity $e^1$ , the EA model can return the matching scores $F(e^1, e^2)$ with all unaligned entities $e^2$ in $\mathcal{G}^2$ . Since these scores in existing works are not probabilities, we exploit the margin-based uncertainty measure for convenience, outlined in Eq. 2:
+
+$$
+f^u\left(e^1\right) = -\left(F\left(e^1, e_*^2\right) - F\left(e^1, e_{**}^2\right)\right) \tag{2}
+$$
+
+where $F(e^{1}, e_{*}^{2})$ and $F(e^{1}, e_{**}^{2})$ are the highest and second highest matching scores respectively. A large margin represents a small uncertainty.
+
+For each entity $e_j^1$ , we assume its inbound neighbours can help it clear all uncertainty. Then, we have $\sum_{e_i^1 \to e_j^1, e_i^1 \in \mathcal{N}_j^{in}} w_{ij} = 1$ , where $\mathcal{N}_j^{in}$ is the inbound neighbour set of $e_j^1$ . In this work, we assume all inbound neighbours have the same impact on $e_j^1$ . In this case, $w_{ij} = \frac{1}{\mathrm{degree}(e_j^1)}$ , where $\mathrm{degree}(\cdot)$ returns the in-degree of an entity.
+
+Using matrix notation, Eq. 1 can be rewritten as
+
+$$
+\mathbf{f}^{su} = \alpha \mathbf{W} \mathbf{f}^{su} + (1 - \alpha) \frac{\mathbf{f}^u}{|\mathbf{f}^u|},
+$$
+
+where $\mathbf{f}^{su}$ is the vector of structure-aware uncertainties, $\mathbf{f}^u$ is the vector of uncertainties, and $\mathbf{W}$ is a matrix encoding influence between entities, i.e., $w_{ij} > 0$ if $e_i^1$ is linked to $e_j^1$ , otherwise 0.
+
+As $\mathbf{W}$ is a stochastic matrix (Gagniuc, 2017), we solve Eq. 1 iteratively, which can be viewed as the power iteration method (Franceschet, 2011), similar to Pagerank (Brin and Page, 1998). Specifically, we initialize the structure-aware uncertainty vector as $\mathbf{f}_0^{su} = \mathbf{f}^u$ . Then we update $\mathbf{f}_t^{su}$ iteratively:
+
+$$
+\mathbf{f}_t^{su} = \alpha \mathbf{W} \mathbf{f}_{t-1}^{su} + (1 - \alpha) \frac{\mathbf{f}^u}{|\mathbf{f}^u|}, \quad t = 1, 2, 3, \ldots
+$$
+
+The computation ends when $|\mathbf{f}_t^{su} - \mathbf{f}_{t - 1}^{su}| < \epsilon$ .
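Under the stated assumption $w_{ij} = 1/\mathrm{degree}(e_j^1)$, the power iteration can be sketched with NumPy. This is a minimal sketch: `adj`, the in-degree guard, and the L1 convergence test are illustrative choices, not the authors' code.

```python
import numpy as np

def structure_aware_uncertainty(adj, f_u, alpha=0.5, eps=1e-8):
    """Solve Eq. 1 by power iteration (illustrative sketch).

    adj[i, j] = 1 when entity i links to entity j; the weight
    w_ij = 1 / in-degree(e_j), so W is column-stochastic on linked nodes.
    """
    indeg = adj.sum(axis=0)
    W = adj / np.maximum(indeg, 1)          # divide column j by in-degree(j)
    base = (1 - alpha) * f_u / np.abs(f_u).sum()
    f = f_u.astype(float).copy()            # f_0^{su} = f^u
    while True:
        f_next = alpha * (W @ f) + base     # one iteration of Eq. 1
        if np.abs(f_next - f).sum() < eps:  # stop at the fixed point
            return f_next
        f = f_next
```

For a two-entity toy graph with a single edge $e_0^1 \to e_1^1$ and uniform base uncertainty, the source entity ends up with the larger structure-aware score, since it can help its neighbour eliminate uncertainty.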
+
+# 3.4 Bachelor Recognizer
+
+The bachelor recognizer is formulated as a binary classifier, which is trained with the labelled data and used to predict the unlabelled data. One challenge faced here is the bias between the labelled data and the unlabelled data caused by the sampling strategy (since it is not random sampling). We alleviate this issue with a model ensemble.
+
+# 3.4.1 Model Structure
+
+We apply two GCNs (Kipf and Welling, 2017; Hamilton et al., 2017) as the encoders to get the entity embeddings $\mathbf{H}^{1} = \mathbf{GCN}^{1}(\mathcal{G}^{1}),\mathbf{H}^{2} = \mathbf{GCN}^{2}(\mathcal{G}^{2})$ , where each row in $\mathbf{H}^1$ or $\mathbf{H}^2$ corresponds to a vector representation of a particular entity. The two GCN encoders share the same structure but have separate parameters. With each GCN encoder, each entity $e_i$ is first assigned a vector representation $\mathbf{h}_i^{(0)}$ . Then contextual features of each entity are extracted:
+
+$$
+\mathbf{h}_i^{(l)} = \operatorname{norm}\Bigl(\sigma\Bigl(\sum_{j \in \mathcal{N}_i \cup \{i\}} \mathbf{V}^{(l)} \mathbf{h}_j^{(l-1)} + \mathbf{b}^{(l)}\Bigr)\Bigr),
+$$
+
+where $l$ is the layer index, $\mathcal{N}_i$ is the set of neighbouring entities of entity $e_i$ , $\sigma$ is the activation function, $\mathrm{norm}(\cdot)$ is a normalization function, and $\mathbf{V}^{(l)}, \mathbf{b}^{(l)}$ are the parameters of the $l$ -th layer. The representations of each entity $e_i$ obtained in all GCN layers are concatenated into a single representation: $\mathbf{h}_i = \mathrm{concat}(\mathbf{h}_i^{(0)}, \mathbf{h}_i^{(1)}, \dots, \mathbf{h}_i^{(L)})$ , where $L$ is the number of GCN layers.
+
+After getting the representations of entities, we compute the similarities of each entity in $\mathcal{E}^1$ with all entities in $\mathcal{E}^2$ ( $\mathbf{S} = \mathbf{H}^1 \cdot \mathbf{H}^{2T}$ ) and obtain its corresponding maximum matching score as in
+
+$f^{s}(e_{i}^{1}) = \max (\mathbf{S}_{i,:}).$ The entity $e_i^1$ whose maximum matching score is greater than a threshold $\gamma$ is considered to be a matchable entity, as in $f^{b}(e_{i}^{1}) = \mathbb{1}_{f^{s}(e_{i}^{1}) > \gamma}$ ; otherwise it is a bachelor.
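The thresholding step $f^b$ can be sketched in a few lines of NumPy; the function and array names are illustrative:

```python
import numpy as np

def bachelor_predictions(H1, H2, gamma):
    """f^b: an entity in E^1 is matchable iff its best matching score
    against any entity of E^2 exceeds gamma (illustrative sketch)."""
    S = H1 @ H2.T                        # S = H^1 . H^{2T}, pairwise scores
    f_s = S.max(axis=1)                  # f^s: best match per entity e^1
    return (f_s > gamma).astype(int)     # 1 = matchable, 0 = bachelor
```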
+
+# 3.4.2 Learning
+
+In each sampling iteration, we train the bachelor recognizer with existing annotated data $\mathcal{L}$ containing matchable entities $\mathcal{L}^{+}$ and bachelors $\mathcal{L}^{-}$ . Furthermore, $\mathcal{L}$ is divided into a training set $\mathcal{L}^t$ and a validation set $\mathcal{L}^v$ .
+
+We optimize the parameters, including $\{\mathbf{V}^{(l)},\mathbf{b}^{(l)}\}_{1\leq l\leq L}$ of each GCN encoder and the threshold $\gamma$ , in two phases, sharing a similar idea with supervised contrastive learning (Khosla et al., 2020). In the first phase, we optimize the scoring function $f^s$ by minimizing the contrastive loss shown in Eq. 3.
+
+$$
+\text{loss} = \sum_{(e_i^1, e_j^2) \in \mathcal{L}^{t,+}} \left\| \mathbf{h}_i^1 - \mathbf{h}_j^2 \right\| + \beta \sum_{(e_{i'}^1, e_{j'}^2) \in \mathcal{L}^{t,neg}} \left[ \lambda - \left\| \mathbf{h}_{i'}^1 - \mathbf{h}_{j'}^2 \right\| \right]_+ \tag{3}
+$$
+
+Here, $\beta$ is a balance factor, $[\cdot]_{+}$ is $\max(0, \cdot)$ , and $\mathcal{L}^{t,neg}$ is the set of negative samples generated by negative sampling (Sun et al., 2018). For a given pre-aligned entity pair in $\mathcal{L}^{+}$ , each of its entities is substituted $N^{neg}$ times. The distance of negative samples is expected to be larger than the margin $\lambda$ . In the second phase, we freeze the trained $f^{s}$ and optimize $\gamma$ for $f^{b}$ . Optimizing $\gamma$ is easy, e.g. by simple grid search, so that $f^{b}$ achieves the highest performance on $\mathcal{L}^{v}$ (denoted as $q(f^{s}, \gamma, \mathcal{L}^{v})$ ) using:
+
+$$
+\gamma^* = \operatorname{argmax}_{\gamma} q(f^s, \gamma, \mathcal{L}^v).
+$$
+
+# 3.4.3 Model Ensemble for Sampling Bias
+
+The sampled data may be biased, since it was preferred by the sampling strategy rather than selected randomly. As a result, even if the bachelor recognizer is well trained with the sampled data, it may perform poorly on data yet to be sampled. We apply a model ensemble to alleviate this problem. Specifically, we divide $\mathcal{L}$ evenly into $K$ subsets. Then we apply $K$ -fold cross-validation to train $K$ scoring functions $\{f_1^s,\dots,f_K^s\}$ , each time using $K - 1$ subsets as the training set and the left-out portion as the validation set. Afterwards, we search for an effective threshold $\gamma$ :
+
+$$
+\gamma^* = \operatorname{argmax}_{\gamma} \frac{1}{K} \sum_{1 \leq k \leq K} q\left(f_k^s, \gamma, \mathcal{L}_k^v\right)
+$$
+
+At inference, we ensemble by averaging the $K$ scoring functions $f_{k}^{s}$ to form the final scoring function $f^{s}$ as in Eq. 4 and base $f^{b}$ on it.
+
+$$
+f^s\left(e_i^1\right) = \frac{1}{K} \sum_{1 \leq k \leq K} f_k^s\left(e_i^1\right) \tag{4}
+$$
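The threshold search over the $K$ folds can be sketched as a simple grid search. The names `fold_scores` and `fold_labels`, and the use of accuracy as the quality measure $q$, are illustrative assumptions:

```python
import numpy as np

def select_threshold(fold_scores, fold_labels, grid):
    """Grid-search gamma maximizing the mean quality q over the K
    held-out folds; fold_scores[k] holds f_k^s values on L_k^v and
    fold_labels[k] the matchable (1) / bachelor (0) ground truth.
    Illustrative sketch with accuracy as q."""
    def acc(scores, labels, g):
        return np.mean((scores > g).astype(int) == labels)
    mean_q = [np.mean([acc(s, y, g) for s, y in zip(fold_scores, fold_labels)])
              for g in grid]
    return grid[int(np.argmax(mean_q))]
```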
+
+# 3.5 Final Acquisition Function
+
+We combine our structure-aware uncertainty sampling with the bachelor recognizer to form the final acquisition function:
+
+$$
+f^{\pi}\left(e_i^1\right) = f^{su}\left(e_i^1\right) f^b\left(e_i^1\right)
+$$
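The batch-selection step implied by this acquisition function can be sketched as follows; `select_batch` and `pool` are illustrative names, not the authors' code:

```python
import numpy as np

def select_batch(f_su, f_b, pool, n):
    """Pick the N unlabelled entities maximizing f_pi = f_su * f_b
    (illustrative sketch; pool holds the candidate entity indices)."""
    f_pi = f_su * f_b                    # bachelors (f_b = 0) score zero
    order = sorted(pool, key=lambda i: f_pi[i], reverse=True)
    return order[:n]                     # top-N acquisition scores
```

Multiplying by $f^b$ means predicted bachelors are pushed to the bottom of the ranking, so the budget is spent on entities that are both uncertain and likely matchable.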
+
+# 4 Experimental Setup
+
+# 4.1 Sampling Strategies
+
+We construct several baselines for comparison: rand random sampling used by existing EA work degree selects entities with high degrees.
+
+pagerank (Brin and Page, 1998) measures the centrality of entities by considering both their degrees and the importance of their neighbours.
+
+betweenness (Freeman, 1977) scores an entity by the number of shortest paths that pass through it.
+
+uncertainty sampling selects entities that the current EA model cannot predict with confidence. Note that in this work we measure uncertainty using Eq. 2 for fair comparison.
+
+degree, pagerank and betweenness are purely topology-based and do not consider the current EA model. Conversely, uncertainty is based entirely on the current EA model and cannot capture the structural information of the KGs. We compare both our structure-aware uncertainty sampling (struct_uncert) and the full framework ActiveEA with the baselines listed above. We also examine the effect of the Bayesian Transformation, which aims to make deep neural models represent uncertainty more accurately (Gal et al., 2017).
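To make the topology-based baselines concrete, degree and PageRank scores can be computed on a toy adjacency matrix; the from-scratch power iteration below is our own illustrative sketch, not the implementation used in the paper.

```python
import numpy as np

# toy undirected KG over 5 entities; entity 0 is a hub
A = np.array([[0, 1, 1, 1, 0],
              [1, 0, 0, 0, 0],
              [1, 0, 0, 0, 0],
              [1, 0, 0, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)

degree = A.sum(axis=1)                 # "degree" strategy score

# "pagerank" strategy score via power iteration with damping factor d
d, n = 0.85, A.shape[0]
P = A / A.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
pr = np.full(n, 1.0 / n)
for _ in range(100):
    pr = (1 - d) / n + d * P.T @ pr    # teleport + follow-link step
```

Both centrality measures rank the hub entity first, so under these strategies it would be sent for annotation before its neighbours, regardless of what the current EA model believes.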
+
+# 4.2 EA Models
+
+We apply our ActiveEA framework to three different EA models, which form a representative spread of neural EA models and vary in their KG encoding, the information they consider, and their training method (Liu et al., 2020; Sun et al., 2018):
+
+BootEA (Sun et al., 2018) encodes the KGs with the translation model (Bordes et al., 2013), exploits the structure of KGs, and uses self-training.
+
+Alinet (Sun et al., 2020a) also exploits the structure of KGs but with a GCN-based KG encoder, and is trained in a supervised manner.
+
+RDGCN (Wu et al., 2019) trains a GCN in a supervised manner, like Alinet, but can additionally incorporate entities' attributes.
+
+Our implementations and parameter settings of the models rely on OpenEA $^1$ (Sun et al., 2020b).
+
+# 4.3 Datasets
+
+We use three different datasets: D-W-15K V1 (DW), EN-DE-15K V1 (ENDE), and EN-FR-100K V1 (ENFR), obtained from OpenEA (Sun et al., 2020b). Each dataset contains two KGs and a set of equivalent entity pairs. The KGs used in these datasets were sampled from real KGs, i.e. DBpedia (Lehmann et al., 2015), Wikidata (Vrandecic and Krötzsch, 2014), and YAGO (Rebele et al., 2016), which are widely used in the EA community. These datasets differ in terms of KG sources, languages, sizes, etc. We refer the reader to Sun et al. (2020b) for more details.
+
+Existing work on EA assumes all entities in the KGs are matchable, and thus samples only entities with counterparts when producing the datasets. To investigate the influence of bachelors on AL strategies, we synthetically modify the datasets by excluding a portion of entities from the second KG.
+
+# 4.4 Evaluation Metrics
+
+We use Hit@1 as the primary evaluation measure of the EA models. To obtain an overall evaluation of an AL strategy across different budget sizes, we plot the curve of an EA model's effectiveness with respect to the proportion of annotated entities and calculate the Area Under the Curve (AUC).
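With a toy effectiveness curve, this overall measure can be computed with the trapezoid rule; the numbers below are illustrative only, and AUC@0.5 (used later in Tab. 1) corresponds to restricting the x-range to $[0, 0.5]$.

```python
import numpy as np

# toy curve: Hit@1 of an EA model at increasing annotation proportions
proportions = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5])
hit_at_1 = np.array([0.00, 0.20, 0.35, 0.45, 0.50, 0.55])

# trapezoid-rule area, normalized by the x-range so that a
# constant curve at 1.0 would score an AUC of exactly 1.0
area = np.sum(0.5 * (hit_at_1[1:] + hit_at_1[:-1]) * np.diff(proportions))
auc = area / (proportions[-1] - proportions[0])
```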
+
+# 4.5 Parameter Settings
+
+We set $\alpha = 0.1$ and $\epsilon = 10^{-6}$ for the structure-aware uncertainty. We use $L = 1$ GCN layer for our bachelor recognizer, with 500 input and 400 output dimensions. We set $K = 5$ for its model ensemble and $\lambda = 1.5$, $\beta = 0.1$, $N^{neg} = 10$ for its training. The sampling batch size is set to $N = 100$ for the 15K data and $N = 1000$ for the 100K data.
+
+# 4.6 Reproducibility Details
+
+Our experiments are run on a GPU cluster. We allocate 50GB of memory and one 32GB NVIDIA Tesla V100 GPU for each job on the 15K data, and 100GB of memory for each job on the 100K data. The training and evaluation of ActiveEA take approximately 3h with Alinet on 15K data, 10h with BootEA on 15K
+
+Figure 3: HIT@1 of sampling strategies for all EA models on DW and ENDE, as annotation portion increases. Top row shows experiments that do not include bachelors; bottom row shows experiments that include $30\%$ bachelors. ActiveEA is equivalent to struct_uncert in absence of bachelors, and is thus shown only for the second row.
+
+
+
+
+Figure 4: Hit@1 for all sampling strategies on the Alinet EA model on ENFR. Left shows experiments without bachelors, right shows with $30\%$ bachelors.
+
+data, 10h with RDGCN on 15K data, and 48h with Alinet on 100K data. Most baseline strategies take less time than ActiveEA on the same dataset, except betweenness on 100K data, which takes more than 48h. We apply grid search to set $\alpha$ and $N$ (shown in Sec. 5.4). Hyper-parameters of the bachelor recognizer are chosen by referring to the settings of OpenEA and our own manual trials. Code and datasets are available at https://github.com/UQ-Neusoft-Health-Data-Science/ActiveEA.
+
+# 5 Experimental Results
+
+# 5.1 Comparison with Baselines
+
+Fig. 3 presents the overall performance of each strategy with three EA models on two datasets, each of which we also synthetically modify to include $30\%$ bachelors. We also report the AUC@0.5 values of these curves in Tab. 1. ActiveEA degenerates into struct_uncert when there are no bachelors.
+
+Random Sampling. Random sampling usually performs poorly when the annotation proportion is small, and becomes more competitive as the amount of annotations increases. However, for most annotation proportions, random sampling exhibits a large performance gap compared to the best method. This observation highlights the need to investigate data selection for EA.
+
+Topology-based Strategies. The topology-based strategies are effective when few annotations are provided, e.g., $< 20\%$. However, as annotations increase, their effectiveness often falls below that of random sampling. This may be because these strategies suffer more from the bias between the training set and the test set. Therefore, considering only the structural information of KGs has considerable drawbacks for EA.
+
+Uncertainty Sampling. On the contrary, the uncertainty sampling strategy performs poorly when the proportion of annotations is small, but improves after several annotations have been accumulated. One reason for this is that neural EA models cannot learn useful patterns from a small number of annotations. On datasets with bachelors, uncertainty sampling always performs worse than random sampling. Thus, it is clear that uncertainty sampling cannot be applied directly to EA.
+
+| Strategy | BootEA DW (0%) | BootEA DW (30%) | BootEA ENDE (0%) | BootEA ENDE (30%) | AliNet DW (0%) | AliNet DW (30%) | AliNet ENDE (0%) | AliNet ENDE (30%) | RDGCN DW (0%) | RDGCN DW (30%) | RDGCN ENDE (0%) | RDGCN ENDE (30%) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| rand | 23.5$^n$ | 17.0 | 28.1 | 21.3 | 19.4 | 16.7 | 26.0 | 23.7 | 25.8 | 25.0 | 41.3$^n$ | 41.0 |
+| degree | 19.5 | 16.0 | 24.0 | 20.0 | 17.1 | 15.2 | 22.2 | 20.5 | 23.3 | 22.9 | 39.1 | 39.4 |
+| pagerank | 22.3 | 18.3 | 27.6 | 23.0 | 19.9 | 17.3 | 25.8 | 24.1 | 24.5 | 23.9 | 40.5 | 40.6 |
+| betweenness | 20.5 | 16.3 | 26.1 | 21.1 | 17.8 | 15.6 | 23.7 | 22.3 | 23.2 | 22.7 | 40.2 | 40.3 |
+| uncertainty | 23.9 | 16.1 | 29.8 | 21.2 | 21.6 | 15.4 | 28.2 | 22.2 | 24.7 | 23.9 | 40.9$^n$ | 40.5 |
+| struct_uncert | **26.3** | 20.8 | **33.6** | 27.4 | **23.1** | 19.1 | **30.6** | 26.8 | **26.5** | 25.6 | **41.9** | 41.0 |
+| ActiveEA | – | **26.7** | – | **31.5** | – | **25.7** | – | **32.8** | – | **28.1** | – | **42.3** |
+
+Table 1: Overall performance (AUC@0.5, $\%$) for each sampling strategy. The highest-performing strategy in each column is indicated in bold. ActiveEA coincides with struct_uncert when there are no bachelors, hence the dashes in the $0\%$ columns. We run each strategy 5 times; most results for ActiveEA show statistically significant differences over the other methods (paired t-test with Bonferroni correction, $p < 0.05$), except the few cells marked with $^n$.
+
+Structure-aware Uncertainty Sampling. Structure-aware uncertainty is effective across all annotation proportions. One reason for this is that it combines the advantages of both topology-based strategies and uncertainty sampling. This is essential for AL, as it is impossible to predict the number of annotations required for new datasets.
+
+ActiveEA. ActiveEA, which enhances structure-aware sampling with a bachelor recognizer, greatly improves EA when KGs contain bachelors.
+
+# 5.1.1 Generality
+
+The structure-aware uncertainty sampling mostly outperforms the baselines, while ActiveEA performs even better in almost all cases. ActiveEA also demonstrates generality across datasets, EA models, and bachelor proportions.
+
+When the dataset has no bachelors, our structure-aware uncertainty sampling is exceeded by plain uncertainty sampling in a few large-budget cases. However, real-world datasets always contain bachelors, and in that case our structure-aware uncertainty shows clearer advantages.
+
+In addition, the strategies are less distinguishable when applied to RDGCN. The reason is that RDGCN exploits entity names for pre-alignment, and thus all strategies achieve good performance from the start.
+
+
+Figure 5: Comparison demonstrating the effect of bachelors $(0\% -40\%)$ on the BootEA and Alinet models.
+
+
+
+
+Figure 6: Comparison demonstrating the effectiveness of the bachelor recognizer and the effect of the model ensemble (ME) on BootEA and Alinet.
+
+
+
+To assess generality across datasets of different sizes, we evaluate the sampling strategies with Alinet on ENFR (100K entities), which is larger than DW and ENDE (15K entities). We choose Alinet because it is more scalable than BootEA and RDGCN (Zhao et al., 2020). Fig. 4 presents results comparable to those on the 15K datasets.
+
+# 5.2 Effect of Bachelors
+
+To investigate the effect of bachelors, we randomly removed different numbers of entities from $\mathcal{G}^2$ (each larger removed set contains the sets removed earlier) so that $\mathcal{G}^1$ had different percentages of bachelors. Fig. 5 shows the results of applying all strategies to these datasets. We further make the
+
+
+Figure 7: Comparison demonstrating the effects different parameters have on our sampling strategies.
+
+following four observations:
+
+1. The performance of all strategies except ActiveEA decreases as the number of bachelors increases. Avoiding the selection of bachelors is thus an important issue in designing AL strategies for EA.
+2. Among all strategies, uncertainty sampling is affected the most, while topology-based methods are only marginally affected.
+3. Our structure-aware uncertainty sampling outperforms the baselines at all tested bachelor proportions.
+4. ActiveEA's performance increases as the proportion of bachelors increases. The reason is that, if $\mathcal{G}^1$ is fixed and bachelors are recognized successfully, a given budget yields a larger ratio of annotated matchable entities in datasets with more bachelors than in those with fewer.
+
+# 5.3 Effectiveness of Bachelor Recognizer
+
+Fig. 6 shows the effectiveness of our bachelor recognizer in the sampling process and the effect of the model ensemble. The green curve shows the Micro-F1 score of our bachelor recognizer using the model ensemble: it achieves high effectiveness from the start of sampling, when there are few annotations. Each red dot represents the performance of the bachelor recognizer trained on one data partition without the model ensemble. Performance varies across partitions because of the sampling-bias problem; the model ensemble therefore yields a model with high and stable performance.
+
+# 5.4 Sensitivity of Parameters
+
+To investigate the sensitivity of parameters, we ran our strategy with AliNet and BootEA on two DW variants with bachelor proportions of $0\%$ and $30\%$ .
+
+The sensitivity w.r.t. $\alpha$ is shown in the top row of Fig. 7. We observe that our method is not very sensitive to $\alpha$: the effectiveness fluctuates when $\alpha < 0.5$ and decreases when $\alpha > 0.5$. This indicates that uncertainty is more informative than structural information. When $\alpha = 0$, our struct_uncert degenerates to uncertainty sampling (Eq. 2); in the upper-left plot, we show the corresponding performance with dotted lines. Under most settings of $\alpha$, struct_uncert is much better than uncertainty sampling, which means that introducing structure information is beneficial.
+
+Figure 8: Effect of Bayesian Transformation on uncertainty and ActiveEA across the DW and ENDE datasets and different bachelor percentages.
+
+The bottom row of Fig. 7 shows the effect of the sampling batch size $N$. The overall trend is that larger batch sizes decrease performance. This observation confirms the intuition that more frequent updates to the EA model lead to more precise uncertainty estimates. The choice of sampling batch size is therefore a trade-off between computation cost and sampling quality.
+
+# 5.5 Examination of Bayesian Transformation
+
+We enhanced uncertainty sampling and ActiveEA with the Bayesian Transformation, implemented with Monte Carlo (MC) dropout, and applied them to Alinet and RDGCN on DW and ENDE as in Sec. 5.1. Fig. 8 shows the results for different settings of the MC dropout rate. We find that (1) the variation in effect on uncertainty sampling is greater than that on ActiveEA; and (2) Bayesian Transformation with a small dropout rate (e.g., 0.05) yields slight improvements to ActiveEA in most cases.
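Generically, MC dropout keeps dropout active at inference, runs $T$ stochastic forward passes, and derives an uncertainty estimate from the averaged prediction. The toy linear scorer below is our own illustration of this idea, not the paper's EA model.

```python
import numpy as np

rng = np.random.default_rng(1)

x = rng.standard_normal(16)   # feature vector of one candidate entity pair
w = rng.standard_normal(16)   # weights of a toy matching scorer

def mc_pass(drop_rate):
    """One stochastic forward pass with inverted dropout on the input."""
    mask = rng.random(16) > drop_rate
    z = (x * mask / (1.0 - drop_rate)) @ w
    return 1.0 / (1.0 + np.exp(-z))          # matching probability

T = 200
probs = np.array([mc_pass(0.05) for _ in range(T)])
p_mean = probs.mean()

# predictive (binary) entropy of the MC-averaged prediction as uncertainty
eps = 1e-12
uncertainty = -(p_mean * np.log(p_mean + eps)
                + (1.0 - p_mean) * np.log(1.0 - p_mean + eps))
```

A small dropout rate perturbs the forward passes only slightly, which matches the observation above that small rates give modest but consistent gains.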
+
+# 6 Related Works
+
+Entity Alignment. Entity Alignment refers to the matching of entities across different KGs that refer to the same real-world object. Compared with Entity Resolution (Mudgal et al., 2018), which matches duplicate entities in relational data, EA deals with graph data and emphasizes exploiting the structure of KGs. Neural models (Chen et al., 2017, 2018; Wang et al., 2018; Cao et al., 2019) have replaced conventional approaches (Jiménez-Ruiz and Grau, 2011; Suchanek et al., 2011) as the core methods in recent years. Typically they rely on seed alignment as training data, which is expensive to annotate. Iterative training (i.e., self-training) has been applied to improve EA models by generating more training data automatically (Sun et al., 2018; Mao et al., 2020). These works concern better training methods given annotated data; the problem of reducing annotation cost has been neglected. Berrendorf et al. (2021) were the first to explore AL strategies for the EA task. They compared several types of AL heuristics, including node centrality, uncertainty, graph coverage and unmatched entities, and empirically showed the impact of sampling strategies on the creation of seed alignment. In our work, we highlight the limitations of single heuristics and propose an AL framework that considers structural information, uncertainty sampling and unmatched entities at the same time. In addition, existing neural models assume all KG entities have counterparts: this is a very strong assumption in reality (Zhao et al., 2020). We provide a solution for recognizing bachelor entities, which is complementary to the existing models.
+
+Active Learning. Active Learning is a general framework for selecting the most informative data to annotate when training machine learning models (Aggarwal et al., 2014). The pool-based sampling scenario is a popular AL setting in which a base pool of unlabelled instances is available to query from (Settles, 2012; Aggarwal et al., 2014). Our proposed AL framework follows this scenario. Numerous AL strategies have been proposed in the general domain (Aggarwal et al., 2014). Uncertainty sampling is the most widely used because of its ease of implementation and its robust effectiveness (Lewis, 1995; Cohn et al., 1996). However, there are key challenges that general AL strategies cannot solve when applying AL to EA. Most AL strategies are designed under the assumption that the data are independent and identically distributed, whereas KG entities in EA are correlated, as in other graph-based tasks, e.g., node classification (Bilgic et al., 2010) and link prediction (Ostapuk et al., 2019). In addition, bachelor entities cause an issue specific to EA: they may have low informativeness but high uncertainty. We design an AL strategy to address these challenges. A few existing works (Qian et al., 2017; Malmi et al., 2017) have applied AL to conventional EA but do not consider neural EA models, which are now in widespread use. Only Berrendorf et al. (2021) empirically explored general AL strategies for neural EA, but did not solve the aforementioned challenges.
+
+# 7 Conclusion
+
+Entity Alignment is an essential step for KG fusion. Current mainstream methods for EA are neural models, which rely on seed alignment. The cost of labelling seed alignment is often high, but reducing this cost has been neglected. In this work, we proposed an Active Learning framework (named ActiveEA) that aims to produce the best EA model with the least annotation cost. Specifically, we tackled two key challenges affecting EA that general AL strategies cannot deal with. Firstly, we proposed structure-aware uncertainty sampling, which combines uncertainty sampling with the structural information of KGs. Secondly, we designed a bachelor recognizer, which reduces the annotation budget by avoiding the selection of bachelors and, notably, can tolerate sampling bias. Extensive experiments showed that ActiveEA is more effective than the considered baselines and generalizes well across different datasets, EA models and bachelor percentages.
+
+In future work, we plan to explore combining active learning and self-training, which we believe are complementary approaches. Self-training can generate extra training data automatically but suffers from incorrectly labelled data; this can be addressed by amending incorrectly labelled data using AL strategies.
+
+# Acknowledgements
+
+This research is supported by the Shenyang Science and Technology Plan Fund (No. 20-201-4-10) and the Member Program of Neusoft Research of Intelligent Healthcare Technology, Co., Ltd. (No. NRMP001901). Dr Wen Hua is the recipient of an Australian Research Council DECRA Research Fellowship (DE210100160). Dr Guido Zuccon is the recipient of an Australian Research Council DECRA Research Fellowship (DE180101579).
+
+# References
+
+Charu C. Aggarwal, Xiangnan Kong, Quanquan Gu, Jiawei Han, and Philip S. Yu. 2014. Active learning: A survey. In Charu C. Aggarwal, editor, Data Classification: Algorithms and Applications, pages 571-606. CRC Press.
+Max Berrendorf, Evgeniy Faerman, and Volker Tresp. 2021. Active learning for entity alignment. In Advances in Information Retrieval - 43rd European Conference on IR Research, ECIR 2021, Virtual Event, March 28 - April 1, 2021, Proceedings, Part I, volume 12656 of Lecture Notes in Computer Science, pages 48-62. Springer.
+Mustafa Bilgic, Lilyana Mihalkova, and Lise Getoor. 2010. Active learning for networked data. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), June 21-24, 2010, Haifa, Israel, pages 79-86. Omnipress.
+Antoine Bordes, Nicolas Usunier, Alberto García-Durán, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States, pages 2787-2795.
+Sergey Brin and Lawrence Page. 1998. The anatomy of a large-scale hypertextual web search engine. Comput. Networks, 30(1-7):107-117.
+Yixin Cao, Zhiyuan Liu, Chengjiang Li, Zhiyuan Liu, Juanzi Li, and Tat-Seng Chua. 2019. Multi-channel graph neural network for entity alignment. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 1452-1461. Association for Computational Linguistics.
+Muhao Chen, Yingtao Tian, Kai-Wei Chang, Steven Skiena, and Carlo Zaniolo. 2018. Co-training embeddings of knowledge graphs and entity descriptions for cross-lingual entity alignment. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden, pages 3998-4004. ijcai.org.
+Muhao Chen, Yingtao Tian, Mohan Yang, and Carlo Zaniolo. 2017. Multilingual knowledge graph embeddings for cross-lingual knowledge alignment. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017, Melbourne, Australia, August 19-25, 2017, pages 1511-1517. ijcai.org.
+David A. Cohn, Zoubin Ghahramani, and Michael I. Jordan. 1996. Active learning with statistical models. J. Artif. Intell. Res., 4:129-145.
+
+Massimo Franceschet. 2011. Pagerank: standing on the shoulders of giants. Commun. ACM, 54(6):92-101.
+Linton C Freeman. 1977. A set of measures of centrality based on betweenness. Sociometry, pages 35-41.
+Paul A Gagniuc. 2017. Markov chains: from theory to implementation and experimentation. John Wiley & Sons.
+Yarin Gal, Riashat Islam, and Zoubin Ghahramani. 2017. Deep bayesian active learning with image data. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, pages 1183-1192. PMLR.
+William L. Hamilton, Zhitao Ying, and Jure Leskovec. 2017. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 1024-1034.
+Shaoxiong Ji, Shirui Pan, Erik Cambria, Pekka Marttinen, and Philip S. Yu. 2020. A survey on knowledge graphs: Representation, acquisition and applications. CoRR, abs/2002.00388.
+Ernesto Jiménez-Ruiz and Bernardo Cuenca Grau. 2011. Logmap: Logic-based and scalable ontology matching. In *The Semantic Web - ISWC 2011 - 10th International Semantic Web Conference*, Bonn, Germany, October 23-27, 2011, Proceedings, Part I, volume 7031 of Lecture Notes in Computer Science, pages 273-288. Springer.
+Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. 2020. Supervised contrastive learning. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
+Thomas N. Kipf and Max Welling. 2017. Semi-supervised classification with graph convolutional networks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.
+Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N. Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick van Kleef, Soren Auer, and Christian Bizer. 2015. Dbpedia - A large-scale, multilingual knowledge base extracted from wikipedia. Semantic Web, 6(2):167-195.
+David D. Lewis. 1995. A sequential algorithm for training text classifiers: Corrigendum and additional data. SIGIR Forum, 29(2):13-19.
+
+Zhiyuan Liu, Yixin Cao, Liangming Pan, Juanzi Li, and Tat-Seng Chua. 2020. Exploring and evaluating attributes, values, and structures for entity alignment. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 6355-6364. Association for Computational Linguistics.
+Eric Malmi, Aristides Gionis, and Evimaria Terzi. 2017. Active network alignment: A matching-based approach. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, CIKM 2017, Singapore, November 06 - 10, 2017, pages 1687-1696. ACM.
+Xin Mao, Wenting Wang, Huimin Xu, Man Lan, and Yuanbin Wu. 2020. MRAEA: an efficient and robust entity alignment approach for cross-lingual knowledge graph. In WSDM '20: The Thirteenth ACM International Conference on Web Search and Data Mining, Houston, TX, USA, February 3-7, 2020, pages 420-428. ACM.
+Sidharth Mudgal, Han Li, Theodoros Rekatsinas, AnHai Doan, Youngchoon Park, Ganesh Krishnan, Rohit Deep, Esteban Arcaute, and Vijay Raghavendra. 2018. Deep learning for entity matching: A design space exploration. In Proceedings of the 2018 International Conference on Management of Data, SIGMOD Conference 2018, Houston, TX, USA, June 10-15, 2018, pages 19-34. ACM.
+Natalia Ostapuk, Jie Yang, and Philippe Cudré-Mauroux. 2019. Activelink: Deep active learning for link prediction in knowledge graphs. In *The World Wide Web Conference*, WWW 2019, San Francisco, CA, USA, May 13-17, 2019, pages 1398-1408. ACM.
+Kun Qian, Lucian Popa, and Prithviraj Sen. 2017. Active learning for large-scale entity resolution. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, CIKM 2017, Singapore, November 06 - 10, 2017, pages 1379-1388. ACM.
+Thomas Rebele, Fabian M. Suchanek, Johannes Hoffart, Joanna Biega, Erdal Kuzey, and Gerhard Weikum. 2016. YAGO: A multilingual knowledge base from wikipedia, wordnet, and geonames. In The Semantic Web - ISWC 2016 - 15th International Semantic Web Conference, Kobe, Japan, October 17-21, 2016, Proceedings, Part II, volume 9982 of Lecture Notes in Computer Science, pages 177-185.
+Pengzhen Ren, Yun Xiao, Xiaojun Chang, Po-Yao Huang, Zhihui Li, Xiaojiang Chen, and Xin Wang. 2020. A survey of deep active learning. CoRR, abs/2009.00236.
+Burr Settles. 2012. Active Learning. Synthesis Lectures on Artificial Intelligence and Machine Learning. Morgan & Claypool Publishers.
+
+Fabian M. Suchanek, Serge Abiteboul, and Pierre Senellart. 2011. PARIS: probabilistic alignment of relations, instances, and schema. Proc. VLDB Endow., 5(3):157-168.
+Zequn Sun, Wei Hu, Qingheng Zhang, and Yuzhong Qu. 2018. Bootstrapping entity alignment with knowledge graph embedding. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden, pages 4396-4402. ijcai.org.
+Zequn Sun, Chengming Wang, Wei Hu, Muhao Chen, Jian Dai, Wei Zhang, and Yuzhong Qu. 2020a. Knowledge graph alignment network with gated multi-hop neighborhood aggregation. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 222-229. AAAI Press.
+Zequn Sun, Qingheng Zhang, Wei Hu, Chengming Wang, Muhao Chen, Farahnaz Akrami, and Chengkai Li. 2020b. A benchmarking study of embedding-based entity alignment for knowledge graphs. Proc. VLDB Endow., 13(11):2326-2340.
+Denny Vrandecic and Markus Krötzsch. 2014. Wikidata: a free collaborative knowledgebase. Commun. ACM, 57(10):78-85.
+Zhichun Wang, Qingsong Lv, Xiaohan Lan, and Yu Zhang. 2018. Cross-lingual knowledge graph alignment via graph convolutional networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 349-357. Association for Computational Linguistics.
+Yuting Wu, Xiao Liu, Yansong Feng, Zheng Wang, Rui Yan, and Dongyan Zhao. 2019. Relation-aware entity alignment for heterogeneous knowledge graphs. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 10-16, 2019, pages 5278-5284. ijcai.org.
+Xiang Zhao, Weixin Zeng, Jiuyang Tang, Wei Wang, and Fabian Suchanek. 2020. An experimental study of state-of-the-art entity alignment approaches. IEEE Annals of the History of Computing, (01):1-1.
\ No newline at end of file
diff --git a/activeeaactivelearningforneuralentityalignment/images.zip b/activeeaactivelearningforneuralentityalignment/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..87bcb2560d15795a93f4333156598ea40f55af4b
--- /dev/null
+++ b/activeeaactivelearningforneuralentityalignment/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f23d50d8d3ba8ce63c0ba1951bba075dad81acd606997004bde5ae83c0e70a05
+size 501999
diff --git a/activeeaactivelearningforneuralentityalignment/layout.json b/activeeaactivelearningforneuralentityalignment/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..31498f12803be1d4eafbdc6b3ba89b0e5b3d174c
--- /dev/null
+++ b/activeeaactivelearningforneuralentityalignment/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7473a1f42a18f3cdb608f987cd14e2cd265ba10ae63762eedd88475a5d31bc20
+size 488918
diff --git a/activelearningbyacquiringcontrastiveexamples/37c9239e-15e2-421b-87ff-13f279cd75ca_content_list.json b/activelearningbyacquiringcontrastiveexamples/37c9239e-15e2-421b-87ff-13f279cd75ca_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..bed825cef184467d3fd763086fb5145b91b92b85
--- /dev/null
+++ b/activelearningbyacquiringcontrastiveexamples/37c9239e-15e2-421b-87ff-13f279cd75ca_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:61f5e8cebfa9d96d304abebb79a5ec4ebda6d38f6177bacffbd01a82a01b5243
+size 96958
diff --git a/activelearningbyacquiringcontrastiveexamples/37c9239e-15e2-421b-87ff-13f279cd75ca_model.json b/activelearningbyacquiringcontrastiveexamples/37c9239e-15e2-421b-87ff-13f279cd75ca_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..ba6ca7e7b03fa9dd2d55858157fc655166b8e680
--- /dev/null
+++ b/activelearningbyacquiringcontrastiveexamples/37c9239e-15e2-421b-87ff-13f279cd75ca_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:688e89c21402ac35230dd3865d26f2dae39d552e85c0da7f55629d8ed06a14e8
+size 117787
diff --git a/activelearningbyacquiringcontrastiveexamples/37c9239e-15e2-421b-87ff-13f279cd75ca_origin.pdf b/activelearningbyacquiringcontrastiveexamples/37c9239e-15e2-421b-87ff-13f279cd75ca_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..7d7667f915fd7370205ab0a1920de3423fa0471f
--- /dev/null
+++ b/activelearningbyacquiringcontrastiveexamples/37c9239e-15e2-421b-87ff-13f279cd75ca_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:105db2dffcce78184264b4626a2d2fd2111689f25539626d342a8572d54f806a
+size 2146918
diff --git a/activelearningbyacquiringcontrastiveexamples/full.md b/activelearningbyacquiringcontrastiveexamples/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..41b7ba3563683234e7bee8c1ac188698a6ae1bd7
--- /dev/null
+++ b/activelearningbyacquiringcontrastiveexamples/full.md
@@ -0,0 +1,335 @@
+# Active Learning by Acquiring Contrastive Examples
+
+Katerina Margatina† Giorgos Vernikos‡* Loic Barrault† Nikolaos Aletras† †University of Sheffield ‡EPFL *HEIG-VD
+
+{k.margatina, l.barrault, n.aletras}@sheffield.ac.uk georgios.vernikos@epfl.ch
+
+# Abstract
+
+Common acquisition functions for active learning use either uncertainty or diversity sampling, aiming to select difficult and diverse data points from the pool of unlabeled data, respectively. In this work, leveraging the best of both worlds, we propose an acquisition function that selects contrastive examples, i.e. data points that are similar in the model feature space yet for which the model outputs maximally different predictive likelihoods. We compare our approach, CAL (Contrastive Active Learning), with a diverse set of acquisition functions in four natural language understanding tasks and seven datasets. Our experiments show that CAL performs consistently better than or comparably to the best performing baseline across all tasks, on both in-domain and out-of-domain data. We also conduct an extensive ablation study of our method, and we further analyze all actively acquired datasets, showing that CAL achieves a better trade-off between uncertainty and diversity compared to other strategies.
+
+# 1 Introduction
+
+Active learning (AL) is a machine learning paradigm for efficiently acquiring data for annotation from a (typically large) pool of unlabeled data (Lewis and Catlett, 1994; Cohn et al., 1996; Settles, 2009). Its goal is to concentrate the human labeling effort on the most informative data points that will benefit model performance the most and thus reducing data annotation cost.
+
+The most widely used approaches to acquiring data for AL are based on uncertainty and diversity, often described as the "two faces of AL" (Dasgupta, 2011). While uncertainty-based methods leverage the model predictive confidence to select difficult examples for annotation (Lewis and Gale, 1994; Cohn et al., 1996), diversity sampling exploits heterogeneity in the feature space, typically by performing clustering (Brinker, 2003; Bodó et al., 2011). Still, both approaches have core limitations that may lead to acquiring redundant data points. Algorithms based on uncertainty may end up choosing uncertain yet uninformative repetitive data, while diversity-based methods may tend to select diverse yet easy examples for the model (Roy and McCallum, 2001). The two approaches are orthogonal to each other, since uncertainty sampling is usually based on the model's output, while diversity exploits information from the input (i.e. feature) space. Hybrid data acquisition functions that combine uncertainty and diversity sampling have also been proposed (Shen et al., 2004; Zhu et al., 2008; Ducoffe and Precioso, 2018; Ash et al., 2020; Yuan et al., 2020; Ru et al., 2020).
+
+Figure 1: Illustrative example of our proposed method CAL. The solid line (model decision boundary) separates data points from two different classes (blue and orange), the coloured data points represent the labeled data and the rest are the unlabeled data of the pool.
+
+In this work, we aim to leverage characteristics from hybrid data acquisition. We hypothesize that data points that are close in the model feature space (i.e. share similar or related vocabulary, or similar model encodings) but for which the model produces different predictive likelihoods should be good candidates for data acquisition. We define such examples as contrastive (see example in Figure 1). For that purpose, we propose a new acquisition function that searches for contrastive examples in the pool of unlabeled data. Specifically, our method, Contrastive Active Learning (CAL), selects unlabeled data points from the pool whose predictive likelihoods diverge the most from their neighbors in the training set. This way, CAL shares similarities with diversity sampling, but instead of performing clustering it uses the feature space to create neighborhoods. CAL also leverages uncertainty, by using predictive likelihoods to rank the unlabeled data.
+
+We evaluate our approach on seven datasets from four tasks: sentiment analysis, topic classification, natural language inference and paraphrase detection. We compare CAL against a full suite of baseline acquisition functions based on uncertainty, diversity or both. We also examine robustness by evaluating on out-of-domain data, in addition to in-domain held-out sets. Our contributions are the following:
+
+1. We propose CAL, a new acquisition function for active learning that acquires contrastive examples from the pool of unlabeled data (§2);
+2. We show that CAL performs consistently better than or on par with all baselines in all tasks, when evaluated in both in-domain and out-of-domain settings (§4);
+3. We conduct a thorough analysis of our method showing that CAL achieves a better trade-off between diversity and uncertainty compared to the baselines (§6).
+
+We release our code online${}^{1}$.
+
+# 2 Contrastive Active Learning
+
+In this section we present in detail our proposed method, CAL: Contrastive Active Learning. First, we provide a definition for contrastive examples and how they are related to finding data points that are close to the decision boundary of the model (§2.1). We next describe an active learning loop using our proposed acquisition function (§2.2).
+
+# 2.1 Contrastive Examples
+
+In the context of active learning, we aim to formulate an acquisition function that selects contrastive examples from a pool of unlabeled data for annotation. We draw inspiration from the contrastive learning framework, that leverages the similarity between data points to push those from the same class closer together and examples from different classes further apart during training (Mikolov et al.,
+
+2013; Sohn, 2016; van den Oord et al., 2019; Chen et al., 2020; Gunel et al., 2021).
+
+In this work, we define two data points as contrastive if their model encodings are similar but their model predictions are very different (maximally disagreeing predictive likelihoods).
+
+Formally, data points $x_{i}$ and $x_{j}$ should first satisfy a similarity criterion:
+
+$$
+d\left(\Phi(x_{i}), \Phi(x_{j})\right) < \epsilon \tag{1}
+$$
+
+where $\Phi(.)\in \mathbb{R}^{d'}$ is an encoder that maps $x_{i},x_{j}$ in a shared feature space, $d(.)$ is a distance metric and $\epsilon$ is a small distance value.
+
+A second criterion, based on model uncertainty, is to evaluate that the predictive probability distributions of the model $p(y|x_i)$ and $p(y|x_j)$ for the inputs $x_i$ and $x_j$ should maximally diverge:
+
+$$
+\mathrm{KL}\left(p(y|x_{i}) \,\|\, p(y|x_{j})\right) \rightarrow \infty \tag{2}
+$$
+
+where KL is the Kullback-Leibler divergence between two probability distributions${}^{2}$.
+
+For example, in a binary classification problem, given a reference example $x_{1}$ with output probability distribution (0.8, 0.2)${}^{3}$ and similar candidate examples $x_{2}$ with (0.7, 0.3) and $x_{3}$ with (0.6, 0.4), we would consider the pair $(x_{1}, x_{3})$ as contrastive. However, if another example $x_{4}$ (similar to $x_{1}$ in the model feature space) had a probability distribution (0.4, 0.6), then the most contrastive pair would be $(x_{1}, x_{4})$.
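
The worked example above can be checked numerically. A minimal sketch in pure Python (the distributions are the illustrative values from the text, and `kl_divergence` is our own helper):

```python
import math

def kl_divergence(p, q):
    """KL(p || q) between two discrete probability distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

x1 = (0.8, 0.2)  # reference example
candidates = {"x2": (0.7, 0.3), "x3": (0.6, 0.4), "x4": (0.4, 0.6)}

# The candidate whose predictive distribution diverges most from x1
# forms the most contrastive pair with it.
scores = {name: kl_divergence(x1, q) for name, q in candidates.items()}
print(max(scores, key=scores.get))  # -> x4
```

As the text notes, $x_{3}$ is more contrastive than $x_{2}$, and $x_{4}$ dominates both.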
+
+Figure 1 provides an illustration of contrastive examples for a binary classification case. All data points inside the circle (dotted line) are similar in the model feature space, satisfying Eq. 1. Intuitively, if the divergence of the output probabilities of the model for the gray and blue shaded data points is high, then Eq. 2 should also hold and we should consider them as contrastive.
+
+From a different perspective, data points with similar model encodings (Eq. 1) and dissimilar model outputs (Eq. 2) should be close to the model's decision boundary (Figure 1). Hence, we hypothesize that our proposed approach to selecting contrastive examples is related to acquiring difficult examples near the decision boundary of the model. Under this formulation, CAL does not guarantee that the contrastive examples lie near the model's decision boundary, because our definition is not strict. In order to ensure that a pair of contrastive examples lies on the boundary, the second criterion should require that the model classifies the two examples into different classes (i.e. different predictions). However, calculating the distance between an example and the model decision boundary is intractable, and approximations that use adversarial examples are computationally expensive (Ducoffe and Precioso, 2018).
+
+# Algorithm 1 Single iteration of CAL
+
+Input: labeled data $\mathcal{D}_{\mathrm{lab}}$ , unlabeled data $\mathcal{D}_{\mathrm{pool}}$ , acquisition size $b$ , model $\mathcal{M}$ , number of neighbours $k$ , model representation (encoding) function $\Phi(.)$
+
+1 for $x_{p}$ in $\mathcal{D}_{\mathrm{pool}}$ do
+2 $\quad \{(x_{l}^{(i)}, y_{l}^{(i)})\}_{i=1}^{k} \gets \mathrm{KNN}\left(\Phi(x_{p}), \Phi(\mathcal{D}_{\mathrm{lab}}), k\right)$ ▷ find neighbours in $\mathcal{D}_{\mathrm{lab}}$
+3 $\quad p(y|x_{l}^{(i)}) \gets \mathcal{M}(x_{l}^{(i)}),\ i = 1, \ldots, k$ ▷ compute probabilities
+4 $\quad p(y|x_{p}) \gets \mathcal{M}(x_{p})$
+5 $\quad \mathrm{KL}\bigl(p(y|x_{l}^{(i)}) \,\|\, p(y|x_{p})\bigr),\ i = 1, \ldots, k$ ▷ compute divergence
+6 $\quad s_{x_{p}} = \frac{1}{k}\sum_{i=1}^{k}\mathrm{KL}\bigl(p(y|x_{l}^{(i)}) \,\|\, p(y|x_{p})\bigr)$
+7 end
+8 $Q = \operatorname{argmax}_{x_{p} \in \mathcal{D}_{\mathrm{pool}}} s_{x_{p}},\ |Q| = b$ ▷ select batch
+
+Output: $Q$
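
A single CAL iteration (Algorithm 1) can be sketched end-to-end, assuming the encodings $\Phi(.)$ and predictive distributions $p(y|x)$ have already been computed by the model. A minimal NumPy/scikit-learn sketch (function and argument names are ours, not from the paper's released code):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def cal_acquire(emb_lab, emb_pool, probs_lab, probs_pool, k, b, eps=1e-12):
    """One CAL iteration over precomputed encodings and p(y|x)."""
    # Line 2: k nearest labeled neighbours (Euclidean) of each candidate x_p
    knn = NearestNeighbors(n_neighbors=k).fit(emb_lab)
    _, nn_idx = knn.kneighbors(emb_pool)              # (n_pool, k)

    # Lines 3-6: mean KL(p(y|x_l) || p(y|x_p)) over each neighbourhood
    p_l = probs_lab[nn_idx]                           # (n_pool, k, C)
    p_p = probs_pool[:, None, :]                      # (n_pool, 1, C)
    kl = np.sum(p_l * np.log((p_l + eps) / (p_p + eps)), axis=-1)
    scores = kl.mean(axis=1)                          # s_{x_p}

    # Line 8: batch Q = the b candidates with the highest score
    return np.argsort(scores)[::-1][:b]

# Toy check: the candidate that disagrees with its labeled neighbour wins.
emb_lab = np.array([[0.0, 0.0], [10.0, 10.0]])
probs_lab = np.array([[0.9, 0.1], [0.1, 0.9]])
emb_pool = np.array([[0.5, 0.0], [1.0, 0.0]])   # both near labeled point 0
probs_pool = np.array([[0.1, 0.9], [0.9, 0.1]]) # only the first disagrees
Q = cal_acquire(emb_lab, emb_pool, probs_lab, probs_pool, k=1, b=1)
print(Q)  # -> [0]
```

In practice the encodings would be the [CLS] embeddings described below and the probabilities would come from the fine-tuned classifier's softmax.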
+
+# 2.2 Active Learning Loop
+
+Assuming a multi-class classification problem with $C$ classes, labeled data for training $\mathcal{D}_{\mathrm{lab}}$ and a pool of unlabeled data $\mathcal{D}_{\mathrm{pool}}$ , we perform AL for $T$ iterations. At each iteration, we train a model on $\mathcal{D}_{\mathrm{lab}}$ and then use our proposed acquisition function, CAL (Algorithm 1), to acquire a batch $Q$ of $b$ examples from $\mathcal{D}_{\mathrm{pool}}$ . The acquired examples are then labeled${}^{4}$, removed from the pool $\mathcal{D}_{\mathrm{pool}}$ and added to the labeled dataset $\mathcal{D}_{\mathrm{lab}}$ , which serves as the training set for the model of the next AL iteration. In our experiments, we use a pretrained BERT model $\mathcal{M}$ (Devlin et al., 2019), which we fine-tune at each AL iteration using the current $\mathcal{D}_{\mathrm{lab}}$ . We begin the AL loop by training a model $\mathcal{M}$ on an initial labeled dataset $\mathcal{D}_{\mathrm{lab}}$${}^{5}$.
+
+Find Nearest Neighbors for Unlabeled Candidates The first step of our contrastive acquisition function (cf. line 2) is to find examples that are similar in the model feature space (Eq. 1). Specifically, we use the [CLS] token embedding of BERT as our encoder $\Phi(.)$ to represent all data points in $\mathcal{D}_{\mathrm{lab}}$ and $\mathcal{D}_{\mathrm{pool}}$ . We use a K-Nearest-Neighbors (KNN) implementation over the labeled data $\mathcal{D}_{\mathrm{lab}}$ to query similar examples $x_{l} \in \mathcal{D}_{\mathrm{lab}}$ for each candidate $x_{p} \in \mathcal{D}_{\mathrm{pool}}$ . Our distance metric $d(.)$ is the Euclidean distance. To find the most similar data points in $\mathcal{D}_{\mathrm{lab}}$ for each $x_{p}$ , we select the top $k$ instead of setting a predefined threshold $\epsilon$ (Eq. 1)${}^{6}$. This way, we create a neighborhood $N_{x_{p}} = \{x_{p}, x_{l}^{(1)}, \ldots, x_{l}^{(k)}\}$ that consists of the unlabeled data point $x_{p}$ and its $k$ closest examples $x_{l}$ in $\mathcal{D}_{\mathrm{lab}}$ (Figure 1).
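
Under this top-$k$ relaxation of Eq. 1, building the neighborhood $N_{x_p}$ reduces to a KNN query over the encodings. A small sketch with scikit-learn (the toy vectors are stand-ins for [CLS] embeddings):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Hypothetical encodings Phi(.) of labeled and candidate pool examples.
emb_lab = np.array([[0.0, 0.0], [0.2, 0.0], [5.0, 5.0]])
emb_pool = np.array([[0.1, 0.0], [4.9, 5.1]])

# k nearest labeled examples (Euclidean distance) for each candidate x_p.
knn = NearestNeighbors(n_neighbors=2, metric="euclidean").fit(emb_lab)
dists, nn_idx = knn.kneighbors(emb_pool)

# Row p of nn_idx lists the members of N_{x_p} drawn from D_lab.
print(nn_idx)
```

The first candidate's neighborhood contains the two nearby labeled points, while the second candidate's nearest neighbor is the distant labeled point it sits next to.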
+
+Compute Contrastive Score between Unlabeled Candidates and Neighbors In the second step, we compute the divergence in the model predictive probabilities for the members of the neighborhood (Eq. 2). Using the current trained model $\mathcal{M}$ to obtain the output probabilities for all data points in $N_{x_p}$ (cf. lines 3-4), we then compute the Kullback-Leibler divergence (KL) between the output probabilities of $x_p$ and all $x_l \in N_{x_p}$ (cf. line 5). To obtain a score $s_{x_p}$ for a candidate $x_p$ , we take the average of all divergence scores (cf. line 6).
+
+Rank Unlabeled Candidates and Select Batch We apply these steps to all candidate examples $x_{p}\in \mathcal{D}_{\mathrm{pool}}$ and obtain a score $s_{x_p}$ for each. With our scoring function, we define as contrastive examples the unlabeled data $x_{p}$ that have the highest score $s_{x_p}$ . A high $s_{x_p}$ score indicates that the unlabeled data point $x_{p}$ has a high divergence in model predicted probabilities compared to its neighbors in the training set (Eq. 1, 2), suggesting that it may lie near the model's decision boundary. To this end, our acquisition function selects the top $b$ examples from the pool with the highest score $s_{x_p}$ (cf. line 8), which form the acquired batch $Q$ .
+
+| DATASET | TASK | DOMAIN | OOD DATASET | TRAIN | VAL | TEST | CLASSES |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| IMDB | Sentiment Analysis | Movie Reviews | SST-2 | 22.5K | 2.5K | 25K | 2 |
+| SST-2 | Sentiment Analysis | Movie Reviews | IMDB | 60.6K | 6.7K | 871 | 2 |
+| AGNEWS | Topic Classification | News | - | 114K | 6K | 7.6K | 4 |
+| DBPEDIA | Topic Classification | News | - | 20K | 2K | 70K | 14 |
+| PUBMED | Topic Classification | Medical | - | 180K | 30.2K | 30.1K | 5 |
+| QNLI | Natural Language Inference | Wikipedia | - | 99.5K | 5.2K | 5.5K | 2 |
+| QQP | Paraphrase Detection | Social QA Questions | TWITTERPPDB | 327K | 36.4K | 80.8K | 2 |
+
+Table 1: Dataset statistics.
+
+# 3 Experimental Setup
+
+# 3.1 Tasks & Datasets
+
+We conduct experiments on sentiment analysis, topic classification, natural language inference and paraphrase detection tasks. We provide details for the datasets in Table 1. We follow Yuan et al. (2020) and use IMDB (Maas et al., 2011), SST-2 (Socher et al., 2013), PUBMED (Dernoncourt and Lee, 2017) and AGNEWS (Zhang et al., 2015), from which we also obtain DBPEDIA. We experiment with tasks requiring pairs of input sequences, using QQP and QNLI from GLUE (Wang et al., 2019). To evaluate robustness on out-of-distribution (OOD) data, we follow Hendrycks et al. (2020) and use SST-2 as the OOD dataset for IMDB and vice versa. We finally use TWITTERPPDB (Lan et al., 2017) as OOD data for QQP, as in Desai and Durrett (2020).
+
+# 3.2 Baselines
+
+We compare CAL against five baseline acquisition functions. The first, ENTROPY, is the most commonly used uncertainty-based baseline and acquires the data points for which the model has the highest predictive entropy. As a diversity-based baseline, following Yuan et al. (2020), we use BERTKM, which applies k-means clustering over the $l_{2}$ -normalized BERT output embeddings of the fine-tuned model to select $b$ data points. We compare against BADGE (Ash et al., 2020), an acquisition function that aims to combine diversity and uncertainty sampling by computing gradient embeddings $g_{x}$ for every candidate data point $x$ in $\mathcal{D}_{\mathrm{pool}}$ and then using clustering to select a batch. Each $g_{x}$ is computed as the gradient of the cross-entropy loss with respect to the parameters of the model's last layer, aiming to be the component that incorporates uncertainty in the acquisition function${}^{7}$. We also evaluate a recently introduced cold-start acquisition function called ALPS (Yuan et al., 2020) that uses the masked language model (MLM) loss of BERT as a proxy for model uncertainty in the downstream classification task. Specifically, aiming to leverage both uncertainty and diversity, ALPS forms a surprisal embedding $s_{x}$ for each $x$ by passing the unmasked input $x$ through the BERT MLM head and computing the cross-entropy loss for a random 15% subsample of tokens against the target labels. ALPS clusters these embeddings to sample $b$ sentences at each AL iteration. Lastly, we include RANDOM, which samples data points from the pool uniformly at random.
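
The ENTROPY baseline described above ranks the pool by predictive entropy alone; a minimal sketch (the helper name is ours):

```python
import numpy as np

def entropy_acquire(probs_pool, b, eps=1e-12):
    """Select the b pool examples with the highest predictive entropy."""
    ent = -np.sum(probs_pool * np.log(probs_pool + eps), axis=1)
    return np.argsort(ent)[::-1][:b]

probs = np.array([[0.99, 0.01],   # confident -> low entropy
                  [0.50, 0.50],   # maximally uncertain
                  [0.80, 0.20]])
print(entropy_acquire(probs, b=1))  # -> [1]
```

Unlike CAL, this selection ignores where the uncertain examples sit in the feature space, which is exactly the redundancy CAL tries to avoid.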
+
+# 3.3 Implementation Details
+
+We use BERT-BASE (Devlin et al., 2019) with an added task-specific classification layer, using the implementation from the HuggingFace library (Wolf et al., 2020). We evaluate the model 5 times per epoch on the development set following Dodge et al. (2020) and keep the checkpoint with the lowest validation loss. We use the standard splits provided for all datasets, if available; otherwise we randomly sample a validation set from the training set. We test all models on a held-out test set. We repeat all experiments with five different random seeds, resulting in different initializations of the parameters of the model's extra task-specific output feedforward layer and of the initial $\mathcal{D}_{\mathrm{lab}}$ . For all datasets we use a budget of $15\%$ of $\mathcal{D}_{\mathrm{pool}}$ , an initial training set of $1\%$ and an acquisition size of $b = 2\%$ . Each experiment is run on a single Nvidia Tesla V100 GPU. More details are provided in Appendix A.1.
+
+Figure 2: In-domain (ID) test accuracy during AL iterations for different acquisition functions.
+
+# 4 Results
+
+# 4.1 In-domain Performance
+
+We present results for in-domain test accuracy across all datasets and acquisition functions in Figure 2. We observe that CAL is consistently the top-performing method, especially on the DBPEDIA, PUBMED and AGNEWS datasets.
+
+CAL performs slightly better than ENTROPY in IMDB, QNLI and QQP, while in SST-2 most methods yield similar results. ENTROPY is the second best acquisition function overall, consistently performing better than diversity-based or hybrid baselines. This corroborates recent findings from Desai and Durrett (2020) that BERT is sufficiently calibrated (i.e. produces good uncertainty estimates), making it a tough baseline to beat in AL.
+
+BERTKM is a competitive baseline (e.g. on SST-2 and QNLI) but always underperforms compared to CAL and ENTROPY, suggesting that uncertainty is the most important signal in the data selection process. An interesting future direction would be to investigate in depth whether, and which (i.e. which layer), representations of current pretrained language models work best with similarity search algorithms and clustering.
+
+Similarly, we can see that BADGE, despite using both uncertainty and diversity, also achieves low performance, indicating that clustering the constructed gradient embeddings does not benefit data acquisition. Finally, we observe that ALPS generally underperforms and is close to RANDOM. We can conclude that this heterogeneous approach to uncertainty, i.e. using the pretrained language model as proxy for the downstream task, is beneficial only in the first few iterations, as shown in Yuan et al. (2020).
+
+Surprisingly, we observe that for the SST-2 dataset ALPS performs similarly to the highest performing acquisition functions, CAL and ENTROPY. We hypothesize that, due to the informal textual style of the reviews of SST-2 (noisy social media data), the pretrained BERT model can be used as a signal to query linguistically hard examples that benefit the downstream sentiment analysis task. This is an interesting finding, and a future research direction would be to investigate the correlation between the difficulty of an example in a downstream task and its perplexity (loss) under the pretrained language model.
+
+| TRAIN (ID) | SST-2 | IMDB | QQP |
+| --- | --- | --- | --- |
+| TEST (OOD) | IMDB | SST-2 | TWITTERPPDB |
+| RANDOM | 76.28 ± 0.72 | 82.50 ± 3.61 | 85.86 ± 0.48 |
+| BERTKM | 75.99 ± 1.01 | 84.98 ± 1.22 | - |
+| ENTROPY | 75.38 ± 2.04 | 85.54 ± 2.52 | 85.06 ± 1.96 |
+| ALPS | 77.06 ± 0.78 | 83.65 ± 3.17 | 84.79 ± 0.49 |
+| BADGE | 76.41 ± 0.92 | 85.19 ± 3.01 | - |
+| CAL | 79.00 ± 1.39 | 84.96 ± 2.36 | 86.20 ± 0.22 |
+
+Table 2: Out-of-domain (OOD) accuracy of models trained with the actively acquired datasets created with different AL acquisition strategies.
+
+# 4.2 Out-of-domain Performance
+
+We also evaluate the out-of-domain (OOD) robustness of the models trained with the actively acquired datasets of the last iteration (i.e. $15\%$ of $\mathcal{D}_{\mathrm{pool}}$ , or $100\%$ of the AL budget) under the different acquisition strategies. We present the OOD results for SST-2, IMDB and QQP in Table 2. When we test the models trained with SST-2 on IMDB (first column), we observe that CAL achieves the highest performance compared to the other methods by a large margin, indicating that acquiring contrastive examples can improve OOD generalization. In the opposite scenario (second column), we find that the highest accuracy is obtained with ENTROPY. However, similarly to the ID results for SST-2 (Figure 2), all models trained on different subsets of the IMDB dataset yield comparable performance when tested on the small SST-2 test set (the mean accuracies lie inside the standard deviations across models). We hypothesize that this is because SST-2 is not a challenging OOD dataset for the different IMDB models. This is also evident from the high OOD accuracy, $85\%$ on average, which is close to the $91\%$ SST-2 ID accuracy of the full model (i.e. trained on $100\%$ of the ID data). Finally, we observe that CAL obtains the highest OOD accuracy for QQP compared to RANDOM, ENTROPY and ALPS. Overall, our empirical results show that the models trained on the datasets actively acquired with CAL obtain consistently similar or better performance than all other approaches when tested on OOD data.
+
+# 5 Ablation Study
+
+We conduct an extensive ablation study in order to provide insights for the behavior of every component of CAL. We present all AL experiments on the AGNEWS dataset in Figure 3.
+
+
+Figure 3: In-domain (ID) test accuracy with different variants of CAL (ablation).
+
+Decision Boundary We first aim to evaluate our hypothesis that CAL acquires difficult examples that lie close to the model's decision boundary. Specifically, to validate that the ranking of the constructed neighborhoods is meaningful, we run an experiment where we acquire the candidate examples that have the minimum divergence from their neighbors, i.e. the opposite of CAL (we replace $\mathrm{argmax}(.)$ with $\mathrm{argmin}(.)$ in line 8 of Algorithm 1). We observe (Fig. 3 - CAL opposite) that even after acquiring $15\%$ of the unlabeled data, performance remains unchanged compared to the initial model (of the first iteration), and even degrades. In effect, this finding indicates that CAL does select informative data points.
+
+Neighborhood Next, we experiment with changing the way we construct the neighborhoods, aiming to improve computational efficiency. We thus modify our algorithm to create a neighborhood for each labeled example (instead of unlabeled). This way we compute a divergence score only for the neighbors of the training data points. However, we find this approach to slightly underperform (Fig. 3 - CAL per labeled example), possibly because only a small fraction of the pool is considered and thus the uncertainty of all the unlabeled data points is not taken into account.
+
+Scoring function We also experiment with several approaches to constructing our scoring function (cf. line 6 in Algorithm 1). Instead of computing the KL divergence between the predicted probabilities of each candidate example and its labeled neighbors (cf. line 5), we use the cross-entropy between the candidate's output probability distribution and the gold labels of the labeled neighbors. The intuition is to evaluate whether information from the actual label is more useful than the model's predictive probability distribution. We observe that this scoring function results in a slight drop in performance (Fig. 3 - Cross Entropy). We also experimented with various pooling operations to aggregate the KL divergence scores for each candidate data point. We found maximum and median (Fig. 3 - Max/Median) to perform similarly to the average (Fig. 3 - CAL), which is the pooling operation we keep in our proposed algorithm.
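
The pooling variants compared in this ablation differ only in how the $k$ per-neighbor KL scores are collapsed into $s_{x_p}$; a one-function sketch (the helper name is ours):

```python
import numpy as np

def aggregate(kl_scores, pooling="mean"):
    """Collapse the k per-neighbour KL divergences of one candidate into a
    single score s_{x_p}; mean is the operation CAL keeps."""
    ops = {"mean": np.mean, "max": np.max, "median": np.median}
    return float(ops[pooling](kl_scores))

kl = [0.1, 0.2, 0.9]
print(aggregate(kl), aggregate(kl, "max"), aggregate(kl, "median"))
```

Max emphasizes the single most contrastive neighbor, median is robust to one outlier neighbor, and mean (the default) weighs all neighbors equally.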
+
+Feature Space Since our approach is related to acquiring data near the model's decision boundary, this effectively translates into using the [CLS] output embedding of BERT. Still, we opted to cover several possible alternatives for the representations, i.e. the feature space, used to find the neighbors with KNN. We divide our exploration into two categories: intrinsic representations from the current fine-tuned model, and extrinsic representations obtained with different methods. For the first category, we examine representing each example with the mean embedding layer of BERT (Fig. 3 - Mean embedding) or the mean output embedding (Fig. 3 - Mean output). We find both alternatives to perform worse than using the [CLS] token (Fig. 3 - CAL). The motivation for the second category is to evaluate whether acquiring contrastive examples in the input feature space, i.e. representing the raw text, is meaningful (Gardner et al., 2020)${}^{9}$. We thus examine contextual representations from a pretrained BERT language model (Fig. 3 - BERT-pr [CLS]) (not fine-tuned on the task or domain) and non-contextualized TF-IDF vectors (Fig. 3 - TF-IDF). We find both approaches, along with Mean embedding, to largely underperform compared to our approach that acquires ambiguous data near the model decision boundary.
+
+${}^{9}$ This can be interpreted as comparing the effectiveness of selecting data near the model decision boundary vs. the task decision boundary, i.e. data that are similar for the task itself or for humans (in terms of having the same raw input/vocabulary), but are from different classes.
+
+# 6 Analysis
+
+Finally, we further investigate CAL and all acquisition functions considered (baselines) in terms of diversity, representativeness and uncertainty. Our aim is to provide insights on what data each method tends to select and what the uncertainty-diversity trade-off of each approach is. Table 3 shows the results of our analysis averaged across datasets. We denote with $L$ the labeled set, $U$ the unlabeled pool and $Q$ an acquired batch of data points from $U$${}^{10}$.
+
+# 6.1 Diversity & Uncertainty Metrics
+
+Diversity in input space (DIV.-I) We first evaluate the diversity of the actively acquired data in the input feature space, i.e. raw text, by measuring the overlap between tokens in the sampled sentences $Q$ and tokens from the rest of the data pool $U$ . Following Yuan et al. (2020), we compute DIV.-I as the Jaccard similarity between the set of tokens from the sampled sentences $Q$ , $\mathcal{V}_{\mathcal{Q}}$ , and the set of tokens from the unsampled sentences $\mathcal{U} \backslash \mathcal{Q}$ , $\mathcal{V}_{\mathcal{Q}'}$ , $\mathcal{J}(\mathcal{V}_{\mathcal{Q}}, \mathcal{V}_{\mathcal{Q}'}) = \frac{|\mathcal{V}_{\mathcal{Q}} \cap \mathcal{V}_{\mathcal{Q}'}|}{|\mathcal{V}_{\mathcal{Q}} \cup \mathcal{V}_{\mathcal{Q}'}|}$ . A high DIV.-I value indicates high diversity because the sampled and unsampled sentences have many tokens in common.
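
DIV.-I as defined above is a vocabulary-level Jaccard similarity; a minimal sketch using whitespace tokenization (a simplification of the actual tokenizer):

```python
def div_i(sampled, unsampled):
    """Jaccard similarity between the token sets of the sampled batch Q
    and the unsampled sentences U \\ Q; higher = more diverse batch."""
    v_q = {tok for sent in sampled for tok in sent.split()}
    v_rest = {tok for sent in unsampled for tok in sent.split()}
    return len(v_q & v_rest) / len(v_q | v_rest)

score = div_i(["the movie was great"], ["the film was dull", "great cast"])
print(score)
```

Here the two sides share 3 of 7 distinct tokens, so the score is 3/7.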
+
+Diversity in feature space (DIV.-F) We next evaluate diversity in the (model) feature space, using the [CLS] representations of a trained BERT model $^{11}$ . Following Zhdanov (2019) and Ein-Dor et al. (2020), we compute DIV.-F of a set $Q$ as $\left(\frac{1}{|U|}\sum_{x_i\in U}\min_{x_j\in Q}d(\Phi (x_i),\Phi (x_j))\right)^{-1}$ , where $\Phi (x_{i})$ denotes the [CLS] output token of example $x_{i}$ obtained by the model which was trained using $L$ , and $d(\Phi (x_i),\Phi (x_j))$ denotes the Euclidean distance between $x_{i}$ and $x_{j}$ in the feature space.
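
The DIV.-F formula can be computed directly; a NumPy sketch over toy vectors standing in for the [CLS] representations:

```python
import numpy as np

def div_f(emb_pool, emb_batch):
    """Inverse of the mean distance from each pool point to its closest
    batch point; a batch that covers the pool well scores higher."""
    # pairwise Euclidean distances, shape (|U|, |Q|)
    d = np.linalg.norm(emb_pool[:, None, :] - emb_batch[None, :, :], axis=-1)
    return 1.0 / d.min(axis=1).mean()

U = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])
Q_spread = np.array([[0.1, 0.0], [1.9, 0.0], [0.0, 1.9]])  # covers the pool
Q_tight = np.array([[0.0, 0.0]])                           # covers one corner
print(div_f(U, Q_spread) > div_f(U, Q_tight))  # -> True
```

A batch spread over the pool leaves every pool point close to some batch member, so the mean nearest distance is small and its inverse, DIV.-F, is large.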
+
+| METHOD | DIV.-I | DIV.-F | UNC. | REPR. |
+| --- | --- | --- | --- | --- |
+| RANDOM | 0.766 | 0.356 | 0.132 | 1.848 |
+| BERTKM | 0.717 | 0.363 | 0.145 | 2.062 |
+| ENTROPY | 0.754 | 0.323 | 0.240 | 2.442 |
+| ALPS | 0.771 | 0.360 | 0.126 | 2.038 |
+| BADGE | 0.655 | 0.339 | 0.123 | 2.013 |
+| CAL | 0.768 | 0.335 | 0.231 | 2.693 |
+
+Table 3: Uncertainty and diversity metrics across acquisition functions, averaged for all datasets.
+
+Uncertainty (UNC.) To measure uncertainty, we use the model $\mathcal{M}_f$ trained on the entire training dataset (Figure 2 - Full supervision). As in Yuan et al. (2020), we use the logits from the fully trained model to estimate the uncertainty of an example: this is a reliable estimate due to the model's high performance after training on many examples, and it offers a fair comparison across all acquisition strategies. First, we compute the predictive entropy of an input $x$ when evaluated by model $\mathcal{M}_f$ , and then we take the average over all sentences in a sampled batch $Q$ : $-\frac{1}{|Q|} \sum_{x \in Q} \sum_{c=1}^{C} p(y = c|x) \log p(y = c|x)$ . As the sampled batch $Q$ we use the full actively acquired dataset after completing our AL iterations (i.e. 15% of the data).
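
The UNC. metric is the batch-averaged predictive entropy under the fully trained model $\mathcal{M}_f$; a minimal sketch (the probabilities would come from $\mathcal{M}_f$'s softmax):

```python
import numpy as np

def unc(probs_batch, eps=1e-12):
    """Average predictive entropy over the acquired batch Q, where
    probs_batch[i] holds p(y|x_i) under the fully trained model M_f."""
    p = np.asarray(probs_batch, dtype=float)
    return float(-(p * np.log(p + eps)).sum(axis=1).mean())

# A maximally uncertain binary batch scores ln(2) nats per example.
print(round(unc([[0.5, 0.5], [0.5, 0.5]]), 4))  # -> 0.6931
```

Batches of confidently classified examples score near zero, so a high UNC. value marks an acquisition strategy that targets genuinely hard examples.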
+
+Representativeness (REPR.) We finally analyze the representativeness of the acquired data, as in Ein-Dor et al. (2020). We aim to study whether AL strategies tend to select outlier examples that do not properly represent the overall data distribution. We rely on the KNN-density measure proposed by Zhu et al. (2008), where the density of an example is quantified as one over the average distance between the example and its K most similar examples (i.e., its K nearest neighbors) within $U$ , based on the [CLS] representations as in DIV.-F. An example with a high density is less likely to be an outlier. We define the representativeness of a batch $Q$ as the average KNN-density of its instances, using the Euclidean distance with $K = 10$ .
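
The KNN-density measure underlying REPR. can be sketched as follows; here the batch score is taken as the mean density of its instances, and the vectors are toy stand-ins for the [CLS] representations:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_density(emb_batch, emb_pool, k=10):
    """Density of each batch example = 1 / mean distance to its k nearest
    neighbours in the pool U (Zhu et al., 2008); the batch score is the
    mean density, so outlier-heavy batches score lower."""
    k = min(k, len(emb_pool))
    knn = NearestNeighbors(n_neighbors=k).fit(emb_pool)
    dists, _ = knn.kneighbors(emb_batch)        # (|Q|, k)
    return float((1.0 / dists.mean(axis=1)).mean())

pool = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
central = np.array([[0.5, 0.5]])    # sits in a dense region
outlier = np.array([[9.0, 9.0]])    # far from every pool point
print(knn_density(central, pool) > knn_density(outlier, pool))  # -> True
```

A batch drawn from dense regions of the pool scores high, which is why this metric penalizes strategies that chase isolated outliers.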
+
+# 6.2 Discussion
+
+We first observe in Table 3 that ALPS acquires the most diverse data across all approaches. This is intuitive since ALPS is the most linguistically-informed method, as it essentially acquires data that are difficult for the language modeling task, thus favoring data with a more diverse vocabulary. All other methods acquire similarly diverse data, except BADGE, which has the lowest score. Interestingly, we observe a different pattern when evaluating diversity in the model feature space (using the [CLS] representations). BERTKM has the highest DIV.-F score, as expected, while CAL and ENTROPY have the lowest. This supports our hypothesis that uncertainty sampling tends to acquire uncertain but similar examples, while CAL by definition constrains its search to similar examples in the feature space that lie close to the decision boundary (contrastive examples). As for uncertainty, we observe that ENTROPY and CAL acquire the most uncertain examples, with average entropy almost twice as high as all other methods. Finally, regarding the representativeness of the acquired batches, we see that CAL obtains the highest score, followed by ENTROPY, while the remaining AL strategies acquire less representative data.
+
+Overall, our analysis validates our assumptions about the properties of the data expected to be selected by the various acquisition functions. Our findings show that diversity in the raw text does not necessarily correlate with diversity in the feature space. In other words, low DIV.-F does not translate to low diversity in the distribution of acquired tokens (DIV.-I), suggesting that CAL can acquire similar examples in the feature space that still have sufficiently diverse inputs. Furthermore, combining the results of our AL experiments (Figure 2) and our analysis (Table 3), we conclude that the strong performance of CAL, followed by ENTROPY, is due to acquiring uncertain data. We observe that the most notable difference, in terms of selected data, between these two approaches and the rest is uncertainty (UNC.), perhaps suggesting the superiority of uncertainty over diversity sampling. We show that CAL improves over ENTROPY because our algorithm "guides" the focus of uncertainty sampling by not considering redundant uncertain data that lie away from the decision boundary, thus improving representativeness. We finally find that RANDOM is evidently the worst approach, as it selects the least diverse and uncertain data on average compared to all methods.
+
+# 7 Related Work
+
+Uncertainty Sampling Uncertainty-based acquisition for AL focuses on selecting data points that the model predicts with low confidence. A simple uncertainty-based acquisition function is least confidence (Lewis and Gale, 1994), which sorts the pool in descending order by the probability of not predicting the most confident class. Another approach is to select samples that maximize the predictive entropy. Houlsby et al. (2011) propose Bayesian Active Learning by Disagreement (BALD), a method that chooses data points that maximize the mutual information between predictions and the model's posterior probabilities. Gal et al. (2017) applied BALD to deep neural models using Monte Carlo dropout (Gal and Ghahramani, 2016) to acquire multiple uncertainty estimates for each candidate example. Least confidence, entropy and BALD acquisition functions have been applied to a variety of text classification and sequence labeling tasks, and have been shown to substantially improve data efficiency (Shen et al., 2017; Siddhant and Lipton, 2018; Lowell and Lipton, 2019; Kirsch et al., 2019; Shelmanov et al., 2021; Margatina et al., 2021).
+
+Diversity Sampling On the other hand, diversity or representative sampling is based on selecting batches of unlabeled examples that are representative of the unlabeled pool, following the intuition that a representative set of examples, once labeled, can act as a surrogate for the full data available. In the context of deep learning, Geifman and El-Yaniv (2017) and Sener and Savarese (2018) select representative examples based on core-set construction, a fundamental problem in computational geometry. Inspired by generative adversarial learning, Gissin and Shalev-Shwartz (2019) define AL as a binary classification task with an adversarial classifier trained so that it cannot discriminate data from the training set and the pool. Other approaches based on adversarial active learning use out-of-the-box models to perform adversarial attacks on the training data in order to approximate the distance from the decision boundary of the model (Ducoffe and Precioso, 2018; Ru et al., 2020).
+
+Hybrid Several existing approaches combine representative and uncertainty sampling. These include active learning algorithms that use meta-learning (Baram et al., 2004; Hsu and Lin, 2015) or reinforcement learning (Fang et al., 2017; Liu et al., 2018) to learn a policy for switching between a diversity-based and an uncertainty-based criterion at each iteration. More recently, Ash et al. (2020) propose Batch Active learning by Diverse Gradient Embeddings (BADGE), and Yuan et al. (2020) propose Active Learning by Processing Surprisal (ALPS), a cold-start acquisition function specific to pretrained language models. Both methods construct uncertainty-based representations for the unlabeled data and then cluster them, hence combining
+
+both uncertainty and diversity sampling. The effectiveness of AL with pretrained language models, e.g. BERT (Devlin et al., 2019), has recently been evaluated empirically across a variety of NLP tasks by Ein-Dor et al. (2020), showing substantial improvements over random sampling.
+
+# 8 Conclusion & Future Work
+
+We present CAL, a novel acquisition function for AL that acquires contrastive examples: data points that are similar in the model's feature space yet for which the model outputs maximally different class probabilities. Our approach uses information from the feature space to create a neighborhood for each unlabeled example, and the predictive likelihood to rank the candidate examples. Empirical experiments on various in-domain and out-of-domain scenarios demonstrate that CAL performs better than other acquisition functions in the majority of cases. After analyzing the actively acquired datasets obtained with all methods considered, we conclude that entropy is the hardest baseline to beat, and that our approach improves over it by steering uncertainty sampling toward more informative data in regions near the decision boundary.
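The scoring loop described above can be condensed into a short numpy sketch: for each unlabeled example, find its nearest labeled neighbors in feature space and score it by the mean KL divergence between the neighbors' predicted distributions and its own. This is a simplified illustration of the idea, not the released implementation; the exact distance metric, KL direction and neighborhood size should be checked against the paper's Algorithm 1 and code.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) along the last axis."""
    return (p * (np.log(p + eps) - np.log(q + eps))).sum(axis=-1)

def cal_scores(pool_emb, pool_probs, lab_emb, lab_probs, k=10):
    """Contrastive score for each unlabeled example: mean divergence of its
    predicted distribution from those of its k nearest labeled neighbors."""
    scores = np.empty(len(pool_emb))
    for i, (e, p) in enumerate(zip(pool_emb, pool_probs)):
        d = np.linalg.norm(lab_emb - e, axis=1)   # distances in feature space
        nn = np.argsort(d)[:k]                    # k nearest labeled neighbors
        scores[i] = kl_divergence(lab_probs[nn], p).mean()
    return scores  # acquire the b highest-scoring examples
```

An example whose neighbors are predicted identically to it scores near zero; an example the model predicts very differently from its close neighbors (i.e. near a decision boundary) scores high and is acquired.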
+
+Still, our empirical results and analysis show that no single acquisition function consistently outperforms all others by a large margin, demonstrating that there is still room for improvement in the AL field.
+
+Furthermore, recent findings show that in specific tasks, such as Visual Question Answering (VQA), complex acquisition functions may not outperform random sampling because they tend to select collective outliers that hurt model performance (Karamcheti et al., 2021). We believe that taking a step back and analyzing the behavior of standard acquisition functions, e.g. with Dataset Maps (Swayamdipta et al., 2020), could be beneficial, especially if similar behavior appears in other NLP tasks.
+
+Another interesting future direction for CAL, related to interpretability, would be to evaluate whether acquiring examples that are contrastive for the task (Kaushik et al., 2020; Gardner et al., 2020) is more beneficial than acquiring examples that are contrastive for the model, as CAL does.
+
+# Acknowledgments
+
+KM and NA are supported by Amazon through the Alexa Fellowship scheme.
+
+# References
+
+Jordan T. Ash, Chicheng Zhang, Akshay Krishnamurthy, John Langford, and Alekh Agarwal. 2020. Deep batch active learning by diverse, uncertain gradient lower bounds. In Proceedings of the International Conference on Learning Representations.
+Yoram Baram, Ran El-Yaniv, and Kobi Luz. 2004. Online choice of active learning algorithms. Journal of Machine Learning Research, 5:255-291.
+Zalán Bodó, Zsolt Minier, and Lehel Csató. 2011. Active learning with clustering. In Proceedings of the Active Learning and Experimental Design Workshop in conjunction with AISTATS 2010, volume 16, pages 127-139.
+Klaus Brinker. 2003. Incorporating diversity in active learning with support vector machines. In Proceedings of the International Conference on Machine Learning, pages 59-66.
+Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020. A simple framework for contrastive learning of visual representations. In Proceedings of the International Conference on Machine Learning, volume 119, pages 1597-1607.
+David A. Cohn, Zoubin Ghahramani, and Michael I. Jordan. 1996. Active learning with statistical models. Journal of Artificial Intelligence Research, 4(1):129-145.
+Sanjoy Dasgupta. 2011. Two faces of active learning. Theoretical Computer Science, 412(19):1767-1781. Algorithmic Learning Theory (ALT 2009).
+Franck Dernoncourt and Ji Young Lee. 2017. PubMed 200k RCT: a dataset for sequential sentence classification in medical abstracts. In Proceedings of the Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 308-313.
+Shrey Desai and Greg Durrett. 2020. Calibration of pre-trained transformers. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 295-302.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4171-4186.
+Jesse Dodge, Gabriel Ilharco, Roy Schwartz, Ali Farhadi, Hannaneh Hajishirzi, and Noah A. Smith. 2020. Fine-tuning pretrained language models: Weight initializations, data orders, and early stopping. ArXiv.
+Melanie Ducoffe and Frederic Precioso. 2018. Adversarial active learning for deep networks: a margin based approach.
+
+Liat Ein-Dor, Alon Halfon, Ariel Gera, Eyal Shnarch, Lena Dankin, Leshem Choshen, Marina Danilevsky, Ranit Aharonov, Yoav Katz, and Noam Slonim. 2020. Active learning for BERT: An empirical study. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 7949-7962.
+Meng Fang, Yuan Li, and Trevor Cohn. 2017. Learning how to active learn: A deep reinforcement learning approach. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 595-605.
+Yarin Gal and Zoubin Ghahramani. 2016. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In Proceedings of the International Conference on Machine Learning, volume 48, pages 1050-1059.
+Yarin Gal, Riashat Islam, and Zoubin Ghahramani. 2017. Deep Bayesian active learning with image data. In Proceedings of the International Conference on Machine Learning, volume 70, pages 1183-1192.
+Matt Gardner, Yoav Artzi, Victoria Basmov, Jonathan Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi, Dheeru Dua, Yanai Elazar, Ananth Gottumukkala, Nitish Gupta, Hannaneh Hajishirzi, Gabriel Ilharco, Daniel Khashabi, Kevin Lin, Jiangming Liu, Nelson F. Liu, Phoebe Mulcaire, Qiang Ning, Sameer Singh, Noah A. Smith, Sanjay Subramanian, Reut Tsarfaty, Eric Wallace, Ally Zhang, and Ben Zhou. 2020. Evaluating models' local decision boundaries via contrast sets. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1307-1323.
+Yonatan Geifman and Ran El-Yaniv. 2017. Deep active learning over the long tail. CoRR, abs/1711.00941.
+Daniel Gissin and Shai Shalev-Shwartz. 2019. Discriminative active learning. CoRR, abs/1907.06347.
+Beliz Gunel, Jingfei Du, Alexis Conneau, and Veselin Stoyanov. 2021. Supervised contrastive learning for pre-trained language model fine-tuning. In Proceedings of the International Conference on Learning Representations.
+Dan Hendrycks, Xiaoyuan Liu, Eric Wallace, Adam Dziedzic, Rishabh Krishnan, and Dawn Song. 2020. Pretrained transformers improve out-of-distribution robustness. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 2744-2751.
+Neil Houlsby, Ferenc Huszár, Zoubin Ghahramani, and Máté Lengyel. 2011. Bayesian active learning for classification and preference learning. ArXiv.
+Wei-Ning Hsu and Hsuan-Tien Lin. 2015. Active learning by learning. In Proceedings of the Conference of the Association for the Advancement of Artificial Intelligence, pages 2659-2665.
+
+Siddharth Karamcheti, Ranjay Krishna, Li Fei-Fei, and Christopher Manning. 2021. Mind your outliers! investigating the negative impact of outliers on active learning for visual question answering. In Proceedings of the Annual Meeting of the Association for Computational Linguistics and the International Joint Conference on Natural Language Processing, pages 7265-7281.
+Divyansh Kaushik, Eduard Hovy, and Zachary Lipton. 2020. Learning the difference that makes a difference with counterfactually-augmented data. In Proceedings of the International Conference on Learning Representations.
+Andreas Kirsch, Joost van Amersfoort, and Yarin Gal. 2019. BatchBALD: Efficient and diverse batch acquisition for deep Bayesian active learning. In Proceedings of the Conference on Neural Information Processing Systems, pages 7026-7037.
+Wuwei Lan, Siyu Qiu, Hua He, and Wei Xu. 2017. A continuously growing dataset of sentential paraphrases. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1224-1234.
+David D. Lewis and Jason Catlett. 1994. Heterogeneous uncertainty sampling for supervised learning. In Machine Learning Proceedings 1994, pages 148-156.
+David D. Lewis and William A. Gale. 1994. A sequential algorithm for training text classifiers. In Proceedings of the Annual International ACM SIGIR Conference on Research and Development in Information Retrieval.
+Ming Liu, Wray Buntine, and Gholamreza Haffari. 2018. Learning how to actively learn: A deep imitation learning approach. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1874-1883.
+David Lowell and Zachary C Lipton. 2019. Practical obstacles to deploying active learning. Proceedings of the Conference on Empirical Methods in Natural Language Processing and the International Joint Conference on Natural Language Processing, pages 21-30.
+Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142-150.
+Katerina Margatina, Loic Barrault, and Nikolaos Aletras. 2021. Bayesian active learning with pretrained language models. CoRR, abs/2104.08320.
+
+Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In Proceedings of the International Conference on Neural Information Processing Systems, page 3111-3119.
+Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, pages 8024-8035.
+Nicholas Roy and Andrew McCallum. 2001. Toward optimal active learning through sampling estimation of error reduction. In Proceedings of the International Conference on Machine Learning, page 441-448.
+Dongyu Ru, Jiangtao Feng, Lin Qiu, Hao Zhou, Mingxuan Wang, Weinan Zhang, Yong Yu, and Lei Li. 2020. Active sentence learning by adversarial uncertainty sampling in discrete space. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4908-4917, Online. Association for Computational Linguistics.
+Ozan Sener and Silvio Savarese. 2018. Active learning for convolutional neural networks: A core-set approach. In Proceedings of the International Conference on Learning Representations.
+Burr Settles. 2009. Active learning literature survey. Computer sciences technical report.
+Artem Shelmanov, Dmitri Puzyrev, Lyubov Kupriyanova, Denis Belyakov, Daniil Larionov, Nikita Khromov, Olga Kozlova, Ekaterina Artemova, Dmitry V. Dylov, and Alexander Panchenko. 2021. Active learning for sequence tagging with deep pre-trained models and Bayesian uncertainty estimates. In Proceedings of the Conference of the European Chapter of the Association for Computational Linguistics, pages 1698-1712.
+Dan Shen, Jie Zhang, Jian Su, Guodong Zhou, and Chew-Lim Tan. 2004. Multi-criteria-based active learning for named entity recognition. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 589-596.
+Yanyao Shen, Hyokun Yun, Zachary Lipton, Yakov Kronrod, and Animashree Anandkumar. 2017. Deep active learning for named entity recognition. In Proceedings of the Workshop on Representation Learning for NLP, pages 252-256.
+Aditya Siddhant and Zachary C Lipton. 2018. Deep Bayesian active learning for natural language processing: Results of a large-scale empirical study.
+
+In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 2904-2909.
+Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1631-1642.
+Kihyuk Sohn. 2016. Improved deep metric learning with multi-class n-pair loss objective. In Advances in Neural Information Processing Systems, volume 29. Curran Associates, Inc.
+Swabha Swayamdipta, Roy Schwartz, Nicholas Lourie, Yizhong Wang, Hannaneh Hajishirzi, Noah A. Smith, and Yejin Choi. 2020. Dataset cartography: Mapping and diagnosing datasets with training dynamics. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 9275-9293.
+Aaron van den Oord, Yazhe Li, and Oriol Vinyals. 2019. Representation learning with contrastive predictive coding.
+Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In International Conference on Learning Representations.
+Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45.
+Michelle Yuan, Hsuan-Tien Lin, and Jordan Boyd-Graber. 2020. Cold-start active learning through self-supervised language modeling. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7935-7948, Online. Association for Computational Linguistics.
+Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems, volume 28, pages 649-657. Curran Associates, Inc.
+Fedor Zhdanov. 2019. Diverse mini-batch active learning.
+
+Jingbo Zhu, Huizhen Wang, Tianshun Yao, and Benjamin K Tsou. 2008. Active learning with sampling by uncertainty and density for word sense disambiguation and text classification. In Proceedings of the International Conference on Computational Linguistics, pages 1137-1144.
+
+# A Appendix
+
+# A.1 Data & Hyperparameters
+
+In this section we provide details of all the datasets used in this work and the hyperparameters used for training the model. For QNLI, IMDB and SST-2 we randomly sample $10\%$ of the training set to serve as the validation set, while for AGNEWS and QQP we sample $5\%$. For the DBPEDIA dataset we undersample both the training and validation sets (from the standard splits) to facilitate our AL simulation (the original dataset consists of 560K training and 28K validation examples). For all datasets we use the standard test set, except for SST-2, QNLI and QQP, which are taken from the GLUE benchmark (Wang et al., 2019): for these we use the development set as the held-out test set and subsample a development set from the training set.
+
+For all datasets we train BERT-BASE (Devlin et al., 2019) from the HuggingFace library (Wolf et al., 2020) in PyTorch (Paszke et al., 2019). We train all models with batch size 16, learning rate $2e-5$, no weight decay, and the AdamW optimizer with epsilon $1e-8$. For all datasets we use a maximum sequence length of 128, except for IMDB, which contains longer input texts, where we use 256. To ensure reproducibility and a fair comparison between the various methods under evaluation, we run all experiments with the same five seeds, randomly selected from the range [1, 9999]. Following Dodge et al. (2020), we evaluate the model 5 times per epoch on the development set and keep the checkpoint with the lowest validation loss. We use the code provided by Yuan et al. (2020) for ALPS, BADGE and BERTKM.
+
+# A.2 Efficiency
+
+In this section we compare the efficiency of the acquisition functions considered in our experiments. We denote by $m$ the number of labeled data points in $\mathcal{D}_{\mathrm{lab}}$, $n$ the number of unlabeled data points in $\mathcal{D}_{\mathrm{pool}}$, $C$ the number of classes in the downstream classification task, $d$ the dimension of the embeddings, $t$ the fixed number of iterations for k-MEANS, $l$ the maximum sequence length and $k$ the acquisition size. In our experiments, following Yuan et al. (2020), $k = 100$, $d = 768$, $t = 10$, and $l = 128$. ALPS requires $\mathcal{O}(tknl)$, given that the surprisal embeddings have been computed. BERTKM and BADGE, the
+
+most computationally heavy approaches, require $\mathcal{O}(knd)$ and $\mathcal{O}(Cknd)$ respectively, given that the gradient embeddings are computed for BADGE $^{13}$. On the other hand, ENTROPY only requires $n$ forward passes through the model to obtain the logits for all the data in $\mathcal{D}_{\mathrm{pool}}$. Our approach, CAL, instead first requires $m + n$ forward passes, to acquire the logits and the CLS representations of the data (in $\mathcal{D}_{\mathrm{pool}}$ and $\mathcal{D}_{\mathrm{lab}}$), and then one pass over all data in $\mathcal{D}_{\mathrm{pool}}$ to obtain the scores.
+
+We present the runtimes in detail for all datasets and acquisition functions in Tables 4 and 5. We define the total acquisition time as the sum of two components: inference time and selection time. Inference time is the time required to pass all data through the model to acquire predictions, probability distributions or model encodings (representations); it is explicitly required by the uncertainty-based methods, such as ENTROPY, and by our method, CAL. The remaining time is the selection time: the time for all computations necessary to rank the candidates and select the $b$ most important examples from $\mathcal{D}_{\mathrm{pool}}$.
+
+We observe in Table 4 that the diversity-based functions do not require this explicit inference time, while for ENTROPY it is the only computation needed (taking the argmax of a list of uncertainty scores is negligible). CAL requires both inference and selection time. Its inference time is slightly higher than that of ENTROPY because we perform $m + n$ forward passes instead of $n$, i.e. over both $\mathcal{D}_{\mathrm{pool}}$ and $\mathcal{D}_{\mathrm{lab}}$ instead of only $\mathcal{D}_{\mathrm{pool}}$. The selection time for CAL corresponds to the for-loop presented in Algorithm 1. We observe that it is often less computationally expensive than the inference step (a simple forward pass through the model). Still, there is room for improvement in reducing the time complexity of this step.
+
+In Table 5 we present the total time for all datasets (ordered by increasing $\mathcal{D}_{\mathrm{pool}}$ size) and the average time for each acquisition function, as a means to rank their efficiency. Because we do not apply all acquisition functions to all datasets, we compute three different averages to ensure a fair comparison. AVG.-ALL is the average time across all 7 datasets and is used to compare RANDOM, ALPS, ENTROPY and CAL. AVG.-3 is the average time across the first 3 datasets (DBPEDIA, IMDB and SST-2) and is used to compare all
+
+| | DBPEDIA | IMDB | SST-2 | QNLI | AGNEWS | PUBMED | QQP |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| RANDOM | (0,0) | (0,0) | (0,0) | (0,0) | (0,0) | (0,0) | (0,0) |
+| ALPS | (0,181) | (0,222) | (0,733) | (0,1607) | (0,2309) | (0,5878) | (0,14722) |
+| BERTKM | (0,467) | (0,431) | (0,4265) | (0,8138) | (0,9344) | (0,25965) | (-,-) |
+| BADGE | (0,12871) | (0,3816) | (0,25640) | (-,-) | (-,-) | (-,-) | (-,-) |
+| ENTROPY | (103,1) | (107,0) | (173,0) | (331,0) | (402,0) | (596,0) | (1070,0) |
+| CAL | (133,49) | (212,61) | (464,244) | (528,376) | (656,628) | (1184,1445) | (1541,2857) |
+
+Table 4: Runtimes (in seconds) for all datasets and acquisition functions. In each cell of the table we present a tuple $(i,s)$ where $i$ is the inference time and $s$ the selection time. Inference time is the time for the model to perform a forward pass for all the unlabeled data in $\mathcal{D}_{\mathrm{pool}}$ and selection time is the time that each acquisition function requires to rank all candidate data points and select $b$ for annotation (for a single iteration). Since we cannot report the runtimes for every model in the AL pipeline (at each iteration the size of $\mathcal{D}_{\mathrm{pool}}$ changes), we provide the median.
+
+| | DBPEDIA | IMDB | SST-2 | QNLI | AGNEWS | PUBMED | QQP | AVG.-ALL | AVG.-3 | AVG.-6 |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| RANDOM | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| ALPS | 181 | 222 | 733 | 1607 | 2309 | 5878 | 14722 | 3664 | 378 | 1821 |
+| BERTKM | 467 | 431 | 4265 | 8138 | 9344 | 25965 | - | - | 1721 | 8101 |
+| BADGE | 12871 | 3816 | 25640 | - | - | - | - | - | 14109 | - |
+| ENTROPY | 104 | 107 | 173 | 331 | 402 | 596 | 1070 | 397 | 128 | 285 |
+| CAL | 182 | 273 | 708 | 904 | 1284 | 2629 | 4398 | 1482 | 387 | 996 |
+
+Table 5: Runtimes (in seconds) for all datasets and acquisition functions. In each cell we present the total acquisition time (inference plus selection). AVG.-ALL shows the average acquisition time for each acquisition function over all datasets, AVG.-6 over all datasets except QQP, and AVG.-3 over the first 3 datasets only (DBPEDIA, IMDB, SST-2).
+
+acquisition functions. Finally, AVG.-6 is the average time across all datasets apart from QQP and is used to compare RANDOM, ALPS, BERTKM, ENTROPY and CAL.
+
+We first observe that ENTROPY is overall the most efficient acquisition function. According to the AVG.-ALL column, CAL is the second most efficient, followed by ALPS. The AVG.-6 column shows the same pattern, with BERTKM being the slowest method. Finally, we compare all acquisition functions on the 3 smallest (in terms of $\mathcal{D}_{\mathrm{pool}}$ size) datasets and find that ENTROPY is the fastest method, followed by ALPS and CAL, which require almost 3 times more computation. The other clustering methods, BERTKM and BADGE, are significantly more computationally expensive, requiring respectively 13 and 100(!) times more time than ENTROPY.
+
+Interestingly, we observe the effect of the acquisition size ($2\%$ of $\mathcal{D}_{\mathrm{pool}}$ in our case) and of the size of $\mathcal{D}_{\mathrm{pool}}$ itself on the clustering methods: as these parameters increase, the cost of the corresponding acquisition functions grows dramatically. For example, on the 3 smallest datasets ALPS requires similar time to CAL. However,
+
+when we increase $b$ and $m$ (i.e. as we move from DBPEDIA with $20K$ examples in $\mathcal{D}_{\mathrm{pool}}$ to QNLI with $100K$, etc.; see Table 1), the acquisition time of ALPS becomes twice that of CAL. For instance, on QQP with acquisition size 3270, ALPS requires 14722 seconds on average, while CAL requires 4398. This shows that even though our approach becomes more computationally expensive as the size of $\mathcal{D}_{\mathrm{pool}}$ increases, its complexity grows linearly, whereas for the other hybrid methods that use clustering, it grows much faster.
+
+# A.3 Reproducibility
+
+All code for data preprocessing, model implementations, and active learning algorithms is made available at https://github.com/mourga/contrastive-active-learning. For questions regarding the implementation, please contact the first author.
\ No newline at end of file
diff --git a/activelearningbyacquiringcontrastiveexamples/images.zip b/activelearningbyacquiringcontrastiveexamples/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..69a5949a1fbff794e4160550dfc0604248611588
--- /dev/null
+++ b/activelearningbyacquiringcontrastiveexamples/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:99b4fe9c74f5b62e32008959ff7b167ff1f0cf0ce5bb31dd7d1689093c1f429f
+size 374596
diff --git a/activelearningbyacquiringcontrastiveexamples/layout.json b/activelearningbyacquiringcontrastiveexamples/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..0353ca400bf1f5bc9ae10e48d9a3e0a932266ceb
--- /dev/null
+++ b/activelearningbyacquiringcontrastiveexamples/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:275480ecad34c022ee4611b14967171f50931cd10a645a6546462d326931114d
+size 531096
diff --git a/adapterdropontheefficiencyofadaptersintransformers/536dfedc-2365-49a3-845c-b13dc83a8e29_content_list.json b/adapterdropontheefficiencyofadaptersintransformers/536dfedc-2365-49a3-845c-b13dc83a8e29_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..fff447d337dc40b3eeef94b65d2c4a07999e2627
--- /dev/null
+++ b/adapterdropontheefficiencyofadaptersintransformers/536dfedc-2365-49a3-845c-b13dc83a8e29_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2d3c23221e0b2532d8ca53914db0944d261f020f21ab9664262b47dc60c641eb
+size 105134
diff --git a/adapterdropontheefficiencyofadaptersintransformers/536dfedc-2365-49a3-845c-b13dc83a8e29_model.json b/adapterdropontheefficiencyofadaptersintransformers/536dfedc-2365-49a3-845c-b13dc83a8e29_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..95a29e6c8f53aca6be4b2ceaf1781e8decf94e08
--- /dev/null
+++ b/adapterdropontheefficiencyofadaptersintransformers/536dfedc-2365-49a3-845c-b13dc83a8e29_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:85368e47a9b999224613ba8c77fc0ce94110e76d381bd9efd0e631d824394e2e
+size 123827
diff --git a/adapterdropontheefficiencyofadaptersintransformers/536dfedc-2365-49a3-845c-b13dc83a8e29_origin.pdf b/adapterdropontheefficiencyofadaptersintransformers/536dfedc-2365-49a3-845c-b13dc83a8e29_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..15bee8cf4666ecf1eb76ffdd1298b439dce1e617
--- /dev/null
+++ b/adapterdropontheefficiencyofadaptersintransformers/536dfedc-2365-49a3-845c-b13dc83a8e29_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:07524abce34d126484f9ef8632e1a1a2a9d65c02041e0c1abbf96920f191c51a
+size 2701060
diff --git a/adapterdropontheefficiencyofadaptersintransformers/full.md b/adapterdropontheefficiencyofadaptersintransformers/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..10cb392b3dfaf3b5808075a65f7755e15e18bdab
--- /dev/null
+++ b/adapterdropontheefficiencyofadaptersintransformers/full.md
@@ -0,0 +1,394 @@
+# AdapterDrop: On the Efficiency of Adapters in Transformers
+
+Andreas Rücklé*, Gregor Geigle, Max Glockner, Tilman Beck, Jonas Pfeiffer, Nils Reimers and Iryna Gurevych
+Ubiquitous Knowledge Processing Lab (UKP), Department of Computer Science, Technische Universität Darmstadt
+www.ukp.tu-darmstadt.de
+
+# Abstract
+
+Transformer models are expensive to fine-tune, slow for inference, and have large storage requirements. Recent approaches tackle these shortcomings by training smaller models, dynamically reducing the model size, and by training light-weight adapters. In this paper, we propose AdapterDrop, removing adapters from lower transformer layers during training and inference, which incorporates concepts from all three directions. We show that AdapterDrop can dynamically reduce the computational overhead when performing inference over multiple tasks simultaneously, with minimal decrease in task performances. We further prune adapters from AdapterFusion, which improves the inference efficiency while maintaining the task performances entirely.
+
+# 1 Introduction
+
+While transfer learning has become the go-to method for solving NLP tasks (Pan and Yang, 2010; Torrey and Shavlik, 2010; Ruder, 2019; Howard and Ruder, 2018; Peters et al., 2018), transformer-based models are notoriously deep requiring millions or even billions of parameters (Radford et al., 2018; Devlin et al., 2019; Radford et al., 2019; Liu et al., 2019; Brown et al., 2020). This results in slow inference and large storage requirements.
+
+At least three independent lines of research have recently evolved to tackle these shortcomings. (1) Smaller and faster models that are either distilled or trained from scratch (Sanh et al., 2019; Sun et al., 2020; Bai et al., 2021; Wang et al., 2020). (2) Robustly trained transformers in which the model depth can be reduced at run-time, thereby decreasing inference time dynamically (Fan et al., 2020; Elbayad et al., 2020; Xin et al., 2020; Hou et al., 2020). (3) Adapters, which, instead of fully fine-tuning the model, only train a newly introduced set of weights at every layer, thereby sharing
+
+the majority of parameters between tasks (Houlsby et al., 2019; Bapna and Firat, 2019; Pfeiffer et al., 2020a). Adapters have been shown to work well for machine translation (Bapna and Firat, 2019), cross-lingual transfer (Pfeiffer et al., 2020b, 2021b; Üstün et al., 2020; Vidoni et al., 2020; Ansell et al., 2021), community QA (Rücklé et al., 2020), and task composition for transfer learning (Stickland and Murray, 2019; Pfeiffer et al., 2021a; Lauscher et al., 2020; Wang et al., 2021; Poth et al., 2021). Despite their recent popularity, the computational efficiency of adapters has not been explored beyond parameter efficiency.
+
+We close this gap and establish the computational efficiency of two adapter architectures at training and inference time. We investigate different strategies to further improve the efficiency of adapter-based models by incorporating ideas from all three directions mentioned above. Our strategies rely on dropping out adapters from transformers, at training and inference time, resulting in models that are dynamically adjustable regarding the available computational resources. Our approaches are agnostic to the pre-trained transformer model (e.g., base, large), which makes them broadly applicable.
+
+# Contributions:
+
+1. We are the first to establish the computational efficiency of adapters compared to full fine-tuning. We show that training steps with adapters can be up to $60\%$ faster than full model fine-tuning with common hyperparameter choices, while being $4 - 6\%$ slower at inference. Hence, adapters are a suitable choice for researchers interested in faster training times, or when extensive hyperparameter tuning is required.
+
+2. We propose AdapterDrop, the efficient and dynamic removal of adapters with minimal impact on the task performances. We show that dropping adapters from lower transformer layers considerably improves the inference speed in
+
+| Setting | Adapter | 128/16 | 128/32 | 512/16 | 512/32 |
+| --- | --- | --- | --- | --- | --- |
+| Training | Houlsby | 1.48 | 1.53 | 1.36 | 1.33 |
+| Training | Pfeiffer | 1.57 | 1.60 | 1.41 | 1.37 |
+| Inference | Houlsby | 0.94 | 0.94 | 0.96 | 0.96 |
+| Inference | Pfeiffer | 0.95 | 0.95 | 0.96 | 0.96 |
+
+Table 1: Relative speed of adapters compared to fully fine-tuned models; columns denote sequence length/batch size. For example, 1.6 for training with the Pfeiffer adapter means that we can perform 1.6 training steps with this adapter in the time of one training step with full model fine-tuning.
+
+multi-task settings. For example, with adapters dropped from the first five layers, AdapterDrop is $39\%$ faster when performing inference on 8 tasks simultaneously. This can be beneficial for researchers working on models that need to make multiple predictions on each input.
+
+3. We prune adapters from adapter compositions in AdapterFusion (Pfeiffer et al., 2021a) and retain only the most important adapters after transfer learning, resulting in faster inference while maintaining the task performances entirely. This is suitable for settings with little labeled training data, where AdapterFusion can achieve ample improvements over standard single task models.
+
+# 2 Efficiency of Adapters
+
+We first establish the computational efficiency of adapters without AdapterDrop. As illustrated in Figure 1, significant differences exist in the forward and backward pass when fine-tuning adapters compared to fully fine-tuning the model. In the forward pass, adapters add complexity with the additional components; however, it is not necessary to backpropagate through the entire model during the backward pass. We compare the training and inference speed of full model fine-tuning against the adapter architectures of Houlsby et al. (2019) and Pfeiffer et al. (2021a) (depicted in Figure 1) using the AdapterHub.ml framework (Pfeiffer et al., 2020a). We conduct our measurements with the transformer configuration of BERT base and verify them with different GPUs. $^{1}$
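The two architectures compared here share the same basic bottleneck structure. The following is a minimal numpy sketch, not the authors' implementation; `W_down` and `W_up` are illustrative parameter names, with the standard compression rate of 16 assumed:

```python
import numpy as np

rng = np.random.default_rng(0)
hidden = 768                          # BERT-base hidden size
compression = 16                      # standard compression rate
bottleneck = hidden // compression    # 48-dimensional bottleneck

# illustrative adapter parameters (randomly initialized here)
W_down = rng.normal(0.0, 0.02, (hidden, bottleneck))
W_up = rng.normal(0.0, 0.02, (bottleneck, hidden))

def adapter(h):
    """Bottleneck adapter: FF Down -> non-linearity -> FF Up, plus residual."""
    return h + np.maximum(h @ W_down, 0.0) @ W_up

h = rng.normal(size=(4, hidden))      # (tokens, hidden)
out = adapter(h)

# only the bottleneck weights are trained; the transformer itself is frozen
n_trainable = W_down.size + W_up.size # 73,728 per layer (without biases)
```

Because only these bottleneck weights receive gradients, backpropagation touches far fewer parameters than full fine-tuning, which is the source of the training speedups reported below.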
+
+We provide measurements corresponding to common experiment configurations in Table 1.
+
+Training. Adapters can be considerably faster compared to full model fine-tuning, up to $60\%$ faster in some configurations. The two adapter architectures differ only marginally in terms of training efficiency: due to its simpler architecture, training steps of the Pfeiffer adapters are slightly faster. The magnitude of the differences depends on the input size; the available CUDA cores are the primary bottleneck. $^2$ We do not observe any particular differences between adapters and full fine-tuning regarding the training convergence. $^3$
+
+The training speedup can be explained by the decreased overhead of gradient computation. Most of the parameters are frozen when using adapters and it is not necessary to backpropagate through the first components (see Figure 1).
+
+Inference. The two adapter architectures are $94 - 96\%$ as fast as fully fine-tuned models, depending on the input size. This can have a considerable impact when deployed at scale.
+
+# 3 AdapterDrop
+
+We have established that adapters are more efficient in terms of training time; however, there is a perpetual need for sustainable and efficient models (Strubell et al., 2019). Backpropagating through as few layers as possible would further improve the efficiency of training adapters. The efficiency of inference can be improved by sharing representations at lower transformer layers when simultaneously performing inference for multiple tasks—in other words, when performing multiple independent classifications on the same input. We establish this in Table 2, finding that models are up to $8.4\%$ faster with every shared layer (16 tasks).
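The saving from shared lower layers can be illustrated with a toy cost count (a sketch with made-up stand-ins for transformer blocks, not the measured setup): the shared trunk is computed once, and only the adapter-equipped upper layers run per task.

```python
import numpy as np

rng = np.random.default_rng(1)
n_layers, hidden, n_tasks, n_shared = 12, 64, 8, 5

# one weight matrix per layer stands in for a full transformer block
layers = [rng.normal(0.0, 0.1, (hidden, hidden)) for _ in range(n_layers)]

def block(h, W):                      # toy stand-in for a transformer layer
    return np.tanh(h @ W)

x = rng.normal(size=(1, hidden))

# shared trunk: the first n_shared layers are computed once for all tasks
h = x
for W in layers[:n_shared]:
    h = block(h, W)

# branch per task for the remaining, adapter-equipped layers
outputs = []
for t in range(n_tasks):
    ht = h
    for W in layers[n_shared:]:
        ht = block(ht, W)             # task-specific adapters would act here
    outputs.append(ht)

# layer evaluations with vs. without sharing
evals_shared = n_shared + n_tasks * (n_layers - n_shared)   # 5 + 8*7 = 61
evals_naive = n_tasks * n_layers                            # 8*12 = 96
```

With 8 tasks and 5 shared layers, the toy count drops from 96 to 61 layer evaluations; the measured speedups in Table 2 follow the same logic.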
+
+Motivated by these observations, we propose AdapterDrop: dynamically removing adapters from lower transformer layers (depicted in Figure 1). AdapterDrop is similar to dropping entire transformer layers (Fan et al., 2020) but is specialized to adapter settings, where lower layers often have a small impact on the task performances (Houlsby et al., 2019).
+
+We study two training methods for AdapterDrop: (1) Specialized AdapterDrop: Removing adapters from the first $n$ transformer layers, where $n$ is fixed during training. This yields separate models for each possible $n$ . (2) Robust AdapterDrop: Drawing the integer $n$ randomly from [0, 11] for each
+
+
+Figure 1: Standard adapter fine-tuning vs. Adapter-Drop fine-tuning. The left model includes adapters at every layer whereas the right model has adapters dropped at the first layer. The arrows to the right of each model indicate the information flow for the Forward and Backward pass through the model.
+
+| Simultaneous Tasks | 2 | 4 | 8 | 16 |
+| --- | --- | --- | --- | --- |
+| Speedup (each layer) | 4.3% | 6.6% | 7.8% | 8.4% |
+
+Table 2: Speedup for each shared transformer layer when performing inference for multiple tasks simultaneously (details are given in Appendix G.2).
+
+training batch. $^{4}$ This yields one robust model that is applicable to a varying number of dropped layers. We study the effectiveness of AdapterDrop on the devsets of the GLUE benchmark (Wang et al., 2018) using RoBERTa base (Liu et al., 2019). $^{5}$
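The two training methods can be sketched as follows (toy stand-ins, not the actual training code): specialized AdapterDrop fixes $n$ per model, while robust AdapterDrop redraws $n$ per batch so a single model covers every setting.

```python
import random

N_LAYERS = 12

def draw_n_dropped(robust, n_fixed=5, rng=random.Random(0)):
    """Number of leading layers whose adapters are skipped for this batch.
    Robust AdapterDrop draws n uniformly from [0, 11] per batch;
    specialized AdapterDrop always uses a fixed n."""
    return rng.randint(0, N_LAYERS - 1) if robust else n_fixed

def forward(x, n_dropped):
    """Toy forward pass: each layer adds 1; an active adapter adds 0.5."""
    h = x
    for layer in range(N_LAYERS):
        h += 1.0
        if layer >= n_dropped:        # AdapterDrop: no adapter in first n layers
            h += 0.5
    return h

# robust training draws a fresh n for every batch ...
ns = [draw_n_dropped(robust=True) for _ in range(1000)]
# ... so at inference any n in [0, 11] works with the same parameters
out = forward(0.0, n_dropped=5)       # 12 layers + 7 active adapters
```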
+
+Figure 2 shows that specialized AdapterDrop maintains good results even with several dropped layers. With the first five layers dropped, specialized AdapterDrop maintains $97.1\%$ of the original performance (averaged over all eight GLUE tasks; see Table 8). Moreover, robust AdapterDrop achieves comparable results, and with five layers dropped it maintains $95.4\%$ of the original performance (on avg). The advantage of robust over specialized AdapterDrop is that the robust variant can be dynamically scaled: based on the currently available computational resources, robust AdapterDrop can (de)activate layers with the same set of parameters, whereas specialized AdapterDrop needs to be trained explicitly for every setting.
+
+The efficiency gains can be large. When performing inference for multiple tasks simultaneously, we measure inference speedups of $21 - 42\%$ with five
+
+
+Figure 2: Task performances in relation to dropped layers during evaluation (Figure 13 shows all tasks). 'Standard adapter' is trained with no dropped layers.
+
+dropped layers—depending on the number of simultaneous tasks (Table 2). $^{6}$ Training of our robust adapters is also more efficient, which increases the speed of training steps by $26\%$. $^{7}$
+
+# 4 Efficiency of AdapterFusion
+
+AdapterFusion (Pfeiffer et al., 2021a) leverages the knowledge of several adapters from different tasks and learns an optimal combination of the adapters' output representations for a single target task (see Figure 3). AdapterFusion (AF) is particularly useful for small training sets where learning adequate models is difficult. Despite its effectiveness, AF is computationally expensive because all included adapters are passed through sequentially. $^{8}$
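Conceptually, each fusion layer attends over the stacked adapter outputs, with the layer input acting as the query. The following numpy sketch uses assumed shapes and illustrative projection names (`W_q`, `W_k`, `W_v`); it is not AdapterHub's exact parametrization:

```python
import numpy as np

rng = np.random.default_rng(2)
hidden, n_adapters, tokens = 64, 8, 4

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# stacked outputs of the task adapters at one layer: (tokens, n_adapters, hidden)
adapter_outs = rng.normal(size=(tokens, n_adapters, hidden))
h = rng.normal(size=(tokens, hidden))          # layer input acts as the query

# learned projections (randomly initialized here)
W_q = rng.normal(0.0, 0.1, (hidden, hidden))
W_k = rng.normal(0.0, 0.1, (hidden, hidden))
W_v = rng.normal(0.0, 0.1, (hidden, hidden))

q = h @ W_q                                    # (tokens, hidden)
k = adapter_outs @ W_k                         # (tokens, n_adapters, hidden)
v = adapter_outs @ W_v

scores = np.einsum('th,tah->ta', q, k)         # one score per adapter per token
weights = softmax(scores, axis=-1)             # attention over adapters
fused = np.einsum('ta,tah->th', weights, v)    # weighted combination
```

The per-token `weights` are exactly the activations used later in §5.2 to measure each adapter's relative importance. Note that the adapter outputs themselves must still be computed sequentially, which is where the slowdown in Table 3 comes from.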
+
+Table 3 shows that the differences can be substantial for both training and inference. For instance, compared to a fully fine-tuned model, AF with eight adapters is around $47\%$ slower at training time and $62\%$ slower at inference. $^{9}$
+
+# 5 AdapterDrop for AdapterFusion
+
+There exists considerable potential for improving the efficiency of AF, especially at inference time. We address this with two variants of AdapterDrop
+
+| Adapters | AF vs. Full FT (Training) | AF vs. Full FT (Inference) | AF vs. Adapter (Training) | AF vs. Adapter (Inference) |
+| --- | --- | --- | --- | --- |
+| 2 | 0.92 | 0.64 | 0.57 | 0.68 |
+| 8 | 0.53 | 0.38 | 0.33 | 0.40 |
+| 16 | 0.33 | 0.24 | 0.21 | 0.26 |
+
+Table 3: Relative speed of AdapterFusion (with 2/8/16 adapters) compared to a fully fine-tuned model and compared to a single-task adapter (right). Measured with a batch size of 32, and a sequence length of 128.
+
+
+Figure 3: Standard AdapterFusion vs. AdapterFusion pruning, each with 3 adapters initially. The left model includes all adapters at every layer whereas the right model has one adapter pruned at every layer.
+
+for AF by (1) removing entire AF layers; (2) pruning the least important adapters from AF models.
+
+# 5.1 Removing AdapterFusion Layers
+
+We fuse the adapters from all eight GLUE tasks and observe the largest gains of AF on RTE and CoLA. We additionally train robust AF models with the same procedure as in §3. We investigate from how many lower layers we can remove AF at test time while still outperforming the corresponding single-task adapter (without AdapterDrop).
+
+Figure 4 shows that AF performs better than the
+
+
+Figure 4: Comparison of AdapterFusion with (orange) and without (blue) AdapterDrop training during inference when omitting early AF layers.
+
+
+Figure 5: Task performance of AdapterFusion Pruning. AF is trained with eight adapters, and we gradually remove the least important from the model.
+
+single-task adapter on RTE until removing AF from the first five layers. This improves the inference efficiency by $26\%$ . On CoLA, we observe a different trend. Removing AF from the first layer results in more noticeable performance decreases, achieving lower task performances than the single-task adapter. This is in line with recent work showing that some linguistic tasks heavily rely on information from the first layers (Vulic et al., 2020). We deliberately highlight that AdapterDrop might not be suitable for all tasks. However, Figure 13 shows that CoLA represents the most extreme case. Nevertheless, our results suggest that researchers need to be cautious when removing AdapterFusion layers as there may exist a considerable performance/efficiency tradeoff.
+
+# 5.2 AdapterFusion Pruning
+
+The inference efficiency of AF largely depends on the number of fused adapters, see Table 3. We can, therefore, achieve efficiency improvements by pruning adapters from the trained AF models (depicted in Figure 3). Our hypothesis is that we can safely remove adapters if they are not usually activated by AF, which means that they do not contribute much to the output representations. In each fusion layer, we record the average adapter activations—their relative importance—using all instances of the respective AF training set. We then remove the adapters with the lowest activations.
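The pruning criterion itself is simple; a sketch with illustrative data (the real importances are recorded fusion activations averaged over the AF training set):

```python
import numpy as np

def prune_adapters(activations, keep=2):
    """activations: (instances, n_adapters) fusion attention weights recorded
    on the training set at one layer. Keep the `keep` adapters with the
    highest average activation (their relative importance)."""
    importance = activations.mean(axis=0)
    keep_idx = np.argsort(importance)[::-1][:keep]
    return sorted(keep_idx.tolist())

# toy recorded activations for 4 adapters over 3 training instances
acts = np.array([
    [0.05, 0.60, 0.10, 0.25],
    [0.10, 0.55, 0.05, 0.30],
    [0.05, 0.70, 0.05, 0.20],
])
kept = prune_adapters(acts, keep=2)   # adapters 1 and 3 dominate
```

Inference cost then scales with `keep` rather than with the full number of fused adapters (cf. Table 3).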
+
+Figure 5 demonstrates that we can remove most adapters in AF without affecting the task performance. With two remaining adapters, we achieve comparable results to the full AF models with eight adapters and improve the inference speed by $68\%$ .
+
+We therefore recommend performing AdapterFusion pruning before deploying these models in practice. This is a simple yet effective technique to achieve efficiency gains while maintaining task performance entirely.
+
+# 6 Conclusion
+
+Adapters have emerged as a suitable alternative to full model fine-tuning, and their most widely claimed computational advantage is the small model size. In this work, we have demonstrated that the advantages of adapters go far beyond mere parameter efficiency. Even without our extensions, the training steps of two common adapter architectures are up to $60\%$ faster, although these improvements come at the cost of $4 - 6\%$ slower inference. Thus, when training speed is the priority, adapters can be advantageous over full model fine-tuning.
+
+AdapterDrop expands these advantages by dropping a variable number of adapters from lower transformer layers. We dynamically reduce the computational overhead at run-time when performing inference over multiple tasks and maintain task performances to a large extent. This benefits researchers working on models that need to make multiple independent predictions on a single input.
+
+Finally, we also investigated the computational efficiency of AdapterFusion models. We find that dropping entire AdapterFusion layers comes at a considerable performance/efficiency tradeoff, whereas pruning of the least activated adapters in each layer can improve the model efficiency while maintaining performance entirely.
+
+We believe that our work can be widely extended and that there exist many more directions to obtain efficient adapter-based models. For instance, we could explore more efficient pre-trained adapters, $^{11}$ sharing the adapter weights across layers, $^{12}$ or pruning adapters from AdapterFusion at training time. $^{13}$ In the Appendix to this paper, we present preliminary results for several related ideas, which may serve as a starting point for future work.
+
+# Acknowledgments
+
+This work has received financial support from multiple sources. (1) The German Federal Ministry of Education and Research and the Hessian Ministry of Higher Education, Research, Science and the Arts within their joint support of the National Research Center for Applied Cybersecurity ATHENE. (2) The European Regional Development Fund (ERDF) and the Hessian State Chancellery – Hessian Minister of Digital Strategy and Development under the promotional reference 20005482 (TexPrax). (3) The German Research Foundation (DFG) as part of the Research Training Group KRITIS No. GRK 2222. (4) The German Federal Ministry of Education and Research (BMBF) as part of the Software Campus program under the promotional reference 01|S17050. (5) The LOEWE initiative (Hesse, Germany) within the emergenCITY center. (6) The German Research Foundation (DFG) as part of the UKP-SQuARE project (grant GU 798/29-1). Finally, we gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan X Pascal GPU used for this research.
+
+# References
+
+Alan Ansell, Edoardo Maria Ponti, Jonas Pfeiffer, Sebastian Ruder, Goran Glavaš, Ivan Vulić, and Anna Korhonen. 2021. MAD-G: Multilingual Adapter Generation for Efficient Cross-Lingual Transfer. In Findings of the Association for Computational Linguistics: EMNLP 2021.
+Haoli Bai, Wei Zhang, Lu Hou, Lifeng Shang, Jing Jin, Xin Jiang, Qun Liu, Michael Lyu, and Irwin King. 2021. BinaryBERT: Pushing the limit of BERT quantization. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL 2021), pages 4334-4348.
+Ankur Bapna and Orhan Firat. 2019. Simple, Scalable Adaptation for Neural Machine Translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP 2019), pages 1538-1548.
+Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language Models are Few-Shot Learners. arXiv preprint arXiv:2005.14165.
+
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL 2019), pages 4171-4186.
+Maha Elbayad, Jiatao Gu, Edouard Grave, and Michael Auli. 2020. Depth-adaptive transformer. In 8th International Conference on Learning Representations (ICLR 2020).
+Angela Fan, Edouard Grave, and Armand Joulin. 2020. Reducing Transformer Depth on Demand with Structured Dropout. In 8th International Conference on Learning Representations, (ICLR 2020).
+Lu Hou, Zhiqi Huang, Lifeng Shang, Xin Jiang, Xiao Chen, and Qun Liu. 2020. Dynabert: Dynamic BERT with adaptive width and depth. In 34th Conference on Neural Information Processing Systems (NeurIPS 2020).
+Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-Efficient Transfer Learning for NLP. In Proceedings of the 36th International Conference on Machine Learning (ICML 2019).
+Jeremy Howard and Sebastian Ruder. 2018. Universal Language Model Fine-tuning for Text Classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, (ACL 2018), pages 328-339.
+Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. In 8th International Conference on Learning Representations (ICLR 2020).
+Anne Lauscher, Olga Majewska, Leonardo F. R. Ribeiro, Iryna Gurevych, Nikolai Rozanov, and Goran Glavaš. 2020. Common sense or world knowledge? investigating adapter-based knowledge injection into pretrained transformers. In Proceedings of Deep Learning Inside Out (DeeLIO): The First Workshop on Knowledge Extraction and Integration for Deep Learning Architectures, pages 43-49.
+Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:1907.11692.
+Sinno Jialin Pan and Qiang Yang. 2010. A Survey on Transfer Learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345-1359.
+
+Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep Contextualized Word Representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, (NAACL 2018), pages 2227-2237.
+Jonas Pfeiffer, Aishwarya Kamath, Andreas Rücklé, Kyunghyun Cho, and Iryna Gurevych. 2021a. AdapterFusion: Non-Destructive Task Composition for Transfer Learning. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2021), pages 487-503.
+Jonas Pfeiffer, Andreas Rücklé, Clifton Poth, Aishwarya Kamath, Ivan Vulić, Sebastian Ruder, Kyunghyun Cho, and Iryna Gurevych. 2020a. AdapterHub: A Framework for Adapting Transformers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020): Systems Demonstrations, pages 46-54.
+Jonas Pfeiffer, Ivan Vulic, Iryna Gurevych, and Sebastian Ruder. 2020b. MAD-X: An Adapter-based Framework for Multi-task Cross-lingual Transfer. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020), pages 7654-7673.
+Jonas Pfeiffer, Ivan Vulic, Iryna Gurevych, and Sebastian Ruder. 2021b. UNKs Everywhere: Adapting Multilingual Language Models to New Scripts. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Online, November, 2021.
+Clifton Poth, Jonas Pfeiffer, Andreas Rücklé, and Iryna Gurevych. 2021. What to pre-train on? Efficient intermediate task selection. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021).
+Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving Language Understanding by Generative Pre-Training. Technical report, OpenAI.
+Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language Models are Unsupervised Multitask Learners. Technical report, OpenAI.
+Andreas Rücklé, Jonas Pfeiffer, and Iryna Gurevych. 2020. MultiCQA: Exploring the Zero-Shot Transfer of Text Matching Models on a Massive Scale. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020), pages 2471-2486.
+Sebastian Ruder. 2019. Neural Transfer Learning for Natural Language Processing. Ph.D. thesis, National University of Ireland, Galway.
+
+Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.
+Asa Cooper Stickland and Iain Murray. 2019. BERT and PALs: Projected Attention Layers for Efficient Adaptation in Multi-Task Learning. In Proceedings of the 36th International Conference on Machine Learning, (ICML 2019), pages 5986-5995.
+Emma Strubell, Ananya Ganesh, and Andrew McCallum. 2019. Energy and Policy Considerations for Deep Learning in NLP. In Proceedings of the 57th Conference of the Association for Computational Linguistics, (ACL 2019), pages 3645-3650.
+Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou. 2020. MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL 2020), pages 2158-2170.
+Lisa Torrey and Jude Shavlik. 2010. Transfer learning. In Handbook of research on machine learning applications and trends: algorithms, methods, and techniques, pages 242-264. IGI Global.
+Ahmet Üstün, Arianna Bisazza, Gosse Bouma, and Gertjan van Noord. 2020. UDapter: Language Adaptation for Truly Universal Dependency Parsing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020), pages 2302-2315.
+Marko Vidoni, Ivan Vulić, and Goran Glavaš. 2020. Orthogonal language and task adapters in zero-shot cross-lingual transfer. arXiv preprint arXiv:2012.06460.
+Ivan Vulic, Edoardo Maria Ponti, Robert Litschko, Goran Glavaš, and Anna Korhonen. 2020. Probing Pretrained Language Models for Lexical Semantics. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020), pages 7222-7240.
+Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2018. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353-355.
+Ruize Wang, Duyu Tang, Nan Duan, Zhongyu Wei, Xuanjing Huang, Jianshu Ji, Guihong Cao, Daxin Jiang, and Ming Zhou. 2021. K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 1405-1418.
+Wenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, and Ming Zhou. 2020. MiniLM: Deep self-attention distillation for task-agnostic compression of pre-trained transformers. In 34th Conference on Neural Information Processing Systems (NeurIPS 2020).
+Ji Xin, Raphael Tang, Jaejun Lee, Yaoliang Yu, and Jimmy Lin. 2020. DeeBERT: Dynamic early exiting for accelerating BERT inference. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL 2020), pages 2246-2251.
+
+# A Measuring Computational and Task Performance
+
+# A.1 Computational Efficiency
+
+We use Python 3.6, PyTorch 1.5.1, and CUDA 10.1 for all measurements. We repeat them with two different GPUs: an NVIDIA Tesla V100 PCIe (32GB) and an NVIDIA Titan X Pascal (12GB). We make use of the torch.cuda.Event class and torch.cuda.synchronize to measure only the exact period of time of a training (or inference) step. $^{14}$ For both inference and training, we repeat the respective step 300 times. We report the median to mitigate the impact of outliers caused by GPU warmup.
+
+Relative speed. We define the relative speed of an adapter compared to full model fine-tuning as $\frac{S_f}{S_a}$, where $S_{a}$ and $S_{f}$ are the times of one step with the adapter model and the fully fine-tuned model, respectively. For example, a relative speed of 1.5 means that the adapter model can perform 1.5 steps in the time the fully fine-tuned model performs one step.
+
+Speedup. Speedup describes the positive change in relative speed of an adapter model when using AdapterDrop (or another method). A speedup of $p\%$ means that the adapter model with AdapterDrop requires only $(1 - p/100)\times$ the runtime of the adapter model without AdapterDrop.
+
+The speedups of AdapterDrop (and AdapterFusion) are additive: if dropping one layer results in a $p\%$ speedup, dropping two layers results in a $2p\%$ speedup, and so on.
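In code, the two definitions read as follows (the helper names are ours, not part of the measurement scripts):

```python
def relative_speed(t_adapter, t_full):
    """Steps the adapter model performs in the time of one full fine-tuning
    step: t_full / t_adapter, both in seconds per step."""
    return t_full / t_adapter

def speedup(t_without, t_with):
    """Positive change in percent when enabling AdapterDrop: a speedup of p%
    means the runtime shrinks to (1 - p/100) of the original."""
    return 100.0 * (1.0 - t_with / t_without)

# e.g. 0.5 s/step with adapters vs. 0.75 s/step fully fine-tuned -> 1.5
rs = relative_speed(t_adapter=0.5, t_full=0.75)
# e.g. AdapterDrop cuts a 2.0 s step to 1.5 s -> 25% speedup
sp = speedup(t_without=2.0, t_with=1.5)
```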
+
+# A.2 Task Performances
+
+We study the task performances of adapter models on the popular GLUE benchmark (Wang et al., 2018). Following Devlin et al. (2019), we exclude WNLI because of its problematic data construction. $^{15}$ We perform our analyses using RoBERTa base (Liu et al., 2019) as our pre-trained model and report the mean and standard deviation over three runs of the best development performance evaluated after every epoch. We train the larger data sets (SST-2, MNLI, QNLI, and QQP) for 10 epochs and the rest of the data sets for 20 epochs. We use a batch size of 32 and, if not otherwise noted, the default hyperparameters for adapter fine-tuning as in Pfeiffer et al. (2021a).
+
+# B Adapter Initialization and Convergence
+
+Besides measuring training and inference time, we are interested in (1) how using adapters compares to standard RoBERTa-base with regard to downstream task convergence, and (2) whether initializing adapters with pre-trained weights obtained via masked language modeling can lead to faster convergence.
+
+First, we compare RoBERTa-base with adapter models using the architecture proposed by Pfeiffer et al. (2021a). Second, we pre-train an adapter with masked language modeling (MLM) using documents from the English Wikipedia. $^{16}$ The results for both experiments are visualized in Figure 12. When comparing RoBERTa-base with randomly initialized adapters, we find that adapters do not come at the cost of requiring more training steps for convergence (1). For several of the eight GLUE tasks, we observe similar convergence behavior for the standard RoBERTa-base model and its counterpart using adapters.
+
+Further, we observe across all tasks that initializing the adapter weights with MLM pre-training does not have a substantial impact on the downstream task convergence (compared to a randomly initialized adapter). Thus, we find no evidence that pre-training of adapters with our masked language modeling objective leads to better convergence performance in our experiments (2).
+
+# C Detailed Results: AdapterDrop Task Performances
+
+We plot the detailed task performances of AdapterDrop with the different training strategies in Figure 13. The relative differences of AdapterDrop to a standard adapter with no AdapterDrop are given in Table 8.
+
+# D Adapter with Cross-Layer Parameter Sharing
+
+We can further reduce the number of parameters required for each task by sharing the weights of the adapters across all transformer layers. This is similar to weight sharing in ALBERT (Lan et al., 2020), but is specialized to adapters and can therefore be applied to a wide range of pre-trained models.
+
+We use the Pfeiffer adapter architecture in our experiments with the same hyperparameters as in Appendix A.2. Because cross-layer parameter sharing reduces the capacity of adapter models, we study the impact of the adapter compression rate. The compression rate refers to the down-projection factor in the adapter's bottleneck layer and thus impacts its capacity (the compression rate specifies by how much 'FF Down' in Figure 1 compresses the representations). The standard compression rate is 16, and smaller values result in a larger model capacity.
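The parameter saving can be sketched as follows (a numpy stand-in with illustrative names): a single adapter is reused at every layer, so the per-task parameter count shrinks by a factor equal to the number of layers.

```python
import numpy as np

rng = np.random.default_rng(3)
n_layers, hidden, compression = 12, 768, 16
bottleneck = hidden // compression

# ONE shared adapter instead of one adapter per layer
W_down = rng.normal(0.0, 0.02, (hidden, bottleneck))
W_up = rng.normal(0.0, 0.02, (bottleneck, hidden))

def shared_adapter(h):
    return h + np.maximum(h @ W_down, 0.0) @ W_up

h = rng.normal(size=(2, hidden))
for _ in range(n_layers):             # the same weights act at every layer
    h = shared_adapter(h)

params_shared = W_down.size + W_up.size       # 73,728 (without biases)
params_separate = n_layers * params_shared    # 12x more with separate adapters
```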
+
+Table 6 shows that cross-layer parameter sharing with the same compression rate of 16 largely maintains the performance compared to separate weights with an average difference of $2.35\%$. With a smaller compression rate of 4, we close this gap by more than $50\%$ while still requiring $66\%$ fewer parameters. $^{17}$ The resulting models are lightweight: our shared adapter with a compression rate of 16 requires only 307KB storage space.
+
+# E Training AdapterFusion with Dropout
+
+We investigate the random dropout of adapters from AdapterFusion during training (using our eight task adapters as in §4) to improve the speed of training steps. Each layer randomly selects different adapters to drop out. This means that the model itself may still use the knowledge from all tasks, although not in the layers individually.
+
+Table 7 shows the results for the four smallest GLUE tasks in terms of training data size. The speedup that we achieve with AdapterFusion dropout can be substantial: with a dropout rate of $75\%$ (i.e., dropping out 6 of our 8 adapters), each training step is $74\%$ faster on average (with a sequence length of 128 and a batch size of 32). We observe no clear trend in terms of task performances. Fusion dropout leads to consistent decreases on RTE and CoLA, has only a small impact on STS-B (no difference when dropping out $25\%$ of adapters), and yields improvements on MRPC.
+
+The effectiveness of Fusion dropout, thus, depends on the individual downstream task. Nevertheless, we believe that this method could be suitable, e.g., for resource-constrained settings.
+
+# F Detailed Results: Removing AdapterFusion Layers
+
+The computational overhead of AF can be reduced during inference by decreasing the number of adapters. We investigate how dropping AF layers impacts the performance on the four smallest GLUE tasks (MRPC, STS-B, CoLA, RTE) and visualize the results in Figure 7.
+
+In this experiment, we compare the performance of AF with and without AdapterDrop during training. For both, we use standard adapters as well as adapters created via AdapterDrop as the basis for AF. Unsurprisingly, the performance of AF without AdapterDrop within the adapters or the fusion drops fastest on all four datasets. Using AdapterDrop when creating the adapters, applying AdapterDrop on AF, or the combination of both significantly reduces the performance drop when omitting fusion layers during inference. On RTE and MRPC, multiple AF layers can be omitted while still performing on par with or better than a single-task adapter. We further find this robustness to be task dependent: even AF with AdapterDrop shows a steep fall in performance on RTE and CoLA, while being relatively stable on MRPC and STS-B, even with most layers omitted.
+
+# G Detailed Efficiency Measurements
+
+In this section, we present detailed results of our efficiency measurements for V100 and TitanX GPUs.
+
+# G.1 Adapters
+
+We present the efficiency results for adapters and fully fine-tuned models in Figure 6, where we plot the required time (absolute numbers) during training and inference. The relative speed of adapters compared to fully fine-tuned models is given in Table 9.
+
+# G.2 AdapterDrop
+
+Multi-task inference. In Figure 8, we plot the speed of adapters in a multi-task setting compared to fully fine-tuned models with sequential processing of inputs. In Table 11, we present the relative speed of adapters in this setting and show the speedup gained with AdapterDrop for each dropped layer. The average speedup in Table 2 is calculated as the average speedup over the batch sizes 16, 32, and 64 in Table 11.
+
+Training adapters with dropped layers. Table 5 shows the speedup of AdapterDrop when training a single adapter. The average speedup for training with AdapterDrop is $4.7\%$ per layer for the V100 and $4.5\%$ for the TitanX. This is the average result over batch sizes 16, 32, 64 and sequence lengths 64, 128, 256, and 512 (see Table 5).
+
+# G.3 AdapterFusion
+
+We plot the speed of AdapterFusion with different numbers of included adapters in Figure 9. In Table 10, we present the relative speed of AdapterFusion compared to a fully-finetuned model and a model with one adapter. This also shows the computational overhead (slowdown) that results from adding more adapters to AdapterFusion.
+
+# G.4 AdapterDrop for AdapterFusion
+
+Table 4 shows the speedup gained with AdapterDrop for AdapterFusion during training and inference. Figure 10 shows the required time as a function of the dropped layers.
+
+# H Parallel Implementation of AdapterFusion
+
+AdapterHub's implementation of AdapterFusion passes through each task adapter sequentially. We hypothesized that better efficiency can be achieved with parallel processing of adapters. We implement the parallel computation of the different adapters by reformulating the linear layers as two convolutions.
+
+The first convolution has a kernel size equal to the hidden dimension of the transformer and a number of output channels equal to the number of adapters times the down-projection dimension of the adapters. The second convolution is a grouped convolution$^{18}$ which processes the channels in blocks the size of the down-projection dimension. Its number of output channels equals the number of adapters times the hidden dimension.
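This reformulation can be sketched in PyTorch as follows. This is an illustrative sketch under our own naming, with a ReLU stand-in for the adapter nonlinearity, not the actual implementation. The grouped convolution keeps the adapters independent: each group of `down_dim` channels is mapped back to `hidden_dim` channels using only that adapter's weights.

```python
import torch
import torch.nn as nn

def make_parallel_fusion(n_adapters: int, hidden_dim: int, down_dim: int):
    """Two convolutions that evaluate all adapters' projections in one pass."""
    # Conv 1: kernel spans the full hidden dimension, producing
    # n_adapters * down_dim channels -- one down-projection per adapter.
    down = nn.Conv1d(1, n_adapters * down_dim, kernel_size=hidden_dim)
    # Conv 2: grouped convolution; each block of down_dim channels is mapped
    # to hidden_dim channels independently (one up-projection per adapter).
    up = nn.Conv1d(n_adapters * down_dim, n_adapters * hidden_dim,
                   kernel_size=1, groups=n_adapters)
    return down, up

def parallel_adapters(x, down, up, n_adapters):
    """x: (tokens, hidden) -> (tokens, n_adapters, hidden), all adapters at once."""
    h = down(x.unsqueeze(1))   # (tokens, n_adapters * down_dim, 1)
    h = torch.relu(h)
    out = up(h)                # (tokens, n_adapters * hidden_dim, 1)
    return out.squeeze(-1).view(x.size(0), n_adapters, -1)
```

Each of the two convolutions is a single batched matrix multiplication on the GPU, instead of one multiplication per adapter in the iterative loop.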
+
+| Adapters | Inference (V100) | Inference (TitanX) | Training (V100) | Training (TitanX) |
+| --- | --- | --- | --- | --- |
+| 2 | 3.0% | 3.1% | 6.3% | 6.4% |
+| 4 | 4.0% | 4.1% | 6.8% | 6.8% |
+| 8 | 5.2% | 5.2% | 7.3% | 7.3% |
+| 16 | 6.3% | 6.3% | 7.8% | - |
+
+Table 4: The speedup for each dropped layer for AdapterFusion during training and inference. Measurements were conducted with a batch size of 32 and sequence length of 128. Missing values are due to insufficient GPU memory.
+
+| Batch Size | Seq. Len | Speedup (V100) | Speedup (TitanX) |
+| --- | --- | --- | --- |
+| 16 | 64 | 4.6% | 4.4% |
+| 16 | 128 | 4.6% | 4.6% |
+| 16 | 256 | 4.8% | 4.6% |
+| 16 | 512 | 4.7% | - |
+| 32 | 64 | 4.6% | 4.5% |
+| 32 | 128 | 4.7% | 4.5% |
+| 32 | 256 | 4.6% | 4.7% |
+| 32 | 512 | 4.8% | - |
+| 64 | 64 | 4.7% | 4.5% |
+| 64 | 128 | 4.6% | 4.5% |
+| 64 | 256 | 4.7% | - |
+| 64 | 512 | - | - |
+
+Table 5: Speedup for each dropped layer during training with AdapterDrop on the V100 and TitanX. Missing values are due to insufficient GPU memory.
+
+We show in Figure 11 and in Table 12 that the iterative implementation is faster than the parallel implementation for larger input sizes (e.g., larger batch sizes). This indicates that once the input can no longer be processed entirely in parallel on the GPU (due to the limited number of CUDA cores), the iterative implementation becomes more efficient.
+
+| | Standard (rate 16) | Sharing (rate 1.33) | Sharing (rate 4) | Sharing (rate 16) |
+| --- | --- | --- | --- | --- |
+| SST-2 | 94.7 ±0.3 | 94.2 ±0.3 | 94.2 ±0.1 | 94.1 ±0.4 |
+| QNLI | 93.0 ±0.2 | 92.4 ±0.1 | 93.1 ±0.1 | 90.6 ±1.4 |
+| MNLI | 87.3 ±0.1 | 87.0 ±0.1 | 87.1 ±0.0 | 86.2 ±0.2 |
+| QQP | 90.6 ±0.0 | 90.8 ±0.1 | 90.2 ±0.0 | 88.6 ±0.5 |
+| CoLA | 62.6 ±0.9 | 60.3 ±1.6 | 60.8 ±0.4 | 57.2 ±1.0 |
+| MRPC | 88.4 ±0.1 | 88.2 ±0.7 | 88.5 ±1.1 | 86.8 ±0.5 |
+| RTE | 75.9 ±2.2 | 69.4 ±0.5 | 71.5 ±2.7 | 71.5 ±1.0 |
+| STS-B | 90.3 ±0.1 | 89.5 ±0.1 | 89.7 ±0.3 | 89.0 ±0.7 |
+| Average | 85.35 | 83.98 | 84.39 | 83.0 |
+| Params | 884k | 884k | 295k | 74k |
+
+Table 6: Task performance scores of the standard approach with separate adapter weights vs. cross-layer parameter sharing. The compression rate denotes the factor by which 'FF Down' in Figure 1 compresses the representations. The number of parameters is given without classification heads.
+
+| | Fusion dropout 0% | 25% | 50% | 75% |
+| --- | --- | --- | --- | --- |
+| CoLA | 63.9 ±0.6 | 62.9 ±0.8 | 62.4 ±0.7 | 60.4 ±0.2 |
+| MRPC | 88.4 ±0.1 | 89.2 ±0.5 | 89.2 ±0.4 | 89.3 ±0.1 |
+| RTE | 85.4 ±0.7 | 82.8 ±1.9 | 82.1 ±0.3 | 80.9 ±1.1 |
+| STS-B | 90.2 ±0.1 | 90.2 ±0.1 | 90.1 ±0.1 | 89.9 ±0.1 |
+| Speedup (8) | - | 15.9% | 39.4% | 73.7% |
+| Speedup (16) | - | 22.5% | 58.2% | 120.6% |
+
+Table 7: Development scores of AdapterFusion (compression rate 16x) with and without fusion dropout during training. A fusion dropout of $50\%$ means that each adapter has a $50\%$ chance of not being used as input to the fusion layer. The speedup depends on the total number of adapters used in AdapterFusion (8 adapters in our setting here; 16 used by Pfeiffer et al. (2021a)).
+
+| Dropped layers | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Standard adapter | 100.0 | 98.5 | 97.1 | 95.3 | 92.0 | 89.0 | 82.2 | 74.6 | 64.5 | 54.5 | 49.3 | 43.3 |
+| Specialized AdapterDrop (12 models) | 100.0 | 99.5 | 98.9 | 98.2 | 97.6 | 97.1 | 95.9 | 95.3 | 95.1 | 94.3 | 92.5 | 82.9 |
+| Robust AdapterDrop | 98.5 | 97.7 | 97.3 | 96.8 | 96.1 | 95.4 | 94.5 | 93.3 | 92.2 | 89.9 | 85.9 | 62.0 |
+
+Table 8: Model performance with AdapterDrop in relation to a standard adapter with no dropped layers. We report the percentage of retained task performance compared to the standard adapter with no dropped layers during evaluation. The results are averaged over all eight GLUE tasks. A value of 97.1 for specialized AdapterDrop with five dropped layers means that the model achieves $97.1\%$ of the performance of the standard adapter with no dropped layers. Performance scores for each task can be found in Figure 13.
+
+| Seq. Len | Batch Size | V100 Training (Houlsby) | V100 Training (Pfeiffer) | V100 Inference (Houlsby) | V100 Inference (Pfeiffer) | TitanX Training (Houlsby) | TitanX Training (Pfeiffer) | TitanX Inference (Houlsby) | TitanX Inference (Pfeiffer) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| 64 | 16 | 0.98 | 1.70 | 0.92 | 0.94 | 1.61 | 1.69 | 0.93 | 0.94 |
+| 64 | 32 | 1.70 | 1.81 | 0.94 | 0.95 | 1.48 | 1.55 | 0.93 | 0.94 |
+| 64 | 64 | 1.46 | 1.54 | 0.94 | 0.95 | 1.40 | 1.46 | 0.94 | 0.94 |
+| 64 | 128 | 1.48 | 1.55 | 0.95 | 0.96 | 1.37 | 1.42 | 0.94 | 0.94 |
+| 128 | 16 | 1.48 | 1.57 | 0.94 | 0.95 | 1.45 | 1.52 | 0.93 | 0.94 |
+| 128 | 32 | 1.53 | 1.60 | 0.94 | 0.95 | 1.38 | 1.44 | 0.94 | 0.95 |
+| 128 | 64 | 1.47 | 1.53 | 0.95 | 0.96 | 1.35 | 1.40 | 0.94 | 0.95 |
+| 128 | 128 | 1.42 | 1.48 | 0.95 | 0.96 | - | - | - | - |
+| 256 | 16 | 1.42 | 1.49 | 0.94 | 0.95 | 1.34 | 1.38 | 0.94 | 0.95 |
+| 256 | 32 | 1.40 | 1.46 | 0.95 | 0.96 | 1.31 | 1.36 | 0.94 | 0.96 |
+| 256 | 64 | 1.40 | 1.45 | 0.95 | 0.96 | - | - | - | - |
+| 256 | 128 | - | - | - | - | - | - | - | - |
+| 512 | 16 | 1.36 | 1.41 | 0.96 | 0.96 | - | - | - | - |
+| 512 | 32 | 1.33 | 1.37 | 0.96 | 0.96 | - | - | - | - |
+| 512 | 64 | - | - | - | - | - | - | - | - |
+| 512 | 128 | - | - | - | - | - | - | - | - |
+
+Table 9: Relative speed of adapters compared to fully fine-tuned models. Missing values are due to insufficient GPU memory.
+
+| Seq. Len | Batch Size | V100 vs. FF (Tr.) | V100 vs. FF (Inf.) | V100 vs. Adap. (Tr.) | V100 vs. Adap. (Inf.) | V100 Slowdown (Tr.) | V100 Slowdown (Inf.) | TitanX vs. FF (Tr.) | TitanX vs. FF (Inf.) | TitanX vs. Adap. (Tr.) | TitanX vs. Adap. (Inf.) | TitanX Slowdown (Tr.) | TitanX Slowdown (Inf.) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| 64 | 16 | 0.77 | 0.62 | 0.45 | 0.66 | 8.2% | 10.6% | 0.88 | 0.62 | 0.52 | 0.66 | 10.3% | 10.2% |
+| 64 | 32 | 1.03 | 0.64 | 0.57 | 0.68 | 12.0% | 11.1% | 0.80 | 0.61 | 0.52 | 0.64 | 11.2% | 11.0% |
+| 64 | 64 | 0.87 | 0.64 | 0.57 | 0.67 | 12.6% | 12.0% | 0.76 | 0.61 | 0.52 | 0.65 | 11.6% | 11.4% |
+| 128 | 16 | 0.91 | 0.65 | 0.58 | 0.69 | 12.0% | 11.0% | 0.80 | 0.61 | 0.53 | 0.65 | 10.9% | 10.8% |
+| 128 | 32 | 0.92 | 0.64 | 0.57 | 0.68 | 12.5% | 11.8% | 0.76 | 0.62 | 0.53 | 0.66 | 11.4% | 11.1% |
+| 128 | 64 | 0.87 | 0.65 | 0.57 | 0.68 | 12.5% | 11.6% | - | - | - | - | - | - |
+| 256 | 16 | 0.88 | 0.66 | 0.59 | 0.69 | 12.1% | 11.3% | 0.77 | 0.65 | 0.56 | 0.68 | 10.8% | 10.4% |
+| 256 | 32 | 0.86 | 0.68 | 0.59 | 0.70 | 11.9% | 11.3% | - | - | - | - | - | - |
+| 256 | 64 | - | - | - | - | - | - | - | - | - | - | - | - |
+| 512 | 16 | 0.87 | 0.69 | 0.62 | 0.72 | 11.2% | 10.1% | - | - | - | - | - | - |
+| 512 | 32 | - | - | - | - | - | - | - | - | - | - | - | - |
+| 512 | 64 | - | - | - | - | - | - | - | - | - | - | - | - |
+
+Table 10: Relative speed of AdapterFusion for different sequence lengths and batch sizes. We compute the training (Tr.) and inference (Inf.) speed with two adapters in AdapterFusion and compare it to FF, a fully fine-tuned model, and Adap., an adapter model (Pfeiffer architecture). The slowdown denotes the computational overhead of each additional adapter composed in AdapterFusion (calculated as the average slowdown of adding one adapter to AdapterFusion consisting of 2-16 adapters). Missing values are due to insufficient GPU memory.
+
+
+(a) V100 Inference; (b) TitanX Inference; (c) V100 Training; (d) TitanX Training
+
+Figure 6: The absolute time for each inference or training step. We compare a transformer model without adapters and an adapter model with the Pfeiffer or Houlsby architecture. Note that for small inputs, i.e., batch size 1 or 8, the time does not increase with the sequence length because the GPU is not working at capacity. Panel (b) with batch size 1 shows the transition from working under capacity to working at capacity.
+
+
+Figure 7: Performance of AF by the number of dropped AF layers. We show the results for AF and the used adapters (both with and without AdapterDrop), and compare the performance with a standard single task adapter.
+
+
+| Device | Batch Size | Adapters | Rel. Speed | Speedup |
+| --- | --- | --- | --- | --- |
+| V100 | 1 | 2 | 1.25 | 2.6% |
+| V100 | 1 | 4 | 1.97 | 3.7% |
+| V100 | 1 | 8 | 2.80 | 4.9% |
+| V100 | 1 | 16 | 2.97 | 6.5% |
+| V100 | 16 | 2 | 1.13 | 4.1% |
+| V100 | 16 | 4 | 1.14 | 6.5% |
+| V100 | 16 | 8 | 1.20 | 7.7% |
+| V100 | 16 | 16 | 1.16 | 8.4% |
+| V100 | 32 | 2 | 1.08 | 4.5% |
+| V100 | 32 | 4 | 1.14 | 6.6% |
+| V100 | 32 | 8 | 1.11 | 7.9% |
+| V100 | 32 | 16 | 1.11 | 8.5% |
+| V100 | 64 | 2 | 1.08 | 4.3% |
+| V100 | 64 | 4 | 1.05 | 6.7% |
+| V100 | 64 | 8 | 1.06 | 7.9% |
+| V100 | 64 | 16 | 1.06 | 8.4% |
+| TitanX | 32 | 2 | 1.07 | 4.4% |
+| TitanX | 32 | 4 | 1.09 | 6.6% |
+| TitanX | 32 | 8 | 1.09 | 7.8% |
+| TitanX | 32 | 16 | 1.06 | 8.4% |
+| CPU | 1 | 2 | 0.98 | 4.2% |
+| CPU | 1 | 4 | 1.03 | 6.5% |
+| CPU | 1 | 8 | 1.05 | 7.7% |
+| CPU | 1 | 16 | 1.06 | 8.4% |
+
+Table 11: The relative inference speed of simultaneous processing of multiple tasks with adapters compared to sequential processing of tasks with fully fine-tuned models. The 'Speedup' column shows the speedup of AdapterDrop for every additional dropped layer. All measurements use a sequence length of 128. Batch size 1 on the V100 is an outlier in both speedup and relative speed compared to the other results due to the small input size (compare with Figure 8).
+
+| Adapters | Seq. Len | Batch Size | Rel. Speed (V100) | Rel. Speed (TitanX) |
+| --- | --- | --- | --- | --- |
+| 2 | 100 | 1 | 0.93 | 0.94 |
+| 3 | 100 | 1 | 0.89 | 0.88 |
+| 5 | 100 | 1 | 0.77 | 0.76 |
+| 10 | 100 | 1 | 0.60 | 1.29 |
+| 2 | 100 | 16 | 1.02 | 1.44 |
+| 3 | 100 | 16 | 1.12 | 1.58 |
+| 5 | 100 | 16 | 1.17 | 1.80 |
+| 10 | 100 | 16 | 1.27 | 2.14 |
+| 2 | 100 | 32 | 1.01 | 1.48 |
+| 3 | 100 | 32 | 1.17 | 1.62 |
+| 5 | 100 | 32 | 1.23 | 1.85 |
+| 10 | 100 | 32 | 1.32 | 2.24 |
+| 2 | 200 | 1 | 0.93 | 1.24 |
+| 3 | 200 | 1 | 0.88 | 1.37 |
+| 5 | 200 | 1 | 0.77 | 1.55 |
+| 10 | 200 | 1 | 0.52 | 1.87 |
+| 2 | 200 | 16 | 1.01 | 1.46 |
+| 3 | 200 | 16 | 1.17 | 1.59 |
+| 5 | 200 | 16 | 1.23 | 1.82 |
+| 10 | 200 | 16 | 1.32 | 2.21 |
+| 2 | 200 | 32 | 1.00 | 1.11 |
+| 3 | 200 | 32 | 1.18 | 1.17 |
+| 5 | 200 | 32 | 1.26 | - |
+| 10 | 200 | 32 | 1.34 | - |
+| 2 | 300 | 1 | 0.93 | 1.37 |
+| 3 | 300 | 1 | 0.88 | 1.50 |
+| 5 | 300 | 1 | 0.91 | 1.70 |
+| 10 | 300 | 1 | 0.94 | 2.03 |
+| 2 | 300 | 16 | 1.00 | 1.48 |
+| 3 | 300 | 16 | 1.16 | 1.63 |
+| 5 | 300 | 16 | 1.22 | 1.88 |
+| 10 | 300 | 16 | 1.32 | - |
+| 2 | 300 | 32 | 1.00 | - |
+| 3 | 300 | 32 | 1.20 | - |
+| 5 | 300 | 32 | 1.27 | - |
+| 10 | 300 | 32 | 1.36 | - |
+| 2 | 400 | 1 | 1.04 | 1.39 |
+| 3 | 400 | 1 | 1.09 | 1.51 |
+| 5 | 400 | 1 | 1.10 | 1.74 |
+| 10 | 400 | 1 | 1.10 | 2.08 |
+| 2 | 400 | 16 | 1.00 | - |
+| 3 | 400 | 16 | 1.18 | - |
+| 5 | 400 | 16 | 1.25 | - |
+| 10 | 400 | 16 | 1.34 | - |
+| 2 | 400 | 32 | 1.00 | - |
+| 3 | 400 | 32 | 1.20 | - |
+| 5 | 400 | 32 | 1.27 | - |
+| 10 | 400 | 32 | - | - |
+
+Table 12: Relative speed of AdapterFusion with the iterative implementation versus the parallel implementation for different batch sizes, sequence lengths and numbers of adapters on the V100 and TitanX. The parallel implementation is faster when the input is sufficiently small (e.g., batch size 1, or only 2 adapters) because the GPU is then not working at capacity and can benefit from parallel execution.
+
+Figure 8: The absolute time required for performing inference for multiple tasks on the same input. The measurements are conducted with a sequence length of 128. N FF models denotes $N$ fully fine-tuned models, executed sequentially. Parallelized denotes the time required by N fully fine-tuned models running fully parallelized. Batch size 1 on the V100 is an outlier compared to the other results with a smaller speedup for each dropped layer but a higher relative speed compared to the fine-tuned models due to the small input size.
+
+(a) V100; (b) TitanX. Legend: AdapterFusion; no adapter; 1 adapter (no fusion).
+
+Figure 9: Absolute time measurements for AdapterFusion at inference (left) and training (right) as a function of the number of adapters. The measurements were conducted with a batch size of 32 (V100) and 16 (TitanX), and a sequence length of 128.
+
+
+
+
+Figure 10: Absolute time measurements for AdapterFusion with AdapterDrop at inference (left) and training (right) as a function of the number of dropped layers. The measurements were conducted with a batch size of 32 and a sequence length of 128. We additionally plot the time of an adapter (without AdapterDrop) and a model without adapters to provide a more thorough comparison.
+
+
+(a) V100
+
+
+(b) TitanX
+Figure 11: The difference in inference time between the iterative and parallel implementations of AdapterFusion. Negative values indicate that the iterative implementation is faster. We calculate the difference as $t_i - t_p$, where $t_i, t_p$ are the times for the iterative and parallel implementation, respectively. In panel (a), the parallel implementation is faster if the input is sufficiently small, as the GPU is not working at capacity and can benefit from parallel execution.
+
+Figure 12: Evaluation performance of fine-tuning RoBERTa-base in comparison with different initialization strategies for adapters (randomly initialized vs. pre-trained on a masked language modeling task). Training was conducted for 10k steps with a learning rate of 5e-05 for RoBERTa-base and 0.0001 for adapters, respectively.
+
+Figure 13: The AdapterDrop task performances for all eight GLUE tasks in relation to the number of dropped layers. '12 specialized adapters' refers to the performance of individual models trained for each AdapterDrop setting separately (i.e., 12 models); 'Standard adapter' refers to the adapter trained with no dropped layers; 'AdapterDrop training' refers to the adapter trained with our proposed training procedure.
+
+
\ No newline at end of file
diff --git a/adapterdropontheefficiencyofadaptersintransformers/images.zip b/adapterdropontheefficiencyofadaptersintransformers/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..ceaa7eaf85a9fdcb0a66560e2d3770e3d10b736f
--- /dev/null
+++ b/adapterdropontheefficiencyofadaptersintransformers/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9f7474e5644b0dde41ef1b0a0a2b1309efaa32324db27ded234345b647b4e3e3
+size 1255152
diff --git a/adapterdropontheefficiencyofadaptersintransformers/layout.json b/adapterdropontheefficiencyofadaptersintransformers/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..6ac09dc16f146c0866fc20fa2bf1fabc549b62b1
--- /dev/null
+++ b/adapterdropontheefficiencyofadaptersintransformers/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cd62f23370e01fcad41060cd7e1319ab18afe0726d62019322be9be9ef9021a8
+size 467007
diff --git a/adaptivebridgebetweentrainingandinferencefordialoguegeneration/6fa3fe0e-62de-45f0-ba81-df46f9573b91_content_list.json b/adaptivebridgebetweentrainingandinferencefordialoguegeneration/6fa3fe0e-62de-45f0-ba81-df46f9573b91_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..8be5068be08f1f9a6e8dfe4176f4c5e658b66310
--- /dev/null
+++ b/adaptivebridgebetweentrainingandinferencefordialoguegeneration/6fa3fe0e-62de-45f0-ba81-df46f9573b91_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7b42dd9cb1cf80be3a6245b2141f9f227a8c412f2c552421c75220bf93f7ae23
+size 69251
diff --git a/adaptivebridgebetweentrainingandinferencefordialoguegeneration/6fa3fe0e-62de-45f0-ba81-df46f9573b91_model.json b/adaptivebridgebetweentrainingandinferencefordialoguegeneration/6fa3fe0e-62de-45f0-ba81-df46f9573b91_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..5b3fcac88c1d955e63e72a01fa53524077a146b8
--- /dev/null
+++ b/adaptivebridgebetweentrainingandinferencefordialoguegeneration/6fa3fe0e-62de-45f0-ba81-df46f9573b91_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8dd99ffa73fb0a8ed3a9326ccebe337d99a68d40a3cd6aaebabca7f65f7a9002
+size 86675
diff --git a/adaptivebridgebetweentrainingandinferencefordialoguegeneration/6fa3fe0e-62de-45f0-ba81-df46f9573b91_origin.pdf b/adaptivebridgebetweentrainingandinferencefordialoguegeneration/6fa3fe0e-62de-45f0-ba81-df46f9573b91_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..32bfed8c00bc3bc383e42d791f639680904cec84
--- /dev/null
+++ b/adaptivebridgebetweentrainingandinferencefordialoguegeneration/6fa3fe0e-62de-45f0-ba81-df46f9573b91_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:868c3dc1bf5f1426e4bc745fd8aa54897c486d1d006a7116436f9864e0c66dd9
+size 582603
diff --git a/adaptivebridgebetweentrainingandinferencefordialoguegeneration/full.md b/adaptivebridgebetweentrainingandinferencefordialoguegeneration/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..c4322f0930a0e3fed9e4ec74f85decbdbfadf4a8
--- /dev/null
+++ b/adaptivebridgebetweentrainingandinferencefordialoguegeneration/full.md
@@ -0,0 +1,297 @@
+# Adaptive Bridge between Training and Inference for Dialogue Generation
+
+Haoran Xu $^{1,2*}$ , Hainan Zhang $^{3\dagger}$ , Yanyan Zou $^{3}$ , Hongshen Chen $^{3}$ , Zhuoye Ding $^{3}$ , Yanyan Lan $^{4}$
+
+$^{1}$ Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
+
+2 University of Chinese Academy of Sciences, Beijing, China
+
+3Data Science Lab, JD.com, Beijing, China
+
+$^{4}$ Institute of AI Industry Research, Tsinghua University, Beijing, China
+
+xuhaoran18s@ict.ac.cn,zhanghainan1990@163.com,zouyanyan6@jd.com
+
+ac@chenhongshen.com,dingzhuoye@jd.com,langyanyan@tsinghua.edu.cn
+
+# Abstract
+
+Although exposure bias has been widely studied in some NLP tasks, it faces unique challenges in dialogue response generation, the representative one-to-various generation scenario. In real human dialogue, there are many appropriate responses for the same context, differing not only in their expressions but also in their topics. Therefore, due to the much bigger gap between the various ground-truth responses and the generated synthetic response, exposure bias is more challenging in the dialogue generation task. Moreover, as MLE encourages the model to learn only the common words among different ground-truth responses but to ignore the interesting and specific parts, exposure bias may further lead to the common response generation problem, producing responses such as "I don't know" and "HaHa?". In this paper, we propose a novel adaptive switching mechanism, which learns to automatically transition between ground-truth learning and generated learning with respect to a word-level matching score, such as the cosine similarity. Experimental results on both the Chinese STC dataset and the English Reddit dataset show that our adaptive method achieves a significant improvement in terms of metric-based evaluation and human evaluation, compared with state-of-the-art exposure bias approaches. Further analysis on an NMT task also shows that our model achieves a significant improvement.
+
+# 1 Introduction
+
+Auto-regressive models (ARM) are widely used for natural language generation (NLG) tasks, such as machine translation (Sutskever et al., 2014; Wu et al., 2018), dialogue response generation (Li et al., 2017), image captioning (Lin et al., 2014; Vinyals et al., 2015) and video description (Donahue et al., 2015). They utilize the encoder-decoder framework to predict the next token conditioned
+
+| Dialogue | Turn | Utterance |
+| --- | --- | --- |
+| 1 | context | 听说广州已成避暑胜地 (I heard that Guangzhou has become a summer resort) |
+| 1 | response1 | 确实,这边很凉快。(Indeed, it's cool here.) |
+| 1 | response2 | 晚上睡觉都没开风扇了。 (There is no need to turn on the fan at night.) |
+| 2 | context | 哈哈,看看可爱的小猫咪 (Ha ha, look at this lovely kitten) |
+| 2 | response1 | 这是什么品种的猫哇?好可爱,我也想要 (What kind of cat is this? So cute, I want it too.) |
+| 2 | response2 | 好想要一只这样的猫,可以陪我儿子玩 (I really want a cat like this to play with my son.) |
+| 2 | response3 | 哇,好可爱哇,我屋也有一只这样的小猫 (Wow, it's so cute, I have a kitten like this in my house.) |
+
+Table 1: Two dialogues from the STC dataset; the red parts of the responses in Dialogue 2 are the common words.
+
+on the previous tokens, and minimize the cross-entropy between the generation and the ground truths as their objective function. Specifically, at training time the ground truth is utilized as the previous tokens, which directly forces the model to learn the distribution of the ground truths. But at inference, the previous tokens come from the ARM decoder itself, whose distribution differs from the input distribution at training time.
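The mismatch between the two regimes can be illustrated with a toy rollout (our own sketch, not the authors' code; `step_fn` stands in for one decoder step returning vocabulary logits):

```python
import torch

def generate(step_fn, bos, max_len, ground_truth=None):
    """Roll out a decoder for max_len steps.

    If ground_truth is given, each step conditions on the gold prefix
    (teacher forcing, as in MLE training); otherwise the model's own argmax
    predictions are fed back (free running, as at inference time).
    """
    tokens = [bos]
    for t in range(max_len):
        logits = step_fn(torch.tensor(tokens))  # predict next-token distribution
        tokens.append(int(logits.argmax()))
        if ground_truth is not None and t < len(ground_truth):
            tokens[-1] = ground_truth[t]        # feed the gold token instead
    return tokens[1:]
```

At inference there is no `ground_truth` to fall back on, so errors in early predictions are fed back into later steps; this distribution shift is the exposure bias discussed above.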
+
+Although this discrepancy, named exposure bias, has been studied in some classic NLG tasks, such as neural machine translation (NMT) (Bengio et al., 2015; Venkatraman et al., 2015; Zhang et al., 2019), it faces unique challenges in dialogue response generation, the representative one-to-various generation scenario. In human dialogue, given the context, people can reply with many relevant and appropriate responses, not only with various expressions but also with different topics. Take Dialogue 1 in Table 1 as an example: given the context "I heard that Guangzhou has become a summer resort", response 1 and response 2 are on the same topic but with different tokens. In this various-expressions situation, as in the NMT task, the data distribution and the model distribution are relatively easy to fit, even with the exposure bias problem. However, in the different-topics situation, the data distribution is often
+
+different from the model, because it is too divergent and covers the various word distributions of each topic. Through our data analysis, we find that in the dialogue generation task, the various ground-truth responses and the generated sentences have a bigger gap than in NMT tasks. We calculate the overlap measures at the word level and the semantic level, i.e., BLEU and cosine similarity, between the generated sentence and the ground-truth sentences. On the NMT WMT'14 dataset, the BLEU and similarity are 27.38 and 0.96, respectively, while on the dialogue Reddit dataset they are 2.17 and 0.81. The overlap measures of the dialogue generation task are thus significantly lower than those of the NMT task, which indicates the severity of the exposure bias problem in dialogue generation.
+
+Moreover, as Maximum Likelihood Estimation (MLE) encourages the model to learn only the common words among different ground-truth responses but to ignore the interesting and specific parts, exposure bias may aggravate the common response problem of the generation model, due to the strict matching between the generated response and the ground-truth responses. Take Dialogue 2 in Table 1 as an example: response 1 is "What kind of cat is this? So cute, I want it too.", response 2 is "I really want a cat like this to play with my son." and response 3 is "Wow, it's so cute, I have a kitten like this in my house". If we train the model with word-level strict matching between the generated response and the ground truth, it can only learn the common words, i.e., "So cute, I want it", but ignores the specific parts, i.e., "What kind of cat is this?". Therefore, it is beneficial to improve the strict matching mechanism for the dialogue generation task.
+
+In this paper, we propose a novel Adaptive switch mechanism as a Bridge (AdapBridge), which introduces the generator distribution into the training phase and learns to automatically transition between ground-truth learning and generated learning, with respect to word-level matching scores, such as the cosine similarity. Specifically, at each training step, we calculate the cosine similarity of each generated word with respect to all its ground truths. If the matching score is higher than the threshold, the generated word is fed to the decoder; if lower, the ground truth is fed for training. The threshold increases as the training epochs grow. With this adaptive sampling scheme,
+
+the switch mechanism can consider the generation quality of every word, i.e., the relevance between the generated word and the ground truth, to decide whether to utilize generated learning or not.
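A sketch of this switch for a single position might look as follows (our own illustration; the embedding source, the max-over-references aggregation, and the concrete threshold schedule are assumptions, not the paper's exact procedure):

```python
import torch
import torch.nn.functional as F

def adaptive_switch(gen_emb, gold_embs, generated_tok, gold_tok, threshold):
    """Decide the next decoder input for one position, AdapBridge-style.

    gen_emb: embedding of the word the model just generated.
    gold_embs: embeddings of the corresponding words in all ground-truth
    responses for this context. If the generated word is close enough to any
    ground truth (cosine similarity above the threshold), feed the generated
    word back; otherwise fall back to the gold token.
    """
    sims = F.cosine_similarity(gen_emb.unsqueeze(0), gold_embs, dim=-1)
    return generated_tok if sims.max().item() > threshold else gold_tok

def threshold_schedule(epoch, start=0.3, step=0.05, cap=0.9):
    """Illustrative increasing schedule: raising the bar over epochs gradually
    weans the decoder off the ground-truth inputs."""
    return min(start + step * epoch, cap)
```

Comparing against all ground-truth responses (rather than a single reference) is what lets the mechanism tolerate the one-to-various nature of dialogue data.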
+
+We evaluate the proposed models on two public datasets, i.e. the Chinese STC and the English Reddit dataset. Experimental results show that our models significantly outperform the state-of-the-art exposure bias models with respect to both metric-based evaluations and human judgments. Further analysis on NMT task also shows that our model can achieve a significant improvement.
+
+The main contributions of this paper include:
+
+- We study the exposure bias problem in the dialogue generation task, one of the representative one-to-various generation scenarios, and find that exposure bias may further lead to the common response generation problem.
+- We propose the adaptive switch mechanism with word-level matching scores to determine the training input source, in order to resolve the common response problem.
+- We evaluate AdapBridge on two public dialogue datasets and conduct rigorous experiments to demonstrate the effectiveness of our proposed models. Further analysis on an NMT task also shows that our model achieves a significant improvement.
+
+# 2 Related Work
+
+This section briefly introduces recent research progresses related to this work in literature.
+
+To solve the exposure bias problem in autoregressive or seq2seq models (Sutskever et al., 2014; Welleck et al., 2019; Holtzman et al., 2019), Venkatraman et al. (2015) used Data as Demonstrator (DAD) to augment the training set with the tokens predicted by the model, so as to make the training distribution match the test distribution. The Scheduled Sampling (SS) method proposed by Bengio et al. (2015) randomly samples previously generated words to replace the ground-truth words in the model input during training. Zhang et al. (2019) explored this method further by sampling the previous words with decay, not only from a word-level oracle but also from a sentence-level oracle with a semantic metric. The main idea of this kind of method is to introduce the model's own predictions into its input at training time, and to reduce
+
+
+Figure 1: The illustration of our AdapBridge Model.
+
+the discrepancy between training and inference to alleviate the exposure bias problem. In comparison to those methods and related ideas (Qi et al., 2020; Goodman et al., 2020), our proposed method adaptively determines whether the model's input words during training come from the ground truth or from its own predictions, by scoring each generated word.
+
+Alternatives based on Reinforcement Learning (RL) (Williams, 1992) have been explored for generation tasks, in particular for NMT. Mixed Incremental Cross-Entropy Reinforce (MIXER) (Ranzato et al., 2016) leverages a hybrid loss function that combines cross-entropy and REINFORCE to directly optimize the metrics used at test time, such as BLEU or ROUGE; there are many other similar works (Shen et al., 2016; Wu et al., 2016; Shao et al., 2018). More recently, text generation via Generative Adversarial Networks (GAN) (Goodfellow et al., 2014), called Text GANs, has attracted the attention of researchers (Nie et al., 2019; Zhou et al., 2019; Wu et al., 2021; Scialom et al., 2020). They frame the problem under the GAN paradigm, which uses RL-based (Williams, 1992) algorithms to obtain the gradient estimation, as text generation is discrete. However, neither RL nor Text GANs can avoid the high variance of gradient estimation caused by sparse rewards, which makes the training process unstable and limits improvements.
+
+Different from traditional methods, our proposed model can adaptively determine whether the current input word is from ground truth or from generation with the word-level matching scores.
+
+# 3 Proposed Method
+
+Given a context sentence $X^{k} = \{x_{1}^{k}, x_{2}^{k}, \dots, x_{S_{k}}^{k}\}$ and a target response sentence $Y^{k} = \{y_{1}^{k}, y_{2}^{k}, \dots, y_{T_{k}}^{k}\}$, where $S_{k}$ and $T_{k}$ are the word lengths of the context and response, respectively, the dialogue generation model based on the sequence-to-sequence (Seq2Seq) (Sutskever et al., 2014) framework directly models the response probability:
+
+$$
+P\left(Y^{k} \mid X^{k}, \theta\right) = \prod_{t=1}^{T_{k}} p\left(y_{t}^{k} \mid y_{<t}^{k}, X^{k}, \theta\right) \tag{1}
+$$
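As a small illustration (our own code, not the authors'), the factorization in Eq. (1) becomes a sum of per-token log-probabilities in log space, which is exactly what MLE training maximizes under teacher forcing:

```python
import torch
import torch.nn.functional as F

def response_log_prob(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """Log of Eq. (1): sum over t of log p(y_t^k | y_<t^k, X^k, theta).

    logits: (T, vocab) decoder outputs, one row per target position, computed
            with the ground-truth prefix fed in (teacher forcing).
    targets: (T,) ground-truth token ids y_1^k .. y_{T_k}^k.
    """
    log_probs = F.log_softmax(logits, dim=-1)           # per-step log p(. | prefix)
    picked = log_probs.gather(1, targets.unsqueeze(1))  # log p(y_t | prefix)
    return picked.squeeze(1).sum()                      # log P(Y^k | X^k, theta)
```

Minimizing the negative of this quantity over the training pairs is the cross-entropy objective referred to throughout the paper.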
+
+where $\theta$ are the model parameters and $y_{<t}^{k}$ denotes the words preceding position $t$. $\mathbb{1}(\cdot > \beta)$ is an indicator function whose output is 1 if its input is greater than $\beta$ and 0 otherwise. Both $\alpha$ and $\beta$ increase as the training epochs grow.
+
+9: Decoder predicts the first $t-1$ words
+10: end for
+11: $\hat{y}_{<t}^{k} \leftarrow y_{<t}^{k*} \otimes P + y_{<t}^{k} \otimes (1 - P)$
+12: else
+13: $\hat{y}_{<t}^{k} \leftarrow y_{<t}^{k}$
+14: end if
+15: return $\hat{y}_{<t}^{k}$
+
+# 4.1.1 Datasets
+
+The Chinese STC dataset consists of 4,391,266, 23,532 and 21,161 dialogue context-response pairs for the training, validation and testing sets, respectively. We remove those pairs whose context contains the response or vice versa, and obtain 4,295,557, 23,039 and 20,749 pairs for the three sets. The average number of responses corresponding to each context
+
+in STC is 19.7. The English Reddit dialogue corpus, named Reddit, is extracted from Reddit post-comment pairs with a script$^2$. The original data consists of 6 million dialogues from all even months of 2011. We use the official script to tokenize, and remove duplicates and sentences shorter than 3 or longer than 64 tokens. If a context has more than 20 responses, we randomly select 20 and ignore the others; the average number of responses per context in Reddit is 6.2. Finally, we randomly split the data into training, validation and testing sets, which contain 1,107,860, 23,183 and 12,429 pairs, respectively.
+
+# 4.1.2 Baselines and Parameters Setting
+
+Three baselines are used for comparison: the Transformer-based model (Vaswani et al., 2017) and Random Sampling at the word (RS-Word) and sentence (RS-Sentence) level (Zhang et al., 2019).
+
+For STC, we utilize Chinese words as input and set the vocabulary size to 10,599. For Reddit, context-response pairs are encoded using byte-pair encoding (BPE) (Sennrich et al., 2016) with vocabularies of 11,527 tokens. For a fair comparison among all baseline models and our model, the dimension of all word embeddings is 512, and the beam size in testing is 5. The transformer model has 6 layers in both the encoder and decoder, and 8 heads in multi-head attention. All parameters are initialized from the uniform distribution over $[-0.1, 0.1]$. We adopt the Adam optimizer (Kingma and Ba, 2015) with $\beta_1 = 0.9$, $\beta_2 = 0.98$ and $\epsilon = 10^{-8}$. We set the learning rate to 0.0007 and the maximum number of tokens in a batch to 8192 with an update frequency of 2. We run all models on 4 Tesla P40 GPU cards with PyTorch$^3$. The code will be released when this paper is accepted.
+
+# 4.1.3 Evaluation Measures
+
+Quantitative metrics and human judgments are used for evaluation in our experiments. The quantitative metrics contain traditional metrics, such as PPL and the BLEU score (Papineni et al., 2002), and the Distinct metric (Li et al., 2016), which was recently proposed to evaluate the diversity of the generated responses by calculating the number of distinct unigrams and bigrams in them. We also evaluate each generated response by calculating the BLEU score with
+
+all reference responses, and use the highest BLEU score to represent the quality of the generated response. The average of all highest BLEU scores over the testing set is named AH-BLEU. In addition, BLEU scores are calculated using the NLTK$^4$ toolkit.
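AH-BLEU as described above can be computed with NLTK roughly as follows (a sketch; taking the per-reference maximum and adding smoothing for short responses are our reading and our assumption, respectively):

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def ah_bleu(generated, references_per_context):
    """Average Highest BLEU: for each generated response, take the highest
    sentence-BLEU against any of its reference responses, then average over
    the test set. generated: list of token lists; references_per_context:
    list of lists of token lists (all references for each context)."""
    smooth = SmoothingFunction().method1  # avoids zero scores on short responses
    scores = []
    for hyp, refs in zip(generated, references_per_context):
        best = max(sentence_bleu([r], hyp, smoothing_function=smooth) for r in refs)
        scores.append(best)
    return sum(scores) / len(scores)
```

Scoring against the best-matching reference rather than a single fixed one is what makes the metric fair in the one-to-various dialogue setting.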
+
+For human evaluation, given 300 randomly sampled contexts and the responses generated for them by the different models, three annotators (all CS major students) are asked to score each context-response pair based on the coherence of the generated response with respect to the context: 3, 2 and 1 mean relevant, common and not relevant, respectively. The mean score of a model is the average of all scores given by the three annotators to the context-response pairs it generated. To provide a point of reference for the different models, we also evaluate the ground-truth context-response pairs in the same way.
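
The mean score is thus a weighted average of the 3/2/1 labels over their percentages, which can be sketched in one line:

```python
def mean_score(pct_relevant, pct_common, pct_not_relevant):
    """Mean annotation score from the percentages of 3/2/1 labels."""
    return (3 * pct_relevant + 2 * pct_common + 1 * pct_not_relevant) / 100.0
```

For instance, the ground-truth STC row of Table 4 gives `mean_score(82.23, 13.56, 4.23)` ≈ 2.78, matching the reported value.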
+
+# 4.2 Experimental Results
+
+In this section, we demonstrate our experimental results on the two public datasets.
+
+# 4.2.1 Metric-based Evaluation
+
+Table 3 shows the quantitative evaluation results. From this table, we can see that models with a switch mechanism, such as RS-Word, RS-Sentence and AdapBridge, outperform the traditional Transformer-based model in terms of BLEU, Distinct-1 and Distinct-2. The results show that the switch mechanism plays an important role in the dialogue generation task.
+
+RS-Word and RS-Sentence both replace ground-truth tokens with generated tokens using a random scheduled sampling. However, they both perform worse than our proposed model, as our model considers the relevance between the generated words and the ground truth through word-level matching scores. Taking the BLEU scores on the STC dataset as an example, the BLEU-4 score of AdapBridge is 2.17, better than those of RS-Word and RS-Sentence, i.e., 2.05 and 2.12. In particular, our model achieves the best AH-BLEU-2 score on both datasets, a significant performance gain, which shows that the responses of our model have higher quality than those of the other baselines.
+
+The diversity of responses can be evaluated by the Distinct score. As shown in Table 3, our AdapBridge achieves significant performance gains. Taking the Reddit results in Table 3 as an example, the proposed AdapBridge model improves the
+
+STC dataset:
+
+| Model | PPL | BLEU-2(%) | BLEU-4(%) | DIS-1(%) | DIS-2(%) | AH-BLEU-2(%) |
+| --- | --- | --- | --- | --- | --- | --- |
+| Transformer | 28.86 | 3.74 | 1.37 | 0.23 | 0.90 | 14.43 |
+| RS-Word Oracle | 28.91 | 5.12 | 2.05 | 0.33 | 1.25 | 15.21 |
+| RS-Sentence Oracle | 26.75 | 5.50 | 2.12 | 0.35 | 1.38 | 15.52 |
+| AdapBridge | 29.36 | 5.35 | 2.17 | 0.43 | 1.74 | 16.38 |
+
+Reddit dataset:
+
+| Model | PPL | BLEU-2(%) | BLEU-4(%) | DIS-1(%) | DIS-2(%) | AH-BLEU-2(%) |
+| --- | --- | --- | --- | --- | --- | --- |
+| Transformer | 40.83 | 3.99 | 0.77 | 0.79 | 2.91 | 7.03 |
+| RS-Word Oracle | 43.11 | 3.78 | 0.81 | 1.42 | 5.19 | 7.43 |
+| RS-Sentence Oracle | 40.72 | 3.49 | 0.76 | 1.33 | 5.08 | 7.05 |
+| AdapBridge | 48.01 | 3.56 | 0.83 | 1.56 | 5.56 | 7.60 |
+
+Table 3: Metric-based evaluation results on the STC and Reddit datasets. DIS denotes the Distinct score and AH-BLEU-2 denotes the average of all highest BLEU-2 scores.
+
+STC dataset:
+
+| Model | 3(%) | 2(%) | 1(%) | Mean |
+| --- | --- | --- | --- | --- |
+| Ground Truth | 82.23 | 13.56 | 4.23 | 2.78 |
+| Transformer | 48.67 | 23.11 | 28.22 | 2.20 |
+| RS-Word | 56.33 | 20.89 | 22.78 | 2.34 |
+| RS-Sentence | 55.33 | 22.67 | 22.00 | 2.33 |
+| AdapBridge | 59.56 | 24.33 | 16.11 | 2.43 |
+
+Reddit dataset:
+
+| Model | 3(%) | 2(%) | 1(%) | Mean |
+| --- | --- | --- | --- | --- |
+| Ground Truth | 79.00 | 15.67 | 5.33 | 2.74 |
+| Transformer | 49.78 | 21.89 | 28.33 | 2.21 |
+| RS-Word | 52.44 | 23.00 | 24.56 | 2.28 |
+| RS-Sentence | 53.11 | 24.67 | 22.22 | 2.31 |
+| AdapBridge | 55.67 | 28.33 | 16.00 | 2.40 |
+
+Table 4: Human evaluation results on STC and Reddit (3 = relevant, 2 = common, 1 = not relevant).
+
+Transformer, RS-Word and RS-Sentence models by 2.65, 0.37 and 0.48 Distinct-2 points, respectively. We can also note that our model achieves the highest Distinct scores on both the STC and Reddit datasets, which indicates that it can generate more diverse responses and avoid generating common ones. In summary, compared with the baselines, our proposed AdapBridge model is able to generate high-quality and diverse responses. We also conducted a significance test, and the result shows that the improvements of our model are significant on both datasets, i.e., $p\text{-value} < 0.01$.
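
The paper does not state which significance test was used; a common choice for per-example metrics, sketched here under that assumption, is the paired bootstrap test.

```python
import random

def paired_bootstrap(scores_a, scores_b, n_samples=10_000, seed=0):
    """Paired bootstrap test: estimate how often system B fails to beat
    system A on resampled test sets. A small p-value (< 0.01 in the
    paper) indicates a significant improvement of B over A.
    `scores_a` / `scores_b` are per-example metric values of the two systems."""
    assert len(scores_a) == len(scores_b)
    rng = random.Random(seed)
    n, worse = len(scores_a), 0
    for _ in range(n_samples):
        # Resample test examples with replacement and compare system totals.
        idx = [rng.randrange(n) for _ in range(n)]
        if sum(scores_b[i] for i in idx) <= sum(scores_a[i] for i in idx):
            worse += 1
    return worse / n_samples
```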
+
+# 4.2.2 Human Evaluation
+
+The human evaluation results are shown in Table 4, which gives the percentages of relevant, common and not-relevant responses generated by the different models. From the results we can see that our AdapBridge obtains the highest scores in human evaluation. Taking STC as an example, compared with Transformer, RS-Word Oracle and RS-Sentence Oracle, AdapBridge achieves relative gains of $22.38\%$ , $5.73\%$ and $7.65\%$ on the relevant score. For the mean score, we can observe that our AdapBridge generates the most relevant responses and the fewest not-relevant responses, which indicates that the responses generated by our model are attractive to the annotators. We also conducted a significance test, and the result shows that the improvements of our model are significant on both datasets, i.e., $p\text{-value} < 0.01$.
+
+# 4.3 Case study
+
+In this section, we conduct case studies to show that our model can generate more relevant and diverse responses than the baseline models.
+
+We give two examples in Table 5. In Example 1, the response of Transformer is "Is this a fish or a fish?", which is an unreasonable sentence, as the words in the positions of the two "fish" should differ according to common sense. The response of RS-Word repeats "want to eat fish" twice, which is a part of the context. Although the response of RS-Sentence, "How lovely! I want to eat, too.", is relevant, it conforms
+
+Example 1:
+
+| | Chinese | English |
+| --- | --- | --- |
+| context | 你不是天天嚷着要吃鱼嘛?鱼来了 | Don't you want to eat fish every day? The fish is coming. |
+| Transformer | 这是鱼还是鱼啊? | Is this a fish or a fish? |
+| RS-Word | 要吃鱼,要吃鱼。 | Want to eat fish, want to eat fish. |
+| RS-Sentence | 好可爱!我也想吃。 | How lovely! I want to eat, too. |
+| AdapBridge | 这是什么鱼? | What kind of fish is this? |
+
+Example 2:
+
+| | Chinese | English |
+| --- | --- | --- |
+| context | 水城威尼斯,梦里的城市。 | Venice, the city of water, the city of dreams. |
+| Transformer | 梦里的城市,梦里的城市。 | City in dreams, city in dreams. |
+| RS-Word | 好美的城市,好想去。 | Beautiful city. I really want to go. |
+| RS-Sentence | 这是哪呀!我也想去。 | Where is this? I want to go too. |
+| AdapBridge | 想和我的爸妈一起去。 | I want to go here with my parents. |
+
+to a common response paradigm, such as "How . . . ! I want it, too." or "What's this? I want to . . ." If the context contains food, animals, locations, etc., such responses all seem appropriate, which makes them unattractive to humans. In contrast, the response generated by our AdapBridge, "What kind of fish is this?", is more specific and relevant. We can see a similar phenomenon in Example 2 of Table 5: with the context "Venice, the city of water, the city of dreams.", Transformer repeats content from the context, and the responses of RS-Word and RS-Sentence are both common responses as mentioned above. Compared with the responses generated by the baseline models, the response of AdapBridge, "I want to go here with my parents.", is more relevant and attractive. These results indicate that our proposed model can generate high-quality and attractive responses with the adaptive switch mechanism.
+
+# 4.4 AdapBridge on NMT
+
+The method we propose can also be easily adapted to neural machine translation (NMT). With this task, we want to investigate whether AdapBridge can help improve the performance of NMT, a classic natural language generation task. We perform experiments on the WMT'14 English$\rightarrow$German (En$\rightarrow$De) dataset, which contains 3,900,502, 39,414 and 3,003 sentences for the training, validation and testing sets, respectively. We train the Transformer-based model with the same settings described in Section 4.1.2 and measure translation quality with BLEU. The evaluation results are listed in Table 6.
+
+From the results, we can see that our method also achieves significant performance gains, improving the Transformer-based model by 0.95 BLEU-4 points. On the BLEU-2 score, our model is slightly lower than the RS-Sentence model, consistent with the results in Table 3, which can be attributed to the sentence-level information used by the RS-Sentence oracle. To analyze the gap between ground-truth and generated sentences, we calculate the cosine similarity between the hidden representations of ground-truth and generated sentences with a trained BERT model (Wolf et al., 2020), obtaining similarity scores of 0.96 and 0.81 on the WMT'14 and Reddit datasets, respectively. We can also notice that the BLEU scores on NMT are much higher than those on the dialogue generation task. These overlap measures indicate the severity of the exposure bias problem in dialogue generation, as analyzed in Section 1.
+
+Table 5: Two examples of generated responses on STC.
+
+| Model | BLEU-2(%) | BLEU-4(%) |
+| --- | --- | --- |
+| Transformer | 43.40 | 26.43 |
+| RS-Word | 43.66 | 26.84 |
+| RS-Sentence | 44.08 | 27.21 |
+| AdapBridge | 43.99 | 27.38 |
+
+Table 6: BLEU scores on the En$\rightarrow$De NMT task.
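
The similarity analysis in Section 4.4 can be sketched as follows; `encode` stands for any sentence encoder, e.g. mean-pooled hidden states of a trained BERT model from the HuggingFace `transformers` library. Names and the pooling choice are our assumptions.

```python
def cosine_similarity(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sum(a * a for a in u) ** 0.5
    norm_v = sum(b * b for b in v) ** 0.5
    return dot / (norm_u * norm_v)

def corpus_similarity(ground_truths, generations, encode):
    """Average cosine similarity between hidden representations of
    ground-truth and generated sentences. `encode` maps a sentence to a
    fixed-size vector (e.g. a pooled BERT hidden state)."""
    sims = [
        cosine_similarity(encode(gt), encode(gen))
        for gt, gen in zip(ground_truths, generations)
    ]
    return sum(sims) / len(sims)
```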
+
+# 5 Conclusion
+
+In this paper, we propose AdapBridge, a novel adaptive switch mechanism with word-level matching scores to alleviate the exposure bias problem in dialogue generation. Our core idea is to utilize word-level matching scores to determine whether the input at each training step comes from the ground truth or from the model's prediction. Experimental results show that our model significantly outperforms previous baseline models. Further analysis on NMT also indicates that our model can achieve significant improvements on different generation tasks. In future work, we plan to design different scoring methods, e.g., BERT score or BLEU, to guide the model to select better words. It would also be interesting to extend our AdapBridge model to other generation tasks, such as abstractive summarization.
+
+# Acknowledgements
+
+This work is supported by the Beijing Academy of Artificial Intelligence (BAAI), and the National Natural Science Foundation of China (NSFC) (No.61773362).
+
+# References
+
+Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled sampling for sequence prediction with recurrent neural networks. In Proceedings of the 28th International Conference on Neural Information Processing Systems-Volume 1, pages 1171-1179.
+Jeffrey Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, and Trevor Darrell. 2015. Long-term recurrent convolutional networks for visual recognition and description. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2625-2634.
+Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In NIPS, pages 2672-2680.
+Sebastian Goodman, Nan Ding, and Radu Soricut. 2020. TeaForN: Teacher-forcing with n-grams. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8704–8717.
+Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degeneration. In International Conference on Learning Representations.
+Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
+Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies,
+
+pages 110-119, San Diego, California. Association for Computational Linguistics.
+Jiwei Li, Will Monroe, Tianlin Shi, Sébastien Jean, Alan Ritter, and Dan Jurafsky. 2017. Adversarial learning for neural dialogue generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2157-2169.
+Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740-755. Springer.
+Weili Nie, Nina Narodytska, and Ankit Patel. 2019. RelGAN: Relational generative adversarial networks for text generation. In International Conference on Learning Representations.
+Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pages 311-318.
+Weizhen Qi, Yu Yan, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang, and Ming Zhou. 2020. Prophetnet: Predicting future n-gram for sequence-to-sequence pre-training. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, pages 2401-2410.
+Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level training with recurrent neural networks. In 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings.
+Thomas Scialom, Paul-Alexis Dray, Sylvain Lamprier, Benjamin Piwowarski, and Jacopo Staiano. 2020. Coldgans: Taming language gans with cautious sampling strategies. In Advances in Neural Information Processing Systems, volume 33, pages 18978-18989. Curran Associates, Inc.
+Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Linguistics.
+Chenze Shao, Xilin Chen, and Yang Feng. 2018. Greedy search with probabilistic n-gram matching for neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4778-4784.
+Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Minimum
+
+risk training for neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1683-1692.
+Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, volume 27. Curran Associates, Inc.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS.
+Arun Venkatraman, Martial Hebert, and J Bagnell. 2015. Improving multi-step prediction of learned time series models. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 29.
+Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural image caption generator. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3156-3164.
+Sean Welleck, Ilia Kulikov, Stephen Roller, Emily Dinan, Kyunghyun Cho, and Jason Weston. 2019. Neural text generation with unlikelihood training. In International Conference on Learning Representations.
+Ronald J Williams. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229-256.
+Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
+Lijun Wu, Yingce Xia, Fei Tian, Li Zhao, Tao Qin, Jianhuang Lai, and Tie-Yan Liu. 2018. Adversarial neural machine translation. In Asian Conference on Machine Learning, pages 534-549. PMLR.
+Qingyang Wu, Lei Li, and Zhou Yu. 2021. Textgail: Generative adversarial imitation learning for text generation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 14067-14075.
+Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith
+
+Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144.
+Wen Zhang, Yang Feng, Fandong Meng, Di You, and Qun Liu. 2019. Bridging the gap between training and inference for neural machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4334-4343, Florence, Italy. Association for Computational Linguistics.
+Wangchunshu Zhou, Tao Ge, Ke Xu, Furu Wei, and Ming Zhou. 2019. Self-adversarial learning with comparative discrimination for text generation. In International Conference on Learning Representations.
\ No newline at end of file
diff --git a/adaptivebridgebetweentrainingandinferencefordialoguegeneration/images.zip b/adaptivebridgebetweentrainingandinferencefordialoguegeneration/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..5de10a3f1c7959b11b181b07a672d546cac63703
--- /dev/null
+++ b/adaptivebridgebetweentrainingandinferencefordialoguegeneration/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:290b72b1c7029820efd5a07eb1b5bf2fd78aa6fb2c96bdee3e46a90866771a76
+size 436064
diff --git a/adaptivebridgebetweentrainingandinferencefordialoguegeneration/layout.json b/adaptivebridgebetweentrainingandinferencefordialoguegeneration/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..d65a794576d9bb95ba2c711faa5fca602967cedf
--- /dev/null
+++ b/adaptivebridgebetweentrainingandinferencefordialoguegeneration/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f58504eb091e9adea56c09adb2ebcf76511eafb9ef7ad91cd4086e19e6d9ec95
+size 368025
diff --git a/adaptiveinformationseekingforopendomainquestionanswering/f4da35df-c204-42f1-83d2-a4b164644b84_content_list.json b/adaptiveinformationseekingforopendomainquestionanswering/f4da35df-c204-42f1-83d2-a4b164644b84_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..663e36745613e7c7a6e71a1ae3894f345ff68d2d
--- /dev/null
+++ b/adaptiveinformationseekingforopendomainquestionanswering/f4da35df-c204-42f1-83d2-a4b164644b84_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:011685e7f0dc8320065bc23a284bc69ac3b6e9a4e273be5d0ba1d3568f7605d6
+size 96375
diff --git a/adaptiveinformationseekingforopendomainquestionanswering/f4da35df-c204-42f1-83d2-a4b164644b84_model.json b/adaptiveinformationseekingforopendomainquestionanswering/f4da35df-c204-42f1-83d2-a4b164644b84_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..410dbb50ca9dc8ef015c3d05105817e23e1a28cd
--- /dev/null
+++ b/adaptiveinformationseekingforopendomainquestionanswering/f4da35df-c204-42f1-83d2-a4b164644b84_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:519338a6abb09f2de906bf5015c592d0aa50faafb80481fec007169c51e16ff1
+size 116243
diff --git a/adaptiveinformationseekingforopendomainquestionanswering/f4da35df-c204-42f1-83d2-a4b164644b84_origin.pdf b/adaptiveinformationseekingforopendomainquestionanswering/f4da35df-c204-42f1-83d2-a4b164644b84_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..b90f8b6fbb21803928c2441e3e354386625be3a6
--- /dev/null
+++ b/adaptiveinformationseekingforopendomainquestionanswering/f4da35df-c204-42f1-83d2-a4b164644b84_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:eeb2b6171162345432b1040404803cf54bd76ec56ff2d9e84e3f88ea19e04550
+size 1358209
diff --git a/adaptiveinformationseekingforopendomainquestionanswering/full.md b/adaptiveinformationseekingforopendomainquestionanswering/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..0e75f42ea70db72bcc28bc5f9bab08be657cfdf9
--- /dev/null
+++ b/adaptiveinformationseekingforopendomainquestionanswering/full.md
@@ -0,0 +1,412 @@
+# Adaptive Information Seeking for Open-Domain Question Answering
+
+Yunchang Zhu†§, Liang Pang†*, Yanyan Lan◇*, Huawei Shen†§, Xueqi Cheng†§
+
+†Data Intelligence System Research Center
+
+and $^{\ddagger}$ CAS Key Lab of Network Data Science and Technology,
+
+Institute of Computing Technology, Chinese Academy of Sciences
+
+$^{\S}$ University of Chinese Academy of Sciences
+
+$\diamond$ Institute for AI Industry Research, Tsinghua University
+
+{zhuyunchang17s, pangliang, shenhuawei, cxq}@ict.ac.cn
+
+lanyanyan@tsinghua.edu.cn
+
+# Abstract
+
+Information seeking is an essential step for open-domain question answering to efficiently gather evidence from a large corpus. Recently, iterative approaches have been proven to be effective for complex questions, by recursively retrieving new evidence at each step. However, almost all existing iterative approaches use predefined strategies, either applying the same retrieval function multiple times or fixing the order of different retrieval functions, which cannot fulfill the diverse requirements of various questions. In this paper, we propose a novel adaptive information-seeking strategy for open-domain question answering, namely AISO. Specifically, the whole retrieval and answer process is modeled as a partially observed Markov decision process, where three types of retrieval operations (e.g., BM25, DPR, and hyperlink) and one answer operation are defined as actions. According to the learned policy, AISO could adaptively select a proper retrieval action to seek the missing evidence at each step, based on the collected evidence and the reformulated query, or directly output the answer when the evidence set is sufficient for the question. Experiments on SQuAD Open and HotpotQA fullwiki, which serve as single-hop and multi-hop open-domain QA benchmarks, show that AISO outperforms all baseline methods with predefined strategies in terms of both retrieval and answer evaluations.
+
+# 1 Introduction
+
+Open-domain question answering (QA) (Voorhees et al., 1999) is a task of answering questions using a large collection of texts (e.g., Wikipedia). It relies on a powerful information-seeking method to efficiently retrieve evidence from the given large corpus.
+
+Traditional open-domain QA approaches mainly follow the two-stage retriever-reader pipeline (Chen et al., 2017; Yang et al., 2018; Karpukhin
+
+[Figure 1 appears here. It shows the question "What movie directed by Pitof in 2004 has a tie-in electronic game?", the three most relevant passages (P1 "Pitof", P2 "Catwoman", P3 "Catwoman (video game)"), and the retrieval traces of several information-seeking strategies: repeatedly applying BM25, repeatedly applying dense retrieval (MDR), BM25 followed by link retrieval, and the optimal adaptive strategy.]
+
+Figure 1: An example derived from the HotpotQA development set. P1, P2 and P3 are the most relevant passages, of which P2 and P3 are supporting passages that are essential to answer the question. Except for the adaptive strategy in the last row, fixed-strategy methods, such as applying BM25 or dense retrieval multiple times, or first using BM25 and then entity linking, all fail, because the remaining supporting passages are ranked beyond the top 1,000. The number between two arrows indicates the highest rank of the remaining supporting passages in the retrieval list, unless they are ranked first.
+
+et al., 2020), in which the retriever uses a determinate sparse or dense retrieval function to retrieve evidence, independently from the reading stage. But these approaches have limitations in answering complex questions, which need multi-hop or logical reasoning (Xiong et al., 2021).
+
+To tackle this issue, iterative approaches have been proposed to recurrently retrieve passages and reformulate the query based on the original question and the previously collected passages. Nevertheless, all of these approaches adopt fixed information-seeking strategies in the iterative process: some employ a single retrieval function multiple times (Das et al., 2019a; Qi et al., 2019; Xiong et al., 2021), while others use a pre-defined sequence of retrieval functions (Asai et al., 2020; Dhingra et al., 2020).
+
+However, the fixed information-seeking strategies cannot meet the diversified requirements of various questions. Taking Figure 1 as an example, the answer to the question is 'Catwoman' in P3. Due to the lack of essential supporting passages, simply applying BM25/dense retrieval (DR) multiple times (strategy 1 (Qi et al., 2019) or strategy 2 (Xiong et al., 2021)), or using the mixed but fixed strategy 3 (Asai et al., 2020), cannot answer the question. Specifically, it is hard for Qi et al. (2019) to generate the ideal query 'Catwoman game' from P1 or P2, so BM25 (Robertson and Zaragoza, 2009) suffers from the mismatch problem and fails to find the next supporting passage P3. Representation learning for salient but rare phrases (e.g., 'Pitof') remains a challenging problem (Karpukhin et al., 2020), which can hurt the effectiveness of dense retrieval: at the first step, the supporting passage P3 is ranked 65th, while P1 and P2 do not appear in the top-1000 list. Furthermore, link retrieval functions fail when the current passage, e.g., P2, has no valid entity links.
+
+Motivated by the above observations, we propose an Adaptive Information-Seeking approach for Open-domain QA, namely AISO. First, the task of open-domain QA is formulated as a partially observed Markov decision process (POMDP) to reflect the interactive characteristics between the QA model (i.e., the agent) and the intractably large corpus (i.e., the environment). The agent performs an action according to its state (belief module) and the policy it has learned (policy module). Specifically, the belief module of the agent maintains a set of evidence to form its state, and the policy module chooses between two groups of actions: 1) retrieval actions, which consist of the type of retrieval function and the reformulated query for requesting evidence, and 2) the answer action, which returns a piece of text to answer the question and completes the process. Thus, at each step, the agent emits an action to the environment, which returns a passage as an observation back to the agent. The agent updates its evidence set and generates the next action, step by step, until the evidence set is sufficient to trigger the answer action to answer the question. To learn such a strategy, we train the policy via imitation learning by cloning the behavior of an oracle online, which avoids the hassle of designing reward functions and solves the POMDP in a supervised learning fashion.
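
The interaction loop described above can be sketched as follows. This is a schematic only; `observe`, `act`, and the action names are our notation, not the released implementation.

```python
def run_episode(agent, env, question, max_steps=20):
    """Schematic AISO agent-environment loop.

    `agent` exposes `observe(obs)` (update the evidence set / belief)
    and `act()` (return a (function, argument) pair from the policy);
    `env.step(action)` executes a retrieval action and returns the next
    observed passage.
    """
    obs = question                    # the initial observation is the question
    for _ in range(max_steps):
        agent.observe(obs)            # update the belief with new evidence
        func, arg = agent.act()       # choose a retrieval or answer action
        if func == "ANSWER":
            return arg                # the answer action ends the episode
        obs = env.step((func, arg))   # the environment reveals one passage
    return None                       # no answer within the step budget
```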
+
+Our experimental results show that our approach achieves better retrieval and answering performance than state-of-the-art approaches on SQuAD Open and HotpotQA fullwiki, which are representative single-hop and multi-hop datasets for open-domain QA. Furthermore, AISO significantly reduces the number of reading steps at inference time.
+
+In summary, our contributions include:
+
+- To the best of our knowledge, we are the first to introduce the adaptive information-seeking strategy to the open-domain QA task;
+- Modeling adaptive information-seeking as a POMDP, we propose AISO, which learns the policy via imitation learning and has great potential for expansion.
+- The proposed AISO achieves state-of-the-art performance on two public datasets and won first place on the HotpotQA fullwiki leaderboard. Our code is available at https://github.com/zycdev/AISO.
+
+# 2 Related Work
+
+Traditional approaches to open-domain QA mainly follow the two-stage retriever-reader pipeline (Chen et al., 2017): a retriever first gathers relevant passages as evidence candidates, then a reader reads the retrieved candidates to form an answer. In the retrieval stage, most approaches employ a determinate retrieval function and treat each passage independently (Wang et al., 2018; Lin et al., 2018; Lee et al., 2018; Yang et al., 2018; Pang et al., 2019; Lee et al., 2019; Guu et al., 2020; Karpukhin et al., 2020; Izacard and Grave, 2021). As an extension, some approaches further consider the relations between passages through hyperlinks or entity links and extend the evidence with the linked neighbor passages (Nie et al., 2019; Das et al., 2019b; Zhao et al., 2020). However, pipeline approaches retrieve evidence independently of the reader, which 1) introduces evidence that is less relevant to the question, and 2) makes it hard to handle complex questions that involve high-order relationships between the question and the evidence.
+
+Instead, recent iterative approaches sequentially retrieve new passages by updating the query given to a specific retrieval function at each step, conditioned on the information already gathered. At each step, Das et al. (2019a); Feldman and El-Yaniv (2019); Xiong et al. (2021) reformulate the dense query vector in a latent space, while Ding et al. (2019); Qi et al. (2019); Zhang et al. (2020); Qi et al. (2020) update the natural language query. After a first retrieval step using TF-IDF, Asai et al. (2020) and Li et al. (2021) recursively select subsequent supporting passages on top of a hyperlinked passage graph. Nevertheless, all of these approaches adopt fixed information-seeking strategies, employing the same retrieval function multiple times (Das et al., 2019a; Feldman and El-Yaniv, 2019; Xiong et al., 2021; Ding et al., 2019; Qi et al., 2019; Zhang et al., 2020; Qi et al., 2020) or a pre-designated sequence of retrieval functions (Asai et al., 2020; Li et al., 2021). Due to the diversity of questions, these fixed strategies established in advance may not be optimal for all questions, and may even fail to collect the evidence.
+
+Figure 2: The overview of AISO.
+
+# 3 Method
+
+In this section, we first formulate the open-domain QA task as a partially observed Markov decision process (POMDP) and introduce the dynamics of the environment. Then, we elaborate on how the agent interacts with the environment to seek evidence and answer a question. Finally, to solve the POMDP, we describe how to train the agent via imitation learning.
+
+# 3.1 Open-Domain QA as a POMDP
+
+Given a question $q$ and a large corpus $\mathcal{P}$ composed of passages, the task of open-domain QA is to collect a set of evidence $E\subset \mathcal{P}$ and answer the question based on the gathered evidence.
+
+The fashion of iterative evidence gathering, proven effective by previous works (Das et al., 2019a; Asai et al., 2020; Xiong et al., 2021), is essentially a sequential decision-making process. Besides, since the corpus is large, ranging from millions (e.g., Wikipedia) to billions (e.g., the Web), and the input length of a QA model is limited, the QA model can only observe a part of the corpus. Owing to the above two reasons, we model open-domain QA as a partially observed Markov decision process.
+
In the POMDP we design, as shown in Figure 2, the agent is the QA model, which issues actions to seek evidence from the large-scale corpus hidden in the environment and finally answers the question. By executing a received action, the environment returns a retrieved passage to the agent as an observation of the corpus. Formally, the POMDP is defined by the tuple $(S, \mathcal{A}, \mathcal{O}, \Omega, Z, R)$, where $R$ is the reward function.
+
Actions: At timestep $t = 0,1,\dots ,T$, an action $a_{t}$ in the action space $\mathcal{A} = \mathcal{F}\times \mathcal{U}$ is a request for an executable function $f\in \mathcal{F}$, expressed as $\langle f,u\rangle$, where $u\in \mathcal{U}$ is the text argument passed to $f$. The space of executable functions $\mathcal{F}$ includes two groups: 1) retrieval functions, which take the query $u$ and the corpus $\mathcal{P}$ as input and produce a ranked retrieval list of passages $\mathcal{P}_{f(u)}$; 2) the answer function, which replies to the question $q$ with the answer $u$ and ends the process. The action $a_{t}$ is selected following the policy $\Pi$ described in Subsection 3.2.2.
+
States: The environment state $s_t$ in the state space $S$ tracks the reveal status of the retrieval lists of all past retrieval actions. When the agent issues an action $a_t = \langle f, u \rangle$, $s_t$ transitions to $s_{t+1}$ according to the deterministic transition dynamics $\Omega(s_t, a_t)$. Specifically, $\Omega$ marks the topmost unrevealed passage in the retrieval list $\mathcal{P}_{f(u)}$ as revealed. If the environment has never executed $a_t$ before, it first searches and caches $\mathcal{P}_{f(u)}$ for possible repeated retrieval actions in the future.
+
+Observations: On reaching the new environment state $s_{t+1}$ , the environment will return an observation $o_{t+1}$ from the observation space $\mathcal{O} = \{q\} \cup \mathcal{P}$ , governed by the deterministic observation dynamics $Z$ . At the initial timestep, the question $q$ will be returned as $o_0$ . In other cases, $Z$ is designed to return only the last passage marked as revealed in $\mathcal{P}_{f(u)}$ at a time. For example, if the action $\langle f, u \rangle$ is received for the $k$ th time, the $k$ th passage in $\mathcal{P}_{f(u)}$ will be returned.
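To make these dynamics concrete, here is a minimal sketch of such an environment in Python. This is our illustration, not the authors' code: the class and method names are invented, and a real system would query a search index rather than a plain callable.

```python
import collections
from typing import Callable, Dict, List, Tuple


class OpenQAEnvironment:
    """Sketch of the POMDP environment: retrieval lists P_{f(u)} are cached
    per action, and the k-th repeat of an action reveals the k-th passage."""

    def __init__(self, question: str,
                 retrieval_fns: Dict[str, Callable[[str], List[str]]]):
        self.question = question
        self.retrieval_fns = retrieval_fns                 # e.g. {"f_s": bm25_search}
        self._cache: Dict[Tuple[str, str], List[str]] = {}  # cached P_{f(u)}
        self._revealed = collections.Counter()              # reveal counts per action

    def reset(self) -> str:
        # At t = 0 the observation o_0 is the question itself.
        return self.question

    def step(self, f: str, u: str) -> str:
        # Search and cache the retrieval list P_{f(u)} on first execution.
        key = (f, u)
        if key not in self._cache:
            self._cache[key] = self.retrieval_fns[f](u)
        # The k-th identical request reveals the k-th passage of the list.
        k = self._revealed[key]
        self._revealed[key] += 1
        ranked = self._cache[key]
        return ranked[k] if k < len(ranked) else ""
```

Repeating the same action walks down the cached ranked list, so the agent can revisit a retrieval action to see deeper results, matching the observation dynamics $Z$ described above.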
+
+# 3.2 Agent
+
+The agent interacts with the environment to collect evidence for answering the question. Without access to the environment state $s_t$ , the agent can only perform sub-optimal actions based on current observations. It needs to build its belief $b_t$ in the state that the environment may be in, based on its experience $h_t = (o_0, a_0, o_1, \dots, a_{t-1}, o_t)$ . Therefore, the agent consists of two modules: belief module $\Phi$ that generates the belief state $b_t = \Phi(h_t)$ from the experience $h_t$ , and policy module $\Pi$ that prescribes the action $a_t = \Pi(b_t)$ to take for current belief state $b_t$ .
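The belief/policy decomposition can be sketched as a simple interaction loop; the names below are illustrative stand-ins for the learned $\Phi$ and $\Pi$, not the paper's implementation.

```python
def run_episode(env, belief_fn, policy_fn, max_steps=10):
    """Sketch of the agent loop: the belief module folds the experience h_t
    into a belief state b_t = Phi(h_t); the policy picks a_t = Pi(b_t)."""
    history = [env.reset()]                 # h_0 = (o_0,), with o_0 = question
    for _ in range(max_steps):
        belief = belief_fn(history)         # b_t = Phi(h_t)
        f, u = policy_fn(belief)            # a_t = <f, u> = Pi(b_t)
        if f == "f_o":                      # the answer action ends the episode
            return u
        observation = env.step(f, u)        # o_{t+1} returned by the environment
        history.extend([(f, u), observation])
    return None                             # step budget exhausted
```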
+
Both the belief and policy modules are built on pretrained Transformer encoders (Clark et al., 2020), denoted $\Psi^{belief}$ and $\Psi^{policy}$ respectively, which encode each input token into a $d$-dimensional contextual representation. The input to both encoders is a belief state, formatted as "[CLS] [YES] [NO] [NONE] question [SEP] title$_o$ [SOP] content$_o$ [SEP] title$_1$ [SOP] ... content$_{|E|}$ [SEP]", where the subscript $o$ denotes the observation passage and the other passages come from the collected evidence set $E$; [SOP] is a special token separating the title and content of a passage, [YES] and [NO] indicate yes/no answers, and [NONE] generally indicates that there is no desired answer/query/evidence. In this way, the self-attention mechanism across the concatenated sequence allows each passage in the input to interact with the others, which has been shown crucial for multi-hop reasoning (Wang et al., 2019a).
+
+# 3.2.1 Belief Module
+
+The belief module $\Phi$ transforms the agent's experience $h_t$ into a belief state $b_t$ by maintaining a set of evidence $E_{t-1}$ . At the end of the process, the evidence set $E$ is expected to contain sufficient evidence necessary to answer the question and no irrelevant passage. In the iterative process, the agent believes that all the passages in $E$ may help answer the question. In other words, those passages that were observed but excluded from the evidence set, i.e., $o_{1:t-1} \setminus E_{t-1}$ , are believed to be irrelevant to the question.
+
For simplicity, assuming that the negative passages $o_{1:t-1} \setminus E_{t-1}$ and the action history $a_{0:t-1}$ have no effect on the agent's decision-making, the belief state reduces to $b_t = \langle q, C_t \rangle$, where $C_t = E_{t-1} \cup \{o_t\}$ is the set of candidate evidence. An evidence scoring function $\phi$ then scores every candidate passage, with the pseudo passage $p_0$ (represented by the [NONE] token) serving as a dynamic threshold, and the evidence set is updated as

$$
E _ {t} = \left\{p _ {i} \mid \phi \left(p _ {i} \mid b _ {t}\right) > \phi \left(p _ {0} \mid b _ {t}\right), p _ {i} \in C _ {t} \right\}. \tag {2}
$$
+
It is worth noting that these evidence candidates are scored jointly, since they are encoded together in the same input, unlike conventional rerankers that score each passage separately.
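Assuming $p_0$'s score acts as the dynamic threshold (as the relevance label $y_0 = 0.5$ in the training section suggests), the evidence update is just a comparison against the pseudo passage's score. A toy sketch:

```python
def update_evidence(scores, candidates):
    """Keep exactly the candidate passages that outscore the pseudo passage.

    `scores[0]` is phi(p_0 | b_t), the dynamic threshold; `scores[1:]` are the
    scores of the candidate passages, aligned with `candidates`."""
    threshold = scores[0]
    return [p for p, s in zip(candidates, scores[1:]) if s > threshold]
```

For example, with scores `[0.5, 0.9, 0.2]` only the first candidate survives, so the evidence set can shrink as well as grow between steps.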
+
+# 3.2.2 Policy Module
+
The policy module $\Pi$ decides the next action $a_{t}$ based on the current belief state $b_{t}$. In this paper, we equip the agent with three retrieval functions and one answer function, which means that the action space $\mathcal{A}$ consists of three types of retrieval actions and one type of answer action. However, unlike the finite space of executable functions $\mathcal{F}$, the space of function arguments $\mathcal{U}$ includes all possible natural-language queries and answers. To narrow the search space, for each executable function we employ a suggester to propose a plausible query or answer as the argument passed to the function. Finally, we apply an action scoring function to the narrowed action space and select the action with the highest score.
+
+Equipped Functions Formally, the space of executable functions is defined as $\mathcal{F} = \{f_s, f_d, f_l, f_o\}$ .
+
Among them, $f_{o}$ is the answer function used to reply to the question, while the rest are three distinct off-the-shelf retrieval functions (RFs) used to explore the corpus. $f_{s}$ is a sparse RF, implemented as BM25 (Robertson and Zaragoza, 2009). It performs well when the query is concise and contains highly selective keywords, but often fails to capture the semantics of the query. $f_{d}$ is a dense RF, implemented as MDR (Xiong et al., 2021) for multi-hop questions and DPR (Karpukhin et al., 2020) for single-hop questions. Dense RFs can capture lexical variations and semantic relationships, but they struggle with out-of-vocabulary words. $f_{l}$ is a link RF, implemented via hyperlinks. When hyperlink markup is available in a source passage, it can readily map a query (i.e., anchor text) to the target passage.
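As a reminder of how the sparse RF scores passages, here is a toy BM25 implementation in the standard Robertson-Zaragoza form; production systems (the paper uses Elasticsearch) rely on an inverted index and tuned parameters rather than this brute-force loop.

```python
import math
from collections import Counter


def bm25_scores(query_terms, docs, k1=1.2, b=0.75):
    """Score each document (a list of tokens) against the query terms."""
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n            # average document length
    df = Counter(t for d in docs for t in set(d))    # document frequencies
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query_terms:
            if tf[t] == 0:
                continue
            idf = math.log(1 + (n - df[t] + 0.5) / (df[t] + 0.5))
            norm = tf[t] + k1 * (1 - b + b * len(d) / avgdl)
            s += idf * tf[t] * (k1 + 1) / norm
        scores.append(s)
    return scores
```

The length normalization (controlled by `b`) is what keeps short keyword-dense passages from being drowned out by long ones, which is exactly the regime where $f_s$ shines.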
+
+Argument Generation The space of function arguments $\mathcal{U}$ , composed of textual queries and answers, is too large to perform an exhaustive search due to the complexity of natural language. To reduce the search complexity, inspired by Yao et al. (2020), we employ four argument generators to generate the most plausible query/answer for the equipped functions.
+
$g_{o}$ is a trainable reading comprehension model for $f_{o}$. It is a span extractor built upon the contextual representations output by the encoder $\Psi^{policy}$. Like conventional extractive reading comprehension models (Yang et al., 2018; Clark et al., 2020), $g_{o}$ uses the contextual representations to calculate the start and end positions of the most plausible answer $u_{o}$. If the current context $C_t$ is insufficient to answer the question, the special token [NONE] is extracted instead.
+
$g_{s}$ is a query reformulation model for $f_{s}$. In this work, we directly employ the well-trained query reformulator from Qi et al. (2019) for multi-hop questions, which takes the belief state $b_{t}$ as input and outputs a span of the input sequence as the sparse query $u_{s}$. As for single-hop questions, since there is no off-the-shelf multi-step query reformulator, we leave $g_{s}$ as an identity function that returns the original question directly. In this case, requesting the same RF multiple times is equivalent to traversing the retrieval list of the original question.
+
+$g_{d}$ is a query reformulator for $f_{d}$ . For multi-hop questions, $g_{d}$ concatenates the question $q$ and the passage with the highest score in evidence set $E_{t}$ as the dense query $u_{d}$ , the same as the input of MDR (Xiong et al., 2021). If $E_{t}$ is empty, $u_{d}$ is equal to the question $q$ . Similar to $g_{s}$ , $g_{d}$ for single-hop questions also leaves original questions unchanged.
+
+$g_{l}$ is a trainable multi-class classifier for $f_{l}$ . It selects the most promising anchor text from the belief state $b_{t}$ . To enable rejecting all anchors, [NONE] is also treated as a candidate anchor. $g_{l}$ shares the encoder $\Psi^{policy}$ , where each anchor is represented by the average of contextual representations of its tokens. Upon $\Psi^{policy}$ , we use a linear layer to project the hidden representations of candidate anchors to real values and select the anchor with the highest value as the link query $u_{l}$ .
+
In this way, the action space is narrowed down to $\check{A} = \{\langle f_s,u_s\rangle ,\langle f_d,u_d\rangle ,\langle f_l,u_l\rangle ,\langle f_o,u_o\rangle \}$.
+
Action Selection The action scoring function $\pi$ is also built upon the output of $\Psi^{policy}$. To score an action $\langle f, u \rangle$ for the current belief state $b_{t}$, an additional two-layer ($3d \times 4d \times 1$) MLP, with a ReLU activation in between, projects the concatenated representations of $b_{t}$, the executable function $f$, and the function argument $u$, i.e., $\mathbf{v}_{[\mathrm{CLS}]}$, $\mathbf{w}_{f}$, and $\mathbf{v}_{u}$, into a real value. $\mathbf{w}_{f} \in \mathbb{R}^{d}$ is a trainable embedding for each executable function, with the same dimension as the token embeddings. $\mathbf{v}_{u}$ is specific to each function. Since $u_{s}$, $u_{l}$, and $u_{o}$ correspond to explicit text spans in $b_{t}$, their $\mathbf{v}_{u}$ are the averages of their token representations. As for $u_{d}$, if $g_{d}$ does not expand the original question, $\mathbf{v}_{u_{d}}$ is the contextual representation of [NONE]; otherwise, $\mathbf{v}_{u_{d}}$ is the representation of the [SOP] token of the passage concatenated to the question.
+
+In short, the next action is selected from the narrowed action space $\check{A}$ by the scoring function $\pi$ ,
+
$$
a _ {t} = \Pi (b _ {t}) = \underset {a \in \check {A}} {\arg \max } \pi (a \mid b _ {t}). \tag {3}
$$
+
+# 3.3 Training
+
+In the agent, in addition to the encoders $\Psi^{belief}$ and $\Psi^{policy}$ , we need to train the evidence scoring function $\phi$ , link classifier $g_{l}$ , answer extractor $g_{o}$ , and action scoring function $\pi$ , whose losses are $L_{\phi}, L_{l}, L_{o}$ , and $L_{\pi}$ . Since the policy module is dependent on the belief module, we train the agent jointly using the following loss function,
+
+$$
+L = L _ {\phi} + L _ {l} + L _ {o} + L _ {\pi}. \tag {4}
+$$
+
Unlike $\phi$, $g_{l}$, and $g_{o}$, which can be trained with supervised learning from human annotations in QA datasets, the supervision signal for $\pi$ is hard to derive directly from QA datasets. Although policies are usually trained via reinforcement learning, reinforcement learning algorithms (Sutton et al., 2000; Mnih et al., 2015) are often sensitive to the quality of the reward function. For a complex task, the reward function $R$ is often hard to specify and exhausting to tune. Inspired by Choudhury et al. (2017), we instead use imitation learning (IL), querying a model-based oracle online and imitating the action $a^{\star}$ chosen by the oracle, which avoids the hassle of designing $R$ and solves the POMDP in a supervised-learning fashion. Thus, the loss for $\pi$ is defined as the cross entropy,
+
+$$
+L _ {\pi} = - \log \frac {e ^ {\pi (a ^ {\star} | b)}}{\sum_ {a \in \check {A}} e ^ {\pi (a | b)}}, \tag {5}
+$$
+
+where $b$ is the belief state of the agent.
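Numerically, Eq. (5) is a standard softmax cross-entropy over the candidate actions in $\check{A}$. A small, numerically stable sketch (the scores would come from the learned $\pi$; here they are plain floats):

```python
import math


def imitation_loss(scores, expert_index):
    """Cross-entropy imitation loss: -log softmax probability of the oracle's
    action a* among the scored candidate actions."""
    m = max(scores)  # subtract the max before exponentiating, for stability
    log_z = m + math.log(sum(math.exp(s - m) for s in scores))
    return log_z - scores[expert_index]
```

With four equally scored actions the loss is $\log 4$, and it shrinks toward zero as the expert action's score pulls ahead of the others.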
+
The link classifier $g_{l}$ and the answer extractor $g_{o}$ are also optimized with multi-class cross-entropy losses. For $g_{l}$, with loss $L_{l}$, the classification label is the anchor text that links to a gold supporting passage; if there is no such anchor, the pseudo hyperlink [NONE] is labeled. $g_{o}$ is trained as a classifier over start and end positions following previous work (Clark et al., 2020), with loss $L_{o}$. Considering the belief state $b = \langle q, \{p_{1}, p_{2}, \dots, p_{|C|}\} \rangle$, the ListMLE (Xia et al., 2008) ranking loss of the evidence scoring function $\phi$ is defined as the negative log-likelihood of the ground-truth permutation,
+
+$$
+L _ {\phi} (\boldsymbol {y}, b) = - \log P \left(\tau_ {\boldsymbol {y}} \mid \left\{\phi \left(p _ {i} | b\right) \right\} _ {i = 0} ^ {| C |}\right), \tag {6}
+$$
+
where $\pmb{y}$ is the vector of relevance labels of $\{p_0,p_1,\dots ,p_{|C|}\}$ and $\tau_{\pmb{y}}$ is their ground-truth permutation. To learn the dynamic threshold $\phi (p_0|b)$, we set the relevance label of the pseudo passage $p_0$ to $y_0 = 0.5$; passages in $C$ are labeled 1 or 0 according to whether they are gold supporting passages.
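A minimal sketch of the ListMLE loss under the Plackett-Luce model, assuming the scores are already arranged in the ground-truth order implied by the labels (gold passages first, then the pseudo passage $p_0$, then negatives):

```python
import math


def listmle_loss(scores_in_true_order):
    """Negative log-likelihood of the ground-truth permutation: at each
    position i, the i-th item must win a softmax over all remaining items."""
    loss = 0.0
    for i in range(len(scores_in_true_order)):
        tail = scores_in_true_order[i:]
        m = max(tail)  # stabilize the log-sum-exp
        loss += m + math.log(sum(math.exp(s - m) for s in tail))
        loss -= scores_in_true_order[i]
    return loss
```

The loss is zero only when each prefix item dominates the remaining candidates, so scores that respect the gold > threshold > negative ordering are rewarded.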
+
+Model-based Oracle The model-based oracle has full access to the environment and can foresee the gold evidence and answer of every question, which means that the oracle can infer the rank of a supporting passage in the retrieval list of any retrieval action. Thus, given a state, the oracle can easily select a near-optimal one from candidate actions according to a greedy policy $\pi^{\star}$ . Specifically, if all gold evidence is collected and the argument of an answer action is a correct answer, the oracle will select the answer action. Otherwise, the oracle will use a greedy algorithm to select the retrieval action that helps to gather a missing passage of evidence in the fewest steps.
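The oracle's greedy choice can be sketched as follows. All names are illustrative rather than the authors' implementation: `retrieval_lists` maps each candidate retrieval action to its (fully visible) ranked list, and the rank of a missing gold passage stands in for "fewest steps to gather it".

```python
def oracle_action(missing_gold, retrieval_lists, answer_correct):
    """Greedy oracle policy pi*: answer once all gold evidence is collected
    and the proposed answer is correct; otherwise pick the retrieval action
    that surfaces a missing gold passage at the shallowest rank."""
    if not missing_gold and answer_correct:
        return "answer"
    best_action, best_rank = None, float("inf")
    for action, ranked in retrieval_lists.items():
        for rank, passage in enumerate(ranked):
            if passage in missing_gold and rank < best_rank:
                best_action, best_rank = action, rank
                break
    return best_action
```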
+
Belief States Sampling We train the agent on sampled belief states instead of long trajectories. In every epoch, one belief state is sampled for each question. To sample a belief state $\langle q,C\rangle$, we first uniformly sample a subset of $q$'s gold evidence as $C$, which may be empty. However, at test time, the candidate evidence set $C$ cannot be expected to contain only gold evidence. To alleviate the mismatch between the training and testing state distributions, we inject a few negative passages into $C$ and shuffle it. We treat the first passage in the candidate set as the observation and the rest as previously collected evidence.
+
The distribution of injected negative passages can affect test performance. In this work, for simplicity, we sample zero to two passages from the top-ranked negative passages in the retrieval lists of $f_{s}$, $f_{d}$, and $f_{l}$.
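A sketch of this sampling procedure (hypothetical helper; in the paper the negatives come from the top-ranked retrieval results of the three RFs):

```python
import random


def sample_belief_state(question, gold_evidence, negatives, rng=random):
    """Sample a training belief state <q, C>: a uniform subset of the gold
    evidence, plus 0-2 injected negatives, shuffled; the first passage is
    treated as the observation o_t and the rest as collected evidence."""
    subset = [p for p in gold_evidence if rng.random() < 0.5]  # uniform subset
    subset += rng.sample(negatives, k=rng.randint(0, min(2, len(negatives))))
    rng.shuffle(subset)
    observation, collected = (subset[0], subset[1:]) if subset else (None, [])
    return question, observation, collected
```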
+
+# 4 Experiments
+
We evaluate AISO and baselines on two Wikipedia-sourced benchmarks. We first introduce the experimental setup, then report results on evidence gathering and question answering, and finally present detailed analyses.
+
+# 4.1 Experimental Setup
+
Data HotpotQA (Yang et al., 2018) is a multi-hop QA benchmark. We focus on its fullwiki (open-domain) setting$^{1}$. It requires gathering two supporting passages (paragraphs) to answer a question, given the introductory (first) paragraphs of 5M Wikipedia articles dumped on October 1, 2017.
+
SQuAD Open (Chen et al., 2017) is a single-hop QA benchmark whose questions come from the SQuAD dataset (Rajpurkar et al., 2016) and can be answered based on a single passage. We preprocess the Wikipedia dump of December 21, 2016 and extract hyperlinks using WikiExtractor$^{2}$. Following Karpukhin et al. (2020), we split articles into disjoint passages, resulting in 20M passages in total. We add two extra hyperlinks to each passage: one linking to the previous passage in the article and one to the next.
+
Metrics To test whether the top-2 passages in the evidence set exactly cover both gold supporting passages, we use Supporting Passage Exact Match (P EM) as the evaluation metric, following Asai et al. (2020). To evaluate answer extraction, we use EM and F1, following Yang et al. (2018).
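For clarity, P EM is an unordered comparison of the top-2 predicted passages against the gold pair; a sketch:

```python
def supporting_passage_em(predicted, gold_pair):
    """Supporting Passage Exact Match: 1 iff the top-2 passages in the
    predicted evidence set exactly cover both gold supporting passages."""
    return int(set(predicted[:2]) == set(gold_pair))
```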
+
Implementation Details For sparse retrieval, we index all passages in the corpus with Elasticsearch and implement BM25 following Qi et al. $(2019)^{3}$. For dense retrieval, we leverage the trained passage and query encoders from Karpukhin et al. $(2020)^{4}$ and Xiong et al. $(2021)^{5}$ and index all passage vectors offline using FAISS (Johnson et al., 2019). During training, we use an HNSW-based index for efficient low-latency retrieval; at test time, we use an exact inner-product search index for better retrieval results. For link retrieval, we use filtered hyperlinks whose targets are other articles in the dump.
+
Based on Huggingface Transformers (Wolf et al., 2020), we use ELECTRA (Clark et al., 2020) ($d = 768/1024$ for base/large)$^{6}$ to initialize our encoders $\Psi^{belief}$ and $\Psi^{policy}$. The maximum number of passages input to the encoders is set to 3, and the input length is limited to 512 tokens. To avoid high-confidence passages being truncated, we input the evidence passages in descending order of their belief scores from the previous step.

| Strategy | Method | P EM | # read |
| --- | --- | --- | --- |
| $f_s$ | BM25 | 11.11 | 2 |
|  | BM25 + Reranker | 29.60 | 20 |
| $f_d$ | DPR (Karpukhin et al., 2020) | 14.18 | 2 |
| $f_s \circ f_l$ | Semantic Retrieval*♦ | 69.35 | 39.4 |
|  | Entity Centric IR*♥ | 34.90 | - |
| $f_s \circ f_s$ | GoldEn Retriever♣ | 47.77 | 10 |
| $f_d \circ f_d$ | MDR (Xiong et al., 2021) | 64.52 | 2 |
|  | MDR + Reranker†* | 81.20 | ≥200 |
|  | Baleen†* (Khattab et al., 2021) | 86.70 | - |
| $f_s^n$ | CogQA* (Ding et al., 2019) | 57.80 | - |
|  | DDRQA†* (Zhang et al., 2020) | 79.80 | - |
|  | IRRR†* (Qi et al., 2020) | 84.10 | ≥150 |
| $f_s \circ f_l^{n-1}$ | GRR†* (Asai et al., 2020) | 75.70 | ≥500 |
|  | HopRetriever†* (Li et al., 2021) | 82.54 | ≥500 |
|  | HopRetriever-plus†* | 86.94 | >500 |
|  | TPRR†* (Xinyu et al., 2021) | 86.19 | ≥500 |
| $(f_s \Vert f_d)^n$ | DrKit* (Dhingra et al., 2020) | 38.30 | - |
| $(f_s \vert f_d \vert f_l)_{\Pi}^{n}$ | $\mathrm{AISO}_{\mathrm{base}}$ | 85.69 | 36.7 |
|  | $\mathrm{AISO}_{\mathrm{large}}$ | 88.17 | 35.7 |

Table 1: Evidence gathering performance and reading cost on the HotpotQA fullwiki development set. The symbol † denotes baselines that use the large version of pretrained language models, comparable to our $\mathrm{AISO}_{\mathrm{large}}$. Results marked with * are from published papers; the others are our implementations. The symbol $\circ$ denotes sequentially applying RFs, $f^n$ denotes applying the RF $f$ multiple times, $\|$ denotes combining the results of different RFs, and $(\cdot|\cdot)_{\Pi}$ means choosing one of the RFs at each step according to the policy $\Pi$. ♦: (Nie et al., 2019), ♥: (Das et al., 2019b), ♣: (Qi et al., 2019)
+
To accelerate model training, $\Psi^{belief}$ and $\Psi^{policy}$ share parameters for the first 24 epochs and are trained separately for the next 6 epochs. The batch size is 32. We use Adam with a learning rate of $2 \times 10^{-5}$. To select the best agent (QA model), we first save several checkpoints that perform well on heuristic single-step metrics, such as action accuracy, and then choose the one that performs best over the whole process on the development set. At test time, the number of interaction steps is limited to $T$; we set $T = 1000$ unless otherwise specified. Once the agent has exhausted its step budget, it is forced to answer the question.
+
+# 4.2 Results
+
Evidence Gathering We first evaluate performance and reading cost on evidence gathering, illustrating the effectiveness and efficiency of AISO. In Table 1, we split evidence gathering methods into different groups according to their strategies. The first three groups are traditional pipeline approaches, and the others are iterative approaches.

| Method | Dev Ans EM | Dev Ans F1 | Dev Sup EM | Dev Sup F1 | Dev Joint EM | Dev Joint F1 | Test Ans EM | Test Ans F1 | Test Sup EM | Test Sup F1 | Test Joint EM | Test Joint F1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Semantic Retrieval (Nie et al., 2019) | 46.5 | 58.8 | 39.9 | 71.5 | 26.6 | 49.2 | 45.3 | 57.3 | 38.7 | 70.8 | 25.1 | 47.6 |
| GoldEn Retriever (Qi et al., 2019) | - | - | - | - | - | - | 37.9 | 49.8 | 30.7 | 64.6 | 18.0 | 39.1 |
| CogQA (Ding et al., 2019) | 37.6 | 49.4 | 23.1 | 58.5 | 12.2 | 35.3 | 37.1 | 48.9 | 22.8 | 57.7 | 12.4 | 34.9 |
| DDRQA† (Zhang et al., 2020) | 62.9 | 76.9 | 51.3 | 79.1 | - | - | 62.5 | 75.9 | 51.0 | 78.9 | 36.0 | 63.9 |
| IRRR+†* (Qi et al., 2020) | - | - | - | - | - | - | 66.3 | 79.9 | 57.2 | 82.6 | 43.1 | 69.8 |
| MUPPET (Feldman and El-Yaniv, 2019) | 31.1 | 40.4 | 17.0 | 47.7 | 11.8 | 27.6 | 30.6 | 40.3 | 16.7 | 47.3 | 10.9 | 27.0 |
| MDR† (Xiong et al., 2021) | 62.3 | 75.1 | 56.5 | 79.4 | 42.1 | 66.3 | 62.3 | 75.3 | 57.5 | 80.9 | 41.8 | 66.6 |
| GRR† (Asai et al., 2020) | 60.5 | 73.3 | 49.2 | 76.1 | 35.8 | 61.4 | 60.0 | 73.0 | 49.1 | 76.4 | 35.4 | 61.2 |
| HopRetriever† (Li et al., 2021) | 62.2 | 75.2 | 52.5 | 78.9 | 37.8 | 64.5 | 60.8 | 73.9 | 53.1 | 79.3 | 38.0 | 63.9 |
| HopRetriever-plus† (Li et al., 2021) | 66.6 | 79.2 | 56.0 | 81.8 | 42.0 | 69.0 | 64.8 | 77.8 | 56.1 | 81.8 | 41.0 | 67.8 |
| EBS-Large* | - | - | - | - | - | - | 66.2 | 79.3 | 57.3 | 84.0 | 42.0 | 70.0 |
| TPRR†* (Xinyu et al., 2021) | 67.3 | 80.1 | 60.2 | 84.5 | 45.3 | 71.4 | 67.0 | 79.5 | 59.4 | 84.3 | 44.4 | 70.8 |
| $\mathrm{AISO}_{\mathrm{base}}$ | 63.5 | 76.5 | 55.1 | 81.9 | 40.2 | 66.9 | - | - | - | - | - | - |
| $\mathrm{AISO}_{\mathrm{large}}$ | 68.1 | 80.9 | 61.5 | 86.5 | 45.9 | 72.5 | 67.5 | 80.5 | 61.2 | 86.0 | 44.9 | 72.0 |

Table 2: Answer extraction and supporting sentence identification performance on HotpotQA fullwiki (Dev and Test). The methods marked with † use the large version of pretrained language models, comparable to $\mathrm{AISO}_{\mathrm{large}}$. Results marked with * are from the official leaderboard; the others are from published papers.

| Method | EM | F1 | # read |
| --- | --- | --- | --- |
| DrQA (Chen et al., 2017) | 27.1 | - | 5 |
| Multi-passage BERT (Wang et al., 2019b) | 53.0 | 60.9 | 100 |
| DPR (Karpukhin et al., 2020) | 29.8 | - | 100 |
| BM25+DPR (Karpukhin et al., 2020) | 36.7 | - | 100 |
| Multi-step Reasoner (Das et al., 2019a) | 31.9 | 39.2 | 5 |
| MUPPET (Feldman and El-Yaniv, 2019) | 39.3 | 46.2 | 45 |
| GRR† (Asai et al., 2020) | 56.5 | 63.8 | ≥ 500 |
| SPARTA† (Zhao et al., 2021) | 59.3 | 66.5 | - |
| IRRR† (Qi et al., 2020) | 56.8 | 63.2 | ≥ 150 |
| $\mathrm{AISO}_{\mathrm{large}}$ | 59.5 | 67.6 | 24.8 |

Table 3: Question answering performance on the SQuAD Open benchmark. † denotes methods that use large pretrained language models comparable to $\mathrm{AISO}_{\mathrm{large}}$.
+
For effectiveness, we can conclude that 1) almost all iterative approaches perform better than the pipeline methods, and 2) the proposed adaptive information-seeking approach $\mathrm{AISO}_{\mathrm{large}}$ outperforms all previous methods and achieves state-of-the-art performance. Moreover, our $\mathrm{AISO}_{\mathrm{base}}$ model outperforms several baselines that use the large version of pretrained language models, such as HopRetriever, GRR, IRRR, DDRQA, and MDR.
+
For efficiency, the cost of answering an open-domain question includes the retrieval cost and the reading cost. Since reading a passage together with the question online is much more expensive than a search, the total cost is roughly linear in # read, reported in the last column of Table 1. # read is the total number of passages read along with the question throughout the process, which equals the adaptive number of steps. The AISO models read about 35 passages per question, far fewer than the competitive baselines (P EM $> 80$), which need to read at least 150 passages. In other words, AISO is efficient in practice.
+
Question Answering Benefiting from high-performance evidence gathering, as shown in Tables 2 and 3, AISO outperforms all existing methods across the evaluation metrics on the HotpotQA fullwiki and SQuAD Open benchmarks. This demonstrates that AISO is applicable to both multi-hop and single-hop questions. Notably, on the HotpotQA fullwiki blind test set$^{7}$, $\mathrm{AISO}_{\mathrm{large}}$ significantly outperforms the second-place TPRR (Xinyu et al., 2021) by $2.02\%$ in Sup F1 (supporting sentence identification) and $1.69\%$ in Joint F1.
+
+# 4.3 Analysis
+
We conduct a detailed analysis of $\mathrm{AISO}_{\mathrm{base}}$ on the HotpotQA fullwiki development set.
+
The effect of the belief and policy modules As shown in the second part of Table 4, we examine variants of AISO with the oracle evidence scoring function $\phi^{\star}$ or the oracle action scoring function $\pi^{\star}$, which are key components of the belief and policy modules. When we replace our learned evidence scoring function with $\phi^{\star}$, which identifies supporting passages perfectly, performance increases substantially while the reading cost changes little. This means that the belief module affects performance more than cost. If we further replace the learned $\pi$ with $\pi^{\star}$, the cost drops sharply, showing that a good policy can greatly improve efficiency.

| Model | P EM | Ans F1 | # read |
| --- | --- | --- | --- |
| $\mathrm{AISO}_{\mathrm{base}}$ | 85.69 | 76.45 | 36.64 |
| w. $\phi^{\star}$ | 97.52 | 79.99 | 40.01 |
| w. $\phi^{\star}$ + $\pi^{\star}$ | 98.88 | 80.34 | 8.92 |
| $f_s^t$ | 68.51 | 67.33 | 58.74 |
| $f_d^t$ | 79.80 | 72.91 | 68.63 |
| $(f_d \vert f_l)_{\Pi}^{n}$ | 83.97 | 74.93 | 61.41 |
| $(f_s \vert f_l)_{\Pi}^{n}$ | 82.44 | 74.44 | 37.76 |
| $(f_s \vert f_d)_{\Pi}^{n}$ | 79.66 | 73.36 | 42.01 |

Table 4: Analysis experiments on HotpotQA fullwiki.
+
The impact of retrieval functions As shown in the last part of Table 4, using a single RF, such as $f_{s}^{t}$ or $f_{d}^{t}$, leads to poor performance and low efficiency. Moreover, removing any RF degrades performance, which shows that all RFs contribute. In particular, although the link RF $f_{l}$ cannot be used alone, it contributes the most to both performance and efficiency. Besides, the sparse RF $f_{s}$ may be better at shortening the information-seeking process than the dense RF $f_{d}$, since removing $f_{s}$ from the action space increases the number of read passages from 36.64 to 61.41. We conjecture this is because $f_{s}$ can rank evidence that matches a salient query very high.
+
The impact of the maximum number of steps As shown in Figure 3, as the step limit $T$ is relaxed, $\mathrm{AISO}_{\mathrm{base}}$ can filter out negative passages and eventually observe low-ranked evidence through more steps, so its performance improves and tends to converge. The cost, however, is that more passages must be read. Besides, once $T$ exceeds 1000, only a few questions (about $1\%$) benefit from the additional steps.
+
Figure 3: Performance and cost of $\mathrm{AISO_{base}}$ on the HotpotQA development set with different step limits.

The ability to recover from mistakes We count three types of mistakes in evidence gathering on the HotpotQA development set. In the process of collecting evidence for 7405 questions, false evidence was added to the evidence set for 1061 questions, true evidence was missed for 449 questions, and true evidence was deleted from the evidence set for 131 questions. We find that AISO recovered from $17.7\%$, $43.9\%$, and $35.9\%$ of these three types of errors respectively, which implies that even without beam search, $\mathrm{AISO}_{\mathrm{base}}$ can make up for previous mistakes to some extent. We can also see that false evidence is the most harmful to evidence gathering and the most difficult to remedy.
+
+# 5 Conclusion and Future Work
+
This work presents an adaptive information-seeking approach for open-domain question answering, called AISO. It models the open-domain QA task as a POMDP in which the environment contains a large corpus and the agent sequentially selects a retrieval function and reformulates the query to collect evidence. AISO achieves state-of-the-art results on two public datasets, which demonstrates the necessity of different retrieval functions for different questions. In the future, we will explore other adaptive retrieval strategies, such as directly optimizing various information-seeking metrics with reinforcement learning techniques.
+
+# Ethical Considerations
+
We honor and support the ACL Code of Ethics. This paper focuses on information seeking and question answering, aiming to answer questions in the open-domain setting. Such techniques can be widely used in search engines and QA systems and can help people find information more accurately and efficiently. The datasets we use are all from previously published works and do not involve privacy or ethical issues.
+
+# Acknowledgements
+
+This work was supported by National Natural Science Foundation of China (NSFC) under Grants No. 61906180, No. 61773362 and No. 91746301, National Key R&D Program of China under Grants 2020AAA0105200. The authors would like to thank Changying Hao for valuable suggestions on this work.
+
+# References
+
+Akari Asai, Kazuma Hashimoto, Hannaneh Hajishirzi, Richard Socher, and Caiming Xiong. 2020. Learning to retrieve reasoning paths over wikipedia graph for question answering. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
+Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open-domain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870-1879, Vancouver, Canada. Association for Computational Linguistics.
+Sanjiban Choudhury, Ashish Kapoor, Gireeja Ranade, Sebastian A. Scherer, and Debadeepta Dey. 2017. Adaptive information gathering via imitation learning. In Robotics: Science and Systems 2017, volume 13.
+Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: pretraining text encoders as discriminators rather than generators. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
+Rajarshi Das, Shehzaad Dhuliawala, Manzil Zaheer, and Andrew McCallum. 2019a. Multi-step retriever-reader interaction for scalable open-domain question answering. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.
+Rajarshi Das, Ameya Godbole, Dilip Kavarthapu, Zhiyu Gong, Abhishek Singhal, Mo Yu, Xiaoxiao Guo, Tian Gao, Hamed Zamani, Manzil Zaheer, and Andrew McCallum. 2019b. Multi-step entity-centric information retrieval for multi-hop question answering. In Proceedings of the 2nd Workshop on Machine Reading for Question Answering, pages 113-118, Hong Kong, China. Association for Computational Linguistics.
+Bhuwan Dhingra, Manzil Zaheer, Vidhisha Balachandran, Graham Neubig, Ruslan Salakhutdinov, and William W. Cohen. 2020. Differentiable reasoning over a virtual knowledge base. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
+
+Ming Ding, Chang Zhou, Qibin Chen, Hongxia Yang, and Jie Tang. 2019. Cognitive graph for multi-hop reading comprehension at scale. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2694-2703, Florence, Italy. Association for Computational Linguistics.
+Yair Feldman and Ran El-Yaniv. 2019. Multi-hop paragraph retrieval for open-domain question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2296-2309, Florence, Italy. Association for Computational Linguistics.
+Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. Retrieval augmented language model pre-training. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pages 3929-3938. PMLR.
+Gautier Izacard and Edouard Grave. 2021. Leveraging passage retrieval with generative models for open domain question answering. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 874-880, Online. Association for Computational Linguistics.
+Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2019. Billion-scale similarity search with GPUs. IEEE Transactions on Big Data.
+Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769-6781, Online. Association for Computational Linguistics.
+Omar Khattab, Christopher Potts, and Matei Zaharia. 2021. Baleen: Robust multi-hop reasoning at scale via condensed retrieval. arXiv preprint arXiv:2101.00436.
+Jinhyuk Lee, Seongjun Yun, Hyunjae Kim, Miyoung Ko, and Jaewoo Kang. 2018. Ranking paragraphs for improving answer recall in open-domain question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 565-569, Brussels, Belgium. Association for Computational Linguistics.
+Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6086-6096, Florence, Italy. Association for Computational Linguistics.
+Shaobo Li, Xiaoguang Li, Lifeng Shang, Xin Jiang, Qun Liu, Chengjie Sun, Zhenzhou Ji, and Bingquan Liu. 2021. HopRetriever: Retrieve hops over Wikipedia to answer complex questions. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 13279-13287.
+Yankai Lin, Haozhe Ji, Zhiyuan Liu, and Maosong Sun. 2018. Denoising distantly supervised open-domain question answering. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1736-1745, Melbourne, Australia. Association for Computational Linguistics.
+Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. 2015. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533.
+Yixin Nie, Songhe Wang, and Mohit Bansal. 2019. Revealing the importance of semantic retrieval for machine reading at scale. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2553-2566, Hong Kong, China. Association for Computational Linguistics.
+Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Lixin Su, and Xueqi Cheng. 2019. HAS-QA: Hierarchical answer spans model for open-domain question answering. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6875-6882.
+Peng Qi, Haejun Lee, Oghenetegiri Sido, Christopher D Manning, et al. 2020. Retrieve, rerank, read, then iterate: Answering open-domain questions of arbitrary complexity from text. arXiv preprint arXiv:2010.12527.
+Peng Qi, Xiaowen Lin, Leo Mehr, Zijian Wang, and Christopher D. Manning. 2019. Answering complex open-domain questions through iterative query generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2590-2602, Hong Kong, China. Association for Computational Linguistics.
+Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383-2392, Austin, Texas. Association for Computational Linguistics.
+Stephen Robertson and Hugo Zaragoza. 2009. The probabilistic relevance framework: BM25 and beyond. Found. Trends Inf. Retr., 3(4):333-389.
+Richard S Sutton, David A McAllester, Satinder P Singh, and Yishay Mansour. 2000. Policy gradient methods for reinforcement learning with function approximation. In Advances in neural information processing systems, pages 1057-1063.
+
+Ellen M Voorhees et al. 1999. The TREC-8 question answering track report. In TREC, volume 99, pages 77-82. Citeseer.
+Haoyu Wang, Mo Yu, Xiaoxiao Guo, Rajarshi Das, Wenhan Xiong, and Tian Gao. 2019a. Do multi-hop readers dream of reasoning chains? In Proceedings of the 2nd Workshop on Machine Reading for Question Answering, pages 91-97, Hong Kong, China. Association for Computational Linguistics.
+Shuohang Wang, Mo Yu, Xiaoxiao Guo, Zhiguo Wang, Tim Klinger, Wei Zhang, Shiyu Chang, Gerry Tesauro, Bowen Zhou, and Jing Jiang. 2018. R³: Reinforced ranker-reader for open-domain question answering. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32.
+Zhiguo Wang, Patrick Ng, Xiaofei Ma, Ramesh Nallapati, and Bing Xiang. 2019b. Multi-passage BERT: A globally normalized BERT model for open-domain question answering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5878-5882, Hong Kong, China. Association for Computational Linguistics.
+Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
+Fen Xia, Tie-Yan Liu, Jue Wang, Wensheng Zhang, and Hang Li. 2008. Listwise approach to learning to rank: theory and algorithm. In Machine Learning, Proceedings of the Twenty-Fifth International Conference (ICML 2008), Helsinki, Finland, June 5-9, 2008, volume 307 of ACM International Conference Proceeding Series, pages 1192-1199. ACM.
+Xinyu Zhang, Ke Zhan, Enrui Hu, Chengzhen Fu, Lan Luo, Hao Jiang, Yantao Jia, Fan Yu, Zhicheng Dou, Zhao Cao, and Lei Chen. 2021. Answer complex questions: Path ranker is all you need. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '21, New York, NY, USA. Association for Computing Machinery.
+Wenhan Xiong, Xiang Li, Srini Iyer, Jingfei Du, Patrick Lewis, William Yang Wang, Yashar Mehdad, Scott Yih, Sebastian Riedel, Douwe Kiela, and Barlas Oguz. 2021. Answering complex open-domain questions with multi-hop dense retrieval. In International Conference on Learning Representations.
+
+Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2369-2380, Brussels, Belgium. Association for Computational Linguistics.
+Shunyu Yao, Rohan Rao, Matthew Hausknecht, and Karthik Narasimhan. 2020. Keep CALM and explore: Language models for action generation in text-based games. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8736-8754, Online. Association for Computational Linguistics.
+Yuyu Zhang, Ping Nie, Arun Ramamurthy, and Le Song. 2020. DDRQA: Dynamic document reranking for open-domain multi-hop question answering. arXiv preprint arXiv:2009.07465.
+Chen Zhao, Chenyan Xiong, Corby Rosset, Xia Song, Paul N. Bennett, and Saurabh Tiwary. 2020. Transformer-XH: Multi-evidence reasoning with extra hop attention. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
+Tiancheng Zhao, Xiaopeng Lu, and Kyusong Lee. 2021. SPARTA: Efficient open-domain question answering via sparse transformer matching retrieval. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 565-575, Online. Association for Computational Linguistics.
\ No newline at end of file
diff --git a/adaptiveinformationseekingforopendomainquestionanswering/images.zip b/adaptiveinformationseekingforopendomainquestionanswering/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..0f8b0f2bf206b29776481a2bfa1e997683c9db00
--- /dev/null
+++ b/adaptiveinformationseekingforopendomainquestionanswering/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f3f5b006bcfbe5f61d47347dc49b0681e3703551a7fc3b67ddfaabc53cb75a57
+size 378677
diff --git a/adaptiveinformationseekingforopendomainquestionanswering/layout.json b/adaptiveinformationseekingforopendomainquestionanswering/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..dfeb7af0829828cd2681020013b80fdcfc2c9c0c
--- /dev/null
+++ b/adaptiveinformationseekingforopendomainquestionanswering/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3925653a3d5ae35ece96d6b8fefb6ac8bed48b141bf6628f70f0c0f9c56b511f
+size 574977
diff --git a/adaptiveproposalgenerationnetworkfortemporalsentencelocalizationinvideos/51a8b223-472d-4ecb-986e-b1002054a829_content_list.json b/adaptiveproposalgenerationnetworkfortemporalsentencelocalizationinvideos/51a8b223-472d-4ecb-986e-b1002054a829_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..ea2e5aa14d7e1d10f46dd53b591e57e848866bae
--- /dev/null
+++ b/adaptiveproposalgenerationnetworkfortemporalsentencelocalizationinvideos/51a8b223-472d-4ecb-986e-b1002054a829_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7f1b5f33b841996434ac16c492b0cae68191354144900cd00bf9e51c7cedcbc2
+size 77429
diff --git a/adaptiveproposalgenerationnetworkfortemporalsentencelocalizationinvideos/51a8b223-472d-4ecb-986e-b1002054a829_model.json b/adaptiveproposalgenerationnetworkfortemporalsentencelocalizationinvideos/51a8b223-472d-4ecb-986e-b1002054a829_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..ce2c39aa88a783f4f35dae5f984bb1d10e836fd7
--- /dev/null
+++ b/adaptiveproposalgenerationnetworkfortemporalsentencelocalizationinvideos/51a8b223-472d-4ecb-986e-b1002054a829_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ae4b8c8be2723cfcfc97fa339d7124d41fd7d1eec591160c2a765f3cb283ca6d
+size 92029
diff --git a/adaptiveproposalgenerationnetworkfortemporalsentencelocalizationinvideos/51a8b223-472d-4ecb-986e-b1002054a829_origin.pdf b/adaptiveproposalgenerationnetworkfortemporalsentencelocalizationinvideos/51a8b223-472d-4ecb-986e-b1002054a829_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..02e033ff0f125ba8aed644bee830e5bf55939f11
--- /dev/null
+++ b/adaptiveproposalgenerationnetworkfortemporalsentencelocalizationinvideos/51a8b223-472d-4ecb-986e-b1002054a829_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:aa48eb1c354585d5abd194a2333ccdaa7276d4665fc4f79b59b4148403e8ac5f
+size 1545440
diff --git a/adaptiveproposalgenerationnetworkfortemporalsentencelocalizationinvideos/full.md b/adaptiveproposalgenerationnetworkfortemporalsentencelocalizationinvideos/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..6f6e1a3ada3dc2b5d675709fe446036e62f41e9a
--- /dev/null
+++ b/adaptiveproposalgenerationnetworkfortemporalsentencelocalizationinvideos/full.md
@@ -0,0 +1,338 @@
+# Adaptive Proposal Generation Network for Temporal Sentence Localization in Videos
+
+Daizong Liu $^{1,2*}$ , Xiaoye Qu $^{3*}$ , Jianfeng Dong $^{4}$ , Pan Zhou $^{1\dagger}$
+
+1The Hubei Engineering Research Center on Big Data Security, School of Cyber Science and Engineering, Huazhong University of Science and Technology
+2School of Electronic Information and Communication, Huazhong University of Science and Technology
+3Huawei Cloud
+4Zhejiang Gongshang University
+
+{dzliu,panzhou}@hust.edu.cn, quxiaoye@huawei.com, dongjf24@gmail.com
+
+# Abstract
+
+We address the problem of temporal sentence localization in videos (TSLV). Traditional methods follow a top-down framework which localizes the target segment with pre-defined segment proposals. Although they have achieved decent performance, the proposals are handcrafted and redundant. Recently, the bottom-up framework has attracted increasing attention due to its superior efficiency: it directly predicts the probability of each frame being a boundary. However, the performance of bottom-up models is inferior to their top-down counterparts, as they fail to exploit segment-level interactions. In this paper, we propose an Adaptive Proposal Generation Network (APGN) that maintains segment-level interactions while improving efficiency. Specifically, we first perform a foreground-background classification over the video and regress on the foreground frames to adaptively generate proposals. In this way, handcrafted proposal design is discarded and the number of redundant proposals is reduced. A proposal consolidation module is then developed to enhance the semantics of the generated proposals. Finally, we locate the target moments with these generated proposals following the top-down framework. Extensive experiments on three challenging benchmarks show that our proposed APGN significantly outperforms previous state-of-the-art methods.
+
+# 1 Introduction
+
+Temporal sentence localization in videos is an important yet challenging task in natural language processing, which has drawn increasing attention over the last few years due to its vast potential applications in information retrieval (Dong et al., 2019; Yang et al., 2020) and human-computer interaction (Singha et al., 2018). It aims to ground the most relevant video segment according to a given
+
+
+Figure 1: (a) An example of temporal sentence localization in videos. (b) The Top-Down framework predicts the confidence scores of a large number of pre-defined proposals for ranking. (c) The Bottom-Up framework regresses the probabilities of all frames as start or end boundaries.
+
+sentence query. As shown in Figure 1 (a), most parts of video contents are irrelevant to the query (background) while only a short segment matches it (foreground). Therefore, video and query information need to be deeply incorporated to distinguish the fine-grained details of different video segments.
+
+Most previous works (Gao et al., 2017; Chen et al., 2018; Zhang et al., 2019; Yuan et al., 2019a; Zhang et al., 2020b; Liu et al., 2021, 2020a,b) follow the top-down framework, which pre-defines a large set of segment candidates (a.k.a. proposals) in the video with sliding windows and measures the similarity between the query and each candidate. The best segment is then selected according to the similarity. Although these methods achieve significant performance, they are sensitive to the proposal quality and localize slowly due to the redundant proposals. Recently, several works (Rodriguez et al., 2020; Zhang et al., 2020a; Yuan et al., 2019b) exploit the bottom-up framework, which directly predicts the probability of each frame being the start or end boundary of the segment. These methods are proposal-free and much more efficient. However, they neglect the rich information between start and end boundaries and fail to capture segment-level interactions. Thus, the performance of bottom-up models has so far lagged behind that of their top-down counterparts.
+
+To avoid the inherent drawbacks of proposal design in the top-down framework while maintaining localization performance, in this paper we propose an adaptive proposal generation network (APGN) as an efficient and effective localization approach. Firstly, we perform boundary regression on the foreground frames to generate proposals, where the foreground frames are obtained by a foreground-background classification over the entire video. In this way, the noisy responses on the background frames are attenuated, and the generated proposals are more adaptive and discriminative than pre-defined ones. Secondly, we perform proposal ranking to select the target segment in a top-down manner upon these generated proposals. As the number of proposals is much smaller than in pre-defined methods, the ranking stage is more efficient. Furthermore, we additionally model proposal-wise relations to distinguish their fine-grained semantic details before the proposal ranking stage.
+
+To achieve the above framework, APGN first generates query-guided video representations after encoding video and query features, and then predicts the foreground frames with a binary classification module. Subsequently, a regression module generates a proposal on each foreground frame by regressing the distances from that frame to the start and end segment boundaries. At this point, each generated proposal carries only independent, coarse semantics. To capture higher-level interactions among proposals, we encode proposal-wise features by incorporating both positional and semantic information, and represent the proposals as nodes of a proposal graph to reason about the correlations among them. Consequently, each updated proposal obtains more fine-grained details for the subsequent boundary refinement.
+
+Our contributions are summarized as follows:
+
+- We propose an adaptive proposal generation network (APGN) for TSLV task, which adaptively generates discriminative proposals without handcrafted design, thus making localization both effective and efficient.
+- To further refine the semantics of the generated proposals, we introduce a proposal graph to consolidate proposal-wise features by reasoning their higher-order relations.
+- We conduct experiments on three challenging datasets (ActivityNet Captions, TACoS, and Charades-STA), and results show that our proposed APGN significantly outperforms the existing state-of-the-art methods.
+
+# 2 Related Work
+
+Temporal sentence localization in videos is a recently introduced task (Gao et al., 2017; Anne Hendricks et al., 2017), which aims to localize the most relevant video segment from a video given a sentence description. Various algorithms (Anne Hendricks et al., 2017; Gao et al., 2017; Chen et al., 2018; Zhang et al., 2019; Yuan et al., 2019a; Zhang et al., 2020b; Qu et al., 2020; Yang et al., 2021) have been proposed within the top-down framework, which first samples candidate segments from a video, then integrates the sentence representation with each video segment individually and evaluates their matching relationship. Some of them (Anne Hendricks et al., 2017; Gao et al., 2017) use sliding windows as proposals and compare each proposal with the input query in a joint multi-modal embedding space. To improve the quality of the proposals, (Zhang et al., 2019; Yuan et al., 2019a) pre-cut the video at each frame by multiple pre-defined temporal scales, and directly integrate sentence information with fine-grained video clips for scoring. (Zhang et al., 2020b) further builds a 2D temporal map that constructs all possible segment candidates by treating each frame as a start or end boundary, and matches their semantics with the query information. Although these methods achieve great performance, they are severely limited by the heavy computation of proposal matching/ranking and are sensitive to the quality of the pre-defined proposals.
+
+Recently, many methods (Rodriguez et al., 2020; Chen et al., 2020; Yuan et al., 2019b; Mun et al., 2020; Zeng et al., 2020; Zhang et al., 2020a; Nan et al., 2021) utilize the bottom-up framework to overcome the above drawbacks. They do not rely on segment proposals and directly select the starting and ending frames by leveraging cross-modal interactions between video and query. Specifically, they predict two probabilities at each frame, indicating whether that frame is the start or end frame of the ground-truth video segment. Although these methods perform segment localization more efficiently, they lose the segment-level interaction, and the redundant regression on background frames may introduce noise into the boundary decision, leading to worse localization performance than top-down methods.
+
+In this paper, we propose to preserve the segment-level interaction while improving the
+
+
+Figure 2: Overall architecture of APGN. (a) Given a video and a query, we first encode and interact them to obtain query-guided video features. (b) Then, along with regressing boundaries on each frame, we perform foreground-background classification to identify the foreground frames whose corresponding predicted boundaries are further taken as the generated segment proposals. (c) We further encode each proposal and refine them using a graph convolutional network. (d) At last, we predict the confidence score and boundary offset for each proposal.
+
+localization efficiency. Specifically, we design a binary classification module over the entire video to filter out the background responses, which helps the model focus on the discriminative frames. At the same time, we replace the pre-defined proposals with generated ones and utilize a proposal graph for refinement.
+
+# 3 The Proposed Method
+
+# 3.1 Overview
+
+Given an untrimmed video $V$ and a sentence query $Q$ , the TSLV task aims to localize the start and end timestamps $(\tau_s, \tau_e)$ of a specific video segment referring to the sentence query. We focus on addressing this task by adaptively generating proposals. To this end, we propose a binary classification module to filter out the redundant responses on background frames. Then, each foreground frame with its regressed start-end boundaries are taken as the generated segment proposal. In this way, the number of the generated proposals is much smaller than the number of pre-defined ones, making the model more efficient. Besides, a proposal graph is further developed to refine proposal features by learning their higher-level interactions. Finally, the confidence score and boundary offset are predicted for each proposal. Figure 2 illustrates the overall architecture of our APGN.
+
+# 3.2 Feature Encoders
+
+Video encoder. Given a video $V$ , we represent it as $V = \{v_{t}\}_{t=1}^{T}$ , where $v_{t}$ is the $t$ -th frame and $T$ is the length of the entire video. We first extract the features by a pre-trained network, and then employ a self-attention (Vaswani et al., 2017) module to capture the long-range dependencies among video frames. We also utilize a Bi-GRU (Chung et al., 2014) to learn the sequential characteristics. The final video features are denoted as $\mathbf{V} = \{\mathbf{v}_t\}_{t=1}^T \in \mathbb{R}^{T \times D}$ , where $D$ is the feature dimension.
+
+Query encoder. Given a query $Q = \{q_{n}\}_{n=1}^{N}$ , where $q_{n}$ is the $n$ -th word and $N$ is the length of the query, and following previous works (Zhang et al., 2019; Zeng et al., 2020), we first generate the word-level embeddings using GloVe (Pennington et al., 2014), and likewise employ a self-attention module and a Bi-GRU layer to further encode the query features as $\mathbf{Q} = \{\mathbf{q}_{n}\}_{n=1}^{N} \in \mathbb{R}^{N \times D}$ .
+
+Video-Query interaction. After obtaining the encoded features $V, Q$ , we utilize a co-attention mechanism (Lu et al., 2019) to capture the cross-modal interactions between video and query features. Specifically, we first calculate the similarity scores between $V$ and $Q$ as:
+
+$$
+\boldsymbol{S} = \boldsymbol{V}\left(\boldsymbol{Q}\boldsymbol{W}_{S}\right)^{\mathrm{T}} \in \mathbb{R}^{T \times N}, \tag{1}
+$$
+
+where $\mathbf{W}_S\in \mathbb{R}^{D\times D}$ projects the query features into the same latent space as the video. Then, we compute two attention weights as:
+
+$$
+\boldsymbol{A} = \boldsymbol{S}_{r}(\boldsymbol{Q}\boldsymbol{W}_{S}) \in \mathbb{R}^{T \times D}, \quad \boldsymbol{B} = \boldsymbol{S}_{r}\boldsymbol{S}_{c}^{\mathrm{T}}\boldsymbol{V} \in \mathbb{R}^{T \times D}, \tag{2}
+$$
+
+where $S_{r}$ and $S_{c}$ are the row- and column-wise softmax results of $S$ , respectively. We compose the final query-guided video representation by learning its sequential features as follows:
+
+$$
+\widetilde{\boldsymbol{V}} = \operatorname{BiGRU}([\boldsymbol{V}; \boldsymbol{A}; \boldsymbol{V} \odot \boldsymbol{A}; \boldsymbol{V} \odot \boldsymbol{B}]) \in \mathbb{R}^{T \times D}, \tag{3}
+$$
+
+where $\widetilde{\boldsymbol{V}} = \{\widetilde{\boldsymbol{v}}_t\}_{t=1}^T$ , BiGRU( $\cdot$ ) denotes the Bi-GRU layers, $[;]$ is the concatenate operation, and $\odot$ is the element-wise multiplication.
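The interaction above reduces to a few matrix products. Below is a minimal NumPy sketch of Eqs. (1)-(2) and of the concatenation that Eq. (3) feeds into the Bi-GRU; the Bi-GRU itself is omitted, and all names are illustrative rather than the authors' code.

```python
import numpy as np

def softmax(x, axis):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def co_attention(V, Q, W_S):
    """Query-guided features per Eqs. (1)-(2): returns the (T, 4D)
    concatenation [V; A; V*A; V*B] that Eq. (3) passes to a Bi-GRU.

    V: (T, D) video features, Q: (N, D) query features, W_S: (D, D).
    """
    QW = Q @ W_S                 # project query into the video space
    S = V @ QW.T                 # (T, N) similarity, Eq. (1)
    S_r = softmax(S, axis=1)     # row-wise softmax
    S_c = softmax(S, axis=0)     # column-wise softmax
    A = S_r @ QW                 # (T, D), query-attended features
    B = S_r @ S_c.T @ V          # (T, D), video-attended features
    return np.concatenate([V, A, V * A, V * B], axis=1)
```

For a video of 6 frames and a 4-word query with D = 8, the output has shape (6, 32).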
+
+# 3.3 Proposal Generation
+
+Given the query-guided video features $\widetilde{\pmb{V}}$ , we aim to generate the proposal tuple $(t,l_s^t,l_e^t)$ based on each foreground frame $v_{t}$ , where $l_s^t, l_e^t$ denote the distances from frame $v_{t}$ to the starting and ending segment boundaries, respectively. To this end, we first perform binary classification on all frames to distinguish foreground from background, and then treat the foreground frames as positive samples and regress the segment boundaries on them to form the generated proposals.
+
+Foreground-Background classification. In the TSLV task, most videos are more than two minutes long, while the lengths of annotated target segments range from several seconds to one minute (e.g., on the ActivityNet Captions dataset). The background frames therefore introduce considerable noise that may disturb accurate segment localization. To alleviate this, we first classify the background frames and filter out their responses in the subsequent regression. Using the foreground/background annotations, we design a binary classification module with three fully-connected (FC) layers to predict the class $y_{t}$ of each video frame. Considering the unbalanced foreground/background distribution, we formulate the balanced binary cross-entropy loss as:
+
+$$
+\mathcal{L}_{\text{class}} = -\sum_{t=1}^{T_{\text{back}}} \frac{T_{\text{back}}}{T} \log\left(y_{t}\right) - \sum_{t=1}^{T_{\text{fore}}} \frac{T_{\text{fore}}}{T} \log\left(1 - y_{t}\right), \tag{4}
+$$
+
+where $T_{\text{fore}}, T_{\text{back}}$ are the numbers of foreground and background frames, and $T$ is the total number of video frames. Therefore, we can differentiate foreground from background frames during both training and testing.
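As a concrete illustration, the sketch below implements a class-balanced binary cross-entropy in the spirit of Eq. (4), weighting each class's term by the opposite class's share of the video so that the sparse foreground is not drowned out. The exact weighting convention and label encoding here are assumptions, not taken from the paper's implementation.

```python
import numpy as np

def balanced_bce(probs, labels):
    """Class-balanced binary cross-entropy in the spirit of Eq. (4).

    probs:  (T,) predicted foreground probabilities in (0, 1).
    labels: (T,) 1 for foreground frames, 0 for background.
    Each class's log-loss is weighted by the opposite class's share
    of the video (an assumed convention, not the paper's exact code).
    """
    T = len(labels)
    fore = labels == 1
    w_fore = (T - fore.sum()) / T    # weight foreground terms by T_back / T
    w_back = fore.sum() / T          # weight background terms by T_fore / T
    eps = 1e-8                       # numerical safety for log
    loss = (-w_fore * np.log(probs[fore] + eps).sum()
            - w_back * np.log(1.0 - probs[~fore] + eps).sum())
    return loss / T
```

Confident, correct predictions give a near-zero loss, while confidently wrong ones are penalized heavily.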
+
+Boundary regression. With the query-guided video representation $\widetilde{V}$ and the predicted binary sequence of 0-1, we then design a boundary regression module to predict the distance from each foreground frame to the start (or end) frame of the video segment that corresponds to the query. We implement this module by three 1D convolution layers with two output channels. Given the predicted distance pair $(l_s^t,l_e^t)$ and ground-truth distance $(g_s^t,g_e^t)$ , we define the regression loss as:
+
+$$
+\mathcal{L}_{\text{reg}} = \frac{1}{T_{\text{fore}}} \sum_{t=1}^{T_{\text{fore}}} \left(1 - \operatorname{IoU}\left(\left(t, l_{s}^{t}, l_{e}^{t}\right), \left(t, g_{s}^{t}, g_{e}^{t}\right)\right)\right), \tag{5}
+$$
+
+where $\mathrm{IoU}(\cdot)$ computes the Intersection over Union (IoU) score between the predicted segment and its ground truth. After that, we can represent the generated proposals as tuples $\{(t,l_s^t,l_e^t)\}_{t = 1}^{T_{\text{fore}}}$ based on the regression results of the foreground frames.
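The IoU-based regression loss of Eq. (5) operates on 1D segments. A small, self-contained sketch (the helper names are hypothetical, not the authors' code):

```python
def temporal_iou(seg_a, seg_b):
    """IoU between two 1D segments given as (start, end) pairs."""
    (s1, e1), (s2, e2) = seg_a, seg_b
    inter = max(0.0, min(e1, e2) - max(s1, s2))
    union = (e1 - s1) + (e2 - s2) - inter
    return inter / union if union > 0 else 0.0

def regression_loss(preds, gts):
    """Mean (1 - IoU) over foreground frames, per Eq. (5).

    preds, gts: lists of (t, l_s, l_e) tuples; each tuple defines the
    segment (t - l_s, t + l_e) around frame t.
    """
    total = 0.0
    for (t, ls, le), (_, gs, ge) in zip(preds, gts):
        total += 1.0 - temporal_iou((t - ls, t + le), (t - gs, t + ge))
    return total / len(preds)
```

For example, a predicted segment (0, 2) against a ground truth (1, 3) has IoU 1/3, so its loss contribution is 2/3.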
+
+
+Figure 3: To distinguish the above three proposals, both positional and semantic relations among them need to be considered.
+
+# 3.4 Proposal Consolidation
+
+So far, we have generated a certain number of proposals that is significantly smaller than the number of pre-defined ones in the existing top-down framework, making the final scoring and ranking process much more efficient. To further refine the proposal features for more accurate segment localization, we explicitly model higher-order interactions between the generated proposals to learn their relations. As shown in Figure 3, proposal 1 and proposal 2 contain the same semantics of "blue" and "hops", so we need to model their positional distance to distinguish them and refine their features to better understand the phrase "second time". Also, for proposals that are local neighbors (proposals 2 and 3), we have to learn their semantic distance to refine their representations. Therefore, in our APGN, we first encode each proposal feature with both positional embeddings and frame-wise semantic features, and then define a graph convolutional network (GCN) over the proposals for refinement.
+
+Proposal encoder. For each proposal tuple $(t, l_s^t, l_e^t)$ , we represent its segment boundary as $(t - l_s^t, t + l_e^t)$ . Before aggregating the features of its contained frames within this segment boundary, we first concatenate a position embedding $\boldsymbol{emb}_t^{pos}$ to each frame-wise feature $\widetilde{\boldsymbol{v}}_t$ , in order to inject position information on frame $t$ as follows:
+
+$$
+\widetilde{\boldsymbol{v}}_{t}^{\prime} = \left[\widetilde{\boldsymbol{v}}_{t}; \boldsymbol{emb}_{t}^{\text{pos}}\right] \in \mathbb{R}^{1 \times (D + d)}, \tag{6}
+$$
+
+where $emb_t^{pos}$ denotes the position embedding of the $t$ -th position, and $d$ is the dimension of $emb_t^{pos}$ . We follow (Vaswani et al., 2017) and use the sine and cosine functions of different frequencies to compose position embeddings:
+
+$$
+\boldsymbol{emb}_{t}^{\text{pos}}[2j] = \sin\left(\frac{t}{10000^{2j/d}}\right), \tag{7}
+$$
+
+$$
+\boldsymbol{emb}_{t}^{\text{pos}}[2j+1] = \cos\left(\frac{t}{10000^{2j/d}}\right), \tag{8}
+$$
+
+where $2j$ and $2j + 1$ are the even and odd indices of the position embedding. In this way, each dimension of the positional encoding corresponds to a sinusoid, allowing the model to easily learn to attend to absolute positions. Given the frame features $\{\widetilde{\pmb{v}}_t^{\prime}\}_{t = 1}^{T_{\text{fore}}}$ and a proposal segment $(t - l_s^t,t + l_e^t)$ , we encode the feature vector $\pmb{p}_t$ of the $t$ -th proposal by aggregating the features of the frames contained in the segment as:
+
+$$
+\boldsymbol{p}_{t} = \operatorname{MLP}_{2}\left(\operatorname{Pool}\left(\operatorname{MLP}_{1}\left(\left[\widetilde{\boldsymbol{v}}_{\lceil t - l_{s}^{t} \rceil}^{\prime}, \dots, \widetilde{\boldsymbol{v}}_{\lceil t + l_{e}^{t} \rceil}^{\prime}\right]\right)\right)\right), \tag{9}
+$$
+
+where each MLP has two FC layers and $\mathrm{Pool}(\cdot)$ denotes max-pooling. The frames of each proposal are processed independently by $\mathrm{MLP}_1$ , pooled (channel-wise) into a single feature vector, and passed to $\mathrm{MLP}_2$ , where information from different frames is further combined. Thus, the encoded proposal feature can be represented as $p_t \in \mathbb{R}^{1 \times (D + d)}$ .
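The position embedding of Eqs. (7)-(8) and the pooling of Eq. (9) can be sketched as follows. For brevity, the two MLPs in Eq. (9) are replaced by identity maps, so this only illustrates the indexing and pooling, not the trained encoder.

```python
import math
import numpy as np

def position_embedding(t, d):
    """Sinusoidal embedding of frame index t (Eqs. (7)-(8)); d must be even."""
    j = np.arange(d // 2)
    angle = t / (10000.0 ** (2 * j / d))
    emb = np.empty(d)
    emb[0::2] = np.sin(angle)   # even dimensions, Eq. (7)
    emb[1::2] = np.cos(angle)   # odd dimensions, Eq. (8)
    return emb

def encode_proposal(frames, t, l_s, l_e, d=16):
    """Pool the frames inside segment (t - l_s, t + l_e) into one vector.

    frames: (T, D) query-guided frame features.  Each frame is first
    concatenated with its position embedding (Eq. (6)); the two MLPs
    of Eq. (9) are replaced by identity maps in this sketch.
    """
    lo, hi = math.ceil(t - l_s), math.ceil(t + l_e)
    rows = [np.concatenate([frames[i], position_embedding(i, d)])
            for i in range(lo, hi + 1)]
    return np.stack(rows).max(axis=0)   # channel-wise max-pool
```

At t = 0 the embedding's even dimensions are all 0 (sine) and its odd dimensions all 1 (cosine), matching Eqs. (7)-(8).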
+
+Proposal graph. We construct a graph over the proposal features $\{\pmb{p}_t\}_{t=1}^{T_{\text{fore}}}$ , where each node is a proposal associated with both position and semantic features. We fully connect all node pairs, and define the relation between each proposal pair $(\pmb{p}_t, \pmb{p}_{t'})$ for edge convolution (Wang et al., 2018) as:
+
+$$
+\boldsymbol{e}_{t, t^{\prime}} = \mathrm{ReLU}\left(\boldsymbol{p}_t \boldsymbol{\theta}_1 + \left(\boldsymbol{p}_{t^{\prime}} - \boldsymbol{p}_t\right) \boldsymbol{\theta}_2\right), \tag{10}
+$$
+
+where $\theta_{1}$ and $\theta_{2}$ are learnable parameters. We update each proposal feature $p_t$ to $\widehat{p}_t$ as follows:
+
+$$
+\widehat{\boldsymbol{p}}_t = \mathrm{MaxPool}\left(\boldsymbol{e}_t\right), \quad \boldsymbol{e}_t = \left\{\boldsymbol{e}_{t, t^{\prime}}\right\}_{t^{\prime}=1}^{T_{\text{fore}}}. \tag{11}
+$$
+
+This GCN module consists of $k$ stacked graph convolutional layers. After this graph-based proposal consolidation, we obtain the refined proposal features.
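The edge-convolution update of Eqs. 10-11 can be sketched in numpy as follows. As a simplification for illustration, the same pair of hypothetical weight matrices is shared across the stacked layers, which the actual model need not do.

```python
import numpy as np

def edge_conv_layer(P, theta1, theta2):
    """One edge-convolution layer over fully connected proposals (Eqs. 10-11).

    P: (N, C) proposal features; theta1, theta2: (C, C) learnable weights.
    Each node is updated with the channel-wise max over its edge features.
    """
    # e[t, t'] = ReLU(p_t @ theta1 + (p_t' - p_t) @ theta2), shape (N, N, C)
    self_term = (P @ theta1)[:, None, :]                 # broadcast over t'
    diff_term = (P[None, :, :] - P[:, None, :]) @ theta2
    e = np.maximum(self_term + diff_term, 0)
    return e.max(axis=1)                                 # max-pool over neighbours t'

rng = np.random.default_rng(0)
P = rng.normal(size=(5, 8))                              # 5 proposals, 8 channels
t1, t2 = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
P_hat = P
for _ in range(2):                                       # k = 2 stacked layers
    P_hat = edge_conv_layer(P_hat, t1, t2)
print(P_hat.shape)  # (5, 8)
```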
+
+# 3.5 Localization Head
+
+After proposal consolidation, we feed the refined features $\widehat{P} = \{\widehat{p}_t\}_{t=1}^{T_{fore}}$ into two separate heads to predict their confidence scores and boundary offsets for proposal ranking and refinement. Specifically, we employ two MLPs on each feature $\widehat{p}_t$ as:
+
+$$
+r_t = \mathrm{Sigmoid}\left(\mathrm{MLP}_3\left(\widehat{\boldsymbol{p}}_t\right)\right), \tag{12}
+$$
+
+$$
+\left(\delta_s^t, \delta_e^t\right) = \mathrm{MLP}_4\left(\widehat{\boldsymbol{p}}_t\right), \tag{13}
+$$
+
+where $r_t \in (0,1)$ is the confidence score, and $(\delta_s^t, \delta_e^t)$ are the predicted offsets. Therefore, the final predicted segment of proposal $t$ can be represented as $(t - l_s^t + \delta_s^t, t + l_e^t + \delta_e^t)$ . To learn the confidence scoring rule, we first compute the IoU score $o_t$ between each proposal segment and the ground-truth $(\tau_s, \tau_e)$ , and then adopt the alignment loss function below:
+
+$$
+\mathcal{L}_{\text{align}} = -\frac{1}{T_{\text{fore}}} \sum_{t=1}^{T_{\text{fore}}} \left[ o_t \log\left(r_t\right) + \left(1 - o_t\right) \log\left(1 - r_t\right) \right]. \tag{14}
+$$
+
+Given the ground-truth boundary offsets $(\hat{\delta}_s^t,\hat{\delta}_e^t)$ of proposal $t$ , we also fine-tune its offsets by a boundary loss as:
+
+$$
+\mathcal{L}_b = \frac{1}{T_{\text{fore}}} \sum_{t=1}^{T_{\text{fore}}} \left[ \mathrm{SL}_1\left(\hat{\delta}_s^t - \delta_s^t\right) + \mathrm{SL}_1\left(\hat{\delta}_e^t - \delta_e^t\right) \right], \tag{15}
+$$
+
+where $\mathrm{SL}_1(\cdot)$ denotes the smooth L1 loss function.
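The two localization losses (Eqs. 14-15) can be sketched numerically as below; the toy confidence scores, IoU targets, and offset values are illustrative only, and smooth L1 is written with the common beta = 1 threshold.

```python
import numpy as np

def smooth_l1(x):
    """Smooth L1 (Huber with beta = 1): quadratic near 0, linear beyond |x| = 1."""
    ax = np.abs(x)
    return np.where(ax < 1, 0.5 * x ** 2, ax - 0.5)

def alignment_loss(r, o):
    """Binary cross-entropy between confidences r and IoU targets o (Eq. 14)."""
    return -np.mean(o * np.log(r) + (1 - o) * np.log(1 - r))

def boundary_loss(delta_pred, delta_gt):
    """Smooth-L1 loss on start/end offsets (Eq. 15); inputs have shape (T, 2)."""
    return np.mean(np.sum(smooth_l1(delta_gt - delta_pred), axis=1))

r = np.array([0.9, 0.2, 0.6])   # predicted confidences for 3 proposals
o = np.array([1.0, 0.0, 0.5])   # IoU scores against the ground truth
print(round(alignment_loss(r, o), 4))                                  # 0.3474
print(round(boundary_loss(np.zeros((3, 2)), np.full((3, 2), 0.5)), 4))  # 0.25
```

Using the IoU itself as a soft target (rather than a hard 0/1 label) lets proposals with partial overlap still receive a graded confidence signal.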
+
+Finally, our APGN model is trained end-to-end from scratch using the multi-task loss:
+
+$$
+\mathcal{L} = \lambda_1 \cdot \mathcal{L}_{\text{class}} + \lambda_2 \cdot \mathcal{L}_{\text{reg}} + \lambda_3 \cdot \mathcal{L}_{\text{align}} + \lambda_4 \cdot \mathcal{L}_b. \tag{16}
+$$
+
+# 4 Experiments
+
+# 4.1 Datasets and Evaluation
+
+ActivityNet Captions. This large dataset (Krishna et al., 2017) contains 20k videos with 100k language descriptions, focusing on complex human activities in daily life. Following the public split, we use 37,417, 17,505, and 17,031 sentence-video pairs for training, validation, and testing, respectively.
+
+TACoS. This dataset (Regneri et al., 2013) contains 127 long videos, mainly of cooking scenarios, and thus lacks diversity. We use the same split as (Gao et al., 2017), with 10,146, 4,589, and 4,083 sentence-video pairs for training, validation, and testing, respectively.
+
+Charades-STA. This dataset (Gao et al., 2017) consists of 9,848 videos of daily indoor activities. There are 12,408 sentence-video pairs for training and 3,720 pairs for testing.
+
+Evaluation Metric. Following (Zhang et al., 2019; Zeng et al., 2020), we adopt “R@n, IoU=m” as our evaluation metric, defined as the percentage of test samples for which at least one of the top-n selected moments has an IoU larger than m with the ground truth.
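The metric can be sketched as follows; the toy predictions and ground truths are hypothetical, and segments are (start, end) pairs in seconds.

```python
def temporal_iou(pred, gt):
    """IoU between two temporal segments given as (start, end)."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = max(pred[1], gt[1]) - min(pred[0], gt[0])
    return inter / union if union > 0 else 0.0

def recall_at_n(all_preds, all_gts, n, m):
    """"R@n, IoU=m": fraction of samples whose top-n ranked moments
    contain at least one prediction with IoU > m against the ground truth."""
    hits = sum(
        any(temporal_iou(p, gt) > m for p in preds[:n])
        for preds, gt in zip(all_preds, all_gts)
    )
    return hits / len(all_gts)

preds = [[(0, 10), (20, 30)], [(5, 8), (0, 4)]]   # ranked moments per sample
gts = [(0, 9), (50, 60)]
print(recall_at_n(preds, gts, n=1, m=0.5))  # 1 hit out of 2 -> 0.5
```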
+
+# 4.2 Implementation Details
+
+Following (Zhang et al., 2020b; Zeng et al., 2020), for video input, we apply a pre-trained C3D network for all three datasets to obtain embedded features. We also extract the I3D (Carreira and Zisserman, 2017) and VGG (Simonyan and Zisserman,
+
+| Method | Feature | R@1, IoU=0.5 | R@1, IoU=0.7 | R@5, IoU=0.5 | R@5, IoU=0.7 |
| TGN | C3D | 28.47 | - | 43.33 | - |
| CTRL | C3D | 29.01 | 10.34 | 59.17 | 37.54 |
| QSPN | C3D | 33.26 | 13.43 | 62.39 | 40.78 |
| CBP | C3D | 35.76 | 17.80 | 65.89 | 46.20 |
| SCDM | C3D | 36.75 | 19.86 | 64.99 | 41.53 |
| GDP | C3D | 39.27 | - | - | - |
| LGI | C3D | 41.51 | 23.07 | - | - |
| VSLNet | C3D | 43.22 | 26.16 | - | - |
| CMIN | C3D | 43.40 | 23.88 | 67.95 | 50.73 |
| DRN | C3D | 45.45 | 24.36 | 77.97 | 50.30 |
| 2DTAN | C3D | 44.51 | 26.54 | 77.13 | 61.96 |
| APGN | C3D | 48.92 | 28.64 | 78.87 | 63.19 |
+
+2014) features on Charades-STA. We then apply PCA to reduce the feature dimension to 500 to decrease the number of model parameters. We set the video length to 200 for ActivityNet Captions and TACoS, and 64 for Charades-STA. For sentence input, we utilize the GloVe model to embed each word into a 300-dimensional feature. The dimension $D$ is set to 512 and $d$ to 256. The number of graph layers is $k = 2$ . We set the batch size to 64 and train our model with the Adam optimizer for 100 epochs. The initial learning rate is set to 0.0001 and is divided by 10 when the loss plateaus. The loss weights $\lambda_1, \lambda_2, \lambda_3, \lambda_4$ are set to 0.1, 1, 1, and 1, respectively, chosen according to the weight magnitudes.
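The PCA step can be sketched with a plain SVD; the feature sizes below are illustrative (real C3D features would be fit on the training set first).

```python
import numpy as np

def pca_reduce(X, k):
    """Project features X of shape (N, D) onto their top-k principal components."""
    Xc = X - X.mean(axis=0, keepdims=True)
    # SVD of the centered data; rows of Vt are the principal directions
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 512))    # stand-in for raw video features
reduced = pca_reduce(feats, 100)
print(reduced.shape)  # (200, 100)
```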
+
+# 4.3 Performance Comparison
+
+Compared methods. We compare our proposed APGN with state-of-the-art methods. We group them into: (1) top-down methods: TGN (Chen et al., 2018), CTRL (Gao et al., 2017), QSPN (Xu et al., 2019), CBP (Wang et al., 2020), SCDM (Yuan et al., 2019a), CMIN (Zhang et al., 2019), and 2DTAN (Zhang et al., 2020b). (2) bottom-up methods: GDP (Chen et al., 2020), LGI (Mun et al., 2020), VSLNet (Zhang et al., 2020a), DRN (Zeng et al., 2020).
+
+Quantitative comparison. As shown in Tables 1, 2, and 3, our APGN outperforms all existing methods by a large margin. Specifically, on the ActivityNet Captions dataset, compared to the previous best top-down method 2DTAN, we do not rely on a large number of pre-defined proposals yet outperform it by $4.41\%$ , $2.10\%$ , $1.74\%$ , and $1.23\%$ on the respective metrics. Compared to the previous best bottom-up method DRN, our APGN brings significant improvements of $4.28\%$ and $12.89\%$ on the strict “R@1, IoU=0.7” and “R@5, IoU=0.7” metrics, respectively.
+
+Table 1: Performance comparison with state-of-the-art TSLV models on the ActivityNet Captions dataset.
+
+| Method | Feature | R@1, IoU=0.3 | R@1, IoU=0.5 | R@5, IoU=0.3 | R@5, IoU=0.5 |
| TGN | C3D | 21.77 | 18.90 | 39.06 | 31.02 |
| CTRL | C3D | 18.32 | 13.30 | 36.69 | 25.42 |
| QSPN | C3D | 20.15 | 15.23 | 36.72 | 25.30 |
| CBP | C3D | 27.31 | 24.79 | 43.64 | 37.40 |
| SCDM | C3D | 26.11 | 21.17 | 40.16 | 32.18 |
| GDP | C3D | 24.14 | - | - | - |
| VSLNet | C3D | 29.61 | 24.27 | - | - |
| CMIN | C3D | 24.64 | 18.05 | 38.46 | 27.02 |
| DRN | C3D | - | 23.17 | - | 33.36 |
| 2DTAN | C3D | 37.29 | 25.32 | 57.81 | 45.04 |
| APGN | C3D | 40.47 | 27.86 | 59.98 | 47.12 |
+
+Table 2: Performance comparison with state-of-the-art TSLV models on the TACoS dataset.
+
+| Method | Feature | R@1, IoU=0.5 | R@1, IoU=0.7 | R@5, IoU=0.5 | R@5, IoU=0.7 |
| 2DTAN | VGG | 39.81 | 23.25 | 79.33 | 51.15 |
| APGN | VGG | 44.23 | 25.64 | 89.51 | 57.87 |
| CTRL | C3D | 23.63 | 8.89 | 58.92 | 29.57 |
| QSPN | C3D | 35.60 | 15.80 | 79.40 | 45.40 |
| CBP | C3D | 36.80 | 18.87 | 70.94 | 50.19 |
| GDP | C3D | 39.47 | 18.49 | - | - |
| APGN | C3D | 48.20 | 29.37 | 89.05 | 58.49 |
| DRN | I3D | 53.09 | 31.75 | 89.06 | 60.05 |
| SCDM | I3D | 54.44 | 33.43 | 74.43 | 58.08 |
| LGI | I3D | 59.46 | 35.48 | - | - |
| APGN | I3D | 62.58 | 38.86 | 91.24 | 62.11 |
+
+Table 3: Performance comparison with state-of-the-art TSLV models on the Charades-STA dataset.
+
+Although TACoS suffers from similar kitchen backgrounds and cooking objects across its videos, it is worth noting that our APGN still achieves significant improvements. On the Charades-STA dataset, for fair comparison with other methods, we perform experiments with the same features (i.e., VGG, C3D, and I3D) reported in their papers. Our APGN achieves the highest results on all evaluation metrics.
+
+Comparison on efficiency. We compare the efficiency of our APGN with previous methods on a single Nvidia Titan XP GPU on the TACoS dataset. As shown in Table 4, we achieve much faster processing speeds with relatively few learnable parameters. This is mainly due to two factors: first, APGN generates proposals without processing overlapped sliding windows as CTRL does, and generates fewer proposals than pre-defined methods such as 2DTAN and CMIN, and is thus more efficient; second, APGN does not apply many convolution layers as 2DTAN does, or multi-level feature fusion modules as DRN does, for cross-modal interaction, and thus has fewer parameters.
+
| Method | ACRN | CTRL | TGN | 2DTAN | CMIN | DRN | APGN |
| VPS ↑ | 0.23 | 0.45 | 1.09 | 1.75 | 81.29 | 133.38 | 146.67 |
| Para. ↓ | 128 | 22 | 166 | 363 | 78 | 214 | 91 |
+
+Table 4: Efficiency comparison in terms of videos per second (VPS) and parameters (Para.), where our APGN is much more efficient.
+
+| Model | class. | reg. | p.e. | graph | R@1, IoU=0.5 | R@1, IoU=0.7 |
| ① | × | × | × | × | 39.16 | 19.68 |
| ② | ✓ | × | × | × | 40.84 | 21.30 |
| ③ | ✓ | ✓ | × | × | 42.77 | 23.52 |
| ④ | ✓ | ✓ | ✓ | × | 43.95 | 24.66 |
| ⑤ | ✓ | ✓ | × | ✓ | 45.81 | 26.34 |
| ⑥ | ✓ | ✓ | ✓ | ✓ | 48.92 | 28.64 |
+
+# 4.4 Ablation Study
+
+Main ablation. As shown in Table 5, we verify the contribution of each part of our model. Starting from the backbone model (Figure 2 (a)), we first implement the baseline model ① by directly adding the top-down localization head (Figure 2 (d)). In this model, we adopt pre-defined proposals as in (Zhang et al., 2019). After adding the binary classification module in ②, we find that it effectively filters out redundant pre-defined proposals on the large number of background frames. When further applying adaptive proposal generation in ③, the generated proposals perform better than the pre-defined ones in ②. Note that in ③ we directly encode proposal-wise features by max-pooling, and the classification module also contributes by filtering out negative generated proposals. To capture more fine-grained semantics for proposal refinement, we introduce a proposal encoder (model ④) for discriminative feature aggregation and a proposal graph (model ⑤) for proposal-wise feature interaction. Although each of them alone brings only about a $1 - 3\%$ improvement, the performance increases significantly when both are utilized (model ⑥).
+
+Investigation on the video/query encoder. To investigate whether a Transformer (Vaswani et al., 2017) can boost our APGN, we replace the GRU in the video/query encoder with a simple Transformer and observe some improvement. However, it brings
+
+Table 5: Main ablation studies on the ActivityNet Captions dataset, where 'class.' and 'reg.' denote the classification and regression modules (Sec. 3.3), 'p.e.' denotes the proposal encoder (Sec. 3.4), and 'graph' denotes the proposal graph (Sec. 3.4).
+
+| Components | VPS ↑ | Para. ↓ | R@1, IoU=0.5 | R@1, IoU=0.7 |
| w/. GRU | 146.67 | 91 | 48.92 | 28.64 |
| w/. Transformer | 129.38 | 138 | 50.11 | 29.43 |
+
+Table 6: Investigation on video and query encoders on ActivityNet Caption dataset.
+
+| Components | Module | R@1, IoU=0.5 | R@1, IoU=0.7 |
| binary classification | w/o balanced loss | 46.88 | 27.13 |
| w/ balanced loss | 48.92 | 28.64 |
+
+Table 7: Investigation on binary classification on ActivityNet Caption dataset.
+
+| Components | Module | R@1, IoU=0.5 | R@1, IoU=0.7 |
| proposal encoder | w/o position | 46.46 | 26.69 |
| w/ position | 48.92 | 28.64 |
| w/ mean pooling | 47.41 | 27.86 |
| w/ max pooling | 48.92 | 28.64 |
+
+Table 8: Investigation on proposal encoder on ActivityNet Caption dataset.
+
+a larger number of model parameters and lower speed.
+
+Effect of the balanced loss. In the binary classification module, we reformulate the typical loss function into a balanced one. As shown in Table 7, the model w/ balanced loss achieves a clear improvement ($2.04\%$ , $1.51\%$ ) over the w/o variant, which demonstrates the importance of accounting for the unbalanced class distribution in the classification process.
+
+Investigation on the proposal encoder. In the proposal encoder, we discard the positional embedding (w/o position), and also replace the max-pooling with mean-pooling (w/ mean pooling). From Table 8, we observe that the positional embedding helps to learn temporal distances (boosts of $2.46\%$ , $1.95\%$ ), and max-pooling aggregates more discriminative features (boosts of $1.49\%$ , $0.78\%$ ) than mean-pooling.
+
+Investigation on the proposal graph. Table 9 analyzes the proposal graph. Compared to the w/ edge convolution model (Wang et al., 2018), w/ edge attention directly utilizes co-attention (Lu et al., 2016) to compute the similarity of each node pair and updates the nodes by weighted summation; it performs worse than edge convolution.
+
+Number of graph layers. As shown in Table 9, the model achieves the best result with 2 graph layers, and the performance drops when the number of
+
+| Components | Module | R@1, IoU=0.5 | R@1, IoU=0.7 |
| proposal graph | w/ edge attention | 46.63 | 26.90 |
| w/ edge convolution | 48.92 | 28.64 |
| graph layer | 1 layer | 47.60 | 27.57 |
| 2 layers | 48.92 | 28.64 |
| 3 layers | 48.83 | 28.39 |
+
+Table 9: Investigation on proposal graph on ActivityNet Caption dataset.
+
+| Methods | Localization Type | R@1, IoU=0.5 | R@1, IoU=0.7 |
| SCDM | top-down | 36.75 | 19.86 |
| SCDM + ours | top-down | 43.86 | 26.42 |
| CMIN | top-down | 43.40 | 23.88 |
| CMIN + ours | top-down | 50.33 | 29.75 |
| LGI | bottom-up | 41.51 | 23.07 |
| LGI + ours | bottom-up | 49.20 | 30.64 |
| DRN | bottom-up | 45.45 | 24.36 |
| DRN + ours | bottom-up | 53.72 | 31.01 |
+
+Table 10: Our proposed adaptive proposal generation can serve as a "plug-and-play" module for existing methods. The experiments are conducted on the ActivityNet Captions dataset.
+
+layers grows further. We attribute this to the over-smoothing problem (Li et al., 2018), since propagation between the nodes accumulates as more graph layers are stacked.
+
+Plug-and-play. Our proposed adaptive proposal generation can serve as a plug-and-play module for existing methods. As shown in Table 10, for top-down methods, we keep their feature encoders and video-query interaction, and add our proposal generation and proposal consolidation before the localization heads. For bottom-up methods, we first replace their regression heads with our proposal generation process and then add the proposal consolidation process. Our proposal generation and consolidation bring large improvements to both types of methods.
+
+# 4.5 Qualitative Results
+
+To qualitatively validate the effectiveness of our APGN, we display two typical examples in Figure 4. It is challenging to accurately localize the semantics of "for a second time" in the first video, because there are two separate segments in which the same object, the "girl in the blue dress", performs the same activity, "hops". The previous method DRN fails to understand the meaning of the phrase "second time" and grounds both segment parts.
+
+
+Figure 4: Typical examples of the localization results on the ActivityNet Caption dataset.
+
+By contrast, our method distinguishes these two segments in the temporal dimension thanks to the positional embedding in the developed proposal graph, and thus achieves more accurate localization results. Furthermore, we also display the foreground/background class of each frame in this video. With the help of the proposal consolidation module, the segment proposals of the "first time" are filtered out, and all of the top-10 ranked positive frames fall within the target segment.
+
+# 5 Conclusion
+
+In this paper, we introduce APGN, a new method for temporal sentence localization in videos. Our core idea is to adaptively generate discriminative proposals for both effective and efficient localization. Specifically, we first introduce a binary classification step before boundary regression to distinguish background frames, which helps to filter out the corresponding noisy responses. The regressed boundaries on the predicted foreground frames are then taken as segment proposals, which avoids the large number of poor-quality proposals produced by the pre-defined schemes of the top-down framework. We further learn higher-level feature interactions between the generated proposals for refinement via a graph convolutional network. Our framework achieves state-of-the-art performance on three challenging benchmarks, demonstrating the effectiveness of APGN.
+
+# 6 Acknowledgments
+
+This work was supported in part by the National Key Research and Development Program of China under No. 2018YFB1404102, and the National Natural Science Foundation of China under No. 61972448.
+
+# References
+
+Lisa Anne Hendricks, Oliver Wang, Eli Shechtman, Josef Sivic, Trevor Darrell, and Bryan Russell. 2017. Localizing moments in video with natural language. In Proceedings of the IEEE International Conference on Computer Vision (ICCV).
+Joao Carreira and Andrew Zisserman. 2017. Quo vadis, action recognition? a new model and the kinetics dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
+Jingyuan Chen, Xinpeng Chen, Lin Ma, Zequn Jie, and Tat-Seng Chua. 2018. Temporally grounding natural sentence in video. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP).
+Long Chen, Chujie Lu, Siliang Tang, Jun Xiao, Dong Zhang, Chilie Tan, and Xiaolin Li. 2020. Rethinking the bottom-up framework for query-based video localization. In Proceedings of the AAAI Conference on Artificial Intelligence.
+Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. In Advances in Neural Information Processing Systems (NIPS).
+Jianfeng Dong, Xirong Li, Chaoxi Xu, Shouling Ji, and Xun Wang. 2019. Dual encoding for zero-example video retrieval. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
+Jiyang Gao, Chen Sun, Zhenheng Yang, and Ram Nevatia. 2017. Tall: Temporal activity localization via language query. In Proceedings of the IEEE International Conference on Computer Vision (ICCV).
+Ranjay Krishna, Kenji Hata, Frederic Ren, Li Fei-Fei, and Juan Carlos Niebles. 2017. Dense-captioning events in videos. In Proceedings of the IEEE International Conference on Computer Vision (ICCV).
+Qimai Li, Zhichao Han, and Xiao-Ming Wu. 2018. Deeper insights into graph convolutional networks for semi-supervised learning. In Proceedings of the AAAI Conference on Artificial Intelligence.
+Daizong Liu, Xiaoye Qu, Jianfeng Dong, and Pan Zhou. 2020a. Reasoning step-by-step: Temporal
+
+sentence localization in videos via deep rectification-modulation network. In Proceedings of the 28th International Conference on Computational Linguistics, pages 1841-1851.
+Daizong Liu, Xiaoye Qu, Jianfeng Dong, Pan Zhou, Yu Cheng, Wei Wei, Zichuan Xu, and Yulai Xie. 2021. Context-aware biaffine localizing network for temporal sentence grounding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 11235-11244.
+Daizong Liu, Xiaoye Qu, Xiao-Yang Liu, Jianfeng Dong, Pan Zhou, and Zichuan Xu. 2020b. Jointly cross-and self-modal graph attention network for query-based moment localization. In Proceedings of the ACM International Conference on Multimedia (ACM MM).
+Chujie Lu, Long Chen, Chilie Tan, Xiaolin Li, and Jun Xiao. 2019. DEBUG: A dense bottom-up grounding approach for natural language video localization. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP).
+Jiasen Lu, Jianwei Yang, Dhruv Batra, and Devi Parikh. 2016. Hierarchical question-image co-attention for visual question answering. In Advances in Neural Information Processing Systems (NIPS).
+Jonghwan Mun, Minsu Cho, and Bohyung Han. 2020. Local-global video-text interactions for temporal grounding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
+Guoshun Nan, Rui Qiao, Yao Xiao, Jun Liu, Sicong Leng, Hao Zhang, and Wei Lu. 2021. Interventional video grounding with dual contrastive learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
+Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP).
+Xiaoye Qu, Pengwei Tang, Zhikang Zou, Yu Cheng, Jianfeng Dong, Pan Zhou, and Zichuan Xu. 2020. Fine-grained iterative attention network for temporal language localization in videos. In Proceedings of the 28th ACM International Conference on Multimedia, pages 4280-4288.
+Michaela Regneri, Marcus Rohrbach, Dominikus Wetzel, Stefan Thater, Bernt Schiele, and Manfred Pinkal. 2013. Grounding action descriptions in videos. Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL).
+Cristian Rodriguez, Edison Marrese-Taylor, Fatehneh Sadat Saleh, Hongdong Li, and Stephen Gould. 2020. Proposal-free temporal moment localization
+
+of a natural-language query in video using guided attention. In The IEEE Winter Conference on Applications of Computer Vision (WACV).
+Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
+Joyeeta Singha, Amarjit Roy, and Rahul Hussain Laskar. 2018. Dynamic hand gesture recognition using vision-based approach for human-computer interaction. Neural Computing and Applications.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems (NIPS).
+Jingwen Wang, Lin Ma, and Wenhao Jiang. 2020. Temporally grounding language queries in videos by contextual boundary-aware prediction. In Proceedings of the AAAI Conference on Artificial Intelligence.
+Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E Sarma, Michael M Bronstein, and Justin M Solomon. 2018. Dynamic graph cnn for learning on point clouds. ACM Transactions on Graphics.
+Huijuan Xu, Kun He, Bryan A Plummer, Leonid Sigal, Stan Sclaroff, and Kate Saenko. 2019. Multi-level language and vision integration for text-to-clip retrieval. In Proceedings of the AAAI Conference on Artificial Intelligence.
+Xun Yang, Jianfeng Dong, Yixin Cao, Xun Wang, Meng Wang, and Tat-Seng Chua. 2020. Tree-augmented cross-modal encoding for complex-query video retrieval. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR), pages 1339-1348.
+Xun Yang, Fuli Feng, Wei Ji, Meng Wang, and TatSeng Chua. 2021. Deconfounded video moment retrieval with causal intervention. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR).
+Yitian Yuan, Lin Ma, Jingwen Wang, Wei Liu, and Wenwu Zhu. 2019a. Semantic conditioned dynamic modulation for temporal sentence grounding in videos. In Advances in Neural Information Processing Systems (NIPS).
+Yitian Yuan, Tao Mei, and Wenwu Zhu. 2019b. To find where you talk: Temporal sentence localization in video with attention based location regression. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33.
+Runhao Zeng, Haoming Xu, Wenbing Huang, Peihao Chen, Mingkui Tan, and Chuang Gan. 2020. Dense regression network for video grounding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
+
+Hao Zhang, Aixin Sun, Wei Jing, and Joey Tianyi Zhou. 2020a. Span-based localizing network for natural language video localization. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL).
+Songyang Zhang, Houwen Peng, Jianlong Fu, and Jiebo Luo. 2020b. Learning 2d temporal adjacent networks for moment localization with natural language. In Proceedings of the AAAI Conference on Artificial Intelligence.
+Zhu Zhang, Zhijie Lin, Zhou Zhao, and Zhenxin Xiao. 2019. Cross-modal interaction networks for query-based moment retrieval in videos. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR).
\ No newline at end of file
diff --git a/adaptiveproposalgenerationnetworkfortemporalsentencelocalizationinvideos/images.zip b/adaptiveproposalgenerationnetworkfortemporalsentencelocalizationinvideos/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..aa29ff3bea75dbf60275ee962299fe7723422f3a
--- /dev/null
+++ b/adaptiveproposalgenerationnetworkfortemporalsentencelocalizationinvideos/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4eeabf8185bca38cbd502fc259ea357e51d624d14711b88f33a30c6f3ae841f7
+size 593549
diff --git a/adaptiveproposalgenerationnetworkfortemporalsentencelocalizationinvideos/layout.json b/adaptiveproposalgenerationnetworkfortemporalsentencelocalizationinvideos/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..d377a89a864a470b9959b17c6d172e9de49be6fd
--- /dev/null
+++ b/adaptiveproposalgenerationnetworkfortemporalsentencelocalizationinvideos/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d177910cc2d6cfb16031ce3166d4c3418961590a715b75596a6e13e707d983b1
+size 370906
diff --git a/adversarialattackagainstcrosslingualknowledgegraphalignment/e36b4ac8-9a38-4219-858d-047e5ed2caa4_content_list.json b/adversarialattackagainstcrosslingualknowledgegraphalignment/e36b4ac8-9a38-4219-858d-047e5ed2caa4_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..3e9e4fec4bfd74f5c45a0bbdec7abed7f58b17b4
--- /dev/null
+++ b/adversarialattackagainstcrosslingualknowledgegraphalignment/e36b4ac8-9a38-4219-858d-047e5ed2caa4_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d048afd58b71b82eaefbe03e8fe0584791a382cc8a38cce95548a1ee245f17c5
+size 132225
diff --git a/adversarialattackagainstcrosslingualknowledgegraphalignment/e36b4ac8-9a38-4219-858d-047e5ed2caa4_model.json b/adversarialattackagainstcrosslingualknowledgegraphalignment/e36b4ac8-9a38-4219-858d-047e5ed2caa4_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..c6fcea3289c6298da6d68009c2f480f2e2df0f66
--- /dev/null
+++ b/adversarialattackagainstcrosslingualknowledgegraphalignment/e36b4ac8-9a38-4219-858d-047e5ed2caa4_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c4b5bc9b93b4f5dcf8084136e8046743f986da4cf6477e6e0398bfda12bc5982
+size 176004
diff --git a/adversarialattackagainstcrosslingualknowledgegraphalignment/e36b4ac8-9a38-4219-858d-047e5ed2caa4_origin.pdf b/adversarialattackagainstcrosslingualknowledgegraphalignment/e36b4ac8-9a38-4219-858d-047e5ed2caa4_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..6169b5d2aadd01a6eb1a36d6e5b0e8abed1bff23
--- /dev/null
+++ b/adversarialattackagainstcrosslingualknowledgegraphalignment/e36b4ac8-9a38-4219-858d-047e5ed2caa4_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c0f7ac9e7f501bf80ef4f6c561fef4f191796914c875f0bea9a1633b55dd0892
+size 696842
diff --git a/adversarialattackagainstcrosslingualknowledgegraphalignment/full.md b/adversarialattackagainstcrosslingualknowledgegraphalignment/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..e23d6ff21f1f00afdc471edc5d8ff8b7338c0ecf
--- /dev/null
+++ b/adversarialattackagainstcrosslingualknowledgegraphalignment/full.md
@@ -0,0 +1,528 @@
+# Adversarial Attack against Cross-lingual Knowledge Graph Alignment
+
+Zeru Zhang $^{1}$ , Zijie Zhang $^{1}$ , Yang Zhou $^{1}$ , Lingfei Wu $^{2}$ , Sixing Wu $^{3}$ , Xiaoying Han $^{1}$ , Dejing Dou $^{4,5}$ , Tianshi Che $^{1}$ , Da Yan $^{6}$
+
+1Auburn University, 2JD.COM Silicon Valley Research Center, 3Peking University,
+
+4University of Oregon, 5Baidu Research, 6University of Alabama at Birmingham
+
+{zeruzhang, zijiezhang, yangzhou, xzh0003, tianshiche} @auburn.edu,
+
+lwu@email.wm.edu, wusixing@pku.edu.cn, dou@cs.uoregon.edu, yanda@uab.edu
+
+# Abstract
+
+Recent literature has shown that knowledge graph (KG) learning models are highly vulnerable to adversarial attacks. However, there is still a paucity of vulnerability analyses of cross-lingual entity alignment under adversarial attacks. This paper proposes an adversarial attack model with two novel attack techniques to perturb the KG structure and degrade the quality of deep cross-lingual entity alignment. First, an entity density maximization method is employed to hide the attacked entities in dense regions of the two KGs, so that the derived perturbations are unnoticeable. Second, an attack signal amplification method is developed to reduce gradient vanishing issues during the adversarial attack and further improve attack effectiveness.
+
+# 1 Introduction
+
+Today, multilingual knowledge graphs (KGs), such as WordNet (Miller, 1992), DBpedia (Auer et al., 2007), YAGO (Hoffart et al., 2011), and ConceptNet (Speer et al., 2017), are becoming essential sources of knowledge for various AI-related applications, e.g., personal assistants, medical diagnosis, and online question answering. Cross-lingual entity alignment between multilingual KGs is a powerful tool that aligns the same entities in different monolingual KGs, automatically synchronizes different language-specific KGs, and revolutionizes the understanding of these ubiquitous multilingual KGs in a transformative manner (Xu et al., 2020b; Sun et al., 2020a; Berrendorf et al., 2021b,a).
+
+Unfortunately, real-world KGs are typically noisy due to two main reasons: (1) massive fake information injected by malicious parties and users on online encyclopedia websites (e.g., Wikipedia (Wik) and Answers.com (Ans)), social networks (e.g., Twitter and Facebook), online communities (e.g., Reddit and Yahoo Answers), news websites, and search engines that usually serve as
+
+data sources of the KGs; and (2) direct adversarial attacks on the KGs. Google Knowledge Graph has been criticized for providing answers without source attribution or citation, and thus undermines people's ability to verify information and to develop well-informed opinions (Dewey, 2016).
+
+Recent studies have shown that KG learning models remain highly sensitive to adversarial attacks, i.e., carefully designed small perturbations in KGs can cause the models to produce wrong prediction results, including knowledge graph embedding (Minervini et al., 2017; Pujara et al., 2017; Pezeshkpour et al., 2019; Zhang et al., 2019; Banerjee et al., 2021) and knowledge graph-based dialogue generation (Xu et al., 2020a). However, existing techniques focus on adversarial attacks on single-KG learning tasks. They cannot be directly utilized to attack cross-lingual entity alignment models, which must analyze relations both within and across KGs. Two critical questions remain unsolved: (1) Can small perturbations on KGs defeat cross-lingual entity alignment models? (2) How can effective and unnoticeable perturbations against cross-lingual entity alignment be designed?
+
+The majority of cross-lingual entity alignment techniques aim to train the model by minimizing the distance between pre-aligned entity pairs in training data, such that the corresponding entity embeddings across KGs are close to each other, and the entity pairs with the smallest distance in test data are output as alignment results (Mao et al., 2020a; Wu et al., 2020b; Mao et al., 2020b; Tang et al., 2020; Yan et al., 2021; Zhu et al., 2021; Mao et al., 2021; Pei et al., 2020).
+
+In terms of the distribution of entities in a KG, one idea for perturbing an entity unobtrusively is to move it into a dense region with many similar entities by adding/deleting relations to/from it, such that it is non-trivial to recognize the modified entity among its many similar neighbors.
+
+Existing gradient-based adversarial attack methods (Goodfellow et al., 2015; Madry et al., 2018) search for the weakest input features to attack by calculating the loss gradient. However, the vanishing gradient problem is often encountered when attacking neural networks with poor backward signal propagation, which leads to attack failures (Athalye et al., 2018). Can we enhance attack signal propagation to improve attack effectiveness?
+
+In this work, an entity density estimation and maximization method is employed to first estimate the distribution of entities in the KGs. Based on the estimated distributions, the entities to be attacked are then moved into dense regions of the two KGs by maximizing their densities. Hidden in these dense regions, the attacked entities are surrounded by many neighbors and become indistinguishable from them, which also makes it difficult to identify the correctly aligned entity pairs among the many similar candidate entities.
+
+We comprehensively study how poor signal propagation in neural networks leads to vanishing gradients in adversarial attacks on cross-lingual entity alignment. An attack signal amplification method is developed to secure informative attack signals, with both a well-conditioned Jacobian and competent signal propagation from the alignment loss. This mitigates the gradient vanishing issue during adversarial attacks and further improves attack effectiveness.
+
+Extensive experiments over real-world KG datasets validate the superior attack performance of the EAA model against several state-of-the-art cross-lingual entity alignment models. To our best knowledge, this work is the first to study adversarial attacks on cross-lingual entity alignment.
+
+# 2 Problem Formulation
+
+Given two input knowledge graphs $G^{1}$ and $G^{2}$, each is denoted as $G^{k} = (E^{k}, R^{k}, T^{k})$ ($1 \leq k \leq 2$), where $E^{k} = \{e_{1}^{k}, \dots, e_{N^{k}}^{k}\}$ is the set of $N^{k}$ entities, $R^{k} = \{r_{ij}^{k} = (e_{i}^{k}, e_{j}^{k}) : 1 \leq i, j \leq N^{k}, i \neq j\}$ is the set of relations, and $T^{k} \subseteq E^{k} \times R^{k} \times E^{k}$ is the set of triples. Each triple $t_{l}^{k} = (e_{i}^{k}, r_{ij}^{k}, e_{j}^{k}) \in T^{k}$ in $G^{k}$ denotes head entity $e_{i}^{k}$ connected to tail entity $e_{j}^{k}$ through relation $r_{ij}^{k}$. $\mathbf{A}^{k}$ is an $N^{k} \times N^{k}$ adjacency matrix that denotes the structure information of $G^{k}$. By using knowledge graph embedding (KGE), each triple can be represented as $(\mathbf{e}_i^k, \mathbf{r}_{ij}^k, \mathbf{e}_j^k)$, where the boldfaced $\mathbf{e}_i^k$, $\mathbf{r}_{ij}^k$, and $\mathbf{e}_j^k$ represent the embedding vectors of head $e_i^k$, relation $r_{ij}^k$, and tail $e_j^k$ respectively.
+
+$D$ contains a set of pre-aligned entity pairs $D = \{(e_i^1,e_j^2)|e_i^1\leftrightarrow e_j^2,e_i^1\in E^1,e_j^2\in E^2\}$ where $e_i^1\leftrightarrow e_j^2$ indicates that two entities $e_i^1$ and $e_j^2$ are the equivalent ones in different language-specific KGs. The cross-lingual entity alignment aims to utilize $D$ as the training data to identify the one-to-one entity alignments between entities $e_i^1$ and $e_j^2$ in two cross-lingual KGs $G^{1}$ and $G^{2}$ in the test data.
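As a concrete illustration of this formulation, the two KGs, their adjacency matrices, and the pre-aligned pairs $D$ can be represented with plain Python structures. This is only a minimal sketch: every entity name and triple below is invented for illustration.

```python
import numpy as np

# Hypothetical toy inputs; every name and triple here is invented.
E1 = ["Berlin", "Germany", "Europe"]            # entities of G^1
T1 = [(0, 0, 1), (1, 1, 2)]                     # triples (head, relation, tail)
E2 = ["Berlin_de", "Deutschland", "Europa"]     # entities of G^2
T2 = [(0, 0, 1), (1, 1, 2)]

def adjacency(num_entities, triples):
    """Adjacency matrix A^k encoding the structure information of G^k."""
    A = np.zeros((num_entities, num_entities), dtype=int)
    for h, _, t in triples:
        A[h, t] = A[t, h] = 1   # treat each relation as an undirected edge
    return A

A1, A2 = adjacency(len(E1), T1), adjacency(len(E2), T2)

# Pre-aligned training pairs D = {(e_i^1, e_j^2) | e_i^1 <-> e_j^2}.
D = [(0, 0), (1, 1)]
print(A1)
```

The $i^{th}$ row of `A1` is the structure-feature vector $\mathbf{A}_i^1$ used throughout the paper.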
+
+Most existing cross-lingual entity alignment models are supervised learning methods that minimize the distances (or maximize the similarities) between the embeddings of pre-aligned entity pairs $e_i^1$ and $e_j^2$ in $D$ (Wang et al., 2018; Sun et al., 2020d; Wu et al., 2020b; Pei et al., 2020; Tang et al., 2020; Yan et al., 2021). The entity pairs $e_i^1$ and $e_j^2$ in the test data with the largest similarities are selected as the alignment results. The following loss function is minimized to learn a KGE model $h: e_i^k \in E^k \mapsto \mathbf{e}_i^k$; $h$ is often implemented as a graph convolutional network (GCN) for deep KGE.
+
+$$
+\begin{array}{l} \min _ {h} \mathcal {L} = - \sum_ {\left(e _ {i} ^ {1}, e _ {j} ^ {2}\right) \in D} \log \sigma \left(\left(\mathbf {e} _ {i} ^ {1}\right) ^ {T} \cdot \mathbf {e} _ {j} ^ {2}\right) \\ + \sum_ {\left(e _ {i ^ {\prime}} ^ {1}, e _ {j ^ {\prime}} ^ {2}\right) \notin D} \log \sigma \left(\left(\mathbf {e} _ {i ^ {\prime}} ^ {1}\right) ^ {T} \cdot \mathbf {e} _ {j ^ {\prime}} ^ {2}\right) \end{array} \tag {1}
+$$
+
+where $(e_i^1,e_j^2)$ and $(e_{i^{\prime}}^{1},e_{j^{\prime}}^{2})$ are positive and negative entity pairs. $(\mathbf{e}_i^1)^T$ is the transpose of $\mathbf{e}_i^1$ . $\sigma (\cdot)$ is the sigmoid function. The inner product $\cdot$ denotes the similarity between two embedding vectors.
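For concreteness, the loss in Eq. (1) can be sketched in numpy as follows. The random embeddings and the positive/negative pair lists are purely illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def alignment_loss(emb1, emb2, pos_pairs, neg_pairs):
    """Eq. (1): -sum_{(i,j) in D} log sigma(e_i^1 . e_j^2)
                + sum_{(i',j') not in D} log sigma(e_i'^1 . e_j'^2)."""
    loss = -sum(np.log(sigmoid(emb1[i] @ emb2[j])) for i, j in pos_pairs)
    loss += sum(np.log(sigmoid(emb1[i] @ emb2[j])) for i, j in neg_pairs)
    return loss

rng = np.random.default_rng(0)
emb1 = rng.normal(size=(4, 8))   # toy embeddings of entities in G^1
emb2 = rng.normal(size=(4, 8))   # toy embeddings of entities in G^2
print(alignment_loss(emb1, emb2, pos_pairs=[(0, 0), (1, 1)], neg_pairs=[(0, 2)]))
```

Minimizing this loss pulls positive pairs together (large inner product) and pushes negative pairs apart, which is exactly what the attack in Eq. (2) later tries to reverse.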
+
+Given a trained deep KGE model $\mathbf{e}_i^k = h(e_i^k)$ , an adversarial attacker aims to maximally degrade the alignment performance of $h$ by injecting effective and unnoticeable relation perturbations (including relation addition and deletion) into two clean KGs $G^{k}$ ( $1 \leq k \leq 2$ ), leading to two perturbed KGs $\hat{G}^{k} = (\hat{E}^{k}, \hat{R}^{k}, \hat{T}^{k})$ .
+
+$$
+\max _ {\hat {\mathbf {A}} ^ {k}} \mathcal {L} s. t. | \hat {\mathbf {A}} ^ {k} - \mathbf {A} ^ {k} | \leq \Delta , 1 \leq k \leq 2 \tag {2}
+$$
+
+where $\mathbf{A}^k$ and $\hat{\mathbf{A}}^k$ are clean and perturbed adjacency matrices respectively. $\Delta$ is the allowed attack budget, i.e., allowed relation modifications.
+
+# 3 Unnoticeable Adversarial Attacks
+
+Existing GCN-based entity alignment methods often initialize entity features randomly or with pre-trained word embeddings of entity names, and utilize the adjacency matrix of the KGs to learn the entity embeddings (Wang et al., 2018; Sun et al., 2020d; Wu et al., 2020b; Yan et al., 2021). Thus, the embedding of an entity mainly depends on the embeddings of its neighbor entities. In order to modify the embedding of a target entity for the purpose of adversarial attacks, we need to remove some positive (i.e., existing) relations and add some negative (i.e., non-existing) relations between the target entity and its neighbors in the adjacency matrix, thereby degrading the accuracy of entity embedding and alignment. We use the $i^{th}$ row of the adjacency matrix $\mathbf{A}^k$ (i.e., $\mathbf{A}_i^k$) to represent the structure features of each entity $e_i^k$ and analyze the impact of each structure feature (i.e., positive or negative relation) on the alignment accuracy.
+
+As shown in Figure 1, assume that $\mathbf{e}_i^1$ and $\mathbf{e}_j^2$ are pre-aligned entity embeddings. If we hide an entity $e_i^1$ in a dense region with many similar entities $e_k^1$ by modifying its associated relations, then being surrounded by many $e_k^1\mathrm{s}$ makes it difficult to differentiate $e_i^1$ from them and to identify the correctly aligned entity pair $(\mathbf{e}_i^1, \mathbf{e}_j^2)$ among many similar candidate entities. In addition, if another pair of entity embeddings $\mathbf{e}_k^1$ and $\mathbf{e}_j^2$ are more similar than the pre-aligned entity embeddings $\mathbf{e}_i^1$ and $\mathbf{e}_j^2$, i.e., $(\mathbf{e}_k^1)^T \cdot \mathbf{e}_j^2 > (\mathbf{e}_i^1)^T \cdot \mathbf{e}_j^2$, then we will obtain an incorrect alignment result $(e_k^1, e_j^2)$.
+
+In this work, we leverage our proposed kernel density estimation (KDE) method (Zhang et al., 2020b) to estimate the distribution of perturbed KGs and maximize the distance between pre-aligned entity pairs, both degrading the performance of entity alignment and hiding the attacked entities in dense regions of the two KGs. Kernel density estimation essentially estimates a probability density function (PDF) $f(x)$ of a random variable $x$ to reveal the intrinsic distribution of $x$ (Parzen, 1962). Let $\mathbf{x}^k$ be an $N^k$-dimensional random variable denoting the structure features of all entities $\{\mathbf{A}_1^k, \dots, \mathbf{A}_{N^k}^k\}$ in KG $G^{k}$, for which we estimate a PDF $f(\mathbf{x}^k)$.
+
+
+Figure 1: Unnoticeable Adversarial Attacks
+
+$$
+f \left(\mathbf {x} ^ {k}\right) = \frac {1}{N ^ {k} \det (\mathbf {B})} \sum_ {i = 1} ^ {N ^ {k}} \mathcal {K} \left(\mathbf {B} ^ {- 1} \left(\mathbf {x} ^ {k} - \mathbf {A} _ {i} ^ {k}\right)\right) \tag {3}
+$$
+
+where $\operatorname{det}(\cdot)$ denotes the determinant operation. $\mathbf{B} > 0$ is a bandwidth matrix to be estimated. It is an $N^k \times N^k$ diagonal matrix $\mathbf{B} = \operatorname{diag}(b_1, \dots, b_{N^k})$, which has a strong influence on the density estimate $f(\mathbf{x}^{k})$. A good $\mathbf{B}$ should be as small as the data allow. $\mathcal{K}$ is a product symmetric kernel that satisfies $\int \mathcal{K}(x)dx = 1$ and $\int x\mathcal{K}(x)dx = 0$. The vector form $f(\mathbf{x}^k)$ can be rewritten in an element form, where $\mathbf{x}_j^k$ denotes the $j^{th}$ dimension of $\mathbf{x}^k$:
+
+$$
+f \left(\mathbf {x} ^ {k}\right) = \frac {1}{N ^ {k}} \sum_ {i = 1} ^ {N ^ {k}} \prod_ {j = 1} ^ {N ^ {k}} \frac {1}{b _ {j}} \mathcal {K} \left(\frac {\mathbf {x} _ {j} ^ {k} - \mathbf {A} _ {i j} ^ {k}}{b _ {j}}\right) \tag {4}
+$$
+
+We then calculate the derivative $\frac{\partial f(\mathbf{x}^k)}{\partial b_j}$ with respect to each $b_{j}$ in $\mathbf{B}$:
+
+$$
+\frac{\partial f(\mathbf{x}^{k})}{\partial b_{j}} = \frac{1}{N^{k}} \sum_{i = 1}^{N^{k}} \frac{\partial \left[ \prod_{l = 1}^{N^{k}} \frac{1}{b_{l}} \mathcal{K} \left(\frac{\mathbf{x}_{l}^{k} - \mathbf{A}_{il}^{k}}{b_{l}}\right) \right]}{\partial b_{j}} = - \frac{1}{N^{k}} \sum_{i = 1}^{N^{k}} \left(\frac{1}{b_{j}} + \frac{\mathbf{x}_{j}^{k} - \mathbf{A}_{ij}^{k}}{b_{j}^{2}} \, \frac{\mathcal{K}' \left(\frac{\mathbf{x}_{j}^{k} - \mathbf{A}_{ij}^{k}}{b_{j}}\right)}{\mathcal{K} \left(\frac{\mathbf{x}_{j}^{k} - \mathbf{A}_{ij}^{k}}{b_{j}}\right)}\right) \prod_{l = 1}^{N^{k}} \frac{1}{b_{l}} \mathcal{K} \left(\frac{\mathbf{x}_{l}^{k} - \mathbf{A}_{il}^{k}}{b_{l}}\right) \tag {5}
+$$
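A minimal numpy sketch of the product-kernel estimate in Eqs. (3)-(4), assuming a Gaussian kernel (the kernel choice is an assumption; any symmetric kernel integrating to 1 with zero mean would do):

```python
import numpy as np

def gaussian_kernel(u):
    """A symmetric kernel with integral 1 and zero mean."""
    return np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)

def density(x, A, b):
    """Eq. (4): f(x) = (1/N) sum_i prod_j (1/b_j) K((x_j - A_ij) / b_j),
    where the rows of A are the structure features of the N entities."""
    K = gaussian_kernel((x[None, :] - A) / b[None, :]) / b[None, :]
    return float(np.mean(np.prod(K, axis=1)))

A = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])
b = np.full(3, 1.0)              # one bandwidth b_j per dimension
print(density(A[0], A, b))       # density at an existing structure row
```

A structure row near the other rows of `A` receives a high density, while a far-away row receives a density near zero, which is exactly the quantity the attack later maximizes.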
+
+We make use of a greedy search method to determine bandwidths in the kernel density estimation method. For a non-trivial/trivial dimension $j$, updating the bandwidth $b_{j}$ will have a strong/weak influence over $f(\mathbf{x}^k)$. We greedily reduce $b_{j}$ with a sequence $b_{0}, b_{0}s, b_{0}s^{2}, \dots$ for a parameter $0 < s < 1$, until $b_{j}$ is smaller than a certain threshold $\tau_{j}$, to validate whether a small update in $b_{j}$ is able to lead to a large update in $f(\mathbf{x}^k)$.
+
+We use an initial $\mathbf{B} = \text{diag}(b_0, \dots, b_0)$ for a large $b_0$ to estimate $\frac{\partial f(\mathbf{x}^k)}{\partial b_j}$ , and reduce $b_j$ when $\frac{\partial f(\mathbf{x}^k)}{\partial b_j}$ is larger than a certain threshold.
+
+$$
+\frac{\partial f \left(\mathbf{x}^{k}\right)}{\partial b_{j}} = \frac{1}{N^{k}} \sum_{i = 1}^{N^{k}} \frac{\partial \left[ \prod_{l = 1}^{N^{k}} \frac{1}{b_{l}} \mathcal{K} \left(\frac{\mathbf{x}_{l}^{k} - \mathbf{A}_{il}^{k}}{b_{l}}\right) \right]}{\partial b_{j}} = \frac{1}{N^{k}} \sum_{i = 1}^{N^{k}} \frac{\partial f \left(\mathbf{x}_{i}^{k}\right)}{\partial b_{j}} \tag {6}
+$$
+
+We derive the corresponding variance $\operatorname{Var}\left(\frac{\partial f(\mathbf{x}^k)}{\partial b_j}\right)$ as follows.
+
+$$
+\operatorname {V a r} \left(\frac {\partial f \left(\mathbf {x} ^ {k}\right)}{\partial b _ {j}}\right) = \operatorname {V a r} \left(\frac {1}{N ^ {k}} \sum_ {i = 1} ^ {N ^ {k}} \frac {\partial f \left(\mathbf {x} _ {i} ^ {k}\right)}{\partial b _ {j}}\right) \tag {7}
+$$
+
+According to the estimated bandwidth $\mathbf{B}$ by Algorithm 1, we can calculate density $f(\mathbf{x}^k)$ of $\mathbf{x}^k$ in Eq.(3). The perturbation process is to maximize the following attack loss $\mathcal{L}_A$ for producing unnoticeable perturbations, in terms of the estimations $f(\mathbf{x}^{1})$ and $f(\mathbf{x}^{2})$ in two KGs $G^{1}$ and $G^{2}$ .
+
+$$
+\begin{array}{l}
+\max_{\hat{\mathbf{A}}^{k}} \mathcal{L}_{A} = \sum_{(e_{i}^{1}, e_{j}^{2}) \in D} \left[ - \log \sigma \left(\left(\hat{\mathbf{e}}_{i}^{1}\right)^{T} \cdot \hat{\mathbf{e}}_{j}^{2}\right) + f \left(\hat{\mathbf{A}}_{i}^{1}\right) + f \left(\hat{\mathbf{A}}_{j}^{2}\right) \right] \\
+\quad + \sum_{(e_{i^{\prime}}^{1}, e_{j^{\prime}}^{2}) \notin D} \log \sigma \left(\left(\hat{\mathbf{e}}_{i^{\prime}}^{1}\right)^{T} \cdot \hat{\mathbf{e}}_{j^{\prime}}^{2}\right) \\
+\text{s.t. } \left| \hat{\mathbf{A}}_{i}^{k} - \mathbf{A}_{i}^{k} \right| \leq \Delta, \quad 1 \leq k \leq 2
+\end{array} \tag {8}
+$$
+
+where $\hat{\mathbf{A}}_i^1 = \mathbf{A}_i^1 +\delta_i^1$ (and $\hat{\mathbf{A}}_j^2 = \mathbf{A}_j^2 +\delta_j^2)$ denote perturbations of clean structure features $\mathbf{A}_i^1$ (and $\mathbf{A}_j^2$ ) in $G^{1}$ (and $G^{2}$ ) by adding a small amount of relation perturbations $\delta_i^1$ (and $\delta_j^2$ ), such that $\hat{\mathbf{e}}_i^1$
+
+# Algorithm 1 Kernel Density Estimation
+
+Input: KG $G^{k} = (E^{k}, R^{k}, T^{k})$ , parameter $0 < s < 1$ , initial bandwidth $b_{0}$ , and parameter $c$ .
+
+Output: Bandwidth matrix B.
+
+1: Initialize all $b_{1}, \dots, b_{N^{k}}$ with $b_{0}$;
+2: for each $j = 1$ to $N^k$ do
+3: Estimate derivative $\frac{\partial f(\mathbf{x}^k)}{\partial b_j}$ and variance $\operatorname{Var}\left(\frac{\partial f(\mathbf{x}^k)}{\partial b_j}\right)$;
+4: Compute $\tau_{j} = \sqrt{2\cdot\mathrm{Var}(\frac{\partial f(\mathbf{x}^{k})}{\partial b_{j}})\cdot\log(cN^{k})}$;
+5: while $\left|\frac{\partial f(\mathbf{x}^k)}{\partial b_j}\right| > \tau_j$ do update $b_j = b_j s$ and go to step 3;
+6: Return $\mathbf{B}$.
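The greedy bandwidth search of Algorithm 1 can be sketched as follows, assuming a Gaussian product kernel and per-sample finite-difference derivatives (both are assumptions of this sketch; the paper derives the exact derivative in Eq. (5)):

```python
import numpy as np

def gaussian_kernel(u):
    return np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)

def per_sample_density(x, A, b):
    """The N summands of Eq. (4), one per row A_i, before averaging."""
    K = gaussian_kernel((x[None, :] - A) / b[None, :]) / b[None, :]
    return np.prod(K, axis=1)

def select_bandwidths(x, A, b0=2.0, s=0.5, c=2.0, eps=1e-5, b_min=1e-3):
    """Algorithm 1: shrink b_j by factor s while |df/db_j| exceeds tau_j."""
    N, dim = A.shape
    b = np.full(dim, b0)
    for j in range(dim):
        while b[j] > b_min:
            bp = b.copy()
            bp[j] += eps
            # per-sample derivatives df(x_i)/db_j via finite differences
            d = (per_sample_density(x, A, bp) - per_sample_density(x, A, b)) / eps
            grad = d.mean()                # estimate of df(x)/db_j, as in Eq. (6)
            var = d.var() / N              # variance of the mean, as in Eq. (7)
            tau = np.sqrt(2.0 * var * np.log(c * N))
            if abs(grad) > tau:
                b[j] *= s                  # derivative still significant: shrink
            else:
                break
    return b

A = np.array([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]])
print(select_bandwidths(A[0], A))
```

The `b_min` floor is an added safeguard so the loop always terminates even when every shrink remains significant.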
+
+is far away from $\hat{\mathbf{e}}_j^2$ and thus the alignment accuracy is decreased. In addition, we push $e_i^1$ and $e_j^2$ to dense regions to generate $\hat{e}_i^1$ and $\hat{e}_j^2$ , by maximizing $f(\hat{\mathbf{A}}_i^1)$ and $f(\hat{\mathbf{A}}_j^2)$ , such that $\hat{e}_i^1$ and $\hat{e}_j^2$ are indistinguishable from their neighbors in perturbed KGs. This reduces the possibility of perturbation detection by humans or defender programs.
+
+We leverage the Projected Gradient Descent (PGD) technique (Madry et al., 2018) to produce perturbed adjacency matrices $\hat{\mathbf{A}}^1$ and $\hat{\mathbf{A}}^2$ of two KGs $G^{1}$ and $G^{2}$ .
+
+$$
+\begin{array}{l}
+\left(\mathbf{A}_{i}^{1}\right)^{(t + 1)} = \Pi_{\triangle^{1}} \operatorname{sgn} \left[ \operatorname{ReLU} \left(\nabla_{\left(\mathbf{A}_{i}^{1}\right)^{t}} \mathcal{L}_{A}\right) \right] \\
+\left(\mathbf{A}_{j}^{2}\right)^{(t + 1)} = \Pi_{\triangle^{2}} \operatorname{sgn} \left[ \operatorname{ReLU} \left(\nabla_{\left(\mathbf{A}_{j}^{2}\right)^{t}} \mathcal{L}_{A}\right) \right], \quad t = 1, \dots, T
+\end{array} \tag {9}
+$$
+
+where $(\mathbf{A}_i^1)^{(t + 1)}$ and $(\mathbf{A}_j^2)^{(t + 1)}$ denote the perturbations of $\mathbf{A}_i^1$ and $\mathbf{A}_j^2$ derived at step $t$. $\epsilon$ specifies the budget of allowed perturbed relations for each attacked entity. $\triangle^k = \{(\delta^k)^t \mid \mathbf{1}^T (\delta^k)^t\leq \epsilon ,(\delta^k)^t\in \{0,1\}^{N^k}\}$, where $(\delta^k)^t = \| (\mathbf{A}_i^1)^t - \mathbf{A}_i^1\| _2^2$, represents the constraint set of the projection operator $\Pi$, i.e., it encodes whether a relation in $\mathbf{A}_i^1$ is modified or not. The composition of the ReLU and sign operators guarantees $(\mathbf{A}_i^1)^t\in \{0,1\}^{N^1}$ and $(\mathbf{A}_j^2)^t\in \{0,1\}^{N^2}$, as it adds (or removes) a relation or keeps it unchanged when a derivative in the gradient is positive (or negative). The outputs $(\mathbf{A}_i^1)^T$ and $(\mathbf{A}_j^2)^T$ at the final step $T$ are used as the perturbed adjacency matrices $\hat{\mathbf{A}}_i^1$ and $\hat{\mathbf{A}}_j^2$.
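One PGD-style update in the spirit of Eq. (9) can be sketched as follows, with the projection onto the budget implemented as a top-$\epsilon$ selection over the ReLU-filtered gradient (this concrete projection is an assumption of the sketch):

```python
import numpy as np

def pgd_flip_step(a_row, grad, budget):
    """Flip the binary structure features of a_row whose loss gradient is
    positive and largest, subject to the relation budget (Eq. (9) sketch)."""
    score = np.maximum(grad, 0.0)              # ReLU: keep ascent directions only
    flip = np.zeros_like(a_row)
    for idx in np.argsort(-score)[:budget]:    # projection onto the budget
        if score[idx] > 0:
            flip[idx] = 1
    return np.abs(a_row - flip)                # 0 -> 1 adds, 1 -> 0 removes a relation

a = np.array([0, 1, 1, 0, 0])                  # one row A_i of the adjacency matrix
grad = np.array([0.9, -0.2, 0.4, 0.0, 0.7])    # toy gradient of L_A w.r.t. that row
print(pgd_flip_step(a, grad, budget=2))        # flips entries 0 and 4
```

The update stays in $\{0,1\}^{N^k}$ and changes at most `budget` relations, mirroring the roles of the ReLU/sign composition and the constraint set $\triangle^k$.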
+
+# 4 Effective Adversarial Attacks
+
+Unfortunately, the above PGD-based unnoticeable attack method needs to iteratively calculate the gradient $\nabla_{(\mathbf{A}_i^1)}\mathcal{L}_A$, which mainly depends on $\frac{\partial\left(\log\sigma((\mathbf{e}_i^1)^T\cdot\mathbf{e}_j^2)\right)}{\partial\mathbf{A}_i^1}$ in the GCN-based entity alignment models.
+
+Given an alignment signal $\phi\big((\mathbf{e}_i^1)^T, \mathbf{e}_j^2\big) = \frac{\partial\big(\log\sigma((\mathbf{e}_i^1)^T \cdot \mathbf{e}_j^2)\big)}{\partial(\mathbf{e}_i^1)^T}$ and a Jacobian matrix $\mathbf{J}_i = \frac{\partial(\mathbf{e}_i^1)^T}{\partial\mathbf{A}_i^1}$ , the gradient of $\log \sigma((\mathbf{e}_i^1)^T \cdot \mathbf{e}_j^2)$ is calculated as follows.
+
+$$
+\begin{array}{l} \frac {\partial \big (\log \sigma ((\mathbf {e} _ {i} ^ {1}) ^ {T} \cdot \mathbf {e} _ {j} ^ {2}) \big)}{\partial \mathbf {A} _ {i} ^ {1}} \\ = \frac {\partial \left(\log \sigma \left(\left(\mathbf {e} _ {i} ^ {1}\right) ^ {T} \cdot \mathbf {e} _ {j} ^ {2}\right)\right)}{\partial \left(\mathbf {e} _ {i} ^ {1}\right) ^ {T}} \frac {\partial \left(\mathbf {e} _ {i} ^ {1}\right) ^ {T}}{\partial \mathbf {A} _ {i} ^ {1}} \tag {10} \\ = \phi \left(\left(\mathbf {e} _ {i} ^ {1}\right) ^ {T}, \mathbf {e} _ {j} ^ {2}\right) \mathbf {J} _ {i} \\ \end{array}
+$$
+
+Obviously, the gradient is determined by the signal and the Jacobian together. If either the signal has a saturating gradient or the Jacobian is insignificant, the gradient $\frac{\partial\left(\log\sigma((\mathbf{e}_i^1)^T\cdot\mathbf{e}_j^2)\right)}{\partial\mathbf{A}_i^1}$ vanishes and the attack fails.
+
+The property that all singular values of a neural network's input-output Jacobian matrix concentrate near 1 is known as dynamical isometry (Pennington et al., 2017). Ensuring that the mean squared singular value of the input-output Jacobian is $O(1)$ is essential for avoiding the exponential vanishing or explosion of gradients. We leverage the dynamical isometry theory to improve the effectiveness of the PGD adversarial attacks. Concretely, a neural network satisfies dynamical isometry if all singular values $\lambda_{ir}$ of the Jacobian $\mathbf{J}_i$ are close to 1, i.e., $1 - \lambda_{ir} \leq \xi$ for $\forall r, r \in \{1, \dots, \min\{N^1, N^2\}\}$ and a small positive number $\xi \approx 0$. In our problem, when the Jacobian matrix $\mathbf{J}_i$ satisfies dynamical isometry, the signal $\phi((\mathbf{e}_i^1)^T, \mathbf{e}_j^2)$ backpropagates isometrically through the network, preserving norms and all angles between vectors.
+
+Intuitively, if we select a good attack signal amplification factor $\alpha$ to amplify $\mathbf{e}_i^1$ and $\mathbf{e}_j^2$ as follows, then this can improve the diffusion of attack signals. In addition, a good $\alpha$ should guarantee the relative order of the network's output logits invariant, to ensure the decision boundary of entity alignment unchanged.
+
+$$
+\tilde {\mathbf {e}} _ {i} ^ {1} = \alpha \mathbf {e} _ {i} ^ {1}, \tilde {\mathbf {e}} _ {j} ^ {2} = \alpha \mathbf {e} _ {j} ^ {2} \tag {11}
+$$
+
+We rewrite the gradients with $\alpha$ as follows.
+
+$$
+\begin{array}{l} \frac {\partial \left(\log \sigma \left(\left(\tilde {\mathbf {e}} _ {i} ^ {1}\right) ^ {T} \cdot \tilde {\mathbf {e}} _ {j} ^ {2}\right)\right)}{\partial \mathbf {A} _ {i} ^ {1}} \\ = \frac {\partial \left(\log \sigma \left(\left(\tilde {\mathbf {e}} _ {i} ^ {1}\right) ^ {T} \cdot \tilde {\mathbf {e}} _ {j} ^ {2}\right)\right)}{\partial \left(\tilde {\mathbf {e}} _ {i} ^ {1}\right) ^ {T}} \frac {\partial \left(\tilde {\mathbf {e}} _ {i} ^ {1}\right) ^ {T}}{\partial \left(\mathbf {e} _ {i} ^ {1}\right) ^ {T}} \frac {\partial \left(\mathbf {e} _ {i} ^ {1}\right) ^ {T}}{\partial \mathbf {A} _ {i} ^ {1}} \tag {12} \\ = \phi \big ((\tilde {\mathbf {e}} _ {i} ^ {1}) ^ {T}, \tilde {\mathbf {e}} _ {j} ^ {2} \big) \alpha \mathbf {J} _ {i} \\ \end{array}
+$$
+
+Notice that
+
+$$
+\phi \big((\tilde{\mathbf{e}}_i^1)^T, \tilde{\mathbf{e}}_j^2\big) = \frac{\sigma \big((\tilde{\mathbf{e}}_i^1)^T \cdot \tilde{\mathbf{e}}_j^2\big) \big(1 - \sigma \big((\tilde{\mathbf{e}}_i^1)^T \cdot \tilde{\mathbf{e}}_j^2\big)\big) \tilde{\mathbf{e}}_j^2}{\sigma \big((\tilde{\mathbf{e}}_i^1)^T \cdot \tilde{\mathbf{e}}_j^2\big)} = \big(1 - \sigma \big((\tilde{\mathbf{e}}_i^1)^T \cdot \tilde{\mathbf{e}}_j^2\big)\big) \tilde{\mathbf{e}}_j^2.
+$$
+
+When $\alpha$ approaches $\infty$, the alignment signal $\phi \big((\tilde{\mathbf{e}}_i^1)^T,\tilde{\mathbf{e}}_j^2\big)$ approaches zero and the vanishing gradient problem is encountered in adversarial attacks. In addition, if $\alpha = 0$, all singular values of $\alpha \mathbf{J}_i$ are equal to zero and the gradient $\frac{\partial\left(\log\sigma((\tilde{\mathbf{e}}_i^1)^T\cdot\tilde{\mathbf{e}}_j^2)\right)}{\partial\mathbf{A}_i^1}$ is equal to zero, which leads to the vanishing gradient problem too.
+
+Therefore, a desired $\alpha$ for avoiding the exponential vanishing of gradients should lie strictly between 0 and $\infty$: it should keep the signal $\phi \left((\tilde{\mathbf{e}}_i^1)^T,\tilde{\mathbf{e}}_j^2\right)$ large enough, i.e., $\| \phi \bigl ((\tilde{\mathbf{e}}_i^1)^T,\tilde{\mathbf{e}}_j^2\bigr)\| _2 > \eta$ for a positive threshold $\eta$, and make all singular values of $\alpha \mathbf{J}_i$ close to 1, such that the signal $\phi \bigl ((\tilde{\mathbf{e}}_i^1)^T,\tilde{\mathbf{e}}_j^2\bigr)$ is well backpropagated from the output layer to the input layer.
+
+In order to make the mean of singular values of $\alpha \mathbf{J}_i$ close to 1, the first option of $\alpha$ is the inverse of the mean of singular values of $\mathbf{J}_i$ .
+
+$$
+\alpha = \frac {\left| D \right| N}{\sum_ {i = 1} ^ {| D |} \sum_ {r = 1} ^ {N} \lambda_ {i r}} \tag {13}
+$$
+
+where $\lambda_{ir}$ is the $r^{th}$ singular value of $\mathbf{J}_i$ . $|D|$ is the size of the set $D$ of pre-aligned entity pairs and $N = \min \{N^1, N^2\}$ .
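A quick numpy check of this choice (Eq. (13)): rescaling every Jacobian by the inverse of the mean singular value pushes the mean singular value of $\alpha \mathbf{J}_i$ to exactly 1. The random square Jacobians below are toy stand-ins.

```python
import numpy as np

def alpha_inverse_mean_sv(jacobians):
    """Eq. (13): alpha = |D| * N / sum_{i,r} lambda_{ir}."""
    svals = np.concatenate([np.linalg.svd(J, compute_uv=False) for J in jacobians])
    return 1.0 / svals.mean()

rng = np.random.default_rng(0)
Js = [rng.normal(size=(4, 4)) for _ in range(3)]   # toy Jacobians J_i
alpha = alpha_inverse_mean_sv(Js)
scaled = np.concatenate([np.linalg.svd(alpha * J, compute_uv=False) for J in Js])
print(scaled.mean())   # mean singular value of alpha * J_i, close to 1
```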
+
+To ensure $\| \phi ((\tilde{\mathbf{e}}_i^1)^T,\tilde{\mathbf{e}}_j^2)\| _2 > \eta$, the second option of $\alpha$ should satisfy $1 - \sigma ((\tilde{\mathbf{e}}_i^1)^T\cdot \tilde{\mathbf{e}}_j^2) > \eta /\| \tilde{\mathbf{e}}_j^2\| _2$. A feasible $\alpha$ can be obtained through the following theorem.
+
+Theorem 1. Let entity embedding vectors $\tilde{\mathbf{e}}_k^2$ and $\tilde{\mathbf{e}}_l^2$ be the most similar and least similar to $(\tilde{\mathbf{e}}_i^1)^T$ ($1\leq k,l\leq N^{2}$), i.e., $\tilde{\mathbf{e}}_k^2 = \operatorname{argmax}_{\tilde{\mathbf{e}}_k^2}(\tilde{\mathbf{e}}_i^1)^T\cdot \tilde{\mathbf{e}}_k^2$ and $\tilde{\mathbf{e}}_l^2 = \operatorname{argmin}_{\tilde{\mathbf{e}}_l^2}(\tilde{\mathbf{e}}_i^1)^T\cdot \tilde{\mathbf{e}}_l^2$, and let $c = (\tilde{\mathbf{e}}_i^1)^T\cdot \tilde{\mathbf{e}}_k^2$. Also, suppose that $d$ is the minimal norm of entity embedding vectors in $G^{2}$, i.e., $d = \min_{\tilde{\mathbf{e}}_m^2}\| \tilde{\mathbf{e}}_m^2\| _2$ for $\forall e_m^2\in E^2$. For a given $0 < \eta < d / 2$, if $\alpha < \sqrt{\frac{1}{c}\log\frac{d-\eta}{\eta}}$, then $1 - \sigma \big((\tilde{\mathbf{e}}_i^1)^T \cdot \tilde{\mathbf{e}}_j^2\big) > \eta / \|\tilde{\mathbf{e}}_j^2\|_2$ for $\forall e_j^2 \in E^2$.
+
+# Algorithm 2 Effective Adversarial Attacks
+
+Input: KG $G^{k} = (E^{k}, R^{k}, T^{k})$, set of pre-aligned entity pairs $D = \{(e_{i}^{1}, e_{j}^{2}) | e_{i}^{1} \leftrightarrow e_{j}^{2}\}$, trained entity embedding model $h$, noise budget $\epsilon$, and signal threshold $\eta$.
+
+Output: Perturbed adjacency matrices $\{\hat{\mathbf{A}}_i^1, \hat{\mathbf{A}}_j^2 \mid (e_i^1, e_j^2) \in D\}$.
+
+1: for each pair $(e_i^1, e_j^2)$ in $D$
+2: Set $\hat{\mathbf{e}}_i^1 = \mathbf{e}_i^1 = h(e_i^1)$, $\hat{\mathbf{e}}_j^2 = \mathbf{e}_j^2 = h(e_j^2)$;
+3: Compute $\alpha_{1} = \frac{|D|N}{\sum_{i = 1}^{|D|}\sum_{r = 1}^{N}\lambda_{ir}}$ in Eq.(13);
+4: for $t = 1,\dots ,T$
+5: Initialize $\alpha_{2} = 1.0$;
+6: if $1 - \sigma((\tilde{\mathbf{e}}_i^1)^T \cdot \tilde{\mathbf{e}}_j^2) \leq \eta / \| \tilde{\mathbf{e}}_j^2\|_2$
+7: Update $\alpha_{2} = \sqrt{\frac{1}{c}\log\frac{d - \eta}{\eta}}$ in Theorem 1;
+8: Amplify $\tilde{\mathbf{e}}_i^1 = \alpha_1\alpha_2\mathbf{e}_i^1$, $\tilde{\mathbf{e}}_j^2 = \alpha_1\alpha_2\mathbf{e}_j^2$;
+9: Calculate $\frac{\partial\left(\log\sigma((\tilde{\mathbf{e}}_i^1)^T\cdot\tilde{\mathbf{e}}_j^2)\right)}{\partial\hat{\mathbf{A}}_i^1}$ and $\frac{\partial\left(\log\sigma((\tilde{\mathbf{e}}_i^1)^T\cdot\tilde{\mathbf{e}}_j^2)\right)}{\partial\hat{\mathbf{A}}_j^2}$;
+10: Use the PGD to update $\hat{\mathbf{A}}_i^1$, $\hat{\mathbf{A}}_j^2$ in Eq.(9);
+11: Return $\{\hat{\mathbf{A}}_i^1, \hat{\mathbf{A}}_j^2 \mid (e_i^1, e_j^2) \in D\}$.
+
+Proof. See Appendix A.
+
+Algorithm 2 combines the above two kinds of $\alpha$ to produce effective adversarial attacks with attack signal amplification. The perturbed entity embeddings $\hat{\mathbf{e}}_i^1$ and $\hat{\mathbf{e}}_j^2$ are initialized with clean ones $\mathbf{e}_i^1$ and $\mathbf{e}_j^2$ in step 2. The first amplification factor $\alpha_{1}$ is calculated in step 3. The second factor $\alpha_{2}$ is computed in steps 5-7. $\alpha_{1}$ and $\alpha_{2}$ are integrated together for enhancing the attack signal propagation of neural networks in steps 8-9. The PGD attack method with attack signal amplification is utilized to perturb the KGs. The algorithm repeats the above iterative procedure until convergence.
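The amplification core of Algorithm 2 (steps 3-8) can be sketched as follows; the Theorem 1 quantities $c$ and $d$ are assumed to be precomputed and passed in, and the toy values below are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def amplify_pair(e1, e2, J, eta, c, d):
    """Steps 3-8 of Algorithm 2: combine alpha_1 (Eq. (13)) with alpha_2
    (Theorem 1) and return the amplified embedding pair."""
    svals = np.linalg.svd(J, compute_uv=False)
    a1 = 1.0 / svals.mean()                # alpha_1: inverse mean singular value
    a2 = 1.0
    e1t, e2t = a1 * e1, a1 * e2
    # if the alignment signal is too weak, rescale with alpha_2 (Theorem 1)
    if 1.0 - sigmoid(e1t @ e2t) <= eta / np.linalg.norm(e2t):
        a2 = np.sqrt(np.log((d - eta) / eta) / c)
    return a1 * a2 * e1, a1 * a2 * e2

e1, e2 = np.ones(4), np.ones(4)            # toy embeddings of a pre-aligned pair
out1, out2 = amplify_pair(e1, e2, J=np.eye(4), eta=0.1, c=4.0, d=1.0)
print(out1)
```

The amplified pair then feeds the gradient computation of step 9 and the PGD update of step 10.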
+
+# 5 Experimental Evaluation
+
+Table 1 presents the statistics of the DBP15K datasets (Sun et al., 2017). They consist of three cross-lingual datasets, $DBP15K_{ZH-EN}$, $DBP15K_{JA-EN}$, and $DBP15K_{FR-EN}$. Each cross-lingual dataset contains two monolingual KGs in different languages and 15,000 pre-aligned entity pairs between the two KGs. In the experiments, $30\%$ of the pre-aligned entity pairs are used as training data and the remaining ones as test data.
+
+| Dataset | KG | #Entities | #Relations | #Triples | #Alignments |
+| --- | --- | --- | --- | --- | --- |
+| ZH-EN | ZH | 66,469 | 2,830 | 153,929 | 15,000 |
+| | EN | 98,125 | 2,317 | 237,674 | |
+| JA-EN | JA | 65,744 | 2,043 | 164,373 | 15,000 |
+| | EN | 95,680 | 2,096 | 233,319 | |
+| FR-EN | FR | 66,858 | 1,379 | 192,191 | 15,000 |
+| | EN | 105,889 | 2,209 | 278,590 | |
+
+Table 1: Statistics of Datasets
+
+We compare the EAA model with seven state-of-the-art attack models. Sememe-based Word Substitution (SWS) combines sememe-based word substitution with swarm optimization-based search to conduct word-level attacks (Zang et al., 2020). Inflection Word Swap (IWS) perturbs the inflectional morphology of words to craft plausible and semantically similar adversarial examples (Tan et al., 2020; Morris et al., 2020). We utilize these two word-level attack models to replace associated entities of a relation based on semantics. GF-Attack attacks graph embedding methods by devising a new loss and approximating the spectrum (Chang et al., 2020). LowBlow is a general low-rank adversarial attack model that can affect the performance of various graph learning tasks (Entezari et al., 2020). We use these two graph attack models to directly add/remove relations in terms of graph topology. CRIAGE adds/removes facts to/from the KG to degrade the performance of link prediction (Pezeshkpour et al., 2019). DPA contains a collection of data poisoning attack strategies against knowledge graph embedding (Zhang et al., 2019). RL-RR uses a reinforcement learning policy to produce deceptively perturbed KGs while keeping the downstream quality of the original KG (Raman et al., 2021).
+
+We evaluate four versions of EAA to show the strengths of different components. EAA-P uses the basic PGD (Madry et al., 2018) to produce adversarial attacks. EAA-D only utilizes the KDE and density maximization to generate effective and unnoticeable attacks. EAA-A employs only our attack signal amplification strategy to improve the performance of the basic PGD attack. EAA operates with the full support of both KDE and signal amplification components.
+
+We validate the effectiveness of the above attack models with three representative cross-lingual entity alignment algorithms. AttrGNN integrates both attribute and relation triples for better performance of cross-lingual entity alignment (Liu et al., 2020). RNM is a novel relation-aware neighborhood matching model for entity alignment (Zhu et al., 2021). To our best knowledge, REA is the only robust cross-lingual entity alignment solution against adversarial attacks; it detects noise in the perturbed inter-KG entity links (Pei et al., 2020).
+
+We use two popular metrics in entity alignment to verify the attack effectiveness: Hits@k (i.e., the ratio of correctly aligned entities ranked in the top $k$ candidates) and MRR (i.e., mean reciprocal rank). A smaller Hits@k or MRR indicates worse entity alignment and thus a better attack. $k$ is fixed to 1 in all tests.
+
+Attack performance on various datasets with different entity alignment algorithms. Tables 2-4 exhibit the Hits@1 and MRR scores of the three GCN-based entity alignment algorithms on test data under nine attack models over the three cross-lingual datasets. Clean denotes experiments run on the original KGs without any perturbations. For all other attack models, the number of perturbed relations is fixed to $5\%$. Among the nine attack methods, EAA achieves the lowest Hits@1 and MRR scores on the perturbed KGs in most experiments, showing its effectiveness for adversarial attacks. Compared to the entity alignment results under the other attack models, EAA, on average, achieves $17.7\%$, $12.8\%$, and $12.8\%$ further reduction of Hits@1 and $17.6\%$, $16.9\%$, and $13.7\%$ further reduction of MRR on $DBP15K_{ZH-EN}$, $DBP15K_{JA-EN}$, and $DBP15K_{FR-EN}$ respectively. In addition, the promising performance of EAA against all three entity alignment models implies that EAA has great potential as a general attack solution for other entity alignment methods, which is desirable in practice.
+
+| Attacks | AttrGNN Hits@1 | AttrGNN MRR | RNM Hits@1 | RNM MRR | REA Hits@1 | REA MRR |
+| --- | --- | --- | --- | --- | --- | --- |
+| Clean | 0.796 | 0.845 | 0.841 | 0.875 | 0.792 | 0.818 |
+| SWS | 0.726 | 0.839 | 0.745 | 0.862 | 0.764 | 0.848 |
+| IWS | 0.708 | 0.761 | 0.729 | 0.823 | 0.759 | 0.804 |
+| GF-Attack | 0.709 | 0.815 | 0.724 | 0.833 | 0.733 | 0.844 |
+| LowBlow | 0.677 | 0.773 | 0.678 | 0.776 | 0.697 | 0.797 |
+| CRIAGE | 0.646 | 0.704 | 0.655 | 0.719 | 0.662 | 0.715 |
+| DPA | 0.603 | 0.712 | 0.636 | 0.751 | 0.635 | 0.733 |
+| RL-RR | 0.562 | 0.684 | 0.628 | 0.713 | 0.637 | 0.722 |
+| EAA | 0.497 | 0.538 | 0.525 | 0.636 | 0.538 | 0.641 |
+
+Table 2: Results on $DBP15K_{ZH-EN}$ with $5\%$ perturbed relations
+
+| Attacks | AttrGNN Hits@1 | AttrGNN MRR | RNM Hits@1 | RNM MRR | REA Hits@1 | REA MRR |
+| --- | --- | --- | --- | --- | --- | --- |
+| Clean | 0.783 | 0.834 | 0.872 | 0.899 | 0.799 | 0.823 |
+| SWS | 0.724 | 0.839 | 0.774 | 0.854 | 0.788 | 0.843 |
+| IWS | 0.718 | 0.787 | 0.755 | 0.804 | 0.745 | 0.796 |
+| GF-Attack | 0.715 | 0.824 | 0.747 | 0.826 | 0.767 | 0.845 |
+| LowBlow | 0.737 | 0.783 | 0.728 | 0.802 | 0.723 | 0.821 |
+| CRIAGE | 0.705 | 0.756 | 0.699 | 0.769 | 0.707 | 0.769 |
+| DPA | 0.643 | 0.725 | 0.723 | 0.753 | 0.669 | 0.766 |
+| RL-RR | 0.689 | 0.716 | 0.691 | 0.765 | 0.706 | 0.768 |
+| EAA | 0.579 | 0.612 | 0.618 | 0.642 | 0.621 | 0.652 |
+
+Table 3: Results on $DBP15K_{JA-EN}$ with $5\%$ perturbed relations
+
+| Attacks | AttrGNN Hits@1 | AttrGNN MRR | RNM Hits@1 | RNM MRR | REA Hits@1 | REA MRR |
+| --- | --- | --- | --- | --- | --- | --- |
+| Clean | 0.919 | 0.91 | 0.938 | 0.954 | 0.812 | 0.855 |
+| SWS | 0.782 | 0.873 | 0.814 | 0.886 | 0.807 | 0.846 |
+| IWS | 0.755 | 0.801 | 0.803 | 0.836 | 0.802 | 0.806 |
+| GF-Attack | 0.715 | 0.828 | 0.779 | 0.848 | 0.792 | 0.848 |
+| LowBlow | 0.792 | 0.841 | 0.799 | 0.826 | 0.793 | 0.852 |
+| CRIAGE | 0.733 | 0.864 | 0.744 | 0.873 | 0.781 | 0.831 |
+| DPA | 0.704 | 0.757 | 0.796 | 0.817 | 0.695 | 0.791 |
+| RL-RR | 0.754 | 0.792 | 0.745 | 0.823 | 0.754 | 0.784 |
+| EAA | 0.643 | 0.697 | 0.644 | 0.709 | 0.681 | 0.696 |
+
+Table 4: Results on $DBP15K_{FR-EN}$ with $5\%$ perturbed relations
+
+Ablation study. Figures 2 and 3 present the Hits@1 and MRR scores achieved by the three entity alignment methods under adversarial attacks with four variants of our EAA attack model. The complete EAA achieves the lowest Hits@1 (< 0.681) and the smallest MRR scores (< 0.709), clearly outperforming the other variants. Notice that EAA-A achieves better attack performance than EAA-P in most tests. A reasonable explanation is that our attack signal amplification technique alleviates the vanishing gradient issue, which effectively helps maintain the utility of adversarial attacks on GCN-based entity alignment models. In addition, EAA-D also performs well in most experiments compared with EAA-P. A plausible reason is that it is difficult to correctly match entities across two KGs when they lie in dense regions with many similar entities. These results illustrate that both the KDE and signal amplification methods are important for producing effective and unnoticeable attacks on entity alignment.
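To illustrate the role of KDE, the following sketch scores entity embeddings with a Gaussian kernel density estimate; the 2-D embeddings and bandwidth are toy assumptions rather than the paper's actual model, but they show how density quantifies whether an entity sits among many similar neighbors:

```python
import numpy as np

def gaussian_kde_density(points, queries, bandwidth=0.5):
    """Gaussian KDE: average kernel response of each query to all points."""
    # points: (n, d) entity embeddings; queries: (m, d) attacked entities.
    d = points.shape[1]
    diffs = queries[:, None, :] - points[None, :, :]   # (m, n, d)
    sq_dist = np.sum(diffs ** 2, axis=-1)              # (m, n)
    norm = (2 * np.pi * bandwidth ** 2) ** (-d / 2)
    return norm * np.mean(np.exp(-sq_dist / (2 * bandwidth ** 2)), axis=1)

rng = np.random.default_rng(0)
# A dense cluster near the origin plus a few scattered outliers (toy data).
cluster = rng.normal(0.0, 0.3, size=(50, 2))
outliers = rng.uniform(-5, 5, size=(5, 2))
embeddings = np.vstack([cluster, outliers])

dense_q = np.array([[0.0, 0.0]])    # inside the cluster
sparse_q = np.array([[4.0, 4.0]])   # far from everything
# An attacker maximizing this density pushes entities toward crowded regions.
assert gaussian_kde_density(embeddings, dense_q)[0] > gaussian_kde_density(embeddings, sparse_q)[0]
```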
+
+Attack performance with varying perturbed relations. Figure 4 presents the performance of entity alignment under nine attack models when varying the ratio of perturbed edges from $5\%$ to $30\%$. The attack performance of every attacker improves as the number of perturbed edges increases, indicating that current GCN-based entity alignment methods are very sensitive to adversarial attacks. EAA achieves the lowest Hits@1 values $(< 0.538)$, still outperforming the other eight methods in most tests. In particular, when the perturbation ratio is larger than $10\%$, the Hits@1 values drop quickly.
+
+Figure 2: Hits@1 of EAA variants. (a) AttrGNN, (b) RNM, (c) REA.
+
+Figure 3: MRR of EAA variants. (a) AttrGNN, (b) RNM, (c) REA.
+
+Figure 4: Hits@1 with varying perturbed relations. (a) AttrGNN on ZH-EN, (b) RNM on ZH-EN, (c) REA on ZH-EN.
+
+Figure 5: Results with varying parameters. (a) Perturbation budget $\epsilon$, (b) signal threshold $\eta$.
+
+Impact of perturbation budget $\epsilon$. Figure 5 (a) measures the effect of $\epsilon$ in the EAA model on entity alignment by varying $\epsilon$ from 1 to 6. When $\epsilon$ increases, both the Hits@1 and MRR scores under the EAA attack decrease substantially, demonstrating that it is difficult to train a robust entity alignment model under a large $\epsilon$ constraint. However, a large $\epsilon$ can be easily detected by humans or by defender programs. Notice that the average number of relations associated with each entity in the three datasets is between 2.3 and 2.9. We therefore suggest generating both effective and unnoticeable attacks on the entity alignment task with $\epsilon$ between 2 and 3, such that $\epsilon$ stays below the average number of associated relations.
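A budget-constrained selection step can be sketched as follows; the candidate scores and the per-entity top-$\epsilon$ rule are simplifying assumptions for illustration, not the exact EAA procedure:

```python
from collections import defaultdict

def select_perturbations(candidates, epsilon=2):
    """Keep at most `epsilon` highest-scoring relation flips per entity.

    candidates: list of (entity, relation, score) tuples, where `score`
    stands in for e.g. the gradient magnitude of the attack loss
    with respect to flipping that relation (a hypothetical signal here).
    """
    by_entity = defaultdict(list)
    for entity, relation, score in candidates:
        by_entity[entity].append((score, relation))
    selected = []
    for entity, flips in by_entity.items():
        # Enforce the budget: each entity contributes at most epsilon flips,
        # keeping the perturbation unnoticeable.
        for score, relation in sorted(flips, reverse=True)[:epsilon]:
            selected.append((entity, relation))
    return selected

# Toy candidate flips: entity e1 has three candidates but budget epsilon=2.
cands = [("e1", "r1", 0.9), ("e1", "r2", 0.1), ("e1", "r3", 0.7),
         ("e2", "r4", 0.5)]
print(select_perturbations(cands, epsilon=2))
# [('e1', 'r1'), ('e1', 'r3'), ('e2', 'r4')]
```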
+
+Impact of signal threshold $\eta$. Figure 5 (b) shows the impact of $\eta$ in our EAA model over the three groups of datasets. The performance curves initially drop as $\eta$ increases; intuitively, this helps alleviate the vanishing gradient issue in the PGD adversarial attacks. As $\eta$ continues to increase, the curves remain relatively stable or even rise. A reasonable explanation is that an overly large $\eta$ makes the upper bound of $\alpha$ too small, which results in a poorly conditioned Jacobian and thus leads to the vanishing gradient issue again. It is therefore important to determine the optimal $\eta$ for the EAA model.
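As a rough illustration of thresholded signal amplification (our own simplified sketch, not the exact EAA operator), one can raise near-vanishing gradient entries to magnitude $\eta$ while preserving their sign, so that weak signals still propagate through the attack:

```python
import numpy as np

def amplify_signal(grad, eta=0.05):
    """Raise near-vanishing gradient entries to magnitude `eta`, keeping sign.

    This is an illustrative stand-in for attack signal amplification:
    entries below the threshold are boosted so they still contribute
    a usable update direction.
    """
    grad = np.asarray(grad, dtype=float)
    small = np.abs(grad) < eta
    amplified = np.where(small, np.sign(grad) * eta, grad)
    # Entries that are exactly zero carry no direction; leave them at zero.
    amplified[grad == 0.0] = 0.0
    return amplified

g = np.array([0.001, -0.002, 0.3, 0.0])
print(amplify_signal(g, eta=0.05).tolist())  # [0.05, -0.05, 0.3, 0.0]
```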
+
+# 6 Related Work
+
+Knowledge graph alignment. Knowledge graph alignment techniques have attracted active research in the last decade (Xu et al., 2020b; Sun et al., 2020a; Berrendorf et al., 2021b,a) and can be broadly classified into two categories: (1) translation-based techniques, which embed entities by scoring the plausibility of relational facts with a specific plausibility scoring function, including MTransE (Chen et al., 2017a), IPTransE (Zhu et al., 2017), JAPE (Sun et al., 2017), BootEA (Sun et al., 2018), RSNs (Guo et al., 2019), NAEA (Zhu et al., 2019), OTEA (Pei et al., 2019b), TransEdge (Sun et al., 2019), and HyperKA (Sun et al., 2020c). These methods originate from cross-lingual word embedding techniques and are thus able to capture fine-grained fact semantics, but they fail to preserve the global topological structure of knowledge graphs; (2) GCN-based methods, which utilize GCN models to capture the global structure of knowledge graphs by recursively aggregating the features of each entity's neighbors, such as GCN-Align (Wang et al., 2018), SEA (Pei et al., 2019a), MuGNN (Cao et al., 2019), HopGCN (Xu et al., 2019c), NAEA (Zhu et al., 2019), AVR-GCN (Ye et al., 2019), RDGCN (Wu et al., 2019a), HGCN-JE (Wu et al., 2019b), KECG (Li et al., 2019), MRAEA (Mao et al., 2020a), AliNet (Sun et al., 2020d), CG-MuAlign (Zhu et al., 2020), NMN (Wu et al., 2020b), DAT (Zeng et al., 2020), SSP (Nie et al., 2020), RREA (Mao et al., 2020b), DINGAL (Yan et al., 2021), RNM (Zhu et al., 2021), JEANS (Chen et al., 2021), Dual-AMN (Mao et al., 2021), and KE-GCN (Yu et al., 2021). These methods fully exploit topological and neighborhood information to learn better entity representations, but they struggle to model fine-grained fact semantics.
+
+Adversarial attacks on text and graph data. Recent studies have shown that NLP and graph models, especially DNN models, are highly sensitive to adversarial attacks, i.e., carefully crafted small perturbations of the input intended to cause analysis failures (Song et al., 2018; Chen et al., 2020; Xu et al., 2019a; Wang et al., 2019; Zhang et al., 2020a; Huq and Pervin, 2020).
+
+In the NLP area, the majority of research efforts focus on attacking the text input of different tasks and models, including dialogue generation (Niu and Bansal, 2018), machine translation (Belinkov and Bisk, 2018; Tan et al., 2020; Niu et al., 2020), model-agnostic attacks (Wallace et al., 2019; Zang et al., 2020; Morris et al., 2020), natural language inference (Abdou et al., 2020; Chan et al., 2020; Li et al., 2020b), reading comprehension (Jia and Liang, 2017; Blohm et al., 2018; Tan et al., 2020), and sentiment classification (Wu et al., 2020c; Kurita et al., 2020; Wang et al., 2020).
+
+Graph data analysis has attracted active research in the last decade (Cheng et al., 2009; Zhou et al., 2009, 2010; Cheng et al., 2011; Zhou and Liu, 2012; Cheng et al., 2012; Lee et al., 2013; Su et al., 2013; Zhou et al., 2013; Zhou and Liu, 2013; Palanisamy et al., 2014; Zhou et al., 2014; Zhou and Liu, 2014; Su et al., 2015; Zhou et al., 2015b; Bao et al., 2015; Zhou et al., 2015d; Zhou and Liu, 2015; Zhou et al., 2015a,c; Lee et al., 2015; Zhou et al., 2016; Zhou, 2017; Palanisamy et al., 2018; Zhou et al., 2018b,a; Ren et al., 2019; Zhou et al., 2019c,b,d; Zhou and Liu, 2019; Wu et al., 2020a, 2021a; Zhou et al., 2020b; Zhang et al., 2020b; Zhou et al., 2020c,a; Goswami et al., 2020; Zhou et al., 2021b; Zhao et al., 2021; Ren et al., 2021; Jin et al., 2021; Wu et al., 2021b; Zhou et al., 2021a; Zhang et al., 2021; Liu et al., 2021). Various adversarial attack models have been developed to show the vulnerability of graph learning models in node classification (Dai et al., 2018; Zügner et al., 2018; Wang and Gong, 2019; Xu et al., 2019b; Zügner and Gunnemann, 2019; Takahashi, 2019; Entezari et al., 2020; Sun et al., 2020b; Ma et al., 2020; Zügner et al., 2020; Xi et al., 2021; He et al., 2021), community detection (Chen et al., 2017b; Waniek et al., 2018; Chen et al., 2019; Li et al., 2020a), network embedding (Chen et al., 2018; Bojchevski and Gunnemann, 2019; Chang et al., 2020), graph classification (Dai et al., 2018; Xi et al., 2021), link prediction (Zhou et al., 2019a), similarity search (Dey and Medya, 2020), malware detection (Hou et al., 2019), and graph matching (Zhang et al., 2020b).
+
+Only recently, researchers have started to develop adversarial attack techniques to maximally degrade the performance of knowledge graph learning in knowledge graph embedding (Minervini et al., 2017; Pujara et al., 2017; Pezeshkpour et al., 2019; Zhang et al., 2019; Banerjee et al., 2021) and knowledge graph-based dialogue generation (Xu et al., 2020a). REA detects noise in the perturbed inter-graph links for robust cross-lingual entity alignment (Pei et al., 2020). RL-RR aims to produce deceptively perturbed knowledge graphs, which maintain the downstream performance of the original knowledge graph while significantly deviating from the original knowledge graph's semantics and structure (Raman et al., 2021).
+
+# 7 Conclusions
+
+We have studied the problem of adversarial attacks against cross-lingual entity alignment. First, we proposed to utilize the kernel density estimation technique to estimate and maximize the densities of attacked entities, generating effective and unnoticeable perturbations by pushing attacked entities toward dense regions in the two KGs. Second, we analyzed how gradient vanishing causes failures of gradient-based adversarial attacks, and designed an attack signal amplification method to ensure informative signal propagation. The EAA model achieves superior performance compared with representative attack models.
+
+# 8 Ethical Considerations
+
+In this work, all three knowledge graph datasets were publicly released by previous works for research (Sun et al., 2017). All three datasets are widely used for training and evaluating cross-lingual entity alignment, for example, in (Liu et al., 2020; Zhu et al., 2021; Pei et al., 2020; Yan et al., 2021; Mao et al., 2021). All three datasets are openly accessible resources and contain no privacy-related data (such as gender, nickname, birthday, etc.). All three knowledge graph datasets were originally collected and filtered from Wikipedia (under the CC BY-SA 3.0 license), and reusing them for research is permitted; commercial use, however, may require additional permission from the original authors/copyright owners (Wik; Sun et al., 2017). In summary, as a research work, this paper raises no concerns regarding the datasets or other aspects, but anyone who wants to use the same or similar data commercially should further check the licenses.
+
+# References
+
+Answers.com. http://www.answers.com/.
+
+Wikipedia. http://www.wikipedia.org/.
+
+Mostafa Abdou, Vinit Ravishankar, Maria Barrett, Yonatan Belinkov, Desmond Elliott, and Anders Søgaard. 2020. The sensitivity of language models and humans to winograd schema perturbations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 7590-7604.
+
+Anish Athalye, Nicholas Carlini, and David A. Wagner. 2018. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholm, Sweden, July 10-15, 2018, pages 274-283.
+
+Soren Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary G. Ives. 2007. Dbpedia: A nucleus for a web of open data. In The Semantic Web, 6th International Semantic Web Conference, 2nd Asian Semantic Web Conference, ISWC 2007 + ASWC 2007, Busan, Korea, November 11-15, 2007., pages 722-735.
+
+Prithu Banerjee, Lingyang Chu, Yong Zhang, Laks V.S. Lakshmanan, and Lanjun Wang. 2021. Stealthy targeted data poisoning attack on knowledge graphs. In Proceedings of the 37th IEEE International Conference on Data Engineering, IEEE 2021.
+
+Xianqiang Bao, Ling Liu, Nong Xiao, Yang Zhou, and Qi Zhang. 2015. Policy-driven autonomic configuration management for nosql. In Proceedings of the 2015 IEEE International Conference on Cloud Computing (CLOUD'15), pages 245-252, New York, NY.
+
+Yonatan Belinkov and Yonatan Bisk. 2018. Synthetic and natural noise both break neural machine translation. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings.
+
+Max Berrendorf, Evgeniy Faerman, and Volker Tresp. 2021a. Active learning for entity alignment. In Advances in Information Retrieval - 43rd European Conference on IR Research, ECIR 2021, Virtual Event, March 28 - April 1, 2021, Proceedings, Part I, pages 48-62.
+
+Max Berrendorf, Ludwig Wacker, and Evgeniy Faerman. 2021b. A critical assessment of state-of-the-art in entity alignment. In Advances in Information Retrieval - 43rd European Conference on IR Research, ECIR 2021, Virtual Event, March 28 - April 1, 2021, Proceedings, Part II, pages 18-32.
+
+Matthias Blohm, Glorianna Jagfeld, Ekta Sood, Xiang Yu, and Ngoc Thang Vu. 2018. Comparing attention-based convolutional and recurrent neural networks: Success and limitations in machine reading comprehension. In Proceedings of the 22nd Conference on Computational Natural Language Learning, CoNLL 2018, Brussels, Belgium, October 31 - November 1, 2018, pages 108-118.
+
+Aleksandar Bojchevski and Stephan Gunnemann. 2019. Adversarial attacks on node embeddings via graph poisoning. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, pages 695-704.
+
+Yixin Cao, Zhiyuan Liu, Chengjiang Li, Zhiyuan Liu, Juanzi Li, and Tat-Seng Chua. 2019. Multi-channel graph neural network for entity alignment. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 1452-1461.
+
+Alvin Chan, Yi Tay, Yew-Soon Ong, and Aston Zhang. 2020. Poison attacks against text datasets with conditional adversarially regularized autoencoder. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, EMNLP 2020, Online Event, 16-20 November 2020, pages 4175-4189.
+
+Heng Chang, Yu Rong, Tingyang Xu, Wenbing Huang, Honglei Zhang, Peng Cui, Wenwu Zhu, and Junzhou Huang. 2020. A restricted black-box adversarial framework towards attacking graph embedding models. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, New York, NY, USA, February 7 - 12, 2020.
+
+Jinyin Chen, Lihong Chen, Yixian Chen, Minghao Zhao, Shanqing Yu, Qi Xuan, and Xiaoniu Yang. 2019. Ga-based q-attack on community detection. IEEE Trans. Comput. Social Systems, 6(3):491-503.
+
+Jinyin Chen, Yangyang Wu, Xuanheng Xu, Yixian Chen, Haibin Zheng, and Qi Xuan. 2018. Fast gradient attack on network embedding. CoRR, abs/1809.02797.
+Liang Chen, Jintang Li, Jiaying Peng, Tao Xie, Zengxu Cao, Kun Xu, Xiangnan He, and Zibin Zheng. 2020. A survey of adversarial learning on graphs. CoRR, abs/2003.05730.
+Muhao Chen, Weijia Shi, Ben Zhou, and Dan Roth. 2021. Cross-lingual entity alignment with incidental supervision. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, EACL 2021, Online, April 19 - 23, 2021, pages 645-658.
+Muhao Chen, Yingtao Tian, Mohan Yang, and Carlo Zaniolo. 2017a. Multilingual knowledge graph embeddings for cross-lingual knowledge alignment. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017, Melbourne, Australia, August 19-25, 2017, pages 1511-1517.
+Yizheng Chen, Yacin Nadji, Athanasios Kountouras, Fabian Monrose, Roberto Perdisci, Manos Antonakakis, and Nikolaos Vasiloglou. 2017b. Practical attacks against graph-based clustering. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, CCS 2017, Dallas, TX, USA, October 30-November 03, 2017, pages 1125-1142.
+Hong Cheng, David Lo, Yang Zhou, Xiaoyin Wang, and Xifeng Yan. 2009. Identifying bug signatures using discriminative graph mining. In Proceedings of the 18th International Symposium on Software Testing and Analysis (ISSTA'09), pages 141-152, Chicago, IL.
+Hong Cheng, Yang Zhou, Xin Huang, and Jeffrey Xu Yu. 2012. Clustering large attributed information networks: An efficient incremental computing approach. Data Mining and Knowledge Discovery (DMKD), 25(3):450-477.
+Hong Cheng, Yang Zhou, and Jeffrey Xu Yu. 2011. Clustering large attributed graphs: A balance between structural and attribute similarities. ACM Transactions on Knowledge Discovery from Data (TKDD), 5(2):1-33.
+Hanjun Dai, Hui Li, Tian Tian, Xin Huang, Lin Wang, Jun Zhu, and Le Song. 2018. Adversarial attack on graph structured data. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholm, Sweden, July 10-15, 2018, pages 1123-1132.
+Caitlin Dewey. 2016. You probably haven't even noticed google's sketchy quest to control the world's knowledge. https://www.washingtonpost.com/news/the-intersect/wp/2016/05/11/you-probably-havent-even-noticed-goggles-sketchy-quest-to-control-the-worlds-knowledge/.
+
+Palash Dey and Sourav Medya. 2020. Manipulating node similarity measures in networks. In Proceedings of the 19th International Conference on Autonomous Agents and MultiAgent Systems, AAMAS '20, Auckland, New Zealand, May 9-13, 2020.
+Negin Entezari, Saba Al-Sayouri, Amirali Darvishzadeh, and Evangelos Papalexakis. 2020. All you need is low (rank): Defending against adversarial attacks on graphs. In Proceedings of the 13th ACM International Conference on Web Search and Data Mining, WSDM 2020, Houston, TX, February 3-7, 2020.
+Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversarial examples. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
+Sayan Goswami, Ayam Pokhrel, Kisung Lee, Ling Liu, Qi Zhang, and Yang Zhou. 2020. Graphmap: Scalable iterative graph processing using nosql. The Journal of Supercomputing (TJSC), 76(9):6619-6647.
+Lingbing Guo, Zequn Sun, and Wei Hu. 2019. Learning to exploit long-term relational dependencies in knowledge graphs. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, pages 2505-2514.
+Xinlei He, Jinyuan Jia, Michael Backes, Neil Zhenqiang Gong, and Yang Zhang. 2021. Stealing links from graph neural networks. In 30th USENIX Security Symposium, USENIX Security 2021, August 11-13, 2021.
+Johannes Hoffart, Fabian M. Suchanek, Klaus Berberich, Edwin Lewis-Kelham, Gerard de Melo, and Gerhard Weikum. 2011. YAGO2: exploring and querying world knowledge in time, space, context, and many languages. In Proceedings of the 20th International Conference on World Wide Web, WWW 2011, Hyderabad, India, March 28 - April 1, 2011 (Companion Volume), pages 229-232.
+Shifu Hou, Yujie Fan, Yiming Zhang, Yanfang Ye, Jingwei Lei, Wenqiang Wan, Jiabin Wang, Qi Xiong, and Fudong Shao. 2019. $\alpha$ Cyber: Enhancing robustness of android malware detection system against adversarial attacks on heterogeneous graph based model. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, CIKM 2019, Beijing, China, November 3-7, 2019, pages 609-618.
+Aminul Huq and Mst. Tasnim Pervin. 2020. Adversarial attacks and defense on texts: A survey. CoRR, abs/2005.14108.
+Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 2021-2031.
+Ruoming Jin, Dong Li, Jing Gao, Zhi Liu, Li Chen, and Yang Zhou. 2021. Towards a better understanding of linear models for recommendation. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD'21), Virtual Event.
+Keita Kurita, Paul Michel, and Graham Neubig. 2020. Weight poisoning attacks on pretrained models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 2793-2806.
+Kisung Lee, Ling Liu, Karsten Schwan, Calton Pu, Qi Zhang, Yang Zhou, Emre Yigitoglu, and Pingpeng Yuan. 2015. Scaling iterative graph computations with graphmap. In Proceedings of the 27th IEEE international conference for High Performance Computing, Networking, Storage and Analysis (SC'15), pages 57:1-57:12, Austin, TX.
+Kisung Lee, Ling Liu, Yuzhe Tang, Qi Zhang, and Yang Zhou. 2013. Efficient and customizable data partitioning framework for distributed big pdf data processing in the cloud. In Proceedings of the 2013 IEEE International Conference on Cloud Computing (CLOUD'13), pages 327-334, Santa Clara, CA.
+Chengjiang Li, Yixin Cao, Lei Hou, Jiaxin Shi, Juanzi Li, and Tat-Seng Chua. 2019. Semi-supervised entity alignment via joint knowledge embedding model and cross-graph model. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 2723-2732.
+Jia Li, Honglei Zhang, Zhichao Han, Yu Rong, Hong Cheng, and Junzhou Huang. 2020a. Adversarial attack on community detection by hiding individuals. In WWW '20: The Web Conference 2020, Taipei, Taiwan, April 20-24, 2020, pages 917-927.
+Linyang Li, Ruotian Ma, Qipeng Guo, Xiangyang Xue, and Xipeng Qiu. 2020b. BERT-ATTACK: adversarial attack against BERT using BERT. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 6193-6202.
+Ji Liu, Jizhou Huang, Yang Zhou, Xuhong Li, Shilei Ji, Haoyi Xiong, and Dejing Dou. 2021. From distributed machine learning to federated learning: A survey. CoRR, abs/2104.14362.
+Zhiyuan Liu, Yixin Cao, Liangming Pan, Juanzi Li, and Tat-Seng Chua. 2020. Exploring and evaluating attributes, values, and structures for entity alignment. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 6355-6364.
+Jiaqi Ma, Shuangrui Ding, and Qiaozhu Mei. 2020. Towards more practical adversarial attacks on graph neural networks. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, Online.
+Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2018. Towards deep learning models resistant to adversarial attacks. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings.
+Xin Mao, Wenting Wang, Yuanbin Wu, and Man Lan. 2021. Boosting the speed of entity alignment $10x$ : Dual attention matching network with normalized hard sample mining. In WWW '21: The Web Conference 2021, April 19-23, 2021.
+Xin Mao, Wenting Wang, Huimin Xu, Man Lan, and Yuanbin Wu. 2020a. MRAEA: an efficient and robust entity alignment approach for cross-lingual knowledge graph. In WSDM '20: The Thirteenth ACM International Conference on Web Search and Data Mining, Houston, TX, USA, February 3-7, 2020, pages 420-428.
+Xin Mao, Wenting Wang, Huimin Xu, Yuanbin Wu, and Man Lan. 2020b. Relational reflection entity alignment. In CIKM '20: The 29th ACM International Conference on Information and Knowledge Management, Virtual Event, Ireland, October 19-23, 2020, pages 1095-1104.
+George A. Miller. 1992. WORDNET: a lexical database for english. In Speech and Natural Language: Proceedings of a Workshop Held at Harriman, New York, USA, February 23-26, 1992.
+Pasquale Minervini, Thomas Demeester, Tim Rocktäschel, and Sebastian Riedel. 2017. Adversarial sets for regularising neural link predictors. In Proceedings of the Thirty-Third Conference on Uncertainty in Artificial Intelligence, UAI 2017, Sydney, Australia, August 11-15, 2017.
+John X. Morris, Eli Lifland, Jin Yong Yoo, Jake Grigsby, Di Jin, and Yanjun Qi. 2020. Textattack: A framework for adversarial attacks, data augmentation, and adversarial training in NLP. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, EMNLP 2020 - Demos, Online, November 16-20, 2020, pages 119-126.
+Hao Nie, Xianpei Han, Le Sun, Chi Man Wong, Qiang Chen, Suhui Wu, and Wei Zhang. 2020. Global structure and local semantics-preserved embeddings for entity alignment. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI 2020, pages 3658-3664.
+
+Tong Niu and Mohit Bansal. 2018. Adversarial over-sensitivity and over-stability strategies for dialogue models. In Proceedings of the 22nd Conference on Computational Natural Language Learning, CoNLL 2018, Brussels, Belgium, October 31 - November 1, 2018, pages 486-496.
+Xing Niu, Prashant Mathur, Georgiana Dinu, and Yaser Al-Onaizan. 2020. Evaluating robustness to input perturbations for neural machine translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 8538-8544.
+Balaji Palanisamy, Ling Liu, Kisung Lee, Shicong Meng, Yuzhe Tang, and Yang Zhou. 2014. Anonymizing continuous queries with delay-tolerant mix-zones over road networks. Distributed and Parallel Databases (DAPD), 32(1):91-118.
+Balaji Palanisamy, Ling Liu, Yang Zhou, and Qingyang Wang. 2018. Privacy-preserving publishing of multilevel utility-controlled graph datasets. ACM Transactions on Internet Technology (TOIT), 18(2):24:1-24:21.
+Emanuel Parzen. 1962. On estimation of a probability density function and mode. The Annals of Mathematical Statistics, 33(3):1065-1076.
+Shichao Pei, Lu Yu, Robert Hoehndorf, and Xiangliang Zhang. 2019a. Semi-supervised entity alignment via knowledge graph embedding with awareness of degree difference. In The World Wide Web Conference, WWW 2019, San Francisco, CA, USA, May 13-17, 2019, pages 3130-3136.
+Shichao Pei, Lu Yu, Guoxian Yu, and Xiangliang Zhang. 2020. REA: robust cross-lingual entity alignment between knowledge graphs. In KDD '20: The 26th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Virtual Event, CA, USA, August 23-27, 2020, pages 2175-2184.
+Shichao Pei, Lu Yu, and Xiangliang Zhang. 2019b. Improving cross-lingual entity alignment via optimal transport. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 10-16, 2019, pages 3231-3237.
+Jeffrey Pennington, Samuel S. Schoenholz, and Surya Ganguli. 2017. Resurrecting the sigmoid in deep learning through dynamical isometry: theory and practice. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems (NIPS) 2017, 4-9 December 2017, Long Beach, CA, USA, pages 4785-4795.
+Pouya Pezeshkpour, Yifan Tian, and Sameer Singh. 2019. Investigating robustness and interpretability of link prediction via adversarial modifications. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 3336-3347.
+Jay Pujara, Eriq Augustine, and Lise Getoor. 2017. Sparsity and noise: Where knowledge graph embeddings fall short. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 1751-1756.
+Mrigank Raman, Siddhant Agarwal, Peifeng Wang, Aaron Chan, Hansen Wang, Sungchul Kim, Ryan Rossi, Handong Zhao, Nedim Lipka, and Xiang Ren. 2021. Learning to deceive knowledge graph augmented models via targeted perturbation. In 9th International Conference on Learning Representations, ICLR 2021, Online, May 4-7, 2021, Conference Track Proceedings.
+Jiaxiang Ren, Zijie Zhang, Jiayin Jin, Xin Zhao, Sixing Wu, Yang Zhou, Yelong Shen, Tianshi Che, Ruoming Jin, and Dejing Dou. 2021. Integrated defense for resilient graph matching. In Proceedings of the 38th International Conference on Machine Learning, (ICML'21), pages 8982-8997, Virtual Event.
+Jiaxiang Ren, Yang Zhou, Ruoming Jin, Zijie Zhang, Dejing Dou, and Pengwei Wang. 2019. Dual adversarial learning based network alignment. In Proceedings of the 19th IEEE International Conference on Data Mining (ICDM'19), pages 1288-1293, Beijing, China.
+Wenzhuo Song, Shengsheng Wang, Bo Yang, You Lu, Xuehua Zhao, and Xueyan Liu. 2018. Learning node and edge embeddings for signed networks. Neurocomputing, 319:42-54.
+Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA., pages 4444-4451.
+Zhiyuan Su, Ling Liu, Mingchu Li, Xinxin Fan, and Yang Zhou. 2013. Servicetrust: Trust management in service provision networks. In Proceedings of the 10th IEEE International Conference on Services Computing (SCC'13), pages 272-279, Santa Clara, CA.
+Zhiyuan Su, Ling Liu, Mingchu Li, Xinxin Fan, and Yang Zhou. 2015. Reliable and resilient trust management in distributed service provision networks. ACM Transactions on the Web (TWEB), 9(3):1-37.
+Jian Sun, Yu Zhou, and Chengqing Zong. 2020a. Dual attention network for cross-lingual entity alignment. In Proceedings of the 28th International Conference on Computational Linguistics, COLING 2020, Barcelona, Spain (Online), December 8-13, 2020, pages 3190-3201.
+
+Yiwei Sun, Suhang Wang, Xianfeng Tang, Tsung-Yu Hsieh, and Vasant G. Honavar. 2020b. Adversarial attacks on graph neural networks via node injections: A hierarchical reinforcement learning approach. In WWW '20: The Web Conference 2020, Taipei, Taiwan, April 20-24, 2020, pages 673-683.
+Zequn Sun, Muhao Chen, Wei Hu, Chengming Wang, Jian Dai, and Wei Zhang. 2020c. Knowledge association with hyperbolic knowledge graph embeddings. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 5704-5716.
+Zequn Sun, Wei Hu, and Chengkai Li. 2017. Cross-lingual entity alignment via joint attribute-preserving embedding. In The Semantic Web - ISWC 2017 - 16th International Semantic Web Conference, Vienna, Austria, October 21-25, 2017, Proceedings, Part I, pages 628-644.
+Zequn Sun, Wei Hu, Qingheng Zhang, and Yuzhong Qu. 2018. Bootstrapping entity alignment with knowledge graph embedding. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden, pages 4396-4402.
+Zequn Sun, JiaCheng Huang, Wei Hu, Muhao Chen, Lingbing Guo, and Yuzhong Qu. 2019. Transedge: Translating relation-contextualized embeddings for knowledge graphs. In The Semantic Web - ISWC 2019 - 18th International Semantic Web Conference, Auckland, New Zealand, October 26-30, 2019, Proceedings, Part I, pages 612-629.
+Zequn Sun, Chengming Wang, Wei Hu, Muhao Chen, Jian Dai, Wei Zhang, and Yuzhong Qu. 2020d. Knowledge graph alignment network with gated multi-hop neighborhood aggregation. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 222-229.
+Tsubasa Takahashi. 2019. Indirect adversarial attacks via poisoning neighbors for graph convolutional networks. In 2019 IEEE International Conference on Big Data (Big Data), Los Angeles, CA, USA, December 9-12, 2019, pages 1395-1400.
+Samson Tan, Shafiq R. Joty, Min-Yen Kan, and Richard Socher. 2020. It's morphin' time! combating linguistic discrimination with inflectional perturbations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 2920-2935.
+Xiaobin Tang, Jing Zhang, Bo Chen, Yang Yang, Hong Chen, and Cuiping Li. 2020. BERT-INT: A BERT-based interaction model for knowledge graph alignment. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI 2020, pages 3174-3180.
+Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. 2019. Universal adversarial triggers for attacking and analyzing NLP. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 2153-2162.
+Binghui Wang and Neil Zhenqiang Gong. 2019. Attacking graph-based classification via manipulating the graph structure. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, CCS 2019, London, UK, November 11-15, 2019, pages 2023-2040.
+Boxin Wang, Hengzhi Pei, Boyuan Pan, Qian Chen, Shuohang Wang, and Bo Li. 2020. T3: tree-autoencoder constrained adversarial text generation for targeted attack. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 6134-6150.
+Wenqi Wang, Lina Wang, Run Wang, Zhibo Wang, and Aoshuang Ye. 2019. Towards a robust deep neural network in texts: A survey. CoRR, abs/1902.07285.
+Zhichun Wang, Qingsong Lv, Xiaohan Lan, and Yu Zhang. 2018. Cross-lingual knowledge graph alignment via graph convolutional networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 349-357.
+Marcin Waniek, Tomasz P. Michalak, Michael J. Wooldridge, and Talal Rahwan. 2018. Hiding individuals and communities in a social network. Nature Human Behaviour, 2:139-147.
+Sixing Wu, Ying Li, Dawei Zhang, Yang Zhou, and Zhonghai Wu. 2020a. Diverse and informative dialogue generation with context-specific commonsense knowledge awareness. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, (ACL'20), pages 5811-5820, Online.
+Sixing Wu, Ying Li, Dawei Zhang, Yang Zhou, and Zhonghai Wu. 2021a. Topicka: Generating commonsense knowledge-aware dialogue responses towards the recommended topic fact. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, (IJCAI'20), pages 3766-3772, Online.
+Sixing Wu, Minghui Wang, Dawei Zhang, Yang Zhou, Ying Li, and Zhonghai Wu. 2021b. Knowledge-aware dialogue generation via hierarchical infobox accessing and infobox-dialogue interaction graph network. In Proceedings of the 30th International Joint Conference on Artificial Intelligence, (IJCAI'21), Online.
+Yuting Wu, Xiao Liu, Yansong Feng, Zheng Wang, Rui Yan, and Dongyan Zhao. 2019a. Relation-aware entity alignment for heterogeneous knowledge graphs. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 10-16, 2019, pages 5278-5284.
+Yuting Wu, Xiao Liu, Yansong Feng, Zheng Wang, and Dongyan Zhao. 2019b. Jointly learning entity and relation representations for entity alignment. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 240-249.
+Yuting Wu, Xiao Liu, Yansong Feng, Zheng Wang, and Dongyan Zhao. 2020b. Neighborhood matching network for entity alignment. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 6477-6487.
+Zhiyong Wu, Yun Chen, Ben Kao, and Qun Liu. 2020c. Perturbed masking: Parameter-free probing for analyzing and interpreting BERT. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 4166-4176.
+Zhaohan Xi, Ren Pang, Shouling Ji, and Ting Wang. 2021. Graph backdoor. In 30th USENIX Security Symposium, USENIX Security 2021, August 11-13, 2021.
+Han Xu, Yao Ma, Haochen Liu, Debayan Deb, Hui Liu, Jiliang Tang, and Anil K. Jain. 2019a. Adversarial attacks and defenses in images, graphs and text: A review. CoRR, abs/1909.08072.
+Hongcai Xu, Junpeng Bao, and Gaojie Zhang. 2020a. Dynamic knowledge graph-based dialogue generation with improved adversarial meta-learning. CoRR, abs/2004.08833.
+Kaidi Xu, Hongge Chen, Sijia Liu, Pin-Yu Chen, Tsui-Wei Weng, Mingyi Hong, and Xue Lin. 2019b. Topology attack and defense for graph neural networks: An optimization perspective. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 10-16, 2019, pages 3961-3967.
+Kun Xu, Linfeng Song, Yansong Feng, Yan Song, and Dong Yu. 2020b. Coordinated reasoning for cross-lingual knowledge graph alignment. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 9354-9361.
+Kun Xu, Liwei Wang, Mo Yu, Yansong Feng, Yan Song, Zhiguo Wang, and Dong Yu. 2019c. Crosslingual knowledge graph alignment via graph matching neural network. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Papers, pages 3156-3161.
+Yuchen Yan, Lihui Liu, Yikun Ban, Baoyu Jing, and Hanghang Tong. 2021. Dynamic knowledge graph alignment. In Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021.
+Rui Ye, Xin Li, Yujie Fang, Hongyu Zang, and Mingzhong Wang. 2019. A vectorized relational graph convolutional network for multi-relational network alignment. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 10-16, 2019, pages 4135-4141.
+Donghan Yu, Yiming Yang, Ruohong Zhang, and Yuexin Wu. 2021. Knowledge embedding based graph convolutional network. In WWW '21: The Web Conference 2021, April 19-23, 2021.
+Yuan Zang, Fanchao Qi, Chenghao Yang, Zhiyuan Liu, Meng Zhang, Qun Liu, and Maosong Sun. 2020. Word-level textual adversarial attacking as combinatorial optimization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 6066-6080.
+Weixin Zeng, Xiang Zhao, Wei Wang, Jiuyang Tang, and Zhen Tan. 2020. Degree-aware alignment for entities in tail. In Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval, SIGIR 2020, Virtual Event, China, July 25-30, 2020, pages 811-820.
+Gong Zhang, Yang Zhou, Sixing Wu, Zeru Zhang, and Dejing Dou. 2021. Cross-lingual entity alignment with adversarial kernel embedding and adversarial knowledge translation. CoRR, abs/2104.07837.
+Hengtong Zhang, Tianhang Zheng, Jing Gao, Chenglin Miao, Lu Su, Yaliang Li, and Kui Ren. 2019. Data poisoning attack against knowledge graph embedding. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 10-16, 2019, pages 4853-4859.
+Wei Emma Zhang, Quan Z. Sheng, Ahoud Abdulrahmn F. Alhazmi, and Chenliang Li. 2020a. Adversarial attacks on deep-learning models in natural language processing: A survey. ACM Trans. Intell. Syst. Technol., 11(3):24:1-24:41.
+
+Zijie Zhang, Zeru Zhang, Yang Zhou, Yelong Shen, Ruoming Jin, and Dejing Dou. 2020b. Adversarial attacks on deep graph matching. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020 (NeurIPS'20), Virtual.
+Xin Zhao, Zeru Zhang, Zijie Zhang, Lingfei Wu, Jiayin Jin, Yang Zhou, Ruoming Jin, Dejing Dou, and Da Yan. 2021. Expressive 1-lipschitz neural networks for robust multiple graph learning against adversarial attacks. In Proceedings of the 38th International Conference on Machine Learning, (ICML'21), pages 12719-12735, Virtual Event.
+Kai Zhou, Tomasz P. Michalak, Marcin Waniek, Talal Rahwan, and Yevgeniy Vorobeychik. 2019a. Attacking similarity-based link prediction in social networks. In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, AAMAS '19, Montreal, QC, Canada, May 13-17, 2019, pages 305-313.
+Yang Zhou. 2017. Innovative Mining, Processing, and Application of Big Graphs. Ph.D. thesis, Georgia Institute of Technology, Atlanta, GA, USA.
+Yang Zhou, Amnay Amimeur, Chao Jiang, Dejing Dou, Ruoming Jin, and Pengwei Wang. 2018a. Density-aware local siamese autoencoder network embedding with autoencoder graph clustering. In Proceedings of the 2018 IEEE International Conference on Big Data (BigData'18), pages 1162-1167, Seattle, WA.
+Yang Zhou, Hong Cheng, and Jeffrey Xu Yu. 2009. Graph clustering based on structural/attribute similarities. Proceedings of the VLDB Endowment (PVLDB), 2(1):718-729.
+Yang Zhou, Hong Cheng, and Jeffrey Xu Yu. 2010. Clustering large attributed graphs: An efficient incremental approach. In Proceedings of the 10th IEEE International Conference on Data Mining (ICDM'10), pages 689-698, Sydney, Australia.
+Yang Zhou, Chao Jiang, Zijie Zhang, Dejing Dou, Ruoming Jin, and Pengwei Wang. 2019b. Integrating local vertex/edge embedding via deep matrix fusion and siamese multi-label classification. In Proceedings of the 2019 IEEE International Conference on Big Data (BigData'19), pages 1018-1027, Los Angeles, CA.
+Yang Zhou, Kisung Lee, Ling Liu, Qi Zhang, and Balaji Palanisamy. 2019c. Enhancing collaborative filtering with multi-label classification. In Proceedings of the 2019 International Conference on Computational Data and Social Networks (CSoNet'19), pages 323-338, Ho Chi Minh City, Vietnam.
+Yang Zhou and Ling Liu. 2012. Clustering analysis in large graphs with rich attributes. In Dawn E. Holmes and Lakhmi C. Jain, editors, Data Mining: Foundations and Intelligent Paradigms: Volume 1: Clustering, Association and Classification. Springer.
+
+Yang Zhou and Ling Liu. 2013. Social influence based clustering of heterogeneous information networks. In Proceedings of the 19th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD'13), pages 338-346, Chicago, IL.
+Yang Zhou and Ling Liu. 2014. Activity-edge centric multi-label classification for mining heterogeneous information networks. In Proceedings of the 20th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD'14), pages 1276-1285, New York, NY.
+Yang Zhou and Ling Liu. 2015. Social influence based clustering and optimization over heterogeneous information networks. ACM Transactions on Knowledge Discovery from Data (TKDD), 10(1):1-53.
+Yang Zhou and Ling Liu. 2019. Approximate deep network embedding for mining large-scale graphs. In Proceedings of the 2019 IEEE International Conference on Cognitive Machine Intelligence (CogMI'19), pages 53-60, Los Angeles, CA.
+Yang Zhou, Ling Liu, and David Buttler. 2015a. Integrating vertex-centric clustering with edge-centric clustering for meta path graph analysis. In Proceedings of the 21st ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD'15), pages 1563-1572, Sydney, Australia.
+Yang Zhou, Ling Liu, Kisung Lee, Balaji Palanisamy, and Qi Zhang. 2020a. Improving collaborative filtering with social influence over heterogeneous information networks. ACM Transactions on Internet Technology (TOIT), 20(4):36:1-36:29.
+Yang Zhou, Ling Liu, Kisung Lee, Calton Pu, and Qi Zhang. 2015b. Fast iterative graph computation with resource aware graph parallel abstractions. In Proceedings of the 24th ACM Symposium on High-Performance Parallel and Distributed Computing (HPDC'15), pages 179-190, Portland, OR.
+Yang Zhou, Ling Liu, Kisung Lee, and Qi Zhang. 2015c. Graphtwist: Fast iterative graph computation with two-tier optimizations. Proceedings of the VLDB Endowment (PVLDB), 8(11):1262-1273.
+Yang Zhou, Ling Liu, Chang-Shing Perng, Anca Sailer, Ignacio Silva-Lepe, and Zhiyuan Su. 2013. Ranking services by service network structure and service attributes. In Proceedings of the 20th International Conference on Web Service (ICWS'13), pages 26-33, Santa Clara, CA.
+Yang Zhou, Ling Liu, Calton Pu, Xianqiang Bao, Kisung Lee, Balaji Palanisamy, Emre Yigitoglu, and Qi Zhang. 2015d. Clustering service networks with entity, attribute and link heterogeneity. In Proceedings of the 22nd International Conference on Web Service (ICWS'15), pages 257-264, New York, NY.
+Yang Zhou, Ling Liu, Sangeetha Seshadri, and Lawrence Chiu. 2016. Analyzing enterprise storage workloads with graph modeling and clustering. IEEE Journal on Selected Areas in Communications (JSAC), 34(3):551-574.
+Yang Zhou, Jiaxiang Ren, Dejing Dou, Ruoming Jin, Jingyi Zheng, and Kisung Lee. 2020b. Robust meta network embedding against adversarial attacks. In Proceedings of the 20th IEEE International Conference on Data Mining (ICDM'20), pages 1448-1453, Sorrento, Italy.
+Yang Zhou, Jiaxiang Ren, Ruoming Jin, Zijie Zhang, Dejing Dou, and Da Yan. 2020c. Unsupervised multiple network alignment with multinominal GAN and variational inference. In Proceedings of the 2020 IEEE International Conference on Big Data (BigData'20), pages 868-877, Atlanta, GA.
+Yang Zhou, Jiaxiang Ren, Ruoming Jin, Zijie Zhang, Jingyi Zheng, Zhe Jiang, Da Yan, and Dejing Dou. 2021a. Unsupervised adversarial network alignment with reinforcement learning. To appear in ACM Transactions on Knowledge Discovery from Data (TKDD).
+Yang Zhou, Jiaxiang Ren, Sixing Wu, Dejing Dou, Ruoming Jin, Zijie Zhang, and Pengwei Wang. 2019d. Semi-supervised classification-based local vertex ranking via dual generative adversarial nets. In Proceedings of the 2019 IEEE International Conference on Big Data (BigData'19), pages 1267-1273, Los Angeles, CA.
+Yang Zhou, Sangeetha Seshadri, Lawrence Chiu, and Ling Liu. 2014. Graphlens: Mining enterprise storage workloads using graph analytics. In Proceedings of the 2014 IEEE International Congress on Big Data (BigData'14), pages 1-8, Anchorage, AK.
+Yang Zhou, Sixing Wu, Chao Jiang, Zijie Zhang, Dejing Dou, Ruoming Jin, and Pengwei Wang. 2018b. Density-adaptive local edge representation learning with generative adversarial network multi-label edge classification. In Proceedings of the 18th IEEE International Conference on Data Mining (ICDM'18), pages 1464-1469, Singapore.
+Yang Zhou, Zeru Zhang, Sixing Wu, Victor Sheng, Xiaoying Han, Zijie Zhang, and Ruoming Jin. 2021b. Robust network alignment via attack signal scaling and adversarial perturbation elimination. In Proceedings of the 30th Web Conference (WWW'21), pages 3884-3895, Virtual Event / Ljubljana, Slovenia.
+Hao Zhu, Ruobing Xie, Zhiyuan Liu, and Maosong Sun. 2017. Iterative entity alignment via joint knowledge embeddings. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017, Melbourne, Australia, August 19-25, 2017, pages 4258-4264.
+Qi Zhu, Hao Wei, Bunyamin Sisman, Da Zheng, Christos Faloutsos, Xin Luna Dong, and Jiawei Han. 2020. Collective multi-type entity alignment between knowledge graphs. In WWW '20: The Web Conference 2020, Taipei, Taiwan, April 20-24, 2020, pages 2241-2252.
+Qiannan Zhu, Xiaofei Zhou, Jia Wu, Jianlong Tan, and Li Guo. 2019. Neighborhood-aware attentional representation for multilingual knowledge graphs. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 10-16, 2019, pages 1943-1949.
+Yao Zhu, Hongzhi Liu, Zhonghai Wu, and Yingpeng Du. 2021. Relation-aware neighborhood matching model for entity alignment. In Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021.
+Daniel Zügner, Amir Akbarnejad, and Stephan Günnemann. 2018. Adversarial attacks on neural networks for graph data. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD 2018, London, UK, August 19-23, 2018, pages 2847-2856.
+Daniel Zügner, Oliver Borchert, Amir Akbarnejad, and Stephan Günnemann. 2020. Adversarial attacks on graph neural networks: Perturbations and their patterns. ACM Trans. Knowl. Discov. Data, 14(5):57:1-57:31.
+Daniel Zügner and Stephan Günnemann. 2019. Adversarial attacks on graph neural networks via meta learning. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019.
+
+# Appendix
+
+# A Theoretical Analysis
+
+Theorem 1. Let entity embedding vectors $\tilde{\mathbf{e}}_k^2$ and $\tilde{\mathbf{e}}_l^2$ $(1\leq k,l\leq N^{2})$ be the most similar and least similar to $\tilde{\mathbf{e}}_i^1$ respectively, i.e., $\tilde{\mathbf{e}}_k^2 = \mathrm{argmax}_{\tilde{\mathbf{e}}_k^2}(\tilde{\mathbf{e}}_i^1)^T\cdot \tilde{\mathbf{e}}_k^2$ and $\tilde{\mathbf{e}}_l^2 = \mathrm{argmin}_{\tilde{\mathbf{e}}_l^2}(\tilde{\mathbf{e}}_i^1)^T\cdot \tilde{\mathbf{e}}_l^2$, and let $c = (\mathbf{e}_i^1)^T\cdot \mathbf{e}_k^2$. Also, suppose that $d$ is the minimal norm of the entity embedding vectors in $G^{2}$, i.e., $d = \min_{\tilde{\mathbf{e}}_m^2}\| \tilde{\mathbf{e}}_m^2\| _2$ for all $e_m^2\in E^2$. For a given $0 < \eta < d / 2$, if $\alpha < \sqrt{\frac{1}{c}\log\frac{d - \eta}{\eta}}$, then $1 - \sigma ((\tilde{\mathbf{e}}_i^1)^T\cdot \tilde{\mathbf{e}}_j^2) > \eta /\| \tilde{\mathbf{e}}_j^2\| _2$ for all $e_j^2\in E^2$.
+
+Proof. The inequality $1 - \sigma ((\tilde{\mathbf{e}}_i^1)^T\cdot \tilde{\mathbf{e}}_j^2) > \eta /\| \tilde{\mathbf{e}}_j^2\| _2$ is equivalent to $\sigma ((\tilde{\mathbf{e}}_i^1)^T\cdot \tilde{\mathbf{e}}_j^2) < 1 - \eta /\| \tilde{\mathbf{e}}_j^2\| _2$, i.e., $\frac{1}{1 + \exp(-(\tilde{\mathbf{e}}_i^1)^T\cdot\tilde{\mathbf{e}}_j^2)} < 1 - \eta /\| \tilde{\mathbf{e}}_j^2\| _2$. As $(\tilde{\mathbf{e}}_i^1)^T\cdot \tilde{\mathbf{e}}_j^2 = \alpha^2(\mathbf{e}_i^1)^T\cdot \mathbf{e}_j^2 \leq \alpha^2 c$, we have $\frac{1}{1 + \exp(-\alpha^2(\mathbf{e}_i^1)^T\cdot\mathbf{e}_j^2)}\leq \frac{1}{1 + \exp(-\alpha^2c)}$. Hence, if we can prove $\frac{1}{1 + \exp(-\alpha^2c)} < 1 - \eta /\| \tilde{\mathbf{e}}_j^2\| _2$, then $\frac{1}{1 + \exp(-\alpha^2(\mathbf{e}_i^1)^T\cdot\mathbf{e}_j^2)} < 1 - \eta /\| \tilde{\mathbf{e}}_j^2\| _2$ also holds. Thus, we need to solve $\exp (\alpha^{2}c) < \frac{\|\tilde{\mathbf{e}}_{j}^{2}\|_{2} - \eta}{\eta}$.
+
+As $\| \tilde{\mathbf{e}}_j^2\| _2\geq d$, any $\alpha$ that is feasible for $\exp (\alpha^2 c) < \frac{d - \eta}{\eta}$ is also feasible for $\exp (\alpha^{2}c) < \frac{\|\tilde{\mathbf{e}}_j^2\|_2 - \eta}{\eta}$. Since $\exp$ is a monotonically increasing function, solving this inequality yields the feasible range $\alpha < \sqrt{\frac{1}{c}\log\frac{d - \eta}{\eta}}$.
+
+Notice that $0 < \eta < d / 2$ implies $\frac{d - \eta}{\eta} > 1$, so the upper bound on $\alpha$ is positive. Therefore, for any $\alpha < \sqrt{\frac{1}{c}\log \frac{d - \eta}{\eta}}$, $1 - \sigma ((\tilde{\mathbf{e}}_i^1)^T\cdot \tilde{\mathbf{e}}_j^2) > \eta /\| \tilde{\mathbf{e}}_j^2\| _2$ is satisfied.
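The bound in Theorem 1 can be sanity-checked numerically. The sketch below is our own, not part of the paper: `c` plays the role of the maximal inner product, `d` the minimal embedding norm, and the check evaluates the conclusion at its worst case, where the inner product attains $\alpha^2 c$ and the norm attains $d$.

```python
import math

def sigmoid(x):
    """Logistic sigmoid."""
    return 1.0 / (1.0 + math.exp(-x))

def alpha_bound(c, d, eta):
    """Upper bound on alpha from Theorem 1: sqrt((1/c) log((d - eta)/eta)).

    Requires 0 < eta < d/2 so that the log argument exceeds 1.
    """
    return math.sqrt(math.log((d - eta) / eta) / c)

def conclusion_holds(c, d, eta, alpha):
    """Worst case of the theorem's conclusion: 1 - sigmoid(alpha^2 * c) > eta / d."""
    return 1.0 - sigmoid(alpha ** 2 * c) > eta / d
```

Any `alpha` strictly below `alpha_bound(c, d, eta)` makes `conclusion_holds` true, mirroring the derivation above.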
\ No newline at end of file
diff --git a/adversarialattackagainstcrosslingualknowledgegraphalignment/images.zip b/adversarialattackagainstcrosslingualknowledgegraphalignment/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..29dcc08012146b13b27caf8e7ebedb26daa4688d
--- /dev/null
+++ b/adversarialattackagainstcrosslingualknowledgegraphalignment/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:48388e5ff8c9a8dfe14588f6d0d78a443d95c3a3df147ca6da4815ad87b09022
+size 562334
diff --git a/adversarialattackagainstcrosslingualknowledgegraphalignment/layout.json b/adversarialattackagainstcrosslingualknowledgegraphalignment/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..4e93de1c8d06f45d14b56cef4983e0861a513551
--- /dev/null
+++ b/adversarialattackagainstcrosslingualknowledgegraphalignment/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:798e63181185ce7638368ff511073832b54484809bb10a9fa9089c978866d5f8
+size 850367
diff --git a/adversarialattacksonknowledgegraphembeddingsviainstanceattributionmethods/3aa335a8-268c-48b4-8d69-b7ac206e0f13_content_list.json b/adversarialattacksonknowledgegraphembeddingsviainstanceattributionmethods/3aa335a8-268c-48b4-8d69-b7ac206e0f13_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..4e462fbcba0660b1742298663e64847dfb32fe85
--- /dev/null
+++ b/adversarialattacksonknowledgegraphembeddingsviainstanceattributionmethods/3aa335a8-268c-48b4-8d69-b7ac206e0f13_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9e178aad794a697a3201620a852045830ccc62885458d86e73029bbe0e9733c7
+size 109766
diff --git a/adversarialattacksonknowledgegraphembeddingsviainstanceattributionmethods/3aa335a8-268c-48b4-8d69-b7ac206e0f13_model.json b/adversarialattacksonknowledgegraphembeddingsviainstanceattributionmethods/3aa335a8-268c-48b4-8d69-b7ac206e0f13_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..1e53c640733bc42714d784af9c8722350ff49ff3
--- /dev/null
+++ b/adversarialattacksonknowledgegraphembeddingsviainstanceattributionmethods/3aa335a8-268c-48b4-8d69-b7ac206e0f13_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6c5ea76e1af65cf2ac7d8d888ce0b6477aad37df2cc84673053a6be485d5d4ef
+size 128088
diff --git a/adversarialattacksonknowledgegraphembeddingsviainstanceattributionmethods/3aa335a8-268c-48b4-8d69-b7ac206e0f13_origin.pdf b/adversarialattacksonknowledgegraphembeddingsviainstanceattributionmethods/3aa335a8-268c-48b4-8d69-b7ac206e0f13_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..83e8ad0ce9f511132db6c3be17b2fbf99b6de5e8
--- /dev/null
+++ b/adversarialattacksonknowledgegraphembeddingsviainstanceattributionmethods/3aa335a8-268c-48b4-8d69-b7ac206e0f13_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d1c684bc8966050b57f8ecd04f7ba998024ca2432e5f0d10369dec15d3183cfc
+size 975222
diff --git a/adversarialattacksonknowledgegraphembeddingsviainstanceattributionmethods/full.md b/adversarialattacksonknowledgegraphembeddingsviainstanceattributionmethods/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..baf4af07cbf597117b5b49d50f75379e14eb064c
--- /dev/null
+++ b/adversarialattacksonknowledgegraphembeddingsviainstanceattributionmethods/full.md
@@ -0,0 +1,371 @@
+# Adversarial Attacks on Knowledge Graph Embeddings via Instance Attribution Methods
+
+Peru Bhardwaj$^{1}$, John Kelleher$^{2*}$, Luca Costabello$^{3*}$, Declan O'Sullivan$^{1*}$
+
+$^{1}$ ADAPT Centre, Trinity College Dublin, Ireland
+
+$^{2}$ ADAPT Centre, TU Dublin, Ireland
+
+$^{3}$ Accenture Labs, Ireland
+
+peru.bhardwaj@adaptcentre.ie
+
+# Abstract
+
+Despite the widespread use of Knowledge Graph Embeddings (KGE), little is known about the security vulnerabilities that might disrupt their intended behaviour. We study data poisoning attacks against KGE models for link prediction. These attacks craft adversarial additions or deletions at training time to cause model failure at test time. To select adversarial deletions, we propose to use the model-agnostic instance attribution methods from Interpretable Machine Learning, which identify the training instances that are most influential to a neural model's predictions on test instances. We use these influential triples as adversarial deletions. We further propose a heuristic method that replaces one of the two entities in each influential triple to generate adversarial additions. Our experiments show that the proposed strategies outperform the state-of-the-art data poisoning attacks on KGE models and increase the MRR degradation due to the attacks by up to $62\%$ over the baselines.
+
+# 1 Introduction
+
+Knowledge Graph Embeddings (KGE) are the state-of-the-art models for relational learning on large scale Knowledge Graphs (KG). They drive enterprise products ranging from search engines to social networks to e-commerce (Noy et al., 2019). However, the analysis of their security vulnerabilities has received little attention. Identifying these vulnerabilities is especially important for high-stakes domains like healthcare and finance that employ KGE models to make critical decisions (Hogan et al., 2020; Bendtsen and Petrovski, 2019). We study the security vulnerabilities of KGE models through data poisoning attacks (Biggio and Roli, 2018; Joseph et al., 2019) that aim to degrade the predictive performance of learned KGE models by adding triples to, or removing triples from, the input training graph.
+
+Figure 1: Adversarial attacks against KGE models for fraud detection. The knowledge graph consists of two types of entities - Person and BankAccount. The missing target triple to predict is (Sam, allied_with, Joe). Original KGE model predicts this triple as True. But a malicious attacker uses the instance attribution methods to either (a) delete an adversarial triple or (b) add an adversarial triple. Now, the KGE model predicts the missing target triple as False.
+
+Designing data poisoning attacks against KGE models poses two main challenges. First, to select adversarial deletions or additions, we need to measure the impact of a candidate perturbation on the model's predictions. But the naive approach of re-training a new KGE model for each candidate perturbation is computationally prohibitive. Second, while the search space for adversarial deletions is limited to the existing triples in the KG, it is computationally intractable to enumerate all candidate adversarial additions. Furthermore, attack strategies proposed against models for other graph modalities (Xu et al., 2020) do not scale to KGE models, as they would require gradients with respect to a dense adjacency tensor of the KG.
+
+In this work, we propose to use the model-agnostic instance attribution methods from Interpretable Machine Learning (Molnar, 2019) to select adversarial deletions and additions against KGE models. Instance attribution methods identify the training instances that are influential to model predictions, that is, instances whose deletion from the training data would considerably change the model parameters or predictions. These methods are widely used to generate post-hoc example-based explanations for deep neural networks on images (Koh and Liang, 2017; Hanawa et al., 2021; Charpiat et al., 2019) and text (Han et al., 2020; Han and Tsvetkov, 2020; Pezeshkpour et al., 2021). Since KGE models have relatively shallow neural architectures, and the instance attribution metrics are independent of the black-box models and the input domain, they are a promising approach to estimate the influence of training triples on the KGE model predictions. Yet, despite their promise, they have not been used on KGE models so far. We use the instance attribution methods to address the challenge of measuring the impact of a candidate adversarial deletion on the model predictions.
+
+We focus on the adversarial goal of degrading the KGE model prediction on a given target triple. To achieve this goal, we use three types of instance attribution methods - Instance Similarity, which compares the feature representations of target and training triples (Hanawa et al., 2021; Charpiat et al., 2019); Gradient Similarity, which compares the gradients of the model's loss function due to target and training triples (Hanawa et al., 2021; Charpiat et al., 2019); and the Influence Function (Koh and Liang, 2017), a principled approach from robust statistics that estimates the effect of removing a training triple on the KGE model's predictions.
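The simplest of these metrics, Instance Similarity, can be sketched as follows. This is an illustration under the assumption that a triple is represented by the DistMult-style feature vector $e_s \circ e_r \circ e_o$ from Table 1; the function names are ours, not the authors' code.

```python
import numpy as np

def triple_feature(E, R, triple):
    """Feature vector of a triple (s, r, o): the element-wise Hadamard
    product e_s * e_r * e_o (DistMult-style feature from Table 1)."""
    s, r, o = triple
    return E[s] * R[r] * E[o]

def rank_by_similarity(E, R, train_triples, target_triple):
    """Rank training triples by dot-product similarity between their
    feature vectors and the target triple's feature vector,
    most similar (most influential) first."""
    target = triple_feature(E, R, target_triple)
    scored = [(t, float(triple_feature(E, R, t) @ target)) for t in train_triples]
    return sorted(scored, key=lambda ts: -ts[1])
```

The top-ranked training triple is then the candidate adversarial deletion; cosine or negative L2 distance can be substituted for the dot product.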
+
+Using these metrics, we select the most influential training triple as the adversarial deletion. From this influential triple, we then generate an adversarial addition by replacing one of its two entities with the most dissimilar entity in the embedding space. The intuition behind this step is to add a triple that reduces the influence of the influential triple. This solution also overcomes the scalability challenge for adversarial additions, since only the entity embeddings need to be compared to select the replacement. Figure 1 shows an example of the proposed adversarial deletions and additions against KGE models for fraud detection.
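The entity-replacement step above can be sketched as follows. This is a hedged illustration, not the authors' implementation: it corrupts the subject of the influential triple with the entity whose embedding has the lowest cosine similarity to it, and the helper names are ours.

```python
import numpy as np

def most_dissimilar_entity(E, entity_id):
    """Index of the entity whose embedding has the lowest cosine
    similarity to the embedding of `entity_id`."""
    sims = (E @ E[entity_id]) / np.maximum(
        np.linalg.norm(E, axis=1) * np.linalg.norm(E[entity_id]), 1e-12)
    sims[entity_id] = np.inf  # never pick the entity itself
    return int(np.argmin(sims))

def adversarial_addition(E, influential_triple):
    """Replace the subject of the influential triple (s, r, o) with the
    most dissimilar entity to obtain the adversarial addition."""
    s, r, o = influential_triple
    return (most_dissimilar_entity(E, s), r, o)
```

Only a single matrix-vector product over the entity embedding matrix is needed, which is what keeps the addition step tractable.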
+
+We evaluate the proposed attacks on four KGE models - DistMult, ComplEx, ConvE and TransE - on two benchmark datasets - WN18RR and FB15k-237. Our results show that the instance attribution metrics achieve significantly better performance than all state-of-the-art attacks for both adversarial additions and deletions on three of the four models, and better or equivalent performance on the fourth. We find that even simple metrics based on instance similarity outperform the state-of-the-art poisoning attacks and are as effective as the computationally expensive Influence Function.
+
+Thus, the main contribution of our research is a collection of effective adversarial deletion and addition strategies based on instance attribution methods against KGE models.
+
+# 2 Knowledge Graph Embeddings
+
+A Knowledge Graph (KG) is a set of triples $\mathcal{T} \subseteq \mathcal{E} \times \mathcal{R} \times \mathcal{E}$ where each triple encodes the relationship $\mathbf{r}$ as a typed link between the subject entity $s$ and the object entity $o$, i.e. $\mathcal{T} := \{t : (s, \mathbf{r}, o) \mid s, o \in \mathcal{E} \text{ and } \mathbf{r} \in \mathcal{R}\}$. Here, $\mathcal{E}$ is the set of entities and $\mathcal{R}$ is the set of relations in the knowledge graph. Large scale KGs are often curated automatically from user content or from the Web and are thus incomplete in practice. To predict the missing links in a KG, the state-of-the-art method is to learn low dimensional feature vectors for the entities and relations in the graph and use them to score the links. These feature vectors are called Knowledge Graph Embeddings (KGE) and denoted as $\pmb{\theta} := \{\pmb{E}, \pmb{R}\}$, where $\pmb{E} \in \mathbb{R}^{|\mathcal{E}| \times k}$ is the embedding matrix for entities, $\pmb{R} \in \mathbb{R}^{|\mathcal{R}| \times k}$ is the embedding matrix for relations and $k$ is the embedding dimension.
+
+Scoring Functions: KGE models differ from each other by their scoring functions $f:\mathcal{T}\to \mathbb{R}$ which combine the subject, relation and object embeddings to assign a score to the triple, i.e. $f_{t}\coloneqq f(\pmb{e}_{s},\pmb{e}_{r},\pmb{e}_{o})$ where $\pmb {e}_s,\pmb {e}_o\in \pmb{E}$ and $\pmb {e_r}\in \pmb{R}$ . Table 1 shows the different scoring functions of KGE models used in this research.
+
+These scoring functions are used to categorize the models as additive or multiplicative (Chandrahas et al., 2018). Additive models apply relation-specific translation from the subject embedding to the object embedding. The scoring function for such models is expressed as $f_{t} = -\left\| M_{\mathrm{r}}^{1}(e_{s}) + e_{\mathrm{r}} - M_{\mathrm{r}}^{2}(e_{o}) \right\|$ where $M_{\mathrm{r}} \in \mathbb{R}^{k \times k}$ is a projection matrix from entity space to relation space. An example of additive models is TransE where $M_{\mathrm{r}}^{1} = M_{\mathrm{r}}^{2} = I$ .
+
+On the other hand, multiplicative models score triples through multiplicative interactions between the subject, relation and object embeddings. The scoring function for these models is expressed as $f_{t} = e_{\mathrm{r}}^{\top}\mathcal{F}(e_{s},e_{o})$ where the function $\mathcal{F}$ measures the compatibility between the subject and
+
+| Model | Scoring Function | Feature Vectors |
+|---|---|---|
+| DistMult | $\langle \pmb{e}_s, \pmb{e}_r, \pmb{e}_o \rangle$ | $\pmb{e}_s \circ \pmb{e}_r \circ \pmb{e}_o$ |
+| ComplEx | $\mathrm{Re}(\langle \pmb{e}_s, \pmb{e}_r, \overline{\pmb{e}}_o \rangle)$ | $\mathrm{Re}(\pmb{e}_s \circ \pmb{e}_r \circ \overline{\pmb{e}}_o)$ |
+| ConvE | $\langle (\pmb{e}_s * \pmb{e}_r), \pmb{e}_o \rangle$ | $(\pmb{e}_s * \pmb{e}_r) \circ \pmb{e}_o$ |
+| TransE | $-\| \pmb{e}_s + \pmb{e}_r - \pmb{e}_o \|_p$ | $-(\pmb{e}_s + \pmb{e}_r - \pmb{e}_o)$ |
+
+Table 1: Scoring functions $f_{sro}$ and the proposed Triple Feature Vectors $\pmb{f}_{sro}$ of the KGE models used in this research. For ComplEx, $\pmb{e}_s,\pmb{e}_r,\pmb{e}_o\in \mathbb{C}^k$ ; for the remaining models $\pmb{e}_s,\pmb{e}_r,\pmb{e}_o\in \mathbb{R}^k$ . Here, $\langle \cdot \rangle$ denotes the tri-linear dot product; $\circ$ denotes the element-wise Hadamard product; $\overline{\cdot}$ denotes the conjugate for complex vectors and 2D reshaping for real vectors; $\| \cdot \|_p$ denotes the $\ell_p$ norm. In ConvE, $*$ denotes the neural architecture $e_s * e_r \coloneqq \sigma(\mathrm{vec}(\sigma([\overline{e_r}; \overline{e_s}] \ast \Omega))W)$ , where $\sigma$ denotes the sigmoid activation and $\ast$ denotes 2D convolution.
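To make the scoring functions concrete, here is a minimal NumPy sketch of the DistMult, TransE and ComplEx scores for a single triple. The toy random embeddings and variable names are illustrative assumptions, not the trained embeddings of any model; ConvE is omitted since it requires a full convolutional architecture.

```python
import numpy as np

# Toy embeddings; real models learn these by minimizing the training loss.
k = 4
rng = np.random.default_rng(0)
e_s, e_r, e_o = rng.normal(size=(3, k))

# DistMult: tri-linear dot product <e_s, e_r, e_o>.
f_distmult = float(np.sum(e_s * e_r * e_o))

# TransE: negative l_p norm of the translation residual (p = 2 here).
f_transe = float(-np.linalg.norm(e_s + e_r - e_o, ord=2))

# ComplEx: real part of the tri-linear product with the conjugated object.
c_s, c_r, c_o = rng.normal(size=(3, k)) + 1j * rng.normal(size=(3, k))
f_complex = float(np.real(np.sum(c_s * c_r * np.conj(c_o))))
```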
+
+object embeddings and varies across different models within this family. DistMult, ComplEx and ConvE are examples of multiplicative models.
+
+Training: Since KGs contain only positive triples, synthetic negative samples $t' \in \mathcal{T}'$ are generated to train the KGE model by replacing the subject or object in the positive triples with other entities in $\mathcal{E}$ . That is, for each positive triple $t \coloneqq (s, \mathbf{r}, o)$ , the set of negative samples is $\mathcal{T}_t' \coloneqq \{(s', \mathbf{r}, o)\} \cup \{(s, \mathbf{r}, o')\}$ . The training objective is to learn embeddings that score the positive triples existing in the KG higher than the synthetically generated negative triples. To achieve this, a triple-wise loss function $\mathcal{L}(t, \boldsymbol{\theta}) \coloneqq \ell(t, \boldsymbol{\theta}) + \sum_{t' \in \mathcal{T}'} \ell(t', \boldsymbol{\theta})$ is minimized. Thus, the optimal parameters $\widehat{\boldsymbol{\theta}}$ learned by the model are defined by $\widehat{\boldsymbol{\theta}} \coloneqq \arg \min_{\boldsymbol{\theta}} \sum_{t \in \mathcal{T}} \mathcal{L}(t, \boldsymbol{\theta})$ . Further details on KGE loss functions and negative sampling strategies are available in Ruffinelli et al. (2020).
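The negative sampling step above can be sketched as follows. This is a simplified illustration under our own naming; real implementations typically corrupt triples per training batch.

```python
import random

def negative_samples(triple, entities, num_neg=4, seed=0):
    """Corrupt the subject or object of a positive triple with random entities."""
    s, r, o = triple
    rng = random.Random(seed)
    negatives = []
    while len(negatives) < num_neg:
        e = rng.choice(entities)
        # Alternate subject-side and object-side corruption, skipping the original.
        t = (e, r, o) if len(negatives) % 2 == 0 else (s, r, e)
        if t != triple:
            negatives.append(t)
    return negatives
```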
+
+Missing Link Prediction: Given the learned embeddings $\theta$ , missing triples in the knowledge graph are predicted by an entity ranking evaluation protocol. Similar to the training process, subject-side negatives $t_s' = (s', \mathbf{r}, o)$ and object-side negatives $t_o' = (s, \mathbf{r}, o')$ are sampled for each test triple $t = (s, \mathbf{r}, o)$ to be predicted. Of these negatives, the triples already existing in the training, validation or test set are filtered out (Bordes et al., 2013). The test triple is then ranked against the remaining negatives based on the scores predicted by the KGE model. The standard evaluation metrics reported over the entire test set are (i) MR: the mean of the ranks, (ii) MRR: the mean of the reciprocals of the ranks and (iii) Hits@n: the proportion of triples ranked in the top-$n$ .
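Given the filtered ranks, the three metrics can be computed directly (a minimal sketch; names are illustrative):

```python
def ranking_metrics(ranks, n=1):
    """Compute MR, MRR and Hits@n from a list of filtered ranks (1-indexed)."""
    mr = sum(ranks) / len(ranks)                       # mean rank
    mrr = sum(1.0 / r for r in ranks) / len(ranks)     # mean reciprocal rank
    hits = sum(1 for r in ranks if r <= n) / len(ranks)  # fraction in top-n
    return mr, mrr, hits
```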
+
+# 3 Poisoning Knowledge Graph Embeddings via Instance Attribution
+
+We consider an adversarial attacker that aims to degrade the KGE model's predictive performance on a set of missing triples that have been ranked highly plausible by the model. We denote these target triples as $\mathcal{Z} \coloneqq \{z \coloneqq (z_s, z_r, z_o)\}$ . Since the predicted ranks are based on the predicted scores; to reduce the predicted rank of a target triple, we craft perturbations to the training data that aim to reduce the predicted score of the target triple.
+
+Threat Model: We use the same threat model as the state-of-the-art poisoning attacks on KGE models (Pezeshkpour et al., 2019; Zhang et al., 2019a). We focus on the white-box attack setting where the attacker has full knowledge of the victim model architecture and access to the learned embeddings. However, they cannot perturb the architecture or the embeddings directly, but only through perturbations in the training data. We study both adversarial additions and adversarial deletions. In both settings, the attacker is restricted to making only one edit in the neighbourhood of the target triple. The neighbourhood of the target triple $z \coloneqq (z_{s}, z_{\mathbf{r}}, z_{o})$ is the set of triples that share an entity with the target triple, i.e. $\mathcal{X} \coloneqq \{x \coloneqq (x_{s}, x_{\mathbf{r}}, x_{o}) \mid x_{s} \in \{z_{s}, z_{o}\} \vee x_{o} \in \{z_{s}, z_{o}\} \}$ .
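The neighbourhood definition translates directly into code (a minimal sketch with illustrative names):

```python
def neighbourhood(target, training_triples):
    """Triples whose subject or object matches the target's subject or object."""
    z_s, _, z_o = target
    return [
        (x_s, x_r, x_o)
        for (x_s, x_r, x_o) in training_triples
        if x_s in (z_s, z_o) or x_o in (z_s, z_o)
    ]
```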
+
+# 3.1 Instance Attribution Methods
+
+For adversarial deletions, we want to identify the training triples that have influenced the KGE model's prediction on the target triple. Deleting these influential triples from the training set will likely degrade the prediction on the target triple. Thus, we define an influence score $\phi(z,x): \mathcal{T} \times \mathcal{T} \to \mathbb{R}$ for the pairs of triples $(z,x) \in \mathcal{T} \times \mathcal{T}$ which indicates the influence of training triple $x$ on the prediction of target triple $z$ . Larger values of the influence score $\phi(z,x)$ indicate that removing $x$ from the training data would cause larger reduction in the predicted score on $z$ .
+
+Trivially, we can compute the influence score for a training triple by removing the triple and retraining the KGE model. However, this is prohibitively expensive, since it requires re-training a new KGE model for every candidate influential triple. Thus, we use the following instance attribution methods from Interpretable Machine Learning (Molnar, 2019) to estimate the influence score $\phi(z, x)$ without re-training the model.
+
+# 3.1.1 Instance Similarity
+
+We estimate the influence of training triple $x$ on the prediction of target triple $z$ based on the similarity of their feature representations. The intuition behind these metrics is to identify the training triples that a KGE model has learnt to be similar to the target triple and thus (might) have influenced the model's prediction on the target triple.
+
+Computing this similarity between triples requires feature vector representations for the triples. We note that while the standard KGE scoring functions assign a scalar score to the triples, this scalar value is obtained by reducing over the embedding dimension. For example, in the tri-linear dot product for DistMult, the embeddings of subject, relation and object are multiplied element-wise and then the scalar score for the triple is obtained by summing over the embedding dimension, i.e. $f_{t} \coloneqq \langle e_{s}, e_{r}, e_{o} \rangle \coloneqq \sum_{i=1}^{k} e_{s_{i}} e_{r_{i}} e_{o_{i}}$ where $k$ is the embedding dimension.
+
+Thus, to obtain feature vector representations for the triples $\pmb{f}_t: \mathcal{E} \times \mathcal{R} \times \mathcal{E} \rightarrow \mathbb{R}^k$ , we use the state-of-the-art KGE scoring functions without the reduction over the embedding dimension. For the DistMult model, the triple feature vector is $\pmb{f} := e_s \circ e_r \circ e_o$ where $\circ$ is the Hadamard (element-wise) product. Table 1 shows the triple feature vectors for the different KGE models used in this research.
+
+Given the feature vectors for target triples $\pmb{f}(z)$ and training triples $\pmb{f}(x)$ , we follow Hanawa et al. (2021) and define the following metrics.
+
+Dot Metric: This metric computes the similarity between the target and training instances as the dot product of their feature vectors. That is, $\phi_{\text{dot}}(z,x) \coloneqq \langle \pmb{f}(z), \pmb{f}(x) \rangle$ .
+
+$\ell_{2}$ Metric: This metric computes similarity as the negative Euclidean distance between the feature vectors of the target and training instances. That is, $\phi_{\ell_2}(z,x)\coloneqq -\| \pmb{f}(z) - \pmb{f}(x)\| _2$ .
+
+Cosine Metric: This metric computes similarity as the dot product between the $\ell_{2}$ -normalized feature vectors of the target and training instances, i.e. it ignores the magnitudes of the vectors and relies only on the angle between them. That is, $\phi_{\cos}(z,x)\coloneqq \cos (\pmb{f}(z),\pmb{f}(x))$ .
+
+Here, we denote the dot product for two vectors $\mathbf{a}$ and $\mathbf{b}$ as $\langle \mathbf{a}, \mathbf{b} \rangle \coloneqq \sum_{i=1}^{p} a_i b_i$ ; the $\ell_2$ norm of a vector as $\| \mathbf{a} \|_2 \coloneqq \sqrt{\langle \mathbf{a}, \mathbf{a} \rangle}$ ; and the cos similarity between vectors $\mathbf{a}$ and $\mathbf{b}$ as $\cos(\mathbf{a}, \mathbf{b}) \coloneqq \frac{\langle \mathbf{a}, \mathbf{b} \rangle}{\| \mathbf{a} \|_2 \| \mathbf{b} \|_2}$ .
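Under the definitions above, the three metrics can be sketched with NumPy, using DistMult's triple feature vector as an example. The function names are ours, not the paper's implementation.

```python
import numpy as np

def phi_dot(f_z, f_x):
    """Dot metric: inner product of triple feature vectors."""
    return float(f_z @ f_x)

def phi_l2(f_z, f_x):
    """l2 metric: negative Euclidean distance between feature vectors."""
    return -float(np.linalg.norm(f_z - f_x))

def phi_cos(f_z, f_x):
    """Cosine metric: angle-only similarity between feature vectors."""
    return float(f_z @ f_x / (np.linalg.norm(f_z) * np.linalg.norm(f_x)))

def distmult_feature(e_s, e_r, e_o):
    """DistMult triple feature vector: Hadamard product, no reduction."""
    return e_s * e_r * e_o
```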
+
+# 3.1.2 Gradient Similarity
+
+We represent the gradient of the loss for triple $z$ w.r.t. model parameters as $\pmb {g}(z,\widehat{\pmb{\theta}})\coloneqq \nabla_{\pmb{\theta}}\mathcal{L}(z,\widehat{\pmb{\theta}})$ . Gradient similarity metrics compute similarity between the gradients due to target triple $z$ and the gradients due to training triple $x$ . The intuition is to assign higher influence to training triples that have similar effect on the model's parameters as the target triple; and are therefore likely to impact the prediction on target triple (Charpiat et al., 2019). Thus, using the same similarity functions as Instance Similarity metrics, we define the following three metrics for gradient similarity - Gradient Dot (GD), Gradient $\ell_2$ (GL) and Gradient Cosine (GC).
+
+$$
+\mathbf{GD}\ (\mathrm{dot}):\quad \phi_{\mathrm{GD}}(z, x) \coloneqq \langle \pmb{g}(z, \widehat{\pmb{\theta}}), \pmb{g}(x, \widehat{\pmb{\theta}}) \rangle
+$$
+
+$$
+\mathbf{GL}\ (\ell_2):\quad \phi_{\mathrm{GL}}(z, x) \coloneqq -\left\| \pmb{g}(z, \widehat{\pmb{\theta}}) - \pmb{g}(x, \widehat{\pmb{\theta}}) \right\|_2
+$$
+
+$$
+\mathbf{GC}\ (\cos):\quad \phi_{\mathrm{GC}}(z, x) \coloneqq \cos\left( \pmb{g}(z, \widehat{\pmb{\theta}}), \pmb{g}(x, \widehat{\pmb{\theta}}) \right)
+$$
+
+# 3.1.3 Influence Functions
+
+Influence Functions (IF) are a classic technique from robust statistics, introduced by Koh and Liang (2017) to explain the predictions of black-box models. To estimate the effect of a training point on a model's predictions, IF first approximates the effect of removing the training point on the learned model parameters. To do this, it performs a first order Taylor expansion around the learned parameters $\widehat{\pmb{\theta}}$ at the optimality conditions.
+
+Following the derivation in Koh and Liang (2017), the effect of removing the training triple $x$ on $\widehat{\pmb{\theta}}$ is given by $d\widehat{\pmb{\theta}} / d\epsilon_{i} = \pmb{H}_{\widehat{\pmb{\theta}}}^{-1} \pmb{g}(x, \widehat{\pmb{\theta}})$ . Here, $\pmb{H}_{\widehat{\pmb{\theta}}}$ denotes the Hessian of the loss function, $\pmb{H}_{\widehat{\pmb{\theta}}} := \frac{1}{n} \sum_{t \in \mathcal{T}} \nabla_{\pmb{\theta}}^2 \mathcal{L}(t, \widehat{\pmb{\theta}})$ . Using the chain rule, we then approximate the influence of removing $x$ on the model's prediction at $z$ as $\langle \pmb{g}(z, \widehat{\pmb{\theta}}), d\widehat{\pmb{\theta}} / d\epsilon_{i} \rangle$ . Thus, the influence score using IF is defined as:
+
+$$
+\mathbf {I F}: \phi_ {\mathrm {I F}} (z, x) := \langle \boldsymbol {g} (z, \widehat {\boldsymbol {\theta}}), \boldsymbol {H} _ {\widehat {\boldsymbol {\theta}}} ^ {- 1} \boldsymbol {g} (x, \widehat {\boldsymbol {\theta}}) \rangle
+$$
+
+Computing the IF for KGE models poses two challenges: (i) storing and inverting the Hessian matrix is computationally too expensive for a large number of parameters; and (ii) since KGE models are non-convex, the Hessian is not guaranteed to be positive definite and thus invertible. To address both challenges, we follow the guidelines in Koh and Liang (2017). Instead of computing the exact Hessian matrix, we estimate the Hessian-vector product (HVP) with the target triple's gradient. That is, for every target triple $z$ , we precompute the value $\pmb{H}_{\widehat{\theta}}^{-1}\pmb{g}(z,\widehat{\theta})$ . Then, for each neighbourhood triple $x$ in the training set, we compute $\phi_{\mathrm{IF}}(z,x)$ using the pre-computed HVP. Furthermore, we use the stochastic estimator LiSSA (Agarwal et al., 2017), which computes the HVP in linear time using samples from the training data. For the second issue of non-convexity, we add a "damping" term to the Hessian so that it is positive definite and invertible. This term is a hyperparameter tuned to ensure that all eigenvalues of the Hessian matrix are positive. Further discussion on the validity of Influence Functions in non-convex settings is available in Koh and Liang (2017).
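The damped inverse-HVP estimate can be sketched with a LiSSA-style recurrence. This is a simplified, full-batch illustration: in practice the HVP is estimated stochastically from training-data samples, and the function name and hyperparameter values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def lissa_inverse_hvp(hvp, v, damping=0.01, scale=10.0, iters=200):
    """Estimate (H + damping*I)^{-1} v via the recurrence
    r_j = v + (I - (H + damping*I)/scale) r_{j-1}; then r_J/scale -> result.

    `hvp(u)` returns H @ u; `scale` must exceed the largest eigenvalue of the
    damped Hessian so that the recurrence contracts.
    """
    r = v.copy()
    for _ in range(iters):
        r = v + r - (hvp(r) + damping * r) / scale
    return r / scale
```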
+
+# 3.2 Adversarial Additions
+
+In this attack setting, the adversarial attacker can only add triples to the neighbourhood of the target triple. Using the instance attribution metrics above, we select the training triple $x \coloneqq (x_{s}, x_{\mathbf{r}}, x_{o})$ in the neighbourhood of the target triple $z \coloneqq (z_{s}, z_{\mathbf{r}}, z_{o})$ that is most influential for the prediction of $z$ . For brevity, assume $x_{s} = z_{s}$ , i.e. the influential and target triples have the same subject. To generate an adversarial addition from the influential triple, we propose to replace $x_{o}$ with the most dissimilar entity $x_{o'}$ . Since the adversarial triple $x' \coloneqq (x_{s}, x_{\mathbf{r}}, x_{o'})$ has the same subject and relation as the influential triple but a different object, it should reduce the influence of the influential triple on the target triple's prediction; this in turn should degrade the model's prediction on the target triple. For multiplicative models, we select the dissimilar entity $x_{o'}$ using the cosine similarity between the embedding of $x_{o}$ and the embeddings of the entities in $\mathcal{E}$ . For additive models, we use the $\ell_{2}$ similarity instead.
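The dissimilar-entity selection step can be sketched as follows (a minimal NumPy sketch for the multiplicative, cosine-based case; the function name, embedding-matrix layout and `exclude` argument are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def most_dissimilar_entity(e_target, E, exclude=()):
    """Index of the entity whose embedding is least cosine-similar to e_target.

    E is the entity embedding matrix (one row per entity); `exclude` holds
    indices to skip, e.g. the original entity of the influential triple.
    """
    sims = (E @ e_target) / (
        np.linalg.norm(E, axis=1) * np.linalg.norm(e_target)
    )
    for i in exclude:
        sims[i] = np.inf  # never pick excluded entities
    return int(np.argmin(sims))
```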
+
+# 4 Evaluation
+
+We evaluate the effectiveness of the proposed attack strategies in degrading the KGE model's predictions on target triples at test time. We follow the state-of-the-art protocol for evaluating poisoning attacks (Xu et al., 2020) - we train a victim KGE model on the original dataset; generate adversarial deletions or additions using one of the attacks; perturb the original dataset; and train a new KGE model on the perturbed dataset. The hyperparameters for the victim and poisoned KGE models are the same.
+
+We evaluate our attacks on four state-of-the-art KGE models - DistMult, ComplEx, ConvE and TransE - on two publicly available benchmark datasets - WN18RR and FB15k-237. To evaluate the effectiveness of the attacks in degrading predictive performance, we select the subset of benchmark test triples that the victim KGE model ranked highest (rank = 1). From this subset, we randomly sample 100 triples as the target triples. This avoids the expensive Hessian inverse estimation in the IF metric for a large number of target triples (for each target triple, this estimation requires one training epoch).
+
+The source code implementation of our experiments is available at https://github.com/PeruBhardwaj/AttributionAttack.
+
+Baselines: We evaluate our attacks against baselines based on random edits and against the state-of-the-art poisoning attacks. Random_n adds or removes a random triple in the neighbourhood of the target triple. Random_g adds or removes a random triple globally and is not restricted to the target's neighbourhood. Direct-Del and Direct-Add are the adversarial deletion and addition attacks proposed in Zhang et al. (2019a). CRIAGE is the poisoning attack from Pezeshkpour et al. (2019) and is a baseline for both deletions and additions. GR (Gradient Rollback) (Lawrence et al., 2021) uses influence estimation to provide post-hoc explanations for KGE models and can also be used to generate adversarial deletions; thus, we include it as a baseline for adversarial deletions.
+
+The attack evaluations in Zhang et al. (2019a), Pezeshkpour et al. (2019) and Lawrence et al. (2021) differ with respect to their definition of the neighbourhood. To ensure a fair evaluation, we implement all methods with the same neighbourhood, i.e. the triples that are linked to the subject or object of the target triple (Section 3). We use the publicly available implementations of CRIAGE and Gradient Rollback and implement Direct-Del and Direct-Add ourselves. Further details on the datasets, the implementation of KGE models and baselines, and computing resources are available in Appendices A and B.
+
+Results: For WN18RR and FB15k-237 respectively, Tables 2 and 3 show the degradation in MRR and Hits@1 due to adversarial deletions, and Tables 4 and 5 the degradation due to adversarial additions, for the state-of-the-art KGE models. Below we discuss different patterns in these results. We also discuss the runtime efficiency of the attack methods in Appendix C.1.
+
+| Attack | DistMult MRR | Hits@1 | ComplEx MRR | Hits@1 | ConvE MRR | Hits@1 | TransE MRR | Hits@1 |
+|---|---|---|---|---|---|---|---|---|
+| Original | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |
+| **Baseline Attacks** | | | | | | | | |
+| Random_n | 0.87 (-13%) | 0.82 | 0.85 (-15%) | 0.80 | 0.82 (-18%) | 0.79 | 0.82 (-18%) | 0.70 |
+| Random_g | 0.97 | 0.95 | 0.96 | 0.93 | 0.99 | 0.98 | 0.93 | 0.87 |
+| Direct-Del | 0.88 | 0.77 | 0.86 (-14%) | 0.77 | 0.71 (-29%) | 0.64 | 0.54 (-46%) | 0.37 |
+| CRIAGE | 0.73 (-27%) | 0.66 | - | - | Er | Er | - | - |
+| GR | 0.95 | 0.90 | 0.93 | 0.86 | 0.95 | 0.91 | 0.84 | 0.77 |
+| **Proposed Attacks** | | | | | | | | |
+| Dot Metric | 0.89 | 0.82 | 0.85 | 0.79 | 0.84 (-16%) | 0.80 | 0.77 | 0.60 |
+| $\ell_2$ Metric | 0.25 (-75%) | 0.16 | 0.29 (-71%) | 0.20 | 0.88 | 0.78 | 0.62 | 0.50 |
+| Cos Metric | 0.25 (-75%) | 0.16 | 0.29 (-71%) | 0.20 | 0.87 | 0.76 | 0.56 (-44%) | 0.40 |
+| GD (dot) | 0.28 (-72%) | 0.19 | 0.29 | 0.21 | 0.25 | 0.21 | 0.71 (-29%) | 0.57 |
+| GL ($\ell_2$) | 0.30 | 0.20 | 0.28 (-72%) | 0.19 | 0.17 (-83%) | 0.12 | 0.72 | 0.60 |
+| GC (cos) | 0.29 | 0.19 | 0.29 | 0.21 | 0.20 | 0.16 | 0.71 (-29%) | 0.57 |
+| IF | 0.28 (-72%) | 0.19 | 0.29 (-71%) | 0.20 | 0.22 (-78%) | 0.17 | 0.71 (-29%) | 0.57 |
+
+Table 2: Reduction in MRR and Hits@1 due to adversarial deletions on target triples in WN18RR. Lower values indicate better attack performance. The first block of rows contains the baseline attacks with random edits; the second block the state-of-the-art attacks; the remaining rows the proposed attacks. For each block, we report the best reduction as a percentage relative to the original MRR, computed as (poisoned - original)/original * 100.
+
+# 4.1 Comparison with Baselines
+
+We observe that the proposed strategies for adversarial deletions and additions successfully degrade the predictive performance of KGE models, whereas the state-of-the-art attacks are ineffective or only partially effective. Adversarial deletions from Gradient Rollback perform similarly to the random baselines, likely because this method estimates the influence of a training triple as the sum of its gradients over the training process and thus does not account for the target triple in the influence estimation. The method is also likely to be effective only for KGE models trained with a batch size of 1, because it needs to track the gradient updates for each triple.
+
+The CRIAGE baseline is only applicable to DistMult and ConvE. For ConvE, however, the method ran into a numpy.linalg.LinAlgError: Singular matrix error, because the Hessian matrix computed from the victim model embeddings was non-invertible. For adversarial deletions on DistMult, the baseline works better than random edits but not as well as the proposed attacks. It is also ineffective for adversarial additions.
+
+We see that Direct-Del is effective on TransE, but not on the multiplicative models. This is likely because it estimates the influence of a candidate triple as the difference in the triple's score when the neighbour entity embedding is perturbed. The additive nature of this influence score might make it more suitable for additive models. We also see that Direct-Add performs similarly to random additions, likely because it uses random down-sampling.
+
+The proposed attacks based on instance attribution methods consistently outperform the random baselines for adversarial additions and deletions. One exception to this pattern is adversarial additions against TransE on WN18RR. In this case, no influence metric performs better than random neighbourhood edits, though they are all effective for adversarial deletions. One possible reason is that the TransE model is designed to learn hierarchical relations like _has_part_. We found that the target triples ranked highest by the model have such hierarchical relations, and the influential triple for them has the same relation. That is, the triple $(s_1, \mathrm{has\_part}, s)$ is the influential triple for $(s, \mathrm{has\_part}, o)$ . Removing this influential triple breaks the hierarchical link between $s_1$ and $s$ and degrades TransE's predictions on the target. But adding the triple $(s_2, \mathrm{has\_part}, s)$ still preserves the hierarchical structure, which TransE can use to score the target correctly. We provide more examples of such relations in Appendix C.3.
+
+# 4.2 Comparison across Influence Metrics
+
+We see that the IF and Gradient Similarity metrics show similar degradation in predictive performance. This indicates that the computationally expensive Hessian inverse in the IF can be avoided and that simpler metrics can identify influential triples with comparable effectiveness. Furthermore, the cos and $\ell_2$ based Instance Similarity metrics outperform all other methods for adversarial deletions on DistMult, ComplEx and TransE. This effectiveness of naive metrics indicates the high vulnerability of shallow KGE architectures to data poisoning attacks in practice. In contrast, the Instance Similarity metrics are less effective in poisoning ConvE, particularly on WN18RR. This is likely because the triple feature vectors for ConvE are based on the output of a deeper neural architecture than the embedding layer alone. Within the Instance Similarity metrics, we see that the dot metric is not as effective as the others. This could be because the dot product does not normalize the triple feature vectors, so training triples with large norms are prioritized over relevant influential triples (Hanawa et al., 2021).
+
+# 4.3 Comparison across Datasets
+
+We note that the degradation in predictive performance is more significant on WN18RR than on FB15k-237. This is likely due to the sparser graph structure of WN18RR, i.e. there are fewer neighbours per target triple in WN18RR than in FB15k-237 (Appendix C.4). Thus, the model learns its predictions from a few influential triples in WN18RR, and removing only one neighbour significantly degrades the model's predictions on the target triple.
+
+On the other hand, because of more neighbours in FB15k-237, the model predictions are likely influenced by a group of training triples. Such group effect of training instances on model parameters has been studied in Koh et al. (2019); Basu et al. (2020). We will investigate these methods for KGE models on FB15k-237 in the future.
+
+# 5 Related Work
+
+Cai et al. (2018) and Nickel et al. (2015) provide a comprehensive survey of KGE models. We use the most popular models DistMult (Yang et al., 2015), ComplEx (Trouillon et al., 2016), ConvE (Dettmers et al., 2018) and TransE (Bordes et al., 2013).
+
+Our work is most closely related to CRIAGE (Pezeshkpour et al., 2019) and Direct Attack (Zhang et al., 2019a), which study both adversarial additions and deletions against KGE models. But CRIAGE is only applicable to multiplicative models, and our experiments (Section 4) show that Direct Attack is effective (with respect to random baselines) on additive models only. On the other hand, our instance attribution methods work for all KGE models. Recently, Lawrence et al. (2021) proposed Gradient Rollback to estimate the influence of training triples on KGE model predictions. The original study uses the influential triples for post-hoc explanations, but they can also be used for adversarial deletions. However, the attack stores the model parameter updates for all training triples, which are in the order of millions for benchmark datasets; and our experiments (Section 4) show that it performs similarly to random deletions. In contrast, our influence estimation methods do not require additional storage and are consistently better than the random baselines on all KGE models.
+
+We also study data poisoning attacks against KGE models in Bhardwaj et al. (2021). There, we exploit the inductive abilities of KGE models to select adversarial additions that improve the predictive performance of the model on a set of decoy triples, which in turn degrades the performance on the target triples. These inference-pattern based attacks cannot be used for adversarial deletions, but we will perform a detailed comparison for adversarial additions in the future. In parallel work, Banerjee et al. (2021) study risk-aware adversarial attacks that aim to reduce the exposure risk of an adversarial attack instead of improving its effectiveness. Also, previous studies by Minervini et al. (2017) and Cai and Wang (2018) use adversarial regularization on the training loss of KGE models to improve predictive performance. But these adversarial samples are not in the input domain and aim to improve instead of degrade model performance. Poisoning attacks have also been studied against models for undirected and single-relational graph data (Zügner et al., 2018; Dai et al., 2018; Xu et al., 2020). But they cannot be applied directly to KGE models because they require gradients of a dense adjacency matrix.
+
+Other related work on understanding KGE models includes Zhang et al. (2019b) and Nandwani et al. (2020), which generate post-hoc explanations in the form of sub-graphs. Also, Trouillon et al. (2019) study the inductive abilities of KGE models as binary relation properties on controlled inference tasks with synthetic datasets. Recently, Allen et al. (2021) interpret the structure of KGE models by drawing a comparison with word embeddings.
+
+| Attack | DistMult MRR | Hits@1 | ComplEx MRR | Hits@1 | ConvE MRR | Hits@1 | TransE MRR | Hits@1 |
+|---|---|---|---|---|---|---|---|---|
+| Original | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |
+| **Baseline Attacks** | | | | | | | | |
+| Random_n | 0.66 (-34%) | 0.52 | 0.65 (-35%) | 0.51 | 0.62 (-38%) | 0.46 | 0.71 (-29%) | 0.56 |
+| Random_g | 0.68 | 0.53 | 0.65 (-35%) | 0.51 | 0.63 | 0.50 | 0.75 | 0.61 |
+| Direct-Del | 0.59 (-41%) | 0.42 | 0.62 (-38%) | 0.47 | 0.57 (-43%) | 0.41 | 0.62 (-38%) | 0.45 |
+| CRIAGE | 0.62 | 0.47 | - | - | Er | Er | - | - |
+| GR | 0.68 | 0.55 | 0.66 | 0.51 | 0.62 | 0.45 | 0.68 | 0.53 |
+| **Proposed Attacks** | | | | | | | | |
+| Dot Metric | 0.63 | 0.47 | 0.64 | 0.49 | 0.60 | 0.44 | 0.74 | 0.62 |
+| $\ell_2$ Metric | 0.58 | 0.41 | 0.56 (-44%) | 0.40 | 0.53 (-47%) | 0.35 | 0.63 (-37%) | 0.46 |
+| Cos Metric | 0.56 (-44%) | 0.39 | 0.57 | 0.40 | 0.55 | 0.38 | 0.63 (-37%) | 0.45 |
+| GD (dot) | 0.60 | 0.44 | 0.60 | 0.45 | 0.55 (-45%) | 0.37 | 0.65 | 0.49 |
+| GL ($\ell_2$) | 0.62 | 0.45 | 0.60 | 0.45 | 0.56 | 0.41 | 0.70 | 0.58 |
+| GC (cos) | 0.58 (-42%) | 0.42 | 0.57 (-43%) | 0.39 | 0.57 | 0.40 | 0.64 (-36%) | 0.48 |
+| IF | 0.60 (-40%) | 0.44 | 0.60 (-40%) | 0.45 | 0.58 (-42%) | 0.43 | 0.66 (-34%) | 0.52 |
+
+Table 3: Reduction in MRR and Hits@1 due to adversarial deletions on target triples in FB15k-237. Lower values indicate better results. The first block of rows contains the baseline attacks with random edits; the second block the state-of-the-art attacks; the remaining rows the proposed attacks. For each block, we report the best reduction as a percentage relative to the original MRR.
+
+| Attack | DistMult MRR | Hits@1 | ComplEx MRR | Hits@1 | ConvE MRR | Hits@1 | TransE MRR | Hits@1 |
+|---|---|---|---|---|---|---|---|---|
+| Original | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |
+| **Baseline Attacks** | | | | | | | | |
+| Random_n | 0.99 (-1%) | 0.98 | 0.97 (-3%) | 0.94 | 0.99 (-1%) | 0.98 | 0.76 (-24%) | 0.57 |
+| Random_g | 0.99 (-1%) | 0.97 | 0.97 (-3%) | 0.95 | 0.99 (-1%) | 0.98 | 0.93 | 0.87 |
+| Direct-Add | 0.98 (-2%) | 0.96 | 0.95 (-5%) | 0.92 | 0.99 (-1%) | 0.98 | 0.81 (-19%) | 0.67 |
+| CRIAGE | 0.98 (-2%) | 0.97 | - | - | Er | Er | - | - |
+| **Proposed Attacks** | | | | | | | | |
+| Dot Metric | 0.97 | 0.93 | 0.95 | 0.90 | 0.95 (-5%) | 0.91 | 0.95 | 0.90 |
+| $\ell_2$ Metric | 0.89 (-11%) | 0.78 | 0.88 | 0.77 | 0.98 | 0.96 | 0.87 (-13%) | 0.83 |
+| Cos Metric | 0.89 (-11%) | 0.78 | 0.87 (-13%) | 0.77 | 0.99 | 0.98 | 0.87 (-13%) | 0.83 |
+| GD (dot) | 0.90 | 0.79 | 0.89 | 0.79 | 0.92 | 0.85 | 0.80 (-20%) | 0.73 |
+| GL ($\ell_2$) | 0.89 (-11%) | 0.79 | 0.86 (-14%) | 0.73 | 0.88 (-12%) | 0.77 | 0.89 | 0.83 |
+| GC (cos) | 0.90 | 0.80 | 0.87 | 0.76 | 0.91 | 0.82 | 0.80 (-20%) | 0.73 |
+| IF | 0.90 (-10%) | 0.79 | 0.89 (-11%) | 0.79 | 0.91 (-8.9%) | 0.82 | 0.77 (-23%) | 0.67 |
+
+Table 4: Reduction in MRR and Hits@1 due to adversarial additions on target triples in WN18RR. Lower values indicate better results. The first block of rows contains the baseline attacks with random edits; the second block the state-of-the-art attacks; the remaining rows the proposed attacks. For each block, we report the best reduction as a percentage relative to the original MRR.
+
+| Attack | DistMult MRR | Hits@1 | ComplEx MRR | Hits@1 | ConvE MRR | Hits@1 | TransE MRR | Hits@1 |
+|---|---|---|---|---|---|---|---|---|
+| Original | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |
+| **Baseline Attacks** | | | | | | | | |
+| Random_n | 0.65 (-34%) | 0.50 | 0.69 | 0.57 | 0.61 (-39%) | 0.46 | 0.74 | 0.62 |
+| Random_g | 0.66 | 0.52 | 0.66 (-34%) | 0.52 | 0.63 | 0.50 | 0.73 (-27%) | 0.61 |
+| Direct-Add | 0.64 (-36%) | 0.48 | 0.66 (-34%) | 0.52 | 0.60 (-40%) | 0.45 | 0.72 (-28%) | 0.59 |
+| CRIAGE | 0.66 | 0.50 | - | - | Er | Er | - | - |
+| **Proposed Attacks** | | | | | | | | |
+| Dot Metric | 0.67 | 0.54 | 0.65 | 0.50 | 0.61 | 0.46 | 0.74 (-26%) | 0.62 |
+| $\ell_2$ Metric | 0.64 | 0.50 | 0.66 | 0.52 | 0.59 (-41%) | 0.43 | 0.74 (-26%) | 0.62 |
+| Cos Metric | 0.63 (-37%) | 0.49 | 0.63 (-37%) | 0.47 | 0.60 | 0.43 | 0.74 (-26%) | 0.61 |
+| GD (dot) | 0.61 (-39%) | 0.45 | 0.65 | 0.50 | 0.62 | 0.46 | 0.71 (-29%) | 0.58 |
+| GL ($\ell_2$) | 0.63 | 0.48 | 0.67 | 0.53 | 0.61 (-39%) | 0.45 | 0.74 | 0.60 |
+| GC (cos) | 0.62 | 0.46 | 0.64 (-36%) | 0.49 | 0.61 (-39%) | 0.45 | 0.71 (-29%) | 0.56 |
+| IF | 0.61 (-39%) | 0.45 | 0.65 (-35%) | 0.50 | 0.58 (-42%) | 0.42 | 0.71 (-29%) | 0.58 |
+
+Table 5: Reduction in MRR and Hits@1 due to adversarial additions on target triples in FB15k-237. Lower values indicate better results. The first block of rows contains the baseline attacks with random edits; the second block the state-of-the-art attacks; the remaining rows the proposed attacks. For each block, we report the best reduction as a percentage relative to the original MRR, computed as (poisoned - original)/original * 100.
+
+The instance attribution methods we use are also used for post-hoc example-based explanations of black-box models (Molnar, 2019). Hanawa et al. (2021); Charpiat et al. (2019); Pruthi et al. (2020) use Instance or Gradient Similarity on image data. Similar to us, Han et al. (2020); Han and Tsvetkov (2020); Pezeshkpour et al. (2021) use different instance attribution methods, but to provide post-hoc explanations on natural language.
+
+# 6 Conclusion
+
+We propose data poisoning attacks against KGE models using instance attribution methods and demonstrate that the proposed attacks outperform the state-of-art attacks. We observe that the attacks are particularly effective when the KGE model relies on few training instances to make predictions, i.e. when the input graph is sparse.
+
+We also observe that shallow neural architectures like DistMult, ComplEx and TransE are vulnerable to naive attacks based on Instance Similarity. These models have shown competitive predictive performance with proper hyperparameter tuning (Ruffinelli et al., 2020; Kadlec et al., 2017), making them promising candidates for use in production pipelines. But our research shows that these performance gains can be brittle. This calls for improved KGE model evaluation that accounts for adversarial robustness in addition to predictive performance.
+
+Additionally, as in Bhardwaj (2020); Bhardwaj et al. (2021), we call for future proposals to defend against the security vulnerabilities of KGE models. Promising directions might be to use adversarial training techniques, or to train ensembles of models over subsets of the training data so that model predictions cannot be influenced by only a few triples. Specifying the model failure modes through adversarial robustness certificates would also improve the usability of KGE models in high-stakes domains like healthcare and finance.
+
+# Acknowledgements
+
+This research was conducted with the financial support of Accenture Labs and Science Foundation Ireland (SFI) at the ADAPT SFI Research Centre at Trinity College Dublin. The ADAPT SFI Centre for Digital Content Technology is funded by Science Foundation Ireland through the SFI Research Centres Programme and is co-funded under the European Regional Development Fund (ERDF) through Grant No. 13/RC/2106_P2.
+
+# Broader Impact
+
+We study the problem of generating data poisoning attacks against KGE models. These models drive many enterprise products ranging from search engines (Google, Microsoft) to social networks (Facebook) to e-commerce (eBay) (Noy et al., 2019), and are increasingly used in domains with high stakes like healthcare and finance (Hogan et al., 2020; Bendtsen and Petrovski, 2019). Thus, it is important to identify the security vulnerabilities of these models that might be exploited by malicious actors to manipulate the predictions of the model and cause system failure. By highlighting these security vulnerabilities of KGE models, we provide an opportunity to fix them and protect stakeholders from harm. This honours the ACM Code of Ethics to contribute to societal well-being and avoid harm due to computing systems.
+
+Furthermore, to study data poisoning attacks against KGE models, we use the Instance Attribution Methods from Interpretable Machine Learning. These methods can also be used to provide post-hoc explanations for KGE models and thus, improve our understanding of the predictions made by the models. In addition to understanding model predictions, instance based attribution methods can help guide design decisions during KGE model training. There are a vast number of KGE model architectures, training strategies and loss functions, and empirically quantifying the impact of the design choices is often challenging (Ruffinelli et al., 2020). Thus, we would encourage further research on exploring the use of instance attribution methods to understand the impact of these choices on the KGE model predictions. By tracing back the model predictions to the input knowledge graph, we can gain a better understanding of the success or failure of different design choices.
+
+# References
+
+Naman Agarwal, Brian Bullins, and Elad Hazan. 2017. Second-order stochastic optimization for machine learning in linear time. Journal of Machine Learning Research, 18(116):1-40.
+Carl Allen, Ivana Balazevic, and Timothy Hospedales. 2021. Interpreting knowledge graph relation representation from word embeddings. In International Conference on Learning Representations.
+Prithu Banerjee, Lingyang Chu, Yong Zhang, Laks V.S. Lakshmanan, and Lanjun Wang. 2021. Stealthy targeted data poisoning attack on knowledge graphs. In
+
+2021 IEEE 37th International Conference on Data Engineering (ICDE), pages 2069-2074. IEEE.
+Samyadeep Basu, Xuchen You, and Soheil Feizi. 2020. On second-order group influence functions for black-box predictions. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 715-724. PMLR.
+Claus Bendtsen and Slavé Petrovski. 2019. How data and AI are helping unlock the secrets of disease. In AstraZeneca Blog.
+Peru Bhardwaj. 2020. Towards adversarially robust knowledge graph embeddings. Proceedings of the AAAI Conference on Artificial Intelligence, 34(10):13712-13713.
+Peru Bhardwaj, John Kelleher, Luca Costabello, and Declan O'Sullivan. 2021. Poisoning knowledge graph embeddings via relation inference patterns. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1875-1888, Online. Association for Computational Linguistics.
+Battista Biggio and Fabio Roli. 2018. Wild patterns: Ten years after the rise of adversarial machine learning. Pattern Recognition, 84:317-331.
+Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 2787-2795. Curran Associates, Inc.
+Hongyun Cai, Vincent W Zheng, and Kevin Chen-Chuan Chang. 2018. A comprehensive survey of graph embedding: Problems, techniques, and applications. IEEE Transactions on Knowledge and Data Engineering.
+Liwei Cai and William Yang Wang. 2018. KBGAN: Adversarial learning for knowledge graph embeddings. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1470-1480, New Orleans, Louisiana. Association for Computational Linguistics.
+Chandrahas, Aditya Sharma, and Partha Talukdar. 2018. Towards understanding the geometry of knowledge graph embeddings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 122-131, Melbourne, Australia. Association for Computational Linguistics.
+
+Guillaume Charpiat, Nicolas Girard, Loris Felardos, and Yuliya Tarabalka. 2019. Input similarity from the neural network perspective. In NeurIPS 2019-33th Annual Conference on Neural Information Processing Systems.
+Luca Costabello, Sumit Pai, Chan Le Van, Rory McGrath, Nicholas McCarthy, and Pedro Tabacof. 2019. AmpliGraph: a Library for Representation Learning on Knowledge Graphs.
+Hanjun Dai, Hui Li, Tian Tian, Xin Huang, Lin Wang, Jun Zhu, and Le Song. 2018. Adversarial attack on graph structured data. In International conference on machine learning, pages 1115-1124. PMLR.
+Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2018. Convolutional 2d knowledge graph embeddings. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1):1811-1818.
+Xiaochuang Han and Yulia Tsvetkov. 2020. Fortifying toxic speech detectors against veiled toxicity. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7732-7739, Online. Association for Computational Linguistics.
+Xiaochuang Han, Byron C. Wallace, and Yulia Tsvetkov. 2020. Explaining black box predictions and unveiling data artifacts through influence functions. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5553-5563, Online. Association for Computational Linguistics.
+Kazuaki Hanawa, Sho Yokoi, Satoshi Hara, and Kentaro Inui. 2021. Evaluation of similarity-based explanations. In International Conference on Learning Representations.
+Aidan Hogan, Eva Blomqvist, Michael Cochez, Claudia d'Amato, Gerard de Melo, Claudio Gutierrez, Jose Emilio Labra Gayo, Sabrina Kirrane, Sebastian Neumaier, Axel Polleres, Roberto Navigli, Axel Cyrille Ngonga Ngomo, Sabbir M. Rashid, Anisa Rula, Lukas Schmelzeisen, Juan F. Sequeda, Steffen Staab, and Antoine Zimmermann. 2020. Knowledge graphs. CoRR, abs/2003.02320.
+Anthony D. Joseph, Blaine Nelson, Benjamin I. P. Rubinstein, and J. D. Tygar. 2019. Adversarial Machine Learning. Cambridge University Press.
+Rudolf Kadlec, Ondrej Bajgar, and Jan Kleindienst. 2017. Knowledge base completion: Baselines strike back. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 69-74, Vancouver, Canada. Association for Computational Linguistics.
+Pang Wei Koh and Percy Liang. 2017. Understanding black-box predictions via influence functions. In International Conference on Machine Learning, pages 1885-1894. PMLR.
+
+Pang Wei W Koh, Kai-Siang Ang, Hubert Teo, and Percy S Liang. 2019. On the accuracy of influence functions for measuring group effects. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc.
+Timothée Lacroix, Nicolas Usunier, and Guillaume Obozinski. 2018. Canonical tensor decomposition for knowledge base completion. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholm, Sweden, July 10-15, 2018, volume 80 of Proceedings of Machine Learning Research, pages 2869-2878. PMLR.
+Carolin Lawrence, Timo Sztyler, and Mathias Niepert. 2021. Explaining neural matrix factorization with gradient rollback. Proceedings of the AAAI Conference on Artificial Intelligence, 35(6):4987-4995.
+Pasquale Minervini, Thomas Demeester, Tim Rocktäschel, and Sebastian Riedel. 2017. Adversarial sets for regularising neural link predictors. In Proceedings of the Thirty-Third Conference on Uncertainty in Artificial Intelligence, UAI 2017, Sydney, Australia, August 11-15, 2017. AUAI Press.
+Christoph Molnar. 2019. Interpretable Machine Learning. https://christophm.github.io/interpretable-ml-book/.
+Yatin Nandwani, Ankesh Gupta, Aman Agrawal, Mayank Singh Chauhan, Parag Singla, and Mausam. 2020. OxKBC: Outcome explanation for factorization based knowledge base completion. In Automated Knowledge Base Construction.
+Maximilian Nickel, Kevin Murphy, Volker Tresp, and Evgeniy Gabrilovich. 2015. A review of relational machine learning for knowledge graphs. Proceedings of the IEEE, 104(1):11-33.
+Natasha Noy, Yuqing Gao, Anshu Jain, Anant Narayanan, Alan Patterson, and Jamie Taylor. 2019. Industry-scale knowledge graphs: Lessons and challenges. Commun. ACM, 62(8):36-43.
+Pouya Pezeshkpour, Sarthak Jain, Byron Wallace, and Sameer Singh. 2021. An empirical comparison of instance attribution methods for NLP. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 967-975, Online. Association for Computational Linguistics.
+Pouya Pezeshkpour, Yifan Tian, and Sameer Singh. 2019. Investigating robustness and interpretability of link prediction via adversarial modifications. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3336-3347, Minneapolis, Minnesota. Association for Computational Linguistics.
+
+Garima Pruthi, Frederick Liu, Satyen Kale, and Mukund Sundararajan. 2020. Estimating training data influence by tracing gradient descent. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
+Daniel Ruffinelli, Samuel Broscheit, and Rainer Gemulla. 2020. You can teach an old dog new tricks! on training knowledge graph embeddings. In International Conference on Learning Representations.
+Théo Trouillon, Éric Gaussier, Christopher R. Dance, and Guillaume Bouchard. 2019. On inductive abilities of latent factor models for relational learning. J. Artif. Int. Res., 64(1):21-53.
+Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. In International Conference on Machine Learning, pages 2071-2080.
+Han Xu, Yao Ma, Hao-Chen Liu, Debayan Deb, Hui Liu, Ji-Liang Tang, and Anil K Jain. 2020. Adversarial attacks and defenses in images, graphs and text: A review. International Journal of Automation and Computing, 17(2):151-178.
+Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2015. Embedding entities and relations for learning and inference in knowledge bases. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
+Hengtong Zhang, Tianhang Zheng, Jing Gao, Chenglin Miao, Lu Su, Yaliang Li, and Kui Ren. 2019a. Data poisoning attack against knowledge graph embedding. In International Joint Conference on Artificial Intelligence.
+Wen Zhang, Bibek Paudel, Wei Zhang, Abraham Bernstein, and Huajun Chen. 2019b. Interaction embeddings for prediction and explanation in knowledge graphs. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, pages 96-104.
+Daniel Zügner, Amir Akbarnejad, and Stephan Gunnemann. 2018. Adversarial attacks on neural networks for graph data. In International Conference on Knowledge Discovery & Data Mining, pages 2847-2856.
+
+# Appendix
+
+# A Dataset Details
+
+We evaluate the proposed attacks on four state-of-art KGE models - DistMult, ComplEx, ConvE and TransE; on two publicly available benchmark datasets for link prediction - WN18RR and FB15k-237. For the KGE model evaluation protocol, we filter out triples from the validation and test set that contain unseen entities.
+
+To assess the attack effectiveness in degrading performance on triples predicted as True, we need a set of triples that the victim model predicts as True. Thus, we select the subset of the benchmark test set that is ranked best (i.e. rank $= 1$) by the victim KGE model. If this subset has more than 100 triples, we randomly sample 100 triples as the target triples; otherwise we use all of them as target triples. We perform this pre-processing step to avoid the expensive Hessian inverse computation in the Influence Functions (IF) for a large number of target triples - for each target triple, estimating the Hessian inverse (as an HVP) using the LiSSA algorithm requires one training epoch.
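The selection step described above can be sketched as follows (function and variable names are ours, not from the released codebase):

```python
import random

def select_target_triples(test_triples, ranks, n_max=100, seed=0):
    """Keep the test triples ranked best (rank == 1) by the victim
    KGE model; if more than n_max remain, randomly sample n_max of
    them as the target triples."""
    best = [t for t, r in zip(test_triples, ranks) if r == 1]
    if len(best) > n_max:
        return random.Random(seed).sample(best, n_max)
    return best
```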
+
+ | | | WN18RR | FB15k-237 |
| Entities | | 40,559 | 14,505 |
| Relations | | 11 | 237 |
| Training | | 86,835 | 272,115 |
| Validation | | 2,824 | 17,526 |
| Test | | 2,924 | 20,438 |
| Subset with Best Ranks | DistMult | 1,109 | 1,183 |
| | ComplEx | 1,198 | 1,238 |
| | ConvE | 1,106 | 901 |
| | TransE | 15 | 1,223 |
+
+Table 6: Statistics for WN18RR and FB15k-237. We removed triples from the validation and test set that contained unseen entities to ensure that we do not add new entities as adversarial edits. The numbers above (including the number of entities) reflect this filtering.
+
+Table 6 shows the dataset statistics and the number of triples which are ranked best by the different KGE models.
+
+# B Training Details
+
+# B.1 Training KGE models
+
+We implement four KGE models - DistMult, ComplEx, ConvE and TransE. We use the 1-N training strategy proposed in Lacroix et al. (2018) but we do not add the reciprocal relations. Thus, for
+
+ | | WN18RR | | FB15k-237 | |
| | MRR | Hits@1 | MRR | Hits@1 |
| DistMult | 0.48 | 0.44 | 0.34 | 0.24 |
| ComplEx | 0.51 | 0.47 | 0.34 | 0.25 |
| ConvE | 0.44 | 0.41 | 0.32 | 0.23 |
| TransE | 0.21 | 0.02 | 0.33 | 0.24 |
+
+Table 7: MRR and Hits@1 results for original KGE models on WN18RR and FB15k-237
+
+each triple, we generate scores for $(s,r)\rightarrow o$ and $(o,r)\rightarrow s$.
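For DistMult, this 1-N scoring can be sketched with plain NumPy (illustrative dimensions; the actual implementation uses PyTorch):

```python
import numpy as np

rng = np.random.default_rng(0)
n_ent, dim = 5, 4
E = rng.normal(size=(n_ent, dim))  # entity embeddings
R = rng.normal(size=(2, dim))      # relation embeddings

def scores_sr_to_o(s, r):
    """DistMult scores of (s, r, ?) against every candidate object."""
    return E @ (E[s] * R[r])

def scores_or_to_s(o, r):
    """DistMult scores of (?, r, o) against every candidate subject."""
    return E @ (E[o] * R[r])
```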
+
+For the TransE scoring function, we use the L2 norm. The loss function used for all models is PyTorch's CrossEntropyLoss. For regularization, we use N3 regularization and input dropout on DistMult and ComplEx; input dropout, hidden dropout and feature dropout on ConvE; and L2 regularization (Bordes et al., 2013) and input dropout for TransE.
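A minimal sketch of the TransE scoring with the L2 norm, with a NumPy stand-in for PyTorch's CrossEntropyLoss over the 1-N score vector (shapes and helper names are ours):

```python
import numpy as np

def transe_scores(E, R, s, r):
    """TransE scores of (s, r, ?) using the L2 norm:
    -||e_s + w_r - e_o||_2, so higher means more plausible."""
    return -np.linalg.norm(E[s] + R[r] - E, axis=1)

def cross_entropy(scores, true_o):
    """Cross-entropy over all candidate objects (cf. CrossEntropyLoss)."""
    z = scores - scores.max()            # subtract max for numerical stability
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[true_o]
```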
+
+We do not use early stopping, to ensure the same hyperparameters for the original and poisoned KGE models. We use an embedding size of 200 for all models on both datasets. An exception is the TransE model on WN18RR, where we use an embedding size of 100 due to the expensive time and space complexity of 1-N training for TransE. We manually tuned the hyperparameters for the KGE models based on suggestions from state-of-art implementations (Ruffinelli et al., 2020; Dettmers et al., 2018; Lacroix et al., 2018; Costabello et al., 2019).
+
+Table 7 shows the MRR and Hits@1 for the original KGE models on WN18RR and FB15k-237. To re-train the KGE model on poisoned dataset, we use the same hyperparameters as the original model. We run all model training, adversarial attacks and evaluation on a shared HPC cluster with Nvidia RTX 2080ti, Tesla K40 and V100 GPUs.
+
+To ensure reproducibility, our source code is publicly available on GitHub at https://github.com/PeruBhardwaj/AttributionAttack. The results in Section 4 can be reproduced by passing the argument reproduce-results to the attack scripts. Example commands are available in the bash scripts in our codebase. The hyperparameters used to generate the results can be inspected in the set_hyperparams() function in the file utils.py or in the log files.
+
+For the LiSSA algorithm used to estimate the Hessian inverse in Influence Functions, we select the hyperparameter values using suggestions from Koh and Liang (2017). The values are selected
+
+to ensure that the Taylor expansion in the estimator converges. These hyperparameter values for our experiments are available in the function set_if_parameters() in the file utils.py of the accompanying codebase.
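For reference, the LiSSA estimator iterates a truncated Neumann series to approximate the Hessian-inverse-vector product. A minimal sketch with the damping and scale hyperparameters (the actual values we use are those in set_if_parameters(); the dense-matrix test setting here is purely illustrative):

```python
import numpy as np

def lissa_inverse_hvp(hvp_fn, v, damping=0.01, scale=10.0, steps=200):
    """Estimate H^{-1} v via the recursion
        h_t = v + (1 - damping) * h_{t-1} - hvp_fn(h_{t-1}) / scale,
    whose fixed point satisfies (damping*scale*I + H) (h/scale) = v.
    The series converges only when the eigenvalues of H are small
    enough relative to scale, which is why these values need tuning."""
    h = np.array(v, dtype=float)
    for _ in range(steps):
        h = v + (1.0 - damping) * h - hvp_fn(h) / scale
    return h / scale
```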
+
+# B.2 Baseline Implementation Details
+
+One of the baselines in Section 4 of the main paper is the Direct-Del and Direct-Add attack from Zhang et al. (2019a). The original study evaluated the method only on the neighbourhood of the subject of the target triple. We extend it to both subject and object to ensure a fair comparison with the other attacks. Since no public implementation is available, we implement our own.
+
+ | | WN18RR | | | FB15k-237 | | |
| | Original | High | Low | Original | High | Low |
| DistMult | 1.00 | 0.98 | 0.98 | 1.00 | 0.64 | 0.64 |
| ComplEx | 1.00 | 0.96 | 0.95 | 1.00 | 0.67 | 0.66 |
| ConvE | 1.00 | 0.99 | 0.99 | 1.00 | 0.62 | 0.60 |
| TransE | 1.00 | 0.81 | 0.86 | 1.00 | 0.72 | 0.73 |
+
+The Direct-Add attack is based on computing a perturbation score for all possible candidate additions. Since the search space for candidate additions is of the order $\mathcal{E} \times \mathcal{R}$ (where $\mathcal{E}$ and $\mathcal{R}$ are the sets of entities and relations), it uses random down-sampling to filter the candidates. The percentage of triples down-sampled is not reported in the original paper and a public implementation is not available. So, in this paper, we pick a high and a low value for the percentage of triples to be down-sampled and generate adversarial additions for both fractions. We arbitrarily choose $20\%$ of all candidate additions as the high value and $5\%$ as the low value.
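Our down-sampling step can be sketched as follows (the candidate construction is simplified to one side of the target triple; helper names are ours):

```python
import itertools
import random

def downsample_candidates(entities, relations, target, frac, seed=0):
    """Enumerate candidate adversarial additions around the target
    triple's subject (one (r', e') pair per candidate), then keep a
    random fraction: frac=0.20 for 'high', frac=0.05 for 'low'."""
    s, _, _ = target
    pool = [(s, r, e) for r, e in itertools.product(relations, entities)]
    k = max(1, int(frac * len(pool)))
    return random.Random(seed).sample(pool, k)
```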
+
+Thus, we generate two poisoned datasets from the attack - one that used a high number of candidates and another that used a low number of candidates. We train two separate KGE models on
+
+these datasets to assess the baseline performance. Table 8 shows the MRR of the original model and of the poisoned KGE models from the attack with high and low down-sampling percentages. The results reported for Direct-Add in Section 4 of the main paper are the better of the two results (the one showing more degradation in performance) for each combination.
+
+# C Further Analysis of Proposed Attacks
+
+# C.1 Runtime Analysis
+
+We analyze the runtime efficiency of baseline and proposed attack methods for adversarial deletions. For brevity, we consider the attacks on DistMult model, but the results on other models show similar time scales. Table 9 shows the time taken in seconds to select the influential triples for DistMult model on WN18RR and FB15k-237.
+
+Table 8: MRR of KGE models trained on the original datasets and on the poisoned datasets from the Direct-Add baseline attack in Zhang et al. (2019a). High and Low indicate the high $(20\%)$ and low $(5\%)$ percentages of candidates selected by random down-sampling.
+
+ | | | WN18RR | FB15k-237 |
| Baseline Attacks | Random_n | 0.024 | 0.057 |
| | Random_g | 0.002 | 0.002 |
| | Direct-Del | 0.407 | 0.272 |
| | CRIAGE | 2.235 | 75.117 |
| | GR | 29.919 | 174.191 |
| Proposed Attacks | Dot Metric | 0.288 | 0.342 |
| | \( \ell_2 \) Metric | 0.057 | 0.067 |
| | Cos Metric | 0.067 | 0.148 |
| | GD (dot) | 7.354 | 109.015 |
| | GL (\( \ell_2 \)) | 8.100 | 120.659 |
| | GC (cos) | 9.478 | 141.276 |
| | IF | 4751.987 | 4750.404 |
+
+Table 9: Time taken in seconds for baseline and proposed attacks to generate influential triples for DistMult on WN18RR and FB15k-237.
+
+We see that the Instance Similarity metrics (dot metric, $\ell_2$ metric, cos metric) are more efficient than the state-of-art attacks (Direct-Del, CRIAGE and GR). Furthermore, the $\ell_2$ metric is almost as quick as random triple selection. The efficiency of the Gradient Similarity metrics is also better than or comparable to that of CRIAGE and GR.
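For concreteness, the three Instance Similarity metrics compared here amount to the following, where x and z are feature vectors for the target and candidate triples (in the paper these are functions of the triples' embeddings; this is a sketch of the metrics only, not the full attack):

```python
import numpy as np

def dot_metric(x, z):
    """Similarity as a plain dot product."""
    return float(x @ z)

def l2_metric(x, z):
    """Negated Euclidean distance, so larger means more similar."""
    return -float(np.linalg.norm(x - z))

def cos_metric(x, z):
    """Cosine of the angle between the two feature vectors."""
    return float(x @ z) / (float(np.linalg.norm(x)) * float(np.linalg.norm(z)))
```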
+
+Only the attack based on IF is much slower than the other methods. This is because estimating the Hessian inverse in IF requires one training epoch for every target triple; that is, we run 100 training epochs to get the influential triples for 100 target triples. However, our results in Section 4.2 of the main paper show that this expensive computation does not yield improved adversarial deletions, and thus might be unnecessary for selecting influential triples for KGE models.
+
+| Target Relation | Influential Relation |
| _has_part | _has_part |
| _synset_domain_topic_of | _synset_domain_topic_of |
| _has_part | _has_part |
| _synset_domain_topic_of | _synset_domain_topic_of |
| _synset_domain_topic_of | _synset_domain_topic_of |
| _synset_domain_topic_of | _synset_domain_topic_of |
| _instance_hypernym | _instance_hypernym |
| _synset_domain_topic_of | _synset_domain_topic_of |
| _instance_hypernym | _synset_domain_topic_of |
| _synset_domain_topic_of | _synset_domain_topic_of |
| _member_meronym | _derivationally_related_form |
| _synset_domain_topic_of | _synset_domain_topic_of |
| _has_part | _has_part |
| _member_meronym | _member_meronym |
| _synset_domain_topic_of | _synset_domain_topic_of |
+
+# C.2 Additional Comparison with CRIAGE
+
+The baseline attack method CRIAGE estimates the influence of a training triple using the BCE loss and is thus likely to be effective only for KGE models that are trained with BCE loss. In Section 4.1, we found that the proposed attacks are more effective than the baseline attack.
+
+But since our original models are trained with cross-entropy loss, we perform an additional analysis of the Instance Similarity attacks against CRIAGE for the DistMult model trained with BCE loss. Table 11 shows the reduction in MRR and Hits@1 due to adversarial deletions in this training setting. We find that the Instance Similarity attacks outperform the baseline for this setting as well.
+
+Table 10: Relations from the target triples and influential triples (adversarial deletions) for the cos metric on WN18RR-TransE. This combination has 15 target triples and the table shows the relations for all of them.
+
+ | | WN18RR | | FB15k-237 | |
| | MRR | Hits@1 | MRR | Hits@1 |
| Original | 1.00 | 1.00 | 1.00 | 1.00 |
| CRIAGE | 0.67 | 0.63 | 0.63 | 0.46 |
| Dot Metric | 0.86 | 0.81 | 0.61 | 0.44 |
| \( \ell_2 \) Metric | 0.12 | 0.06 | 0.60 | 0.43 |
| Cos Metric | 0.12 | 0.06 | 0.58 | 0.38 |
+
+Table 11: Reduction in MRR and Hits@1 due to adversarial deletions for DistMult (trained with BCE loss) on WN18RR and FB15k-237.
+
+# C.3 Analysis of Instance Attribution Methods on WN18RR-TransE
+
+For the TransE model on WN18RR, we found that the instance attribution methods lead to effective adversarial deletions with respect to the random baselines, but not to effective adversarial additions (Section 4.1 of the main paper). A possible reason relates to the ability of the TransE model to represent hierarchical relations, i.e. relations that encode a hierarchy between the subject and object entities. For example, $(s,\_\text{has\_part},o)$ indicates that $s$ is the parent node of $o$ in a hierarchy.
+
+We select the cos metric from the Instance Similarity methods for further analysis. It performs the best of all instance attribution methods for adversarial deletions, but worse than random neighbourhood edits for adversarial additions. Table 10 shows the relations in the target triples and in the influential triples (i.e. adversarial deletions) selected by the cos metric.
+
+We see that the target triples mostly contain hierarchical relations like _synset_domain_topic_of and _has_part, and that the cos metric identifies influential triples with the same relations. Since our adversarial additions only modify an entity in the influential triple, these edits reinforce the hierarchy structure of the graph instead of breaking it. Thus, the selected triples work well as adversarial deletions, but not as additions.
+
+# C.4 Neighbourhood Sparsity Comparison on WN18RR and FB15k-237
+
+In Section 4.3 of the main paper, we found that the proposed attacks are significantly more effective for WN18RR than for FB15k-237. This is likely because there are fewer triples in the neighbourhood of the target triples for WN18RR than for FB15k-237. Figure 2 shows the median number of neighbours of the target triples for WN18RR and FB15k-237. We report the median (instead of the mean) because of the large standard deviation in the number of target triple neighbours for FB15k-237.
+
+We see that the target triples' neighbourhood for WN18RR is significantly sparser than that for FB15k-237. Since the KGE model predictions for WN18RR are learned from fewer triples, it is also easier to perturb these predictions with fewer adversarial edits.
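The neighbourhood statistic plotted in Figure 2 can be computed as follows (a sketch; here we define a triple's neighbours as the training triples sharing an entity with it, and for simplicity include the target triple itself in the count):

```python
from statistics import median

def neighbourhood(train_triples, target):
    """Training triples sharing at least one entity with the target."""
    s, _, o = target
    return [t for t in train_triples if {t[0], t[2]} & {s, o}]

def median_neighbourhood_size(train_triples, targets):
    """Median neighbourhood size over a set of target triples."""
    return median(len(neighbourhood(train_triples, t)) for t in targets)
```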
+
+
+Figure 2: Comparison of the median number of neighbouring triples of target triples from WN18RR and FB15k-237 for DistMult, ComplEx, ConvE and TransE.
\ No newline at end of file
diff --git a/adversarialattacksonknowledgegraphembeddingsviainstanceattributionmethods/images.zip b/adversarialattacksonknowledgegraphembeddingsviainstanceattributionmethods/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..9f1ff46117d011eabb7196f8fab51c17e844c1d1
--- /dev/null
+++ b/adversarialattacksonknowledgegraphembeddingsviainstanceattributionmethods/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8887a58990278b807234711a420a5cbb78bb10a01eb6bd75d2341e7a81fd1cf2
+size 674193
diff --git a/adversarialattacksonknowledgegraphembeddingsviainstanceattributionmethods/layout.json b/adversarialattacksonknowledgegraphembeddingsviainstanceattributionmethods/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..aacec6118256aa974ec2914c0a7335522c25994a
--- /dev/null
+++ b/adversarialattacksonknowledgegraphembeddingsviainstanceattributionmethods/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1b907de027ca4406f6cafc8640400f596578baab572bd44887449ae078b8eac8
+size 477329
diff --git a/adversarialmixingpolicyforrelaxinglocallylinearconstraintsinmixup/8483c4c2-12c7-40ca-984a-441e48f0e1f1_content_list.json b/adversarialmixingpolicyforrelaxinglocallylinearconstraintsinmixup/8483c4c2-12c7-40ca-984a-441e48f0e1f1_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..08b5671edc9f82f551ba241176318de866f35366
--- /dev/null
+++ b/adversarialmixingpolicyforrelaxinglocallylinearconstraintsinmixup/8483c4c2-12c7-40ca-984a-441e48f0e1f1_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:253765ef9cebab2c6ec3686fd5fd2c76b2550d16ee0bd32fc3c93a55971e1fea
+size 80897
diff --git a/adversarialmixingpolicyforrelaxinglocallylinearconstraintsinmixup/8483c4c2-12c7-40ca-984a-441e48f0e1f1_model.json b/adversarialmixingpolicyforrelaxinglocallylinearconstraintsinmixup/8483c4c2-12c7-40ca-984a-441e48f0e1f1_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..2f61afa69e3690e0cf1fd8adf53dfaaaa828b462
--- /dev/null
+++ b/adversarialmixingpolicyforrelaxinglocallylinearconstraintsinmixup/8483c4c2-12c7-40ca-984a-441e48f0e1f1_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:90d747bdcd248a6e070dfc0435274bef065407a3946e75c93b7f8db70d06ba57
+size 98204
diff --git a/adversarialmixingpolicyforrelaxinglocallylinearconstraintsinmixup/8483c4c2-12c7-40ca-984a-441e48f0e1f1_origin.pdf b/adversarialmixingpolicyforrelaxinglocallylinearconstraintsinmixup/8483c4c2-12c7-40ca-984a-441e48f0e1f1_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..0a05a2b609b8532f5d78fcb48481cb5210197766
--- /dev/null
+++ b/adversarialmixingpolicyforrelaxinglocallylinearconstraintsinmixup/8483c4c2-12c7-40ca-984a-441e48f0e1f1_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a01579785d60ece88c7f04ec892efe00cc637da4e9b93ee85bf3f00c91608f3c
+size 514205
diff --git a/adversarialmixingpolicyforrelaxinglocallylinearconstraintsinmixup/full.md b/adversarialmixingpolicyforrelaxinglocallylinearconstraintsinmixup/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..96dd3f285be87c7209aacc1412b42f0e91316061
--- /dev/null
+++ b/adversarialmixingpolicyforrelaxinglocallylinearconstraintsinmixup/full.md
@@ -0,0 +1,348 @@
+# Adversarial Mixing Policy for Relaxing Locally Linear Constraints in Mixup
+
+Guang Liu, Yuzhao Mao, Hailong Huang, Weiguo Gao, and Xuan Li
+
+PingAn Life Insurance of China
+
+https://github.com/PAI-SmallIsAllYourNeed/Mixup-AMP
+
+# Abstract
+
+Mixup is a recent regularizer for deep classification networks. By training a neural network on convex combinations of pairs of examples and their labels, it imposes locally linear constraints on the model's input space. However, such strict linear constraints often lead to under-fitting, which degrades the effects of regularization. Noticeably, this issue becomes more serious when resources are extremely limited. To address these issues, we propose the Adversarial Mixing Policy (AMP), organized in a "min-max-rand" formulation, to relax the locally linear constraints in Mixup. Specifically, AMP adds a small adversarial perturbation to the mixing coefficients rather than to the examples. Thus, slight non-linearity is injected between the synthetic examples and synthetic labels. By training on these data, the deep networks are further regularized and thus achieve a lower predictive error rate. Experiments on five text classification benchmarks and five backbone models empirically show that our methods reduce the error rate over Mixup variants by a significant margin (up to $31.3\%$), especially in low-resource conditions (up to $17.5\%$).
+
+# 1 Introduction
+
+Deep classification models have achieved impressive results in both image (He et al., 2016; Dosovitskiy et al., 2020) and language processing (Devlin et al., 2019; Kim, 2014; Wang et al., 2016). One of the most significant challenges in training a deep model is the great effort and cost of collecting large-scale labels. Without sufficient labels, deep networks tend to generalize poorly, leading to unsatisfactory performance. Thus, regularization techniques under the augmentation schema, which generate labeled data to regularize models (Hernandez-Garcia and König, 2018), are widely explored (Wei and Zou, 2019; Liu et al., 2021).
+
Mixup (Zhang et al., 2018) is an effective regularizer under the augmentation schema. In recent years, topics related to Mixup have warranted serious attention (Lee et al., 2020; Xu et al., 2020; Verma et al., 2019; Archambault et al., 2019; Berthelot et al., 2019b,a; Beckham et al., 2019; Mao et al., 2019; Zhu et al., 2020). The core idea of Mixup is to generate synthetic training data via a mixing policy that convexly combines a pair of examples and their labels. By training on these data, classification networks are regularized and reach higher performance. Unlike conventional regularizers (Srivastava et al., 2014; Hanson and Pratt, 1988; Ioffe and Szegedy, 2015), Mixup imposes a kind of locally linear constraint (Zhang et al., 2018; Guo et al., 2019b) on the model's input space.
+
However, vanilla Mixup often suffers from under-fitting due to the ambiguous data (Guo et al., 2019b; Guo, 2020; Mai et al., 2021) generated under the strict locally linear constraints. To alleviate the under-fitting, Guo (2020) uses extra parameters to project the inputs and labels into a high-dimensional space so the data can be properly separated. Guo et al. (2019b) and Mai et al. (2021) use auxiliary networks to learn the mixing policy in a data-driven way and avoid generating ambiguous data. Although these works effectively reduce the under-fitting, they are limited in how well they can regularize networks: with the extra parameters, current networks become prone to over-fitting, which ultimately degrades the effect of regularization. The conflict between over-fitting and under-fitting becomes more serious when labeled resources are rare or hard to obtain. Besides, methods with auxiliary networks are usually difficult to integrate with other Mixup variants. More importantly, Mixup works well in most cases (Guo et al., 2019b); adding too much non-linearity would sacrifice the majority of synthetic data that can regularize the networks under locally linear constraints. So, the locally linear constraints in Mixup only need to be slightly relaxed.
+
In this paper, we propose the Adversarial Mixing Policy (AMP) to overcome these limitations. We adapt adversarial training (Goodfellow et al., 2015), which relaxes the linear nature of the network without any extra parameters or auxiliary networks, to relax the Locally Linear Constraints in Mixup. Inspired by the "min-max" formulation of adversarial training, we formulate our method as a form of "min-max-rand" regularization. Specifically, the "rand" operation randomly samples a mixing coefficient, as in vanilla Mixup, to generate a synthetic example and label. Then, the "max" operation calculates a perturbation of the mixing coefficient and applies it. Note that the updated mixing coefficient is only used to re-synthesize the example, keeping the synthetic label unchanged. Thus, slight non-linearity is injected between the synthetic example and label. Finally, the "min" operation minimizes the training loss over the non-linearly generated example-label pairs. In summary, we highlight the following contributions:
+
+- We propose an Adversarial Mixing Policy (AMP) to relax the Locally Linear Constraints (LLC) in Mixup without any auxiliary networks. It can be seamlessly integrated into other Mixup variants for its simplicity.
+- To the best of our knowledge, this is the first exploration of the application of adversarial perturbation to the mixing coefficient in Mixup.
- We analyze our proposed method with extensive experiments and show that AMP improves the performance of two Mixup variants under various settings and outperforms Non-linear Mixup in terms of error rate.
+
+# 2 Background
+
+# 2.1 Linear nature of the networks
+
Let $(x; y)$ be a sample in the training data, where $x$ denotes the input and $y$ the corresponding label. Deep networks learn a mapping function from $x$ to $y$:
+
+$$
+f (x) = y ^ {\prime} \rightarrow y. \tag {1}
+$$
+
Here, $y'$ is the output of the network and $\rightarrow$ represents the learning process. The linear nature of networks can be interpreted as follows: a small change in the input leads to a change in the model output:
+
+$$
+f (x + \nabla x) = y ^ {\prime} + \nabla y. \tag {2}
+$$
+
Here, $\nabla x$ is a small perturbation of $x$, and $\nabla y$ is the change in output caused by the injection of $\nabla x$. This linearity makes the networks vulnerable to adversarial attacks (Goodfellow et al., 2015).
+
+# 2.2 Relax the linear nature
+
+To relax the linear nature of the networks, adversarial training (Goodfellow et al., 2015) forces the networks to learn the following mapping function,
+
+$$
+f (x + \nabla x) = y ^ {\prime} \rightarrow y, \tag {3}
+$$
+
where $\nabla x$ is a small adversarial perturbation. This kind of training can effectively relax the linearity of networks and improve the robustness of deep networks. However, there exists a trade-off between model robustness (Eq. 3) and generalization (Eq. 1) (Tsipras et al., 2019).
+
+# 2.3 Locally linear constraints in Mixup
+
+Mixup can be formulated as follows,
+
+$$
+f \left(m _ {x} (\lambda)\right) = y ^ {\prime} \rightarrow m _ {y} (\lambda), \tag {4}
+$$
+
+$$
+m _ {x} (\lambda) = x _ {1} \cdot \lambda + x _ {2} \cdot (1 - \lambda), \tag {5}
+$$
+
+$$
+m _ {y} (\lambda) = y _ {1} \cdot \lambda + y _ {2} \cdot (1 - \lambda), \tag {6}
+$$
+
where $\lambda \in [0,1]$ is the mixing coefficient and $m$ is the mixing policy. $(x_{1};y_{1})$ and $(x_{2};y_{2})$ are a pair of examples from the original training data. By training on the synthetic data $m_x(\lambda)$ and $m_y(\lambda)$, Mixup (Zhang et al., 2018; Verma et al., 2019) imposes the Locally Linear Constraints on the input space of networks. Different from Eq. 2, this linearity can be formulated as follows,
+
+$$
+f \left(m _ {x} (\lambda + \nabla \lambda)\right) = y ^ {\prime} + \nabla y \rightarrow m _ {y} (\lambda + \nabla \lambda). \tag {7}
+$$
+
Here, $\nabla \lambda$ is a small change in $\lambda$. We can observe that the output of the network changes accordingly, which mirrors the form of the linear nature of networks. Under these settings, a small change in $\lambda$ often leads to an undesirable change of output. Eventually, these strict linear constraints lead to under-fitting, which degrades the regularization effects (Guo et al., 2019b; Guo, 2020).
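To make this linearity concrete, the following toy check (plain NumPy, with made-up one-hot vectors) verifies that a shift $\nabla \lambda$ in the mixing coefficient moves the synthetic label of Eq. 6 by exactly $\nabla \lambda \cdot (y_1 - y_2)$, regardless of where $\lambda$ sits:

```python
import numpy as np

# One-hot labels of a hypothetical pair of examples.
y1 = np.array([1.0, 0.0])
y2 = np.array([0.0, 1.0])

def m_y(lam):
    """Eq. 6: convex combination of the pair of labels."""
    return lam * y1 + (1.0 - lam) * y2

lam, d_lam = 0.3, 0.05
shift = m_y(lam + d_lam) - m_y(lam)
# Under the locally linear constraint the label shift is exactly
# d_lam * (y1 - y2), independent of lam.
assert np.allclose(shift, d_lam * (y1 - y2))
```

The same strict proportionality is what Eq. 7 expresses for the model output, and what AMP will slightly relax.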
+
+# 2.4 Why relaxing locally linear constraints
+
Relaxing the strict linear constraints in Mixup can alleviate the under-fitting and therefore improve the regularization effects (Guo, 2020). The under-fitting happens when the synthetic data is corrupted or ambiguous for the network. So, if we can make the network compatible with such data, much like the soft margin (Suykens and Vandewalle, 1999), the under-fitting will be eased. Ideally, such relaxation should be realized without extra parameters. Inspired by adversarial training (Eq. 3), we hypothesize that injecting slight non-linearity into Mixup can relax its constraints without extra parameters, as follows,
+
+$$
f (m _ {x} (\lambda + \nabla \lambda)) = y ^ {\prime} \rightarrow m _ {y} (\lambda), \tag {8}
+$$
+
where $\nabla \lambda$ is an adversarial perturbation injected into the original mixing coefficient $\lambda$.
+
+# 3 Methodology
+
As shown in Figure 1, the Adversarial Mixing Policy (AMP) consists of three operations: Rand, Max and Min. Rand Operation (RandOp) generates synthetic data by interpolating pairs of training examples and their labels with a random mixing coefficient $\lambda$. Max Operation (MaxOp) injects a small adversarial perturbation into $\lambda$ to re-synthesize the example while keeping the synthetic label unchanged; this injects slight non-linearity into the synthetic data. Min Operation (MinOp) minimizes the losses on these data. Additionally, we use a simple comparison to eliminate the influence caused by the scaling of gradients.
+
+# 3.1 Method formulation
+
Given a training set $D = \{x_{i},y_{i}\}$ of texts, each sample consists of a word sequence $x_{i}$ and a label $y_{i}$. A classification model encodes the text into a hidden state and predicts its category. Mixup's objective is to generate an interpolated representation $\hat{g}_k$ and label $\hat{y}$ by random linear interpolation with ratio $\lambda$ applied to a data pair $(x_{i};y_{i})$ and $(x_{j};y_{j})$. Our method aims to inject a perturbation $\nabla \lambda$ into $\lambda$ so as to maximize the loss on the interpolated data, and then to minimize the maximized loss. Inspired by adversarial training, we formulate this as a min-max-rand optimization problem,
+
+$$
+\min _ {\theta} \mathbb {E} _ {\hat {D}} \max _ {| \nabla \lambda | \leq \varepsilon} \ell_ {m i x} (f _ {r a n d} (\lambda + \nabla \lambda , i, j, k); \theta). \tag {9}
+$$
+
Here, $\hat{D} = \{\hat{g}_{k},\hat{y}\}$ is the synthetic data set generated by $f_{rand}(\lambda ,i,j,k)$, $\nabla \lambda$ is the adversarial perturbation of $\lambda$, $\varepsilon$ is the maximum step size, $\ell_{mix}(*)$ is the Mixup loss function, $f_{rand}(*)$ represents the random interpolation of data and labels, $\lambda$ is the random mixing coefficient sampled from a Beta distribution with parameter $\alpha$, $i$ and $j$ are randomly sampled data indexes in $D$, and $k$ is the mixed layer.
+
+# 3.2 Rand operation
+
Rand Operation (RandOp) is identical to Mixup (Zhang et al., 2018). It generates random interpolated data between two categories. Specifically, it generates synthetic labeled data by linearly interpolating pairs of training examples as well as their corresponding labels. For a data pair $(x_{i};y_{i})$ and $(x_{j};y_{j})$, $x$ denotes the examples and $y$ the one-hot encodings of the corresponding labels. Consider a model $f(x) = f_{k}(g_{k}(x))$, where $g_{k}$ denotes the part of the model mapping the input data to the hidden state at layer $k$, and $f_{k}$ denotes the part mapping that hidden state to the output of $f(x)$. The synthetic data is generated as follows,
+
+$$
+\lambda \sim \operatorname {B e t a} (\alpha , \alpha), \tag {10}
+$$
+
+$$
+\hat {g} _ {k} = g _ {k} \left(x _ {i}\right) \cdot \lambda + g _ {k} \left(x _ {j}\right) \cdot (1 - \lambda), \tag {11}
+$$
+
+$$
+\hat {y} = y _ {i} \cdot \lambda + y _ {j} \cdot (1 - \lambda), \tag {12}
+$$
+
where $\lambda$ is the mixing coefficient for the data pair, $\alpha$ is the hyper-parameter of the Beta distribution, and $\hat{g}_k$ is the synthetic hidden state. For efficient computation, the mixing happens by randomly picking one sample and pairing it with another sample drawn from the same mini-batch (Zhang et al., 2018). To simplify, we reformulate the random interpolation $f_{rand}(*)$ as follows,
+
+$$
+\left(f _ {k} \left(\hat {g} _ {k}\right), \hat {y}\right) := \underset {\lambda \sim B e t a (\alpha , \alpha)} {f _ {r a n d}} (\lambda , i, j, k). \tag {13}
+$$
+
Here, $f_{rand}(*)$ takes the results of Equations 10-12 as input and outputs the model prediction $f_{k}(\hat{g}_{k})$ and the label $\hat{y}$. Training on the generated data reduces the volatility of the model's predictions on such interpolations, so the model generalizes better to unseen data.
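For concreteness, RandOp (Eqs. 10-12) can be sketched in a few lines of NumPy, treating the hidden states $g_k(x_i)$ and $g_k(x_j)$ as precomputed arrays; the function and argument names are ours, not part of the original implementation:

```python
import numpy as np

def rand_op(g_i, g_j, y_i, y_j, alpha=1.0, rng=None):
    """Sample lambda ~ Beta(alpha, alpha) (Eq. 10) and linearly
    interpolate a pair of hidden states (Eq. 11) and one-hot
    labels (Eq. 12)."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    g_hat = lam * g_i + (1.0 - lam) * g_j
    y_hat = lam * y_i + (1.0 - lam) * y_j
    return lam, g_hat, y_hat
```

With $\alpha = 1$ (the paper's setting), the Beta distribution reduces to the uniform distribution on $[0,1]$.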
+
+# 3.3 Max operation
+
Figure 1: The major operations of Adversarial Mixing Policy (AMP).

Max Operation (MaxOp) injects a small adversarial perturbation into $\lambda$ to introduce slight non-linearity between the synthetic example and the synthetic label. As a result, the generated synthetic data no longer strictly follows the Locally Linear Constraints in Mixup. To achieve this, we propose an algorithm similar to the Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2015) that injects an adversarial perturbation into $\lambda$. It calculates the gradient of $\lambda$ in the gradient-ascent direction,
+
+$$
+\max _ {| \nabla \lambda | \leq \varepsilon} \ell_ {m i x} \left(f _ {r a n d} (\lambda + \nabla \lambda , i, j, k); \theta\right), \tag {14}
+$$
+
where $\nabla \lambda$ is the gradient of $\lambda$ in the gradient-ascent direction and $\varepsilon$ is the step size. Different from FGSM (Goodfellow et al., 2015), we add a small perturbation to $\lambda$ instead of to the input. Besides, since $\lambda$ is a scalar, we can obtain the adversarial direction and strength directly, so there is no need to normalize $\nabla \lambda$:
+
+$$
+\lambda^ {\prime} = \lambda + \varepsilon \cdot \nabla \lambda , \tag {15}
+$$
+
where $\lambda^{\prime}$ is the slightly perturbed mixing coefficient, $\varepsilon$ is the step size, and $\nabla \lambda$ is the clipped $(\leq 1)$ gradient of $\lambda$. The perturbation is the gradient in the adversarial direction. We calculate the gradient of $\lambda$ as follows,
+
+$$
+\nabla \lambda = \frac {\partial \mathcal {L}}{\partial \lambda}. \tag {16}
+$$
+
Here, the Mixup loss $\mathcal{L}$ is calculated by interpolating the losses on the pair of labels (Zhang et al., 2018; Verma et al., 2019) as follows,
+
$$
\begin{aligned} \mathcal{L} &= \ell_{mix}(f_{rand}(\lambda ,i,j,k);\theta) \\ &= \ell_{ce}\left(f_{k}\left(\hat{g}_{k}\right), y_{i}; \theta\right) \cdot \lambda + \ell_{ce}\left(f_{k}\left(\hat{g}_{k}\right), y_{j}; \theta\right) \cdot (1 - \lambda). \end{aligned} \tag{17}
$$
+
Here, $\mathcal{L}$ represents the loss of the synthetic data generated under mixing coefficient $\lambda$, $\theta$ denotes the parameters of the model, $\ell_{mix}(*)$ is the Mixup loss, and $\ell_{ce}(*)$ represents the cross-entropy function. Note that the step size $\varepsilon$ may lead to an undesirable result where the perturbation actually decreases the loss, so we need to eliminate the influence caused by $\varepsilon$.
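A minimal sketch of MaxOp (Eqs. 15-16) follows. Here `mix_loss(lam)` stands in for $\ell_{mix}(f_{rand}(\lambda, i, j, k); \theta)$, and, since $\lambda$ is a scalar, its gradient is estimated with a central finite difference purely for illustration; in practice the gradient would come from the autograd framework:

```python
import numpy as np

def max_op(mix_loss, lam, eps=0.002, h=1e-5):
    """Perturb the mixing coefficient in the gradient-ascent
    direction (Eq. 15), using the clipped (<= 1) gradient of
    lambda (Eq. 16). eps=0.002 follows the experimental setup."""
    grad = (mix_loss(lam + h) - mix_loss(lam - h)) / (2.0 * h)
    grad = np.clip(grad, -1.0, 1.0)
    return float(np.clip(lam + eps * grad, 0.0, 1.0))
```

Only the synthetic example is re-generated with the returned $\lambda'$; the synthetic label keeps the original $\lambda$, which is what injects the slight non-linearity.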
+
+# 3.4 Min operation
+
Min Operation (MinOp) minimizes the loss of the constraint-relaxed synthetic data as follows,
+
+$$
+\underset {\theta} {\arg \min } \mathcal {L} _ {\text {f i n a l}}, \tag {18}
+$$
+
where $\mathcal{L}_{final}$ is the final loss. In addition, MinOp prefers to minimize the larger of the losses from the previous two steps, which eliminates the influence of the step size $\varepsilon$. This preference also lets the model learn from the harder case, reducing the risk of under-fitting. We realize the operation with a mask-based mechanism as follows,
+
+$$
+\mathcal {L} _ {\text {f i n a l}} = \mathcal {L} \cdot (1 - \operatorname {m a s k}) + \mathcal {L} ^ {\prime} \cdot \operatorname {m a s k}. \tag {19}
+$$
+
Here, the mask acts as a selector of losses. The comparison is carried out between the losses before and after updating $\lambda$ in the synthetic example. The latter loss, $\mathcal{L}'$, is calculated as follows,
+
+$$
+\mathcal {L} ^ {\prime} = \ell_ {m i x} \left(f _ {r a n d} \left(\lambda^ {\prime}, i, j, k\right); \theta\right). \tag {20}
+$$
+
Here, $\lambda^\prime$ is the mixing coefficient after injecting the perturbation (we only inject the perturbation into the mixing coefficient of the input, as in Eq. 8), and $\mathcal{L}^{\prime}$ is the Mixup loss on the synthetic example generated under $\lambda^\prime$. Note that the $\lambda$ for the synthetic label is unchanged. The mask is calculated as follows,
+
+$$
+m a s k = \left\{ \begin{array}{l l} 1 & \delta_ {\mathcal {L}} > 0 \\ 0 & \delta_ {\mathcal {L}} \leq 0. \end{array} \right. \tag {21}
+$$
+
Here, the mask is a batch-sized vector and $\delta_{\mathcal{L}}$ is the direct comparison $\mathcal{L}' - \mathcal{L}$. By doing this, the proposed method achieves steady improvement under different settings of the step size.
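The mask-based selection of Eqs. 19 and 21 can be sketched directly on per-example loss vectors (a NumPy illustration with our own function name):

```python
import numpy as np

def min_op_loss(loss, loss_adv):
    """Per-example selection of the larger Mixup loss:
    mask = 1 where the perturbed loss L' exceeds L (Eq. 21),
    and L_final = L * (1 - mask) + L' * mask (Eq. 19)."""
    loss, loss_adv = np.asarray(loss), np.asarray(loss_adv)
    mask = (loss_adv - loss > 0).astype(loss.dtype)
    return loss * (1.0 - mask) + loss_adv * mask
```

The result is then averaged and minimized with respect to $\theta$ as in Eq. 18.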
+
+# 4 Experiments
+
+# 4.1 Data
+
We evaluate the proposed AMP on five sentence classification benchmark datasets, as used in (Guo et al., 2019a). TREC is a question dataset that aims to categorize a question into six types (Li and Roth, 2002). MR is a movie review dataset for classifying positive/negative reviews (Pang and Lee, 2005). SST-1 is the Stanford Sentiment Treebank dataset with five sentiment categories: very positive, positive, neutral, negative, and very negative (Socher et al., 2013). SST-2 is a binary-label version of SST-1. SUBJ is a dataset for judging whether a sentence is subjective or objective (Pang and Lee, 2004). Table 1 summarizes the statistics of the five datasets after preprocessing.
+
+Table 1: The statistics of datasets. $c$ is the category number. $l$ is the average length. $V$ is the vocabulary size. $N$ is the size of the training set. $T$ is the size of the testing set. ${CV}$ denotes the 10-fold cross-validation.
+
| Data | c | l | V | N | T |
|---|---|---|---|---|---|
| TREC | 6 | 10 | 9592 | 5952 | 500 |
| SST-1 | 5 | 18 | 17836 | 11855 | 2210 |
| SST-2 | 2 | 19 | 16185 | 9613 | 1821 |
| SUBJ | 2 | 23 | 21323 | 10000 | CV |
| MR | 2 | 20 | 18765 | 10662 | CV |
+
+# 4.2 Baselines and Settings
+
We evaluate AMP by integrating it into two recently proposed Mixup variants. We choose five popular sentence classification models as backbones to test the performance of all Mixup methods on the five benchmark datasets.
+
Classification backbones. We test the Mixup methods on five classification backbones. $LSTM_{rand}$ and $LSTM_{glove}$ (Wang et al., 2016) are two versions of a bi-directional Long Short-Term Memory (LSTM) network with attention, where the former uses randomly initialized word embeddings and the latter uses GloVe-initialized (Pennington et al., 2014) word embeddings. $CNN_{rand}$ and $CNN_{glove}$ (Kim, 2014) are two versions of convolutional neural networks, fed with randomly and GloVe-initialized word embeddings, respectively. These four are popular sentence classification models without pre-training techniques. We employ $BERT_{base}$ (Devlin et al., 2019) as the pre-trained classification backbone.
+
Mixup. We choose three popular Mixup variants for sentence classification as baselines. WordMixup (Guo et al., 2019a) is the straightforward application of Mixup to NLP tasks, where linear interpolation is applied at the word embedding level (first layer). SentMixup (Verma et al., 2019; Sun et al., 2020) applies Mixup to NLP tasks with linear interpolation conducted on the last layer of hidden states. Non-linear Mixup is the non-linear version of SentMixup.
+
AMP. WordAMP is applied at the word embedding level, the same as WordMixup. SentAMP is applied to the last layer of hidden states, the same as SentMixup.
+
We obtained the source code of the backbone models from publicly available implementations. In our experiments, we follow the exact implementations and settings in (Kim, 2014; Wang et al., 2016; Devlin et al., 2019; Guo et al., 2019a; Verma et al., 2019). Specifically, for the CNN baselines we use filter sizes of 3, 4, and 5, each with 100 feature maps, a dropout rate of 0.5, and L2 regularization of 1e-8. For the LSTM baselines we use a single-layer hidden size of 1024, a dropout rate of 0.5, and L2 regularization of 1e-8. For datasets without a standard development set, we randomly select $10\%$ of the training data as a development set. Training is done with Adam (Kingma and Ba, 2015) over mini-batches of size 50 (CNN, LSTM) and 24 ($BERT_{base}$), respectively. The learning rate is 2e-4 for CNN and LSTM, and 1e-5 for $BERT_{base}$. The word embeddings have 300 dimensions for CNN and LSTM. The step size is $\varepsilon = 0.002$ for all experiments, and $\alpha$ for all Mixup methods is set to one. For each dataset, we train each model 10 times with different random seeds, each for 8k steps, and report the mean error rates and standard deviations.
+
+# 4.3 Main results
+
To evaluate the predictive performance of $AMP$, we conduct five sets of experiments. For each setting, we compare the performance without Mixup (w/o) and with WordMixup (Word), SentMixup (Sent), and Non-linear Mixup (Non-linear). As presented in Table 2, $AMP$ outperforms the Mixup comparison baselines.

Table 2: The results of our AMP method compared with two recent Mixup methods on five different datasets under five different classification models. For a fair comparison, we re-implement the Mixup baselines based on the backbone models, so the results may differ from those in (Guo et al., 2019a; Sun et al., 2020). $RP$ indicates the relative improvement. † indicates results cited from (Guo, 2020).

| Model | Mixup | TREC(%) | SST-1(%) | SST-2(%) | SUBJ(%) | MR(%) |
|---|---|---|---|---|---|---|
| \( RNN_{rand} \) | w/o | 11.3±1.48 | 63.7±3.00 | 18.0±0.85 | 10.7±0.57 | 24.9±1.11 |
| | Sent | 10.5±1.16 | 55.8±0.75 | 16.6±0.38 | 10.3±0.55 | 24.2±0.72 |
| | Sent(our) | 9.8±0.73 | 55.0±0.37 | 15.9±0.43 | 10.0±0.78 | 23.6±0.65 |
| | RP(%) | 6.7↑ | 1.4↑ | 4.2↑ | 2.9↑ | 2.5↑ |
| | Word | 9.8±0.86 | 55.9±0.62 | 16.1±0.62 | 9.4±0.77 | 23.6±0.75 |
| | Word(our) | 9.5±0.84 | 55.6±0.67 | 15.3±0.43 | 8.8±0.48 | 22.7±0.96 |
| | RP(%) | 3.1↑ | 0.5↑ | 5.0↑ | 6.4↑ | 3.8↑ |
| \( RNN_{glove} \) | w/o | 8.3±0.47 | 56.6±0.30 | 13.0±0.51 | 6.1±0.76 | 18.5±0.97 |
| | Sent | 6.9±0.55 | 48.1±0.37 | 12.1±0.61 | 6.0±0.69 | 18.1±0.95 |
| | Sent(our) | 6.7±0.27 | 48.0±0.45 | 11.5±0.31 | 5.8±0.79 | 17.8±0.98 |
| | RP(%) | 2.9↑ | 0.2↑ | 5.0↑ | 3.3↑ | 1.7↑ |
| | Word | 6.5±0.45 | 48.6±0.33 | 11.8±0.34 | 5.5±0.73 | 17.8±0.87 |
| | Word(our) | 6.6±0.52 | 48.0±0.66 | 11.1±0.42 | 5.2±0.72 | 17.5±0.91 |
| | RP(%) | 1.5↓ | 1.2↑ | 5.9↑ | 5.5↑ | 1.7↑ |
| \( CNN_{rand} \) | w/o | 8.8±0.86 | 63.2±0.54 | 17.6±0.52 | 9.5±0.64 | 24.2±1.39 |
| | Sent | 8.3±0.63 | 58.1±0.48 | 19.9±0.32 | 9.5±0.52 | 25.1±0.91 |
| | Sent(our) | 8.1±0.71 | 57.9±0.51 | 19.9±0.51 | 9.4±0.45 | 25.1±0.93 |
| | RP(%) | 2.4↑ | 0.5↑ | → | 1.1↑ | → |
| | Word | 8.3±0.71 | 58.0±0.55 | 19.4±0.22 | 9.7±0.57 | 24.6±0.78 |
| | Word(our) | 8.4±0.92 | 57.5±0.50 | 19.2±0.53 | 9.2±0.68 | 24.1±0.98 |
| | RP(%) | 1.2↓ | 1.0↑ | 1.0↑ | 5.2↑ | 2.0↑ |
| \( CNN_{glove} \) | w/o | 7.9±0.12 | 57.5±0.50 | 13.1±0.49 | 5.6±0.36 | 20.2±0.60 |
| | Non-linear | 5.3±0.29† | 50.7±0.42† | 11.4±0.29† | 6.1±0.19† | 16.6±0.36† |
| | Sent | 6.7±0.23 | 51.4±0.23 | 12.8±0.35 | 5.1±0.34 | 19.4±0.56 |
| | Sent(our) | 4.6±0.33 | 50.6±0.40 | 11.7±0.25 | 5.1±0.62 | 17.4±0.69 |
| | RP(%) | 31.3↑ | 1.6↑ | 8.6↑ | → | 10.3↑ |
| | Word | 6.3±0.80 | 51.8±0.91 | 12.9±0.26 | 5.3±0.45 | 18.7±0.28 |
| | Word(our) | 4.8±0.26 | 50.4±0.60 | 11.7±0.24 | 5.1±0.58 | 17.4±0.66 |
| | RP(%) | 23.8↑ | 2.7↑ | 9.3↑ | 3.8↑ | 7.0↑ |
| \( BERT_{base} \) | w/o | 2.6±0.18 | 47.3±0.47 | 6.9±0.21 | 2.4±0.47 | 11.5±1.19 |
| | Sent | 2.2±0.24 | 44.5±0.37 | 6.3±0.29 | 2.4±0.56 | 11.3±1.44 |
| | Sent(our) | 2.1±0.20 | 44.3±0.54 | 5.9±0.30 | 2.3±0.49 | 11.2±1.31 |
| | RP(%) | 4.5↑ | 0.4↑ | 9.5↑ | 4.2↑ | 0.9↑ |
| | Word | 2.1±0.20 | 45.6±0.37 | 6.5±0.25 | 2.3±0.54 | 11.1±1.44 |
| | Word(our) | 1.9±0.13 | 45.5±0.37 | 6.4±0.23 | 2.2±0.56 | 10.8±1.29 |
| | RP(%) | 9.5↑ | 0.2↑ | 1.5↑ | 4.3↑ | 2.7↑ |

For example, compared with the Sent baseline over $CNN_{glove}$, Sent(our) achieves a significant improvement on all five datasets; on the TREC, SST2 and MR datasets, the relative improvements are $31.3\%$, $8.6\%$ and $10.3\%$, respectively. Compared with Word over $RNN_{glove}$, Word(our) reduces the error rate by over $1.2\%$ (up to $5.9\%$). Interestingly, Word(our) outperforms Non-linear Mixup on three out of five datasets, showing that slightly relaxing the LLC achieves similar, and sometimes even better, results than changing the LLC into a non-linear version.
+
We use different initial embeddings to evaluate the effectiveness of the augmentation, as in (Guo et al., 2019a). From the embedding perspective, there are three kinds of embeddings: randomly initialized embeddings ($RNN_{rand}$ and $CNN_{rand}$), pre-trained fixed embeddings ($RNN_{glove}$ and $CNN_{glove}$), and pre-trained context-aware embeddings ($BERT_{base}$). For each kind of embedding, AMP outperforms the Mixup baselines. For instance, compared with Sent under randomly initialized embeddings, Sent(our) obtains a lower predictive error rate in eight out of ten experiments, while Word(our) outperforms Word in nine out of ten. Similar results can be observed in the pre-trained embedding settings. Even in the context-aware embedding setting ($BERT_{base}$), AMP further improves performance over Mixup with this advanced backbone. Notably, on SST1, our method helps $BERT_{base}$ outperform the SOTA model ($BERT_{large}$, 44.5) (Munikar et al., 2019), which is twice as large as $BERT_{base}$. These results show the effectiveness of our method.
+
Table 3: The results of $BERT_{base}$ with SentAMP in low-resource settings. The experiments are run ten times on each scaled TREC dataset. The average error rate and standard deviation are reported.
+
| % | labels | Sent | Sent(our) | RP(%) |
|---|---|---|---|---|
| 3 | 160 | 51.0±7.34 | 42.1±7.34 | +17.5 |
| 4 | 215 | 29.8±4.05 | 25.6±4.01 | +14.1 |
| 5 | 270 | 10.2±1.00 | 9.2±0.80 | +9.8 |
| 10 | 543 | 5.1±0.64 | 4.6±0.37 | +9.8 |
| 15 | 815 | 4.1±0.64 | 4.0±0.67 | +2.4 |
| 20 | 1089 | 3.6±0.62 | 3.5±0.48 | +2.8 |
| 40 | 2179 | 2.9±0.35 | 2.7±0.38 | +6.7 |
| 80 | 4359 | 2.2±0.17 | 2.1±0.10 | +4.5 |
| 100 | 5452 | 2.2±0.24 | 2.1±0.20 | +4.5 |
+
+# 4.4 Low-resource conditions
+
In low-resource settings, the under-fitting caused by the strict LLC has a serious impact on model generalization. To evaluate AMP with different amounts of data, particularly in low-resource settings, we scale the size of the dataset by a certain ratio for each category. If a scaled category would contain fewer than one sample, we retain at least one. We randomly generate ten different datasets for each scale ratio and run the experiment on each; the mean error rate and standard deviation are reported. As shown in Table 3, our method reduces the mean error rate over Mixup by a significant margin. For instance, Sent(our) reduces the error rate over Sent by $17.5\%$ and $14.1\%$ with $3\%$ and $4\%$ of the training data, respectively. AMP works well in low-resource conditions, as expected, due to its effectiveness in relaxing the LLC in Mixup.
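The per-category scaling described above can be sketched as follows (a hypothetical helper mirroring the protocol, not the authors' code):

```python
import numpy as np

def subsample_per_class(labels, ratio, rng=None):
    """Scale a labeled set by `ratio` per category, retaining at
    least one sample per class, and return the kept indexes."""
    rng = rng or np.random.default_rng(0)
    labels = np.asarray(labels)
    keep = []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        n = max(1, int(len(idx) * ratio))  # never drop a whole class
        keep.extend(rng.choice(idx, size=n, replace=False).tolist())
    return sorted(keep)
```

Repeating this with ten random seeds per ratio yields the ten datasets per row of Table 3.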
+
+# 4.5 Ablation study
+
To further understand the effects of the Max Operation (MaxOp) and Min Operation (MinOp) in $AMP$, we test several variants of our model under $CNN_{glove}$ and $BERT_{base}$ on TREC. In Table 4, the model trained without augmentation is denoted as Baseline, $+RandOp$ is identical to the model trained with Mixup, $+MaxOp$ indicates Mixup trained with the additional MaxOp, and $AMP$ is our fully functional method.

Table 4: Ablation study.

| Method | Model | Operation | TREC |
|---|---|---|---|
| Word | \( CNN_{glove} \) | Baseline | 7.9±0.12 |
| | | +RandOp | 6.3±0.80 |
| | | +MaxOp | 4.7±0.35 |
| | | AMP | 4.8±0.26 |
| | \( BERT_{base} \) | Baseline | 2.6±0.18 |
| | | +RandOp | 2.1±0.24 |
| | | +MaxOp | 2.0±0.23 |
| | | AMP | 1.9±0.13 |
| Sent | \( CNN_{glove} \) | Baseline | 7.9±0.12 |
| | | +RandOp | 6.7±0.23 |
| | | +MaxOp | 4.8±0.22 |
| | | AMP | 4.6±0.33 |
| | \( BERT_{base} \) | Baseline | 2.6±0.18 |
| | | +RandOp | 2.2±0.24 |
| | | +MaxOp | 2.1±0.13 |
| | | AMP | 2.1±0.15 |

Table 5: The results under different settings of $\alpha$.

| α | Methods | TREC | SST2 | MR |
|---|---|---|---|---|
| 0.2 | Word | 1.9±0.13 | 6.3±0.23 | 11.0±1.25 |
| | Word(our) | 1.8±0.13 | 6.0±0.20 | 10.9±1.22 |
| | RP(%) | +5.3 | +4.8 | +0.9 |
| 0.5 | Word | 1.9±0.13 | 6.7±0.24 | 11.1±1.25 |
| | Word(our) | 1.9±0.16 | 6.1±0.18 | 10.8±1.25 |
| | RP(%) | +0.0 | +8.9 | +2.7 |
| 1.0 | Word | 2.1±0.20 | 6.5±0.25 | 11.1±1.44 |
| | Word(our) | 2.0±0.12 | 6.4±0.23 | 10.8±1.29 |
| | RP(%) | +4.8 | +1.5 | +2.7 |
| 1.5 | Word | 2.1±0.18 | 6.8±0.13 | 11.2±1.44 |
| | Word(our) | 2.0±0.12 | 6.5±0.28 | 11.0±1.34 |
| | RP(%) | +4.8 | +4.4 | +1.8 |

As the results in Table 4 show, MaxOp contributes the majority of the error-rate reduction. For instance, for $CNN_{glove}$ under the Sent Mixup setting, MaxOp reduces the error rate from 6.7 to 4.8, which suggests the effectiveness of the adversarial perturbation in relaxing the LLC in Mixup. The comparison in MinOp further reduces the error rate in most cases (three out of four); specifically, it brings the mean error rate down from 4.8 to 4.6 on $CNN_{glove}$. This indicates the effectiveness of MinOp in eliminating the influence of the step size.
+
+
+(a) Random pair1
+
+
+(b) Random pair2
+
+
+(c) Full-size testing set
+Figure 2: The visualization of loss on unseen synthetic data. The results conduct by $BERT_{base}$ on $3\%$ TREC dataset, as listed in Table 3.
+
+# 4.6 Mix ratio distribution
+
To analyze the effects of different shapes of the mixing coefficient distribution, we compare Word(our) with Word on $BERT_{base}$ under four $\alpha$ settings (from 0.2 to 1.5) and three datasets: TREC, SST2, and MR. The $\alpha$ is the parameter of the Beta distribution and controls how the mixing coefficient $\lambda$ is distributed. As presented in Table 5, our method achieves lower mean error rates than Word under all $\alpha$ settings. For instance, Word(our) achieves an 8.9% lower mean error rate than Word on SST2 with $\alpha = 0.5$. The improvements come mainly from training the models with the slightly non-linear data generated by AMP.
+
+# 4.7 Visualization
+
To intuitively demonstrate the effects of relaxing the LLC, we visualize the loss of networks trained by our $AMP$ and by Mixup. The synthetic data is generated from the testing data, strictly following the LLC; a smaller loss on these data for the network trained with relaxed LLC shows the effectiveness of our method in alleviating under-fitting. As shown in Figures 2(a), 2(b) and 2(c), we draw the losses on synthetic data generated with mixing coefficient $\lambda \in [0,1]$. Figures 2(a) and 2(b) each use one random pair of data from the testing set: for two random pairs $(x_{1},y_{1})(x_{4},y_{4})$ and $(x_{2},y_{2})(x_{3},y_{3})$, we calculate the Mixup loss of each pair at different $\lambda$. The loss curves on random pairs are not symmetric because the losses of the two examples in a pair differ. The LLC encourages each loss curve to be a line between the two examples, starting at the loss of one example and ending at the loss of the other; the Mixup loss (interpolation of cross-entropy losses) and the different examples explain the different shapes of the curves in Figures 2(a) and 2(b). As illustrated there, AMP has a smaller loss than Mixup, indicating the effectiveness of training on slightly non-linear synthetic data in the micro view.
+
Figure 2(c) uses the full-size testing set and shows the average loss over all synthetic data generated from it. We freeze the random seeds so the data pairs are fixed. Let the testing dataset be $X = [(x_{1},y_{1}),(x_{2},y_{2}),(x_{3},y_{3}),(x_{4},y_{4})]$. The synthetic data is generated by $\lambda X + (1 - \lambda)X'$, where $X' = [(x_4,y_4),(x_3,y_3),(x_2,y_2),(x_1,y_1)]$ is the reversed $X$. Hence, the losses at $\lambda = 0$ and $\lambda = 1$ are identical, and the curve in Figure 2(c) is symmetric. One can observe that our method achieves a significantly smaller average loss than Mixup in the macro view. The visualizations verify our assumption that relaxing the LLC can further regularize models.
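The pairing scheme behind Figure 2(c) can be sketched as follows (NumPy, illustrative names; the real pipeline mixes hidden states and computes losses, which is omitted here):

```python
import numpy as np

def mixed_test_set(X, lam):
    """Mix the frozen-order test set X with its reversed copy X'
    (Sec. 4.7): lam * X + (1 - lam) * X'. The multisets at
    lam = 0 and lam = 1 coincide, so the average-loss curve over
    the mixtures is symmetric in lam."""
    X = np.asarray(X, dtype=float)
    return lam * X + (1.0 - lam) * X[::-1]
```

Sweeping $\lambda$ over $[0,1]$ and averaging the loss over the mixed set reproduces the shape of the curve in Figure 2(c).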
+
+# 5 Related work
+
Mixup on text classification. Text classification has achieved remarkable improvements under several effective paradigms, e.g., CNNs (Kim, 2014), attention-based LSTMs (Wang et al., 2016), GloVe (Pennington et al., 2014), and BERT (Devlin et al., 2019). However, models with large-scale parameters tend to generalize poorly in low-resource conditions. To overcome this limitation, Mixup (Zhang et al., 2018) was proposed as a data-augmentation-based regularizer. Only a few studies explore Mixup (Guo et al., 2019b; Zhang et al., 2020; Guo, 2020) on NLP tasks. For classification, Guo et al. (2019a) suggest applying Mixup at a particular level of the network, i.e., the word or sentence level. Although these works make promising progress, the mechanism of Mixup still needs further exploration.
+
+Adversarial Training. The min-max formulation of adversarial training has been theoretically and empirically verified (Beckham et al., 2019; Xu et al., 2020; Pang et al., 2020; Archambault et al., 2019; Lee et al., 2020; Miyato et al., 2015, 2018, 2017). Such a training procedure first generates adversarial examples that maximize the training loss, and then minimizes the training loss after adding the adversarial examples to the training set (Madry et al., 2018). The Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2015) is an efficient one-step method. Inspired by the min-max formulation of adversarial learning, we organize our method into a min-max-rand formulation.
+
+# 6 Conclusion
+
+To relax the Locally Linear Constraints (LLC) in Mixup and thus alleviate under-fitting, this paper proposes an Adversarial Mixing Policy (AMP). Inspired by adversarial training, we organize our method into a min-max-rand formulation. The proposed method injects slight non-linearity in between synthetic examples and synthetic labels without extra parameters. By training on such data, the networks become compatible with ambiguous data, which reduces under-fitting; the networks are thus further regularized and reach better performance. We evaluate our method with five popular classification models on five publicly available text datasets. Extensive experimental results show that AMP achieves a significantly lower error rate than vanilla Mixup (up to $31.3\%$), especially in low-resource conditions (up to $17.5\%$).
+
+# 7 Acknowledgments
+
+We thank Prof. Xiaojie Wang and Prof. Fangxiang Feng from BUPT for their valuable feedback on an earlier draft of this paper, and Yang Du from XDF for her suggestions on the English writing of the final revision. We also thank the anonymous reviewers for their helpful comments.
+
+# References
+
+Guillaume P Archambault, Yongyi Mao, Hongyu Guo, and Richong Zhang. 2019. Mixup as directional adversarial training. arXiv preprint arXiv:1906.06875.
+Christopher Beckham, Sina Honari, Vikas Verma, Alex Lamb, Farnoosh Ghadiri, R. Devon Hjelm, Yoshua Bengio, and Chris Pal. 2019. On adversarial mixup resynthesis. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 4348-4359.
+David Berthelot, Nicholas Carlini, Ekin D Cubuk, Alex Kurakin, Kihyuk Sohn, Han Zhang, and Colin Raffel. 2019a. Remixmatch: Semi-supervised learning with distribution alignment and augmentation anchoring. arXiv preprint arXiv:1911.09785.
+David Berthelot, Nicholas Carlini, Ian J. Goodfellow, Nicolas Papernot, Avital Oliver, and Colin Raffel. 2019b. Mixmatch: A holistic approach to semisupervised learning. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 5050-5060.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
+Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929.
+Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversarial examples. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
+Hongyu Guo. 2020. Nonlinear mixup: Out-of-manifold data augmentation for text classification. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 4044-4051. AAAI Press.
+Hongyu Guo, Yongyi Mao, and Richong Zhang. 2019a. Augmenting data with mixup for sentence classification: An empirical study. arXiv preprint arXiv:1905.08941.
+Hongyu Guo, Yongyi Mao, and Richong Zhang. 2019b. Mixup as locally linear out-of-manifold regularization. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 3714-3722. AAAI Press.
+Stephen Hanson and Lorien Pratt. 1988. Comparing biases for minimal network construction with backpropagation. Advances in neural information processing systems, 1:177-185.
+
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pages 770-778. IEEE Computer Society.
+Alex Hernández-García and Peter König. 2018. Data augmentation instead of explicit regularization. arXiv preprint arXiv:1806.03852.
+Sergey Ioffe and Christian Szegedy. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, volume 37 of JMLR Workshop and Conference Proceedings, pages 448-456. JMLR.org.
+Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746-1751, Doha, Qatar. Association for Computational Linguistics.
+Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
+Saehyung Lee, Hyungyu Lee, and Sungroh Yoon. 2020. Adversarial vertex mixup: Toward better adversarially robust generalization. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020, pages 269-278. IEEE.
+Xin Li and Dan Roth. 2002. Learning question classifiers. In COLING 2002: The 19th International Conference on Computational Linguistics.
+Guang Liu, Hailong Huang, Yuzhao Mao, Weiguo Gao, Xuan Li, and Jianping Shen. 2021. A diversity-enhanced and constraints-relaxed augmentation for low-resource classification. In Database Systems for Advanced Applications - 26th International Conference, DASFAA 2021, Taipei, Taiwan, April 11-14, 2021, Proceedings, Part II, volume 12682 of Lecture Notes in Computer Science, pages 262-270. Springer.
+Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2018. Towards deep learning models resistant to adversarial attacks. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net.
+Zhijun Mai, Guosheng Hu, Dexiong Chen, Fumin Shen, and Heng Tao Shen. 2021. Metamixup: Learning adaptive interpolation policy of mixup with metalearning. IEEE Transactions on Neural Networks and Learning Systems.
+
+Xudong Mao, Yun Ma, Zhenguo Yang, Yangbin Chen, and Qing Li. 2019. Virtual mixup training for unsupervised domain adaptation. arXiv preprint arXiv:1905.04215.
+Takeru Miyato, Andrew M. Dai, and Ian J. Goodfellow. 2017. Adversarial training methods for semi-supervised text classification. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.
+Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. 2018. Virtual adversarial training: a regularization method for supervised and semi-supervised learning. IEEE transactions on pattern analysis and machine intelligence, 41(8):1979-1993.
+Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Ken Nakae, and Shin Ishii. 2015. Distributional smoothing with virtual adversarial training. arXiv preprint arXiv:1507.00677.
+Manish Munikar, Sushil Shakya, and Aakash Shrestha. 2019. Fine-grained sentiment classification using BERT. In 2019 Artificial Intelligence for Transforming Business and Society (AITB), volume 1, pages 1-5. IEEE.
+Bo Pang and Lillian Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04), pages 271-278, Barcelona, Spain.
+Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05), pages 115-124, Ann Arbor, Michigan. Association for Computational Linguistics.
+Tianyu Pang, Kun Xu, and Jun Zhu. 2020. Mixup inference: Better exploiting mixup to defend adversarial attacks. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
+Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.
+Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642, Seattle, Washington, USA. Association for Computational Linguistics.
+
+Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, 15(1):1929-1958.
+Lichao Sun, Congying Xia, Wenpeng Yin, Tingting Liang, Philip S Yu, and Lifang He. 2020. Mixup-transformer: Dynamic data augmentation for nlp tasks. arXiv preprint arXiv:2010.02394.
+Johan AK Suykens and Joos Vandewalle. 1999. Least squares support vector machine classifiers. Neural processing letters, 9(3):293-300.
+Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, and Aleksander Madry. 2019. Robustness may be at odds with accuracy. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.
+Vikas Verma, Alex Lamb, Christopher Beckham, Amir Najafi, Ioannis Mitliagkas, David Lopez-Paz, and Yoshua Bengio. 2019. Manifold mixup: Better representations by interpolating hidden states. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Proceedings of Machine Learning Research, pages 6438-6447. PMLR.
+Yequan Wang, Minlie Huang, Xiaoyan Zhu, and Li Zhao. 2016. Attention-based LSTM for aspect-level sentiment classification. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 606-615, Austin, Texas. Association for Computational Linguistics.
+Jason Wei and Kai Zou. 2019. EDA: Easy data augmentation techniques for boosting performance on text classification tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6382-6388, Hong Kong, China. Association for Computational Linguistics.
+Minghao Xu, Jian Zhang, Bingbing Ni, Teng Li, Chengjie Wang, Qi Tian, and Wenjun Zhang. 2020. Adversarial domain adaptation with domain mixup. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 6502-6509.
+Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, and David Lopez-Paz. 2018. mixup: Beyond empirical risk minimization. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net.
+Rongzhi Zhang, Yue Yu, and Chao Zhang. 2020. SeqMix: Augmenting active sequence labeling via sequence mixup. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8566-8579, Online. Association for Computational Linguistics.
+Jianchao Zhu, Liangliang Shi, Junchi Yan, and Hongyuan Zha. 2020. Automix: Mixup networks for sample interpolation via cooperative barycenter learning. In European Conference on Computer Vision, pages 633-649. Springer.
\ No newline at end of file
diff --git a/adversarialmixingpolicyforrelaxinglocallylinearconstraintsinmixup/images.zip b/adversarialmixingpolicyforrelaxinglocallylinearconstraintsinmixup/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..c207e662a7c911842b57c57bda3a7c5da45e4e91
--- /dev/null
+++ b/adversarialmixingpolicyforrelaxinglocallylinearconstraintsinmixup/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b79a80d803fef00e5af327d615333cb1cbffb37d6dbdc1b9808861b595e41fc2
+size 587534
diff --git a/adversarialmixingpolicyforrelaxinglocallylinearconstraintsinmixup/layout.json b/adversarialmixingpolicyforrelaxinglocallylinearconstraintsinmixup/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..7214d2957444891bb59ce3c7b218dce41abd366b
--- /dev/null
+++ b/adversarialmixingpolicyforrelaxinglocallylinearconstraintsinmixup/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b8373684fbb67ab7874bc37be39b50187526e1d1021230eba7e7ea4858675718
+size 450232
diff --git a/adversarialregularizationasstackelberggameanunrolledoptimizationapproach/f2ccb633-1538-4926-9168-efd9af7a37c0_content_list.json b/adversarialregularizationasstackelberggameanunrolledoptimizationapproach/f2ccb633-1538-4926-9168-efd9af7a37c0_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..e1b181861956fe4b8a813a9cad49199c962a344c
--- /dev/null
+++ b/adversarialregularizationasstackelberggameanunrolledoptimizationapproach/f2ccb633-1538-4926-9168-efd9af7a37c0_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5ce4fada32f91f8a007fd20283c65ca7c6cabbae3ca85cec7c6608f24c07a0b1
+size 113978
diff --git a/adversarialregularizationasstackelberggameanunrolledoptimizationapproach/f2ccb633-1538-4926-9168-efd9af7a37c0_model.json b/adversarialregularizationasstackelberggameanunrolledoptimizationapproach/f2ccb633-1538-4926-9168-efd9af7a37c0_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..e47a16e970cb7b4454147f7d412b40c0da920706
--- /dev/null
+++ b/adversarialregularizationasstackelberggameanunrolledoptimizationapproach/f2ccb633-1538-4926-9168-efd9af7a37c0_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0d2221b63c27ac0e76e338c2a0f1f54aeeede91d4f530c5ca3d9dd57299f3b3c
+size 141895
diff --git a/adversarialregularizationasstackelberggameanunrolledoptimizationapproach/f2ccb633-1538-4926-9168-efd9af7a37c0_origin.pdf b/adversarialregularizationasstackelberggameanunrolledoptimizationapproach/f2ccb633-1538-4926-9168-efd9af7a37c0_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..523656015792a794398adf5d7607779f4eca0af4
--- /dev/null
+++ b/adversarialregularizationasstackelberggameanunrolledoptimizationapproach/f2ccb633-1538-4926-9168-efd9af7a37c0_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:53b1b7729600ea85df4990f24471ffa1f936e0a79c6dbebb0a4c137fae501b2b
+size 1066655
diff --git a/adversarialregularizationasstackelberggameanunrolledoptimizationapproach/full.md b/adversarialregularizationasstackelberggameanunrolledoptimizationapproach/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..bf3286eac51ec73384690f40171f504e1e8cf97e
--- /dev/null
+++ b/adversarialregularizationasstackelberggameanunrolledoptimizationapproach/full.md
@@ -0,0 +1,513 @@
+# Adversarial Regularization as Stackelberg Game: An Unrolled Optimization Approach
+
+Simiao Zuo†, Chen Liang†, Haoming Jiang‡, Xiaodong Liu‡, Pengcheng He‡, Jianfeng Gao‡, Weizhu Chen and Tuo Zhao†
+
+$^{\dagger}$ Georgia Institute of Technology $\square$ Amazon $\circ$ Microsoft
+
+{simiaozuo,ciang73}@gatech.edu,jhaoming@amazon.com
+
+{xiaodl,Pengcheng.H,jfgao,wzchen}@microsoft.com,
+
+tourzhao@gatech.edu
+
+# Abstract
+
+Adversarial regularization has been shown to improve the generalization performance of deep learning models in various natural language processing tasks. Existing works usually formulate the method as a zero-sum game, which is solved by alternating gradient descent/ascent algorithms. Such a formulation treats the adversarial and the defending players equally, which is undesirable because only the defending player contributes to the generalization performance. To address this issue, we propose Stackelberg Adversarial Regularization (SALT), which formulates adversarial regularization as a Stackelberg game. This formulation induces a competition between a leader and a follower, where the follower generates perturbations, and the leader trains the model subject to the perturbations. Different from conventional approaches, in SALT, the leader is in an advantageous position. When the leader moves, it recognizes the strategy of the follower and takes the anticipated follower's outcomes into consideration. Such a leader's advantage enables us to improve the model fitting to the unperturbed data. The leader's strategic information is captured by the Stackelberg gradient, which is obtained using an unrolling algorithm. Our experimental results on a set of machine translation and natural language understanding tasks show that SALT outperforms existing adversarial regularization baselines across all tasks. Our code is publicly available.
+
+# 1 Introduction
+
+Adversarial regularization (Miyato et al., 2017) has been shown to improve the generalization performance of deep learning models in various natural language processing (NLP) tasks, such as language modeling (Wang et al., 2019b), machine translation (Sato et al., 2019), natural language understanding (Jiang et al., 2020), and reading comprehension (Zhang et al., 2020; Jia and Liang, 2017). However, even though significant progress has been made, the power of adversarial regularization is not fully harnessed.
+
+Conventional adversarial regularization is formulated as a zero-sum game (a min-max optimization problem), where two players seek to minimize/maximize their utility functions. In this formulation, an adversarial player composes perturbations, and a defending player solves for the model parameters subject to the perturbed inputs. Existing algorithms find the equilibrium of this zero-sum game using alternating gradient descent/ascent (Madry et al., 2018). For example, in a classification problem, the adversarial player first generates the input perturbations by running projected gradient ascent to maximize a loss function, and then the defending player updates the model using gradient descent, trying to decrease the classification error. Notice that in this case, neither player knows the strategy of its competitor, i.e., the model does not know how the perturbations are generated, and vice versa. In other words, the two players have the same priority, and either one of them can be advantageous in the game. It is possible that the adversarial player generates over-strong perturbations that hinder the generalization of the model.
+
+To resolve this issue, we grant the defending player (i.e., the model) a higher priority than the adversarial player by letting the defender recognize its competitor's strategy, such that it is advantageous in the game. Consequently, we propose Stackelberg Adversarial Regularization (SALT), where we formulate adversarial regularization as a Stackelberg game (Von Stackelberg, 2010). The concept arises from economics, where two firms compete in a market and one of them takes the leading position by acknowledging the opponent's strategy. In Stackelberg adversarial regularization, a leader solves for the model parameters, and a follower generates input perturbations. The leader procures its advantage by considering what the best response of the follower is, i.e., how the follower will respond after observing the leader's decision. Then, the leader minimizes its loss, anticipating the predicted response of the follower.
+
+The SALT framework identifies the interaction between the leader and the follower by treating the follower's strategy (i.e., the input perturbations) as an operator of the leader's decision (i.e., the model parameters). Then we can solve for the model parameters using gradient descent. One caveat is that computing the gradient term, which we call the Stackelberg gradient, requires differentiating the interaction operator. To rigorously define this operator, recall that the follower can be approximately solved using gradient ascent. We can treat the perturbations in each iteration as an operator of the model parameters, and the interaction operator is then the composition of such update-induced operators. Correspondingly, the Stackelberg gradient is obtained by differentiating through these updates. This procedure is referred to as unrolling (Pearlmutter and Siskind, 2008), and the only computational overhead it causes is computing Hessian-vector products. As a result, when applying the finite difference method, computing the Stackelberg gradient requires two backpropagations and an extra $O(d)$ operation, where $d$ is the embedding dimension. Therefore, the unrolling algorithm computes the Stackelberg gradient without much computational overhead.
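The finite-difference Hessian-vector product mentioned above can be sketched as follows. This is a hypothetical NumPy toy, not the paper's implementation: on a quadratic loss the central-difference approximation $Hv \approx (\nabla\ell(\theta + rv) - \nabla\ell(\theta - rv)) / (2r)$ can be checked exactly, using only two gradient evaluations plus $O(d)$ vector arithmetic.

```python
import numpy as np

def grad(theta, A, b):
    """Gradient of the quadratic loss l(theta) = 0.5 theta^T A theta - b^T theta."""
    return A @ theta - b

def hvp_fd(theta, v, A, b, r=1e-5):
    """Hessian-vector product via central finite differences:
    H v ~ (grad(theta + r v) - grad(theta - r v)) / (2 r).
    Costs two gradient evaluations (two backpropagations in a network)."""
    return (grad(theta + r * v, A, b) - grad(theta - r * v, A, b)) / (2 * r)

rng = np.random.default_rng(0)
d = 5
M = rng.normal(size=(d, d))
A = M @ M.T + np.eye(d)        # symmetric positive-definite Hessian
b = rng.normal(size=d)
theta = rng.normal(size=d)
v = rng.normal(size=d)

# For a quadratic loss the finite-difference HVP matches A @ v
# up to floating-point error; no d x d Hessian is ever formed.
assert np.allclose(hvp_fd(theta, v, A, b), A @ v, atol=1e-6)
```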
+
+We conduct experiments on neural machine translation (NMT) and natural language understanding (NLU) tasks. For the NMT tasks, we experiment on four low-resource and one rich-resource datasets. SALT improves upon existing adversarial regularization algorithms by notable margins, especially on low-resource datasets, where it achieves up to 2 BLEU score improvements. To test performance on NLU tasks, we evaluate SALT on the GLUE (Wang et al., 2019a) benchmark. SALT outperforms state-of-the-art models, such as BERT (Devlin et al., 2019), FreeAT (Shafahi et al., 2019), FreeLB (Zhu et al., 2019), and SMART (Jiang et al., 2020). We build SALT on the BERT-base architecture, and we achieve an average score of 84.5 on the GLUE development set, which is at least 0.7 higher than existing methods. Moreover, even though we adapt SALT to BERT-base, the performance is noticeably higher than the vanilla BERT-large model (84.5 vs. 84.0).
+
+The unrolling procedure was first proposed for auto-differentiation (Pearlmutter and Siskind, 2008), and later applied in various contexts, such as hyper-parameter optimization (Maclaurin et al., 2015; Finn et al., 2017), meta-learning (Andrychowicz et al., 2016), and Generative Adversarial Networks (Metz et al., 2017). To the best of our knowledge, we are the first to apply the unrolling technique to adversarial regularization to improve generalization performance.
+
+We summarize our contributions as the following: (1) We propose SALT, which employs a Stackelberg game formulation of adversarial regularization. (2) We use an unrolling algorithm to find the equilibrium of the Stackelberg game. (3) Extensive experiments on NMT and NLU tasks verify the efficacy of our method.
+
+Notation. We use $\mathrm{d}f(x)/\mathrm{d}x$ to denote the gradient of $f$ with respect to $x$. We use $\partial f(x,y)/\partial x$ to denote the partial derivative of $f$ with respect to $x$. For a $d$-dimensional vector $v$, its $\ell_2$ norm is defined as $\|v\|_2 = (\sum_{i=1}^d v_i^2)^{1/2}$, and its $\ell_\infty$ norm is defined as $\|v\|_\infty = \max_{1 \leq i \leq d} |v_i|$.
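For concreteness, the two norms can be checked numerically on a hypothetical toy vector:

```python
import numpy as np

v = np.array([3.0, -4.0, 1.0])

# l2 norm: square root of the sum of squared entries.
l2 = np.sqrt(np.sum(v ** 2))
# l-infinity norm: largest absolute entry.
linf = np.max(np.abs(v))

assert np.isclose(l2, np.linalg.norm(v, 2))        # sqrt(9 + 16 + 1) ~ 5.099
assert np.isclose(linf, np.linalg.norm(v, np.inf))  # 4.0
```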
+
+# 2 Background and Related Works
+
+$\diamond$ Neural machine translation has achieved superior empirical performance (Bahdanau et al., 2015; Gehring et al., 2017; Vaswani et al., 2017). We focus on the Transformer architecture (Vaswani et al., 2017), which integrates the attention mechanism in an encoder-decoder structure. The encoder in a Transformer model first maps a source sentence into an embedding space; the embeddings are then fed into several encoding layers to generate hidden representations, where each encoding layer contains a self-attention mechanism and a feed-forward neural network (FFN). The Transformer decoder layers, each containing a self-attention, an encoder-decoder attention, and an FFN, then decode the hidden representations.
+
+$\diamond$ Adversarial training was originally proposed for training adversarially robust classifiers in image classification (Szegedy et al., 2014; Goodfellow et al., 2015; Madry et al., 2018). The idea is to synthesize strong adversarial samples, and the classifier is trained to be robust to them. Theoretical understandings of adversarial training (Li et al., 2019) and various algorithms to generate the adversarial samples, such as learning-to-learn (Jiang et al., 2021), have been proposed. Besides computer vision, adversarial training can also benefit reinforcement learning (Shen et al., 2020). Different from the above fields, in NLP the goal of adversarial training is to build models that generalize well on the unperturbed test data. Note that robustness and generalization are different concepts. Recent works (Raghunathan et al., 2020; Min et al., 2020) showed that adversarial training can hurt generalization performance, i.e., accuracy on clean data. As such, adversarial training needs to be treated with great caution; in NLP, this technique requires careful tuning of, for example, the training algorithm and the perturbation strength.
+
+$\diamond$ Fine-tuning pre-trained language models (Peters et al., 2018; Devlin et al., 2019; Radford et al., 2019; Liu et al., 2019b; He et al., 2020) is state-of-the-art for natural language understanding tasks such as the GLUE (Wang et al., 2019a) benchmark. Recently, there are works that use adversarial pre-training (Liu et al., 2020a) and adversarial-regularized fine-tuning methods such as SMART (Jiang et al., 2020), FreeLB (Zhu et al., 2019), and FreeAT (Shafahi et al., 2019) to improve model generalization and robustness (Cheng et al., 2021).
+
+# 3 Method
+
+Natural language inputs are discrete symbols (e.g., words), instead of continuous ones. Therefore, a common approach to generate perturbations is to learn continuous embeddings of the inputs and operate on the embedding space (Miyato et al., 2017; Clark et al., 2018; Sato et al., 2018, 2019; Stutz et al., 2019). Let $f(x, \theta)$ be our model, where $x$ is the input embedding, and $\theta$ is the model parameter. Further let $y$ be the ground-truth output corresponding to $x$ . For example, in NMT, $f$ is a sequence-to-sequence model, $x$ is the embedding of the source sentence, and $y$ is the target sentence. In classification tasks, $f$ is a classifier, $x$ is the input sentence/document embedding, and $y$ is the label. In both of these cases, the model is trained by minimizing the empirical risk over the training data, i.e.,
+
+$$
+\min _ {\theta} \mathcal {L} (\theta) = \frac {1}{n} \sum_ {i = 1} ^ {n} \ell (f (x _ {i}, \theta), y _ {i}).
+$$
+
+Here $\{(x_i, y_i)\}_{i=1}^n$ is our dataset, and $\ell$ is a task-specific loss function, e.g., cross-entropy loss.
+
+# 3.1 Adversarial Regularization
+
+Adversarial Regularization (Miyato et al., 2017) is a regularization technique that encourages smoothness of the model outputs around each input data point. Concretely, we define an adversarial regularizer for non-regression tasks as
+
+$$
+\ell_{v}(x, \delta, \theta) = \mathrm{KL}\left(f(x, \theta) \,\|\, f(x + \delta, \theta)\right),
+$$
+
+where $\mathrm{KL}(P \| Q) = \sum_{k} p_{k} \log \frac{p_{k}}{q_{k}}$.
+
+Here $\mathrm{KL}(\cdot ||\cdot)$ is the Kullback-Leibler (KL) divergence, $\delta$ is the perturbation corresponding to $x$ , and $f(\cdot ,\theta)$ is the prediction probability simplex given model parameters $\theta$ . In regression tasks, the model output $f(\cdot ,\theta)$ is a scalar, and the adversarial regularizer is defined as
+
+$$
+\ell_ {v} (x, \delta , \theta) = (f (x, \theta) - f (x + \delta , \theta)) ^ {2}.
+$$
+
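Both regularizers above can be sketched numerically. In this minimal NumPy illustration, the linear-softmax classifier and the linear regressor are hypothetical toy models, not the Transformer or BERT models used in the paper:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def kl(p, q):
    """KL(P || Q) = sum_k p_k log(p_k / q_k)."""
    return np.sum(p * np.log(p / q))

def adv_reg_classification(f, x, delta, theta):
    """l_v = KL(f(x, theta) || f(x + delta, theta)) for probability outputs."""
    return kl(f(x, theta), f(x + delta, theta))

def adv_reg_regression(f, x, delta, theta):
    """l_v = (f(x, theta) - f(x + delta, theta))^2 for scalar outputs."""
    return (f(x, theta) - f(x + delta, theta)) ** 2

# Hypothetical toy models: a linear-softmax classifier and a linear regressor.
clf = lambda x, th: softmax(th @ x)
reg = lambda x, th: float(th @ x)

x = np.array([1.0, -2.0])
theta_clf = np.array([[0.5, 0.1], [-0.3, 0.2]])
theta_reg = np.array([0.5, 0.1])

# The regularizer vanishes at delta = 0 and grows once the output shifts,
# which is exactly the output-smoothness it rewards.
assert np.isclose(adv_reg_classification(clf, x, np.zeros(2), theta_clf), 0.0)
assert np.isclose(adv_reg_regression(reg, x, np.zeros(2), theta_reg), 0.0)
assert adv_reg_classification(clf, x, np.array([0.5, 0.5]), theta_clf) > 0
```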
+Then the training objective is
+
+$$
+\min _ {\theta} \mathcal {L} (\theta) + \frac {\alpha}{n} \sum_ {i = 1} ^ {n} \max _ {\| \delta_ {i} \| \leq \epsilon} \ell_ {v} \left(x _ {i}, \delta_ {i}, \theta\right), \tag {1}
+$$
+
+where $\alpha$ is a tuning parameter, $\epsilon$ is a pre-defined perturbation strength, and $\| \cdot \|$ is either the $\ell_2$ norm or the $\ell_{\infty}$ norm.
+
+The min and max problems are solved using alternating gradient descent/ascent. We first generate the perturbations $\delta$ by solving the maximization problem using several steps of projected gradient ascent, and then we update the model parameters $\theta$ with gradient descent, subject to the perturbed inputs. More details are deferred to Appendix A.
+
+One major drawback of the zero-sum game formulation (Eq. 1) is that it fails to consider the interaction between the perturbations $\delta$ and the model parameters $\theta$. This is problematic because a small change in $\delta$ may lead to a significant change in $\theta$, which renders the optimization ill-conditioned. Thus, the model is susceptible to under-fitting and generalizes poorly on unperturbed test data.
+
+# 3.2 Adversarial Regularization as Stackelberg Game
+
+We formulate adversarial regularization as a Stackelberg game (Von Stackelberg, 2010):
+
+$$
+\min _ {\theta} \mathcal {F} (\theta) = \mathcal {L} (\theta) + \frac {\alpha}{n} \sum_ {i = 1} ^ {n} \ell_ {v} \left(x _ {i}, \delta_ {i} ^ {K} (\theta), \theta\right),
+$$
+
+$$
+\mathrm{s.t.} \quad \delta_{i}^{K}(\theta) = U^{K} \circ U^{K-1} \circ \dots \circ U^{1}\left(\delta_{i}^{0}\right). \tag{2}
+$$
+
+Here “$\circ$” denotes operator composition, i.e., $f \circ g(\cdot) = f(g(\cdot))$. Following conventions, in this Stackelberg game, we call the optimization problem in Eq. 2 the leader. Further, the follower in Eq. 2 is described using an equality constraint. Note that $U^{K} \circ U^{K-1} \circ \dots \circ U^{1}$ is the follower's $K$-step composite strategy, i.e., the composition of $K$ one-step strategies $\{U^{k}\}_{k=1}^{K}$. In practice, $K$ is usually small. This is because in NLP we target generalization, instead of robustness, and choosing a small $K$ prevents over-strong adversaries.
+
+In Eq. 2, the $U^{k}$'s are the follower's one-step strategies, which we call update operators; e.g., $U^{1}$ updates $\delta^{0}$ to $\delta^{1}$ using a pre-selected algorithm. For example, projected gradient ascent can be applied as the update procedure, that is,
+
+$$
+\delta^{k}(\theta) = U^{k}\left(\delta^{k-1}(\theta)\right) = \Pi_{\|\cdot\| \leq \epsilon}\left(\delta^{k-1}(\theta) + \eta \frac{\partial \ell_{v}(x, \delta^{k-1}(\theta), \theta)}{\partial \delta^{k-1}(\theta)}\right)
+$$
+
+$$
+\text{for } k = 1, \dots, K, \tag{3}
+$$
+
+where $\delta^0 \sim \mathcal{N}(0, \sigma^2 \mathrm{I})$ is an initial random perturbation drawn from a normal distribution with covariance $\sigma^2 \mathrm{I}$, $\eta$ is a pre-defined step size, and $\Pi$ denotes the projection onto the $\ell_2$-ball or the $\ell_{\infty}$-ball.
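The update operator in Eq. 3 can be sketched as follows. This is a hypothetical NumPy toy: the regularizer and its central finite-difference gradient are stand-ins for $\partial \ell_v / \partial \delta^{k-1}$, used only to show projected gradient ascent staying inside the $\epsilon$-ball.

```python
import numpy as np

def project(delta, eps, norm="l2"):
    """Project delta onto the ball {d : ||d|| <= eps}."""
    if norm == "l2":
        n = np.linalg.norm(delta)
        return delta if n <= eps else delta * (eps / n)
    return np.clip(delta, -eps, eps)   # l-infinity ball

def pgd_ascent(loss_fn, delta0, eps, eta, K, h=1e-6):
    """K steps of projected gradient ascent on loss_fn(delta),
    with a central finite-difference gradient (for illustration only)."""
    delta = delta0
    for _ in range(K):
        g = np.array([(loss_fn(delta + h * e) - loss_fn(delta - h * e)) / (2 * h)
                      for e in np.eye(delta.size)])
        delta = project(delta + eta * g, eps)
    return delta

# Toy regularizer that grows with the perturbation size, so ascent
# pushes delta toward the ball boundary.
loss = lambda d: np.sum(d ** 2)

rng = np.random.default_rng(0)
delta0 = rng.normal(scale=0.01, size=2)     # delta^0 ~ N(0, sigma^2 I)
delta = pgd_ascent(loss, delta0, eps=0.1, eta=0.5, K=3)

assert np.linalg.norm(delta) <= 0.1 + 1e-9          # stays inside the l2-ball
assert np.linalg.norm(delta) > np.linalg.norm(delta0)  # ascent grew the loss
```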
+
+To model how the follower will react to a leader's decision $\theta$ , we consider the function $\delta^{K}(\theta)$ . Then, adversarial training can be viewed solely in terms of the leader decision $\theta$ .
+
+We highlight that in our formulation, the leader knows the follower's strategy, not merely its outcome. This information is captured by the Stackelberg gradient $\mathrm{d}\mathcal{F}(\theta) / \mathrm{d}\theta$ , defined as follows:
+
+$$
+\begin{aligned}
+\frac{\mathrm{d}\mathcal{F}(\theta)}{\mathrm{d}\theta} &= \frac{\mathrm{d}\ell(f(x,\theta), y)}{\mathrm{d}\theta} + \alpha \frac{\mathrm{d}\ell_{v}(x, \delta^{K}(\theta), \theta)}{\mathrm{d}\theta} \\
+&= \underbrace{\frac{\mathrm{d}\ell(f(x,\theta), y)}{\mathrm{d}\theta} + \alpha \frac{\partial \ell_{v}(x, \delta^{K}, \theta)}{\partial \theta}}_{\text{leader}} \\
+&\quad + \underbrace{\alpha \frac{\partial \ell_{v}(x, \delta^{K}(\theta), \theta)}{\partial \delta^{K}(\theta)} \frac{\mathrm{d}\delta^{K}(\theta)}{\mathrm{d}\theta}}_{\text{leader-follower interaction}}. \tag{4}
+\end{aligned}
+$$
+
+The underlying idea behind Eq. 4$^{1}$ is that, given a leader's decision $\theta$ , we take the follower's strategy into account (i.e., the “leader-follower interaction” term) and find the direction along which the leader's loss decreases the most. Then we update $\theta$ in that direction. Note that the gradient used in standard adversarial training (Eq. 1) contains only the “leader” term, so the “leader-follower interaction” is not taken into account.
+
+Algorithm 1: Stackelberg Adversarial Regularization with Unrolled Optimization.
+
+Input: $\mathcal{D}$ : dataset; $T$ : total number of training epochs; $\sigma^2$ : variance of initial perturbations; $K$ : number of unrolling steps; Optimizer: optimizer to update $\theta$ .
+
+Initialize: model parameters $\theta$ .
+
+for $t = 1,\dots ,T$ do
+  for $(x,y)\in \mathcal{D}$ do
+    Initialize $\delta^0\sim \mathcal{N}(0,\sigma^2\mathrm{I})$ ;
+    for $k = 1,\dots ,K$ do
+      Compute $\delta^k$ using Eq. 3;
+      Compute $\mathrm{d}\delta^k (\theta) / \mathrm{d}\theta$ using Eq. 6;
+    end
+    Compute $\mathrm{d}\mathcal{F}(\theta) / \mathrm{d}\theta$ based on $\mathrm{d}\delta^{K}(\theta) / \mathrm{d}\theta$ using Eq. 4;
+    $\theta \gets \mathrm{Optimizer}(\mathrm{d}\mathcal{F}(\theta)/\mathrm{d}\theta)$ ;
+  end
+end
+
+Output: $\theta$
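To make the roles of the two terms in Eq. 4 concrete, the following sketch instantiates the Stackelberg gradient on a hypothetical one-dimensional problem with $K = 1$ (the losses, constants, and names are purely illustrative) and checks it against numerical differentiation of the leader objective:

```python
# 1-D toy instance of the leader objective (Eq. 2) with K = 1:
# leader loss L(theta) = theta^2 / 2, adversarial loss l_v(delta, theta) = (theta + delta)^2.
def ell_v(delta, theta):
    return (theta + delta) ** 2

def delta_K(theta, delta0, eta):
    # Follower: one gradient-ascent step on l_v (Eq. 3, projection omitted here).
    return delta0 + eta * 2 * (theta + delta0)

def F(theta, delta0, eta, alpha):
    # Leader objective: L(theta) + alpha * l_v(delta^K(theta), theta).
    return 0.5 * theta ** 2 + alpha * ell_v(delta_K(theta, delta0, eta), theta)

def stackelberg_grad(theta, delta0, eta, alpha):
    dK = delta_K(theta, delta0, eta)
    leader = theta + alpha * 2 * (theta + dK)           # dL/dtheta + alpha * dl_v/dtheta
    interaction = alpha * 2 * (theta + dK) * (2 * eta)  # alpha * (dl_v/ddelta) * ddelta^K/dtheta = 2*eta
    return leader + interaction

theta, delta0, eta, alpha = 0.7, 0.1, 0.05, 0.5
g = stackelberg_grad(theta, delta0, eta, alpha)
h = 1e-6
g_fd = (F(theta + h, delta0, eta, alpha) - F(theta - h, delta0, eta, alpha)) / (2 * h)
print(abs(g - g_fd) < 1e-6)  # True: Eq. 4 matches numerical differentiation of Eq. 2
```

Dropping the `interaction` term recovers the gradient used in standard adversarial training, which no longer matches the numerical derivative of $\mathcal{F}$.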
+
+# 3.3 SALT: Stackelberg Adversarial Regularization
+
+We propose to use an unrolling method (Pearlmutter and Siskind, 2008) to compute the Stackelberg gradient (Eq. 4). The general idea is that since the interaction operator is defined as the composition of the $\{U^k\}$ operators, all of which are known, we can directly compute the derivative of $\delta^K (\theta)$ with respect to $\theta$ . Concretely, we first run a forward iteration to update $\delta$ , and then we differentiate through this update to acquire the Stackelberg gradient.
+
+Note that the updates of $\delta$ can take any form, such as projected gradient ascent in Eq. 3 or more complicated alternatives like Adam (Kingma and Ba, 2015). For notational simplicity, we denote $\Delta(x, \delta^{k-1}(\theta), \theta) = \delta^k(\theta) - \delta^{k-1}(\theta)$ . Accordingly, Eq. 3 can be rewritten as
+
+$$
+\delta^ {k} (\theta) = \delta^ {k - 1} (\theta) + \Delta (x, \delta^ {k - 1} (\theta), \theta). \tag {5}
+$$
+
+The most expensive part of computing the Stackelberg gradient (Eq. 4) is calculating $\mathrm{d}\delta^{K}(\theta) / \mathrm{d}\theta$ , which involves differentiating through the composition form of the follower's strategy:
+
+$$
+\begin{aligned}
+\frac{\mathrm{d}\delta^{k}(\theta)}{\mathrm{d}\theta} &= \frac{\mathrm{d}\delta^{k-1}(\theta)}{\mathrm{d}\theta} + \frac{\partial \Delta(x, \delta^{k-1}, \theta)}{\partial \theta} \\
+&\quad + \frac{\partial \Delta(x, \delta^{k-1}(\theta), \theta)}{\partial \delta^{k-1}(\theta)} \frac{\mathrm{d}\delta^{k-1}(\theta)}{\mathrm{d}\theta} \quad \text{for } k = 1, \dots, K. \tag{6}
+\end{aligned}
+$$
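The recursion in Eq. 6 can be sanity-checked on a scalar toy update $\Delta(\delta, \theta) = 2\eta(\theta + \delta)$ (an illustrative choice corresponding to gradient ascent on $(\theta + \delta)^2$ without projection), unrolling Eq. 5 and Eq. 6 side by side:

```python
def unroll(theta, delta0, eta, K):
    """Unroll Eq. 5 forward while accumulating the derivative recursion of Eq. 6,
    for the toy update Delta(delta, theta) = 2*eta*(theta + delta)."""
    delta, d_delta_d_theta = delta0, 0.0
    for _ in range(K):
        # Eq. 6: d delta^k/d theta = d delta^{k-1}/d theta
        #        + dDelta/dtheta + (dDelta/ddelta) * d delta^{k-1}/d theta.
        # For this toy update, dDelta/dtheta = dDelta/ddelta = 2*eta.
        d_delta_d_theta = d_delta_d_theta + 2 * eta + 2 * eta * d_delta_d_theta
        # Eq. 5: delta^k = delta^{k-1} + Delta(delta^{k-1}, theta).
        delta = delta + 2 * eta * (theta + delta)
    return delta, d_delta_d_theta

theta, delta0, eta, K = 0.4, 0.1, 0.05, 3
_, d_analytic = unroll(theta, delta0, eta, K)
# Check against numerical differentiation of the unrolled map theta -> delta^K(theta).
h = 1e-6
d_fd = (unroll(theta + h, delta0, eta, K)[0] - unroll(theta - h, delta0, eta, K)[0]) / (2 * h)
print(abs(d_analytic - d_fd) < 1e-6)  # True: the recursion matches finite differences
```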
+
+We can compute Eq. 6 efficiently using deep learning libraries such as PyTorch (Paszke et al., 2019). Notice that $\Delta(x, \delta^{k-1}(\theta), \theta)$ already contains the first-order derivative with respect to the perturbations. Therefore, the term $\partial \Delta(x, \delta^{k-1}(\theta), \theta) / \partial \delta^{k-1}(\theta)$ contains the Hessian with respect to $\delta^{k-1}(\theta)$ . As a result, in Eq. 4, the most expensive operation is the Hessian-vector product (Hvp). Using the finite difference method, computing an Hvp requires only two backpropagations and an extra $O(d)$ operation. This indicates that, in comparison with conventional adversarial training, SALT does not introduce significant computational overhead. The training algorithm is summarized in Algorithm 1.
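The finite-difference Hvp mentioned above can be sketched as follows. We use a toy quadratic loss whose Hessian is known so the approximation can be checked exactly; in SALT itself the gradients would come from backpropagation rather than a closed form:

```python
import numpy as np

def grad_loss(delta, A):
    """Gradient of the toy quadratic loss l(delta) = 0.5 * delta @ A @ delta."""
    return A @ delta

def hvp_finite_diff(delta, v, A, r=1e-5):
    """Hessian-vector product via central finite differences:
    H v ~= (grad(delta + r*v) - grad(delta - r*v)) / (2*r).
    Only two extra gradient evaluations plus O(d) arithmetic are needed."""
    return (grad_loss(delta + r * v, A) - grad_loss(delta - r * v, A)) / (2 * r)

A = np.array([[2.0, 0.5], [0.5, 1.0]])   # the Hessian of the quadratic loss
delta = np.array([0.3, -0.2])
v = np.array([1.0, 2.0])
approx = hvp_finite_diff(delta, v, A)
print(np.allclose(approx, A @ v, atol=1e-6))  # True: matches the exact product [3.0, 2.5]
```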
+
+# 4 Experiments
+
+In all the experiments, we use PyTorch² (Paszke et al., 2019) as the backend. All the experiments are conducted on NVIDIA V100 32GB GPUs. We use the Higher package³ (Grefenstette et al., 2019) to implement the proposed algorithm.
+
+# 4.1 Baselines
+
+We adopt several baselines in the experiments.
+
+$\diamond$ Transformer (Vaswani et al., 2017) achieves superior performance in neural machine translation.
+$\diamond$ BERT (Devlin et al., 2019) is a pre-trained language model that exhibits outstanding performance after being fine-tuned on downstream NLU tasks.
+$\diamond$ Adversarial training (Adv, Sato et al. 2019) in NMT can improve models' generalization by training the model to defend against adversarial attacks.
+$\diamond$ FreeAT (Shafahi et al., 2019) enables "free" adversarial training by recycling the gradient information generated when updating the model parameters. This method was proposed for computer vision tasks, but was later modified for NLU. We further adjust the algorithm for NMT tasks.
+
+| Data | Source | Train | Valid | Test |
+| --- | --- | --- | --- | --- |
+| En-Vi | IWSLT'15 | 133k | 768 | 1268 |
+| De-En | IWSLT'14 | 161k | 7.2k | 6.7k |
+| Fr-En | IWSLT'16 | 224k | 1080 | 1133 |
+| En-De | WMT'16 | 4.5m | 3.0k | 3.0k |
+
+Table 1: Dataset source and statistics. Here "k" stands for thousand, and "m" stands for million.
+
+| Model | En-Vi | De-En | Fr-En |
+| --- | --- | --- | --- |
+| Transformer | 30.3 | 34.7 | 38.2 |
+| Adv | 31.0 | 34.8 | 38.8 |
+| FreeAT | 31.0 | 35.2 | 38.6 |
+| FreeLB | 31.6 | 35.3 | 38.7 |
+| SMART | 31.5 | 35.5 | 38.9 |
+| SALT | 32.8 | 36.8 | 39.7 |
+
+Table 2: BLEU score on three low-resource datasets. All the baseline results are from our re-implementation. We report the mean of three runs.
+
+$\diamond$ FreeLB (Zhu et al., 2019) is a “free” large-batch adversarial training method, originally proposed for NLU. We modify FreeLB into an adversarial regularization method that better fits our needs, and further adapt the algorithm so that it is also suitable for NMT tasks.
+
+$\diamond$ SMART (Jiang et al., 2020) is a state-of-the-art fine-tuning method that utilizes smoothness-inducing regularization and Bregman proximal point optimization.
+
+We highlight that we focus on model generalization on clean data, instead of adversarial robustness (a model's ability to defend against adversarial attacks). As we will see in the experiments, adversarial training methods (e.g., Adv, FreeAT) suffer from label leakage and do not generalize as well as adversarial regularization methods.
+
+# 4.2 Neural Machine Translation
+
+Datasets. We adopt three low-resource datasets and a rich-resource dataset. Dataset statistics are summarized in Table 1. For the low-resource experiments, we use: English-Vietnamese from IWSLT'15, German-English from IWSLT'14, and French-English from IWSLT'16. For the rich-resource experiments, we use the English-German dataset from WMT'16, which contains about 4.5 million training samples.
+
+| Model | RTE Acc | MRPC Acc/F1 | CoLA Mcc | SST-2 Acc | STS-B P/S Corr | QNLI Acc | QQP Acc/F1 | MNLI-m/mm Acc | Average Score |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| BERTLARGE | 71.1 | 86.0/89.6 | 61.8 | 93.5 | 89.6/89.3 | 92.4 | 91.3/88.4 | 86.3/86.2 | 84.0 |
+| BERTBASE | 63.5 | 84.1/89.0 | 54.7 | 92.9 | 89.2/88.8 | 91.1 | 90.9/88.3 | 84.5/84.4 | 81.5 |
+| FreeAT | 68.0 | 85.0/89.2 | 57.5 | 93.2 | 89.5/89.0 | 91.3 | 91.2/88.5 | 84.9/85.0 | 82.6 |
+| FreeLB | 70.0 | 86.0/90.0 | 58.9 | 93.4 | 89.7/89.2 | 91.5 | 91.4/88.4 | 85.4/85.5 | 83.3 |
+| SMART | 71.2 | 87.7/91.3 | 59.1 | 93.0 | 90.0/89.4 | 91.7 | 91.5/88.5 | 85.6/86.0 | 83.8 |
+| SALT | 72.9 | 88.4/91.8 | 61.0 | 93.6 | 90.4/90.0 | 92.0 | 91.7/88.6 | 86.1/85.8 | 84.5 |
+
+Table 3: Evaluation results on the GLUE development set. All the rows use $BERT_{BASE}$ , except the top one, which is included to demonstrate the effectiveness of our model. Best results on each dataset, excluding $BERT_{LARGE}$ , are shown in **bold**. Results of $BERT_{BASE}$ (Devlin et al., 2019), $BERT_{LARGE}$ (Devlin et al., 2019), FreeAT (Shafahi et al., 2019), and FreeLB (Zhu et al., 2019) are from our re-implementation. SMART results are from Jiang et al. (2020).
+
+| Model | RTE Acc | MRPC Acc/F1 | CoLA Mcc | SST-2 Acc | STS-B P/S Corr | QNLI Acc | QQP Acc/F1 | MNLI-m/mm Acc | Average Score |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| BERTBASE | 66.4 | 84.8/88.9 | 52.1 | 93.5 | 87.1/85.8 | 90.5 | 71.2/89.2 | 84.6/83.4 | 80.0 |
+| FreeLB | 70.1 | 83.5/88.1 | 54.5 | 93.6 | 87.7/86.7 | 91.8 | 72.7/89.6 | 85.7/84.6 | 81.2 |
+| SALT | 72.2 | 85.8/89.7 | 55.6 | 94.2 | 88.0/87.1 | 92.1 | 72.8/89.8 | 85.8/84.8 | 82.0 |
+
+Table 4: GLUE test set results on the GLUE evaluation server. All the methods fine-tune a pre-trained BERTBASE model. FreeAT and SMART did not report BERTBASE results in their paper or on the GLUE evaluation server. Model references: BERTBASE (Devlin et al., 2019), FreeLB (Zhu et al., 2019).
+
+| Model | BLEU |
+| --- | --- |
+| Transformer (Vaswani et al., 2017) | 28.4 |
+| FreeAT (Shafahi et al., 2019) | 29.0 |
+| FreeLB (Zhu et al., 2019) | 29.0 |
+| SMART (Jiang et al., 2020) | 29.1 |
+| SALT | 29.6 |
+
+Table 5: sacreBLEU score on WMT'16 En-De. All the baseline results are from our re-implementation.
+
+Implementation. Recall that to generate adversarial examples, we perturb the word embeddings. In the NMT experiments, we perturb both the source-side and the target-side embeddings; this strategy has been empirically demonstrated (Sato et al., 2019) to be more effective than perturbing only one side of the inputs. We use $\text{Fairseq}^5$ (Ott et al., 2019) to implement our algorithms. We adopt the Transformer-base (Vaswani et al., 2017) architecture in all the low-resource experiments except IWSLT'14 De-En, for which we use a model smaller than Transformer-base: we decrease the hidden dimension from 2048 to 1024 and the number of heads from 8 to 4 (while the dimension of each head doubles). For the rich-resource experiments, we use the Transformer-big (Vaswani et al., 2017) architecture. Training details are presented in Appendix B.1.
+
+Results. Experimental results for the low-resource experiments are summarized in Table 2. Notice that SMART, which utilizes conventional adversarial regularization, consistently outperforms standard adversarial training (Adv). Similar observations were also reported in Miyato et al. (2017) and Sato et al. (2019). This is because Adv generates perturbations using the correct examples, and thus the label information is “leaked” (Kurakin et al., 2017). Additionally, we can see that SALT is particularly effective in this low-resource setting, where it outperforms all the baselines by large margins. In comparison with the vanilla Transformer model, SALT achieves up to 2 BLEU points of improvement on the three datasets.
+
+Table 5 summarizes the experimental results on the WMT'16 En-De dataset. We report the sacreBLEU (Post, 2018) score, a detokenized version of the BLEU score that better reflects translation quality. SALT outperforms all the baseline methods by notable margins, and it improves upon the vanilla Transformer model by 1.2 BLEU.
+
+Figure 1: Relation between BLEU score and different factors on the IWSLT'14 De-En dataset. (a) Number of unrolling steps; (b) perturbation strength $\epsilon$ , $\ell_2$ case; (c) perturbation strength $\epsilon$ , $\ell_{\infty}$ case.
+
+# 4.3 Natural Language Understanding
+
+Datasets. We demonstrate the effectiveness of SALT on the General Language Understanding Evaluation (GLUE) benchmark (Wang et al., 2019a), which is a collection of nine NLU tasks. The benchmark includes question answering (Rajpurkar et al., 2016), linguistic acceptability (CoLA, Warstadt et al. 2019), sentiment analysis (SST, Socher et al. 2013), text similarity (STS-B, Cer et al. 2017), paraphrase detection (MRPC, Dolan and Brockett 2005), and natural language inference (RTE & MNLI, Dagan et al. 2006; Bar-Haim et al. 2006; Giampiccolo et al. 2007; Bentivogli et al. 2009; Williams et al. 2018) tasks. Dataset details can be found in Table 7 (Appendix B.2).
+
+Implementation. We evaluate our algorithm by fine-tuning a pre-trained BERT-base (Devlin et al., 2019) model. Our implementation is based on the MT-DNN code-base (Liu et al., 2019a, 2020b). Training details are presented in Appendix B.2.
+
+Results. Table 3 summarizes the experimental results on the GLUE development set. SALT outperforms BERTBASE on all the tasks. Further, our method is particularly effective on small datasets such as RTE, MRPC, and CoLA, where we achieve 9.4, 4.3, and 6.3 absolute improvements, respectively. Compared with the other adversarial training baselines, i.e., FreeAT, FreeLB, and SMART, our method achieves notable improvements on all the tasks.
+
+We highlight that SALT achieves an 84.5 average score, which is significantly higher than that of vanilla $\mathrm{BERT}_{\mathrm{BASE}}$ fine-tuning (+3.0). Our average score is also higher than those of the baseline adversarial training methods (+1.9, +1.2, and +0.7 over FreeAT, FreeLB, and SMART, respectively). Moreover, the 84.5 average score is even higher than that of fine-tuning $\mathrm{BERT}_{\mathrm{LARGE}}$ (+0.5), which contains three times more parameters than the backbone of SALT.
+
+Table 4 summarizes results on the GLUE test set. We can see that SALT consistently outperforms $\mathrm{BERT}_{\mathrm{BASE}}$ and FreeLB across all the tasks.
+
+# 4.4 Parameter Study
+
+$\diamond$ Robustness to the number of unrolling steps. From Figure 1a, we can see that SALT is robust to the number of unrolling steps. As such, setting the number of unrolling steps to $K = 1$ or $2$ suffices to build models that generalize well.
+$\diamond$ Robustness to the perturbation strength. Unrolling is robust to the perturbation strength within a wide range, as indicated in Figure 1b. Meanwhile, the performance of SMART consistently drops when we increase $\epsilon$ from 0.01 to 0.5. This indicates that the unrolling algorithm can withstand stronger perturbations than conventional approaches.
+$\diamond \ell_{2}$ constraints vs. $\ell_{\infty}$ constraints. Figure 1c illustrates model performance with respect to different perturbation strengths in the $\ell_{\infty}$ case. In comparison with the $\ell_{2}$ case (Figure 1b), SALT achieves the same level of performance, but the behavior of SMART is unstable. Additionally, SALT is stable within a wider range of perturbation strengths in the $\ell_{2}$ case than in the $\ell_{\infty}$ case, which is why we adopt $\ell_{2}$ constraints in the experiments.
+
+We highlight that SALT does not introduce additional tuning parameters compared with conventional adversarial regularization approaches.
+
+# 4.5 Analysis
+
+Unrolling reduces bias. In Figure 3, we visualize the training and the validation loss on the STS-B and the SST-2 datasets from the GLUE benchmark. As mentioned, conventional adversarial regularization suffers from over-strong perturbations, such that the model cannot fit the unperturbed data well. This is supported by the fact that the training loss of SALT is smaller than that of SMART, i.e., SALT fits the data better. SALT also yields a smaller loss than SMART on the validation data, indicating that the Stackelberg game formulation exhibits better generalization performance.
+
+Figure 2: Reliability diagrams on SST. (a) BERTBASE (ECE: $6.09\%$ ); (b) SMART (ECE: $5.08\%$ ); (c) SALT (ECE: $4.06\%$ ). Perfect calibration: confidence = accuracy; ECE: the lower the better.
+
+Figure 3: Training and validation loss of SMART and SALT on the STS-B (upper) and SST-2 (lower) datasets.
+
+Adversarial robustness. Even though the primary focus of SALT is model generalization, we still test its robustness on the Adversarial-NLI (ANLI, Nie et al. 2020) dataset. The dataset contains 163k examples, collected via a human-and-model-in-the-loop approach. From Table 6, we can see that SALT improves model robustness upon conventional methods (i.e., SMART).
+| | R1 | R2 | R3 | All |
+| --- | --- | --- | --- | --- |
+| Dev | | | | |
+| BERTBASE | 53.3 | 43.0 | 44.7 | 46.8 |
+| SMART | 54.1 | 44.4 | 45.3 | 47.8 |
+| SALT | 56.6 | 46.2 | 45.9 | 49.3 |
+| Test | | | | |
+| BERTBASE | 54.1 | 44.9 | 46.6 | 48.4 |
+| SMART | 54.3 | 46.4 | 46.5 | 48.9 |
+| SALT | 55.4 | 47.7 | 46.7 | 49.7 |
+
+Table 6: Experimental results on the ANLI dataset. Model references: $BERT_{BASE}$ (Devlin et al., 2019), SMART (Jiang et al., 2020).
+
+$\diamond$ Probing experiments. For each method, we first fine-tune a $\mathrm{BERT}_{\mathrm{BASE}}$ model on the SST-2 dataset. Then, we tune only a prediction head on the other datasets while keeping the representations fixed. This procedure directly measures the quality of the representations generated by different models. As illustrated in Fig. 4, SALT outperforms the baseline methods by large margins.
+
+Figure 4: Probing experiments. Each violin plot is based on 10 runs with different random seeds.
+
+$\diamond$ Classification model calibration. Adversarial regularization also helps model calibration (Stutz et al., 2020). A well-calibrated model produces reliable confidence estimates (i.e., confidence $\simeq$ actual accuracy), where the confidence is defined as the maximum output probability calculated by the model. We evaluate the calibration performance of $\mathrm{BERT}_{\mathrm{BASE}}$ , SMART, and SALT by the Expected Calibration Error (ECE, Niculescu-Mizil and Caruana 2005). We plot the reliability diagram (confidence vs. accuracy) on the SST task in Fig. 2 (see Appendix C for details). As we can see, $\mathrm{BERT}_{\mathrm{BASE}}$ and SMART are more likely to make overconfident predictions. SALT reduces the ECE, and its reliability diagram aligns better with the perfect-calibration curve.
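For reference, ECE can be computed from model confidences and per-example correctness as sketched below; the equal-width binning scheme and the toy data are illustrative choices:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: partition predictions into confidence bins, then average
    |accuracy - confidence| over bins, weighted by the fraction of samples per bin."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = correct[mask].mean()          # empirical accuracy in the bin
            avg_conf = confidences[mask].mean() # average confidence in the bin
            ece += mask.mean() * abs(acc - avg_conf)
    return ece

# Toy example of an overconfident model: 95% confidence, but only 75% accuracy.
conf = np.array([0.95, 0.95, 0.95, 0.95])
hit = np.array([1.0, 1.0, 1.0, 0.0])
print(round(expected_calibration_error(conf, hit), 2))  # 0.2
```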
+
+Comparison with Unrolled-GAN. The unrolling technique has been applied to train GANs (Unrolled-GAN, Metz et al. 2017). However, subsequent works find that this approach does not necessarily improve training (Grnarova et al., 2018; Tran et al., 2019; Doan et al., 2019). This is because Unrolled-GAN unrolls its discriminator, which has a significant number of parameters. Consequently, the unrolling algorithm operates on a very large space, rendering the stochastic gradients used to update the discriminator considerably noisy. In SALT, the unrolling space is the sample embedding space, whose dimension is much smaller than the unrolling space of GANs. Therefore, unrolling is more effective for NLP tasks.
+
+# 5 Conclusion
+
+We propose SALT, an adversarial regularization method that employs a Stackelberg game formulation. Such a formulation induces a competition between a leader (the model) and a follower (the adversary). In SALT, the leader is in an advantageous position by recognizing the follower's strategy, and this strategic information is captured by the Stackelberg gradient. We compute the Stackelberg gradient, and hence find the equilibrium of the Stackelberg game, using an unrolled optimization approach. Empirical results on NMT and NLU tasks suggest the superiority of SALT over existing adversarial regularization methods.
+
+# Broader Impact
+
+This paper proposes Stackelberg Adversarial Regularization (SALT), an adversarially regularized training framework for NLP tasks. Different from Generative Adversarial Networks (GANs) and from adversarial attack research, where the goal is to attack existing neural network models or to improve models' robustness to adversarial attacks, we seek to improve the generalization performance of deep learning models. We demonstrate that the SALT framework can be used for neural machine translation and natural language understanding tasks. In all the experiments, we use publicly available data, and we build our algorithms on public code bases. We do not find any ethical concerns.
+
+# References
+
+Marcin Andrychowicz, Misha Denil, Sergio Gomez Colmenarejo, Matthew W. Hoffman, David Pfau, Tom Schaul, and Nando de Freitas. 2016. Learning to learn by gradient descent by gradient descent. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pages 3981-3989.
+Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
+Roy Bar-Haim, Ido Dagan, Bill Dolan, Lisa Ferro, and Danilo Giampiccolo. 2006. The second PASCAL recognising textual entailment challenge. In Proceedings of the Second PASCAL Challenges Workshop on Recognising Textual Entailment.
+Luisa Bentivogli, Ido Dagan, Hoa Trang Dang, Danilo Giampiccolo, and Bernardo Magnini. 2009. The fifth PASCAL recognizing textual entailment challenge. In Proceedings of the Text Analysis Conference (TAC'09).
+Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez-Gazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1-14, Vancouver, Canada. Association for Computational Linguistics.
+Hao Cheng, Xiaodong Liu, Lis Pereira, Yaoliang Yu, and Jianfeng Gao. 2021. Posterior differential regularization with f-divergence for improving model robustness. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1078-1089, Online. Association for Computational Linguistics.
+Kevin Clark, Minh-Thang Luong, Christopher D. Manning, and Quoc Le. 2018. Semi-supervised sequence modeling with cross-view training. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1914-1925, Brussels, Belgium. Association for Computational Linguistics.
+Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The pascal recognising textual entailment challenge. In Proceedings of the First International Conference on Machine Learning Challenges: Evaluating Predictive Uncertainty Visual Object Classification, and Recognizing Textual Entailment, MLCW'05, pages 177-190, Berlin, Heidelberg. Springer-Verlag.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of
+
+deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
+Thang Doan, João Monteiro, Isabela Albuquerque, Bogdan Mazoure, Audrey Durand, Joelle Pineau, and R. Devon Hjelm. 2019. On-line adaptative curriculum learning for gans. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 3470-3477. AAAI Press.
+William B. Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005).
+Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, pages 1126-1135. PMLR.
+Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. 2017. Convolutional sequence to sequence learning. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, pages 1243-1252. PMLR.
+Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and Bill Dolan. 2007. The third PASCAL recognizing textual entailment challenge. In Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, pages 1-9, Prague. Association for Computational Linguistics.
+Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversarial examples. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
+Edward Grefenstette, Brandon Amos, Denis Yarats, Phu Mon Htut, Artem Molchanov, Franziska Meier, Douwe Kiela, Kyunghyun Cho, and Soumith Chintala. 2019. Generalized inner loop meta-learning. arXiv preprint arXiv:1910.01727.
+Paulina Grnarova, Kfir Y. Levy, Aurélien Lucchi, Thomas Hofmann, and Andreas Krause. 2018. An online learning approach to generative adversarial networks. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC,
+
+Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net.
+Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. 2017. On calibration of modern neural networks. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, pages 1321-1330. PMLR.
+Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2020. Deberta: Decoding-enhanced bert with disentangled attention. arXiv preprint arXiv:2006.03654.
+Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2021-2031, Copenhagen, Denmark. Association for Computational Linguistics.
+Haoming Jiang, Zhehui Chen, Yuyang Shi, Bo Dai, and Tuo Zhao. 2021. Learning to defend by learning to attack. In The 24th International Conference on Artificial Intelligence and Statistics, AISTATS 2021, April 13-15, 2021, Virtual Event, volume 130 of Proceedings of Machine Learning Research, pages 577-585. PMLR.
+Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Tuo Zhao. 2020. SMART: Robust and efficient fine-tuning for pretrained natural language models through principled regularized optimization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2177-2190, Online. Association for Computational Linguistics.
+Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
+Lingkai Kong, Haoming Jiang, Yuchen Zhuang, Jie Lyu, Tuo Zhao, and Chao Zhang. 2020. Calibrated language model fine-tuning for in- and out-of-distribution data. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1326–1340, Online. Association for Computational Linguistics.
+Alexey Kurakin, Ian J. Goodfellow, and Samy Bengio. 2017. Adversarial machine learning at scale. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.
+Yan Li, Ethan X Fang, Huan Xu, and Tuo Zhao. 2019. Inductive bias of gradient descent based adversarial training on separable data. arXiv preprint arXiv:1906.02931.
+
+Xiaodong Liu, Hao Cheng, Pengcheng He, Weizhu Chen, Yu Wang, Hoifung Poon, and Jianfeng Gao. 2020a. Adversarial training for large neural language models. arXiv preprint arXiv:2004.08994.
+Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019a. Multi-task deep neural networks for natural language understanding. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4487-4496, Florence, Italy. Association for Computational Linguistics.
+Xiaodong Liu, Yu Wang, Jianshu Ji, Hao Cheng, Xueyun Zhu, Emmanuel Awa, Pengcheng He, Weizhu Chen, Hoifung Poon, Guihong Cao, and Jianfeng Gao. 2020b. The Microsoft toolkit of multitask deep neural networks for natural language understanding. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 118-126, Online. Association for Computational Linguistics.
+Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.
+Dougal Maclaurin, David Duvenaud, and Ryan P. Adams. 2015. Gradient-based hyperparameter optimization through reversible learning. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, volume 37 of JMLR Workshop and Conference Proceedings, pages 2113-2122. JMLR.org.
+Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2018. Towards deep learning models resistant to adversarial attacks. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net.
+Luke Metz, Ben Poole, David Pfau, and Jascha Sohl-Dickstein. 2017. Unrolled generative adversarial networks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.
+Yifei Min, Lin Chen, and Amin Karbasi. 2020. The curious case of adversarially robust models: More data can help, double descend, or hurt generalization. arXiv preprint arXiv:2002.11080.
+Takeru Miyato, Andrew M. Dai, and Ian J. Goodfellow. 2017. Adversarial training methods for semi-supervised text classification. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.
+
+Mahdi Pakdaman Naeini, Gregory F. Cooper, and Milos Hauskrecht. 2015. Obtaining well calibrated probabilities using bayesian binning. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, January 25-30, 2015, Austin, Texas, USA, pages 2901-2907. AAAI Press.
+Alexandru Niculescu-Mizil and Rich Caruana. 2005. Predicting good probabilities with supervised learning. In Machine Learning, Proceedings of the Twenty-Second International Conference (ICML 2005), Bonn, Germany, August 7-11, 2005, volume 119 of ACM International Conference Proceeding Series, pages 625-632. ACM.
+Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2020. Adversarial NLI: A new benchmark for natural language understanding. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4885-4901, Online. Association for Computational Linguistics.
+Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 48-53, Minneapolis, Minnesota. Association for Computational Linguistics.
+Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018. Scaling neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 1-9, Brussels, Belgium. Association for Computational Linguistics.
+Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 8024-8035.
+Barak A Pearlmutter and Jeffrey Mark Siskind. 2008. Reverse-mode ad in a functional framework: Lambda the ultimate backpropagator. ACM Transactions on Programming Languages and Systems (TOPLAS), 30(2):1-36.
+Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.
+Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186-191, Brussels, Belgium. Association for Computational Linguistics.
+Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.
+Aditi Raghunathan, Sang Michael Xie, Fanny Yang, John C. Duchi, and Percy Liang. 2020. Understanding and mitigating the tradeoff between robustness and accuracy. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pages 7909-7919. PMLR.
+Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383-2392, Austin, Texas. Association for Computational Linguistics.
+Motoki Sato, Jun Suzuki, and Shun Kiyono. 2019. Effective adversarial regularization for neural machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 204-210, Florence, Italy. Association for Computational Linguistics.
+Motoki Sato, Jun Suzuki, Hiroyuki Shindo, and Yuji Matsumoto. 2018. Interpretable adversarial perturbation in input embedding space for text. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden, pages 4323-4330. ijcai.org.
+Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Linguistics.
+Ali Shafahi, Mahyar Najibi, Amin Ghiasi, Zheng Xu, John P. Dickerson, Christoph Studer, Larry S. Davis, Gavin Taylor, and Tom Goldstein. 2019. Adversarial training for free! In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 3353-3364.
+Qianli Shen, Yan Li, Haoming Jiang, Zhaoran Wang, and Tuo Zhao. 2020. Deep reinforcement learning with robust and smooth policy. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pages 8707-8718. PMLR.
+Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642, Seattle, Washington, USA. Association for Computational Linguistics.
+David Stutz, Matthias Hein, and Bernt Schiele. 2019. Disentangling adversarial robustness and generalization. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, pages 6976-6987. Computer Vision Foundation / IEEE.
+David Stutz, Matthias Hein, and Bernt Schiele. 2020. Confidence-calibrated adversarial training: Generalizing to unseen attacks. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pages 9155-9166. PMLR.
+Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus. 2014. Intriguing properties of neural networks. In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings.
+Ngoc-Trung Tran, Viet-Hung Tran, Ngoc-Bao Nguyen, Linxiao Yang, and Ngai-Man Cheung. 2019. Self-supervised GAN: analysis and improvement with multi-class minimax game. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 13232-13243.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998-6008.
+Heinrich Von Stackelberg. 2010. Market structure and equilibrium. Springer Science & Business Media.
+Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019a. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.
+
+Dilin Wang, ChengYue Gong, and Qiang Liu. 2019b. Improving neural language modeling via adversarial training. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Proceedings of Machine Learning Research, pages 6555-6565. PMLR.
+Alex Warstadt, Amanpreet Singh, and Samuel R. Bowman. 2019. Neural network acceptability judgments. Transactions of the Association for Computational Linguistics, 7:625-641.
+Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122, New Orleans, Louisiana. Association for Computational Linguistics.
+Chen Zhu, Yu Cheng, Zhe Gan, Siqi Sun, Tom Goldstein, and Jingjing Liu. 2019. FreeLB: Enhanced adversarial training for language understanding. arXiv preprint arXiv:1909.11764.
+
+# A Virtual Adversarial Training
+
+Virtual adversarial training (VAT, Miyato et al. 2017) solves the following min-max optimization problem:
+
+$$
+\begin{array}{l} \min _ {\theta} \mathcal {F} (\theta , \delta^ {*}) = \mathcal {L} (\theta) + \frac {\alpha}{n} \sum_ {i = 1} ^ {n} \ell_ {v} (x _ {i}, \delta_ {i} ^ {*}, \theta), \\ \delta_{i}^{*} = \operatorname *{argmax}_{\| \delta_{i}\| \leq \epsilon}\ell_{v}(x_{i},\delta_{i},\theta), \\ \end{array}
+$$
+
+where
+
+$$
+\ell_{v}(x_{i}, \delta_{i}, \theta) = \mathrm{KL}\big(f(x_{i}, \theta) \,\|\, f(x_{i} + \delta_{i}, \theta)\big).
+$$
+
+Note that the objective of the minimization problem is a function of both the model parameters and the perturbations.
+
+Because the min problem and the max problem operate on the same loss function, i.e., the min problem seeks to minimize $\ell_v$ while the max problem tries to maximize $\ell_v$, this min-max optimization is essentially a zero-sum game, and we can find the game's equilibrium using gradient descent/ascent algorithms.
+
+Specifically, the adversarial player first generates an initial perturbation $\delta^0$ and then refines it using $K$ steps of projected gradient ascent, i.e.,
+
+$$
+\delta^ {k} = \Pi_ {\| \cdot \| \leq \epsilon} \left(\delta^ {k - 1} + \eta \frac {\partial \ell_ {v} (x , \delta^ {k - 1} , \theta)}{\partial \delta^ {k - 1}}\right),
+$$
+
+for $k = 1,\dots ,K$.
+
+Here $\Pi$ denotes projection onto the $\ell_2$-ball or the $\ell_{\infty}$-ball. Empirically, we find that these two choices yield very similar performance, although adversarially trained models are robust to $\epsilon$ over a wider range when applying the $\ell_2$ constraint.
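As an illustration, the $\ell_2$ projection and the $K$-step refinement can be sketched in plain Python (a minimal sketch; `grad_fn` is a placeholder for the gradient oracle $\partial \ell_v / \partial \delta$, not part of the paper's implementation):

```python
import math

def project_l2(delta, eps):
    # Pi: project delta onto the l2-ball of radius eps.
    norm = math.sqrt(sum(d * d for d in delta))
    if norm <= eps:
        return list(delta)
    return [d * eps / norm for d in delta]

def refine_perturbation(grad_fn, delta0, eps, eta, K):
    # K steps of projected gradient ascent:
    #   delta^k = Pi(delta^{k-1} + eta * d l_v / d delta^{k-1})
    delta = list(delta0)
    for _ in range(K):
        g = grad_fn(delta)
        delta = project_l2([d + eta * gi for d, gi in zip(delta, g)], eps)
    return delta
```

With an $\ell_{\infty}$ constraint, the projection would instead clip each coordinate of $\delta$ to $[-\epsilon, \epsilon]$.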
+
+After obtaining the $K$ -step refined perturbation $\delta^K$ , we use gradient descent to update the model parameters $\theta$ . Concretely, the gradient of the model parameters is computed as
+
+$$
+\frac {\partial \mathcal {F} (\theta , \delta^ {K})}{\partial \theta} = \frac {\mathrm {d} \ell (f (x _ {i} , \theta) , y _ {i})}{\mathrm {d} \theta} + \alpha \frac {\partial \ell_ {v} (x _ {i} , \delta^ {K} , \theta)}{\partial \theta}. \tag {7}
+$$
+
+The training algorithm is demonstrated in Algorithm 2.
+
+Note that in this paper, we target models' generalization performance on the unperturbed test data; therefore, we do not want a strong adversary that "traps" the model parameters in a bad local optimum. Most existing algorithms achieve this goal by carefully tuning the hyper-parameters $\epsilon$ and $K$: a small $\epsilon$ usually generates weaker adversaries, as does a small $K$. However, these heuristics do not work well, and at times $\delta^K$ is too strong. Consequently, conventional adversarial training results in undesirable underfitting on the clean data.
+
+# Algorithm 2: Virtual Adversarial Training.
+
+Input: $\mathcal{D}$ : dataset; $T$ : total number of training iterations; $\sigma^2$ : variance of initial perturbations; $K$ : number of inner training iterations; $\eta$ : step size to update $\delta$ ; Optimizer: optimizer to update $\theta$ .
+
+Initialize: model parameters $\theta$
+
+for $t = 1,\dots ,T$ do
+
+for $(x,y)\in \mathcal{D}$ do
+
+Initialize $\delta^0\sim \mathcal{N}(0,\sigma^2 I)$
+
+for $k = 1,\dots ,K$ do
+
+$$
+\begin{array}{l} {g ^ {k} \gets \partial \ell_ {v} (x, \delta^ {k - 1}, \theta) / \partial \delta^ {k - 1};} \\ {\delta^ {k} \gets \Pi (\delta^ {k - 1} + \eta g ^ {k});} \end{array}
+$$
+
+Compute the gradient $g_{\theta}$ using
+
+Eq. 7;
+
+$\theta \gets \mathrm{Optimizer}(g_{\theta})$
+
+Output: $\theta$
+
+# B Training Details
+
+# B.1 Neural Machine Translation
+
+For the rich-resource WMT'16 En-De dataset, we use the pre-processed data from Ott et al. (2018). For the low-resource datasets, we use byte-pair encoding (Sennrich et al., 2016) with 10,000 merge operations to build the vocabulary for the IWSLT ('14, '15, '16) datasets. We follow the scripts in Ott et al. (2019) for other pre-processing steps.
+
+We use Adam (Kingma and Ba, 2015) as the leader's (i.e., the upper level problem that solves for model parameters) optimizer, and we set $\beta = (0.9, 0.98)$ . The follower's (i.e., the lower level problem that solves for perturbations) optimizer is chosen from Adam and SGD, where we observe only marginal empirical differences between these two choices. For low-resource translation, we set the batch size to be equivalent to 64k tokens. For example, when running the experiments on 4 GPUs,
+
+| Corpus | Task | #Train | #Dev | #Test | #Label | Metrics |
+| --- | --- | --- | --- | --- | --- | --- |
+| *Single-Sentence Classification (GLUE)* | | | | | | |
+| CoLA | Acceptability | 8.5k | 1k | 1k | 2 | Matthews corr |
+| SST | Sentiment | 67k | 872 | 1.8k | 2 | Accuracy |
+| *Pairwise Text Classification (GLUE)* | | | | | | |
+| MNLI | NLI | 393k | 20k | 20k | 3 | Accuracy |
+| RTE | NLI | 2.5k | 276 | 3k | 2 | Accuracy |
+| QQP | Paraphrase | 364k | 40k | 391k | 2 | Accuracy/F1 |
+| MRPC | Paraphrase | 3.7k | 408 | 1.7k | 2 | Accuracy/F1 |
+| QNLI | QA/NLI | 108k | 5.7k | 5.7k | 2 | Accuracy |
+| *Text Similarity (GLUE)* | | | | | | |
+| STS-B | Similarity | 7k | 1.5k | 1.4k | 1 | Pearson/Spearman corr |
+
+Table 7: Summary of the GLUE benchmark.
+
+| Dataset | Batch | $\mathrm{lr}_{\mathrm{leader}}$ | $\mathrm{lr}_{\mathrm{follower}}$ | $\sigma$ | $\epsilon$ | $K$ | Beam | Len-Pen |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| En-Vi (IWSLT'15) | 64k | $1 \times 10^{-3}$ | $1 \times 10^{-5}$ | $1 \times 10^{-4}$ | 0.1 | 1 | 10 | 1.0 |
+| De-En (IWSLT'14) | 64k | $1 \times 10^{-3}$ | $1 \times 10^{-4}$ | $1 \times 10^{-4}$ | 0.3 | 1 | 9 | 1.5 |
+| Fr-En (IWSLT'16) | 64k | $1 \times 10^{-3}$ | $1 \times 10^{-5}$ | $1 \times 10^{-5}$ | 0.3 | 1 | 10 | 2.0 |
+| En-De (WMT'16) | 450k | $1 \times 10^{-3}$ | $1 \times 10^{-4}$ | $1 \times 10^{-4}$ | 0.3 | 1 | 4 | 0.6 |
+
+Table 8: Hyper-parameters for machine translation. Here, $\sigma$ is the standard deviation of the initial perturbations, $\epsilon$ is the perturbation strength, $K$ is the number of unrolling steps, Beam is the size of beam search, and Len-Pen is the length penalty parameter during beam search.
+
+we set the tokens-per-GPU to be 8,000, and we accumulate gradients for 2 steps. For rich-resource translation, we set the batch size to be equivalent to $450\mathrm{k}$ tokens. In all the experiments, we constrain each perturbation according to its sentence-level $\ell_2$ norm, i.e., $\| \delta \| _2\leq \epsilon$ . Other hyper-parameters are specified in Table 8.
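The effective batch size arithmetic above can be checked directly (all numbers are the ones stated in the text for the low-resource setup):

```python
tokens_per_gpu = 8_000  # tokens-per-GPU setting
num_gpus = 4            # GPUs in the example
accum_steps = 2         # gradient accumulation steps
effective_tokens = tokens_per_gpu * num_gpus * accum_steps
print(effective_tokens)  # 64000, i.e., the 64k-token batch size
```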
+
+# B.2 Natural Language Understanding
+
+Details of the GLUE benchmark, including tasks, statistics, and evaluation metrics, are summarized in Table 7.
+
+We use Adam as both the leader's and the follower's optimizer, and we set $\beta = (0.9, 0.98)$. The learning rate of the leader, $\mathrm{lr}_{\mathrm{leader}}$, is chosen from $\{5 \times 10^{-5}, 1 \times 10^{-4}, 5 \times 10^{-4}\}$, and the follower's learning rate is chosen from $\{1 \times 10^{-5}, \mathrm{lr}_{\mathrm{leader}}\}$. We choose the batch size from $\{4, 8, 16, 32\}$, and we train for a maximum of 6 epochs with early stopping based on results on the development set. We apply gradient norm clipping of 1.0 and set the dropout rate in task-specific layers to 0.1. We choose the standard deviation of the initial perturbations $\sigma$ from $\{1 \times 10^{-5}, 1 \times 10^{-4}\}$, and apply $\ell_2$ constraints with perturbation strength $\epsilon = 1.0$. We set the number of unrolling steps to $K = 2$. We report the best performance on each dataset individually.
+
+# C Model Calibration
+
+Many applications require trustworthy predictions that are not only accurate but also well calibrated (Kong et al., 2020). A well-calibrated model is expected to output prediction confidence comparable to its classification accuracy. For example, given 100 data points each with prediction confidence 0.6, we expect 60 of them to be correctly classified. More precisely, for a data point $X$, we denote by $Y(X)$ the ground-truth label, $\widehat{Y}(X)$ the label predicted by the model, and $\widehat{P}(X)$ the output probability associated with the predicted label. The calibration error of the predictive model at a given confidence level $p \in (0,1)$ is defined as:
+
+$$
+\mathcal {E} _ {p} = \left| \mathbb {P} \left[ \widehat {Y} (X) = Y (X) | \widehat {P} (X) = p \right] - p \right|. \tag {8}
+$$
+
+Since Eq. 8 involves population quantities, we usually adopt empirical approximations (Guo et al., 2017) to estimate the calibration error. Specifically,
+
+we partition all data points into 10 bins of equal size according to their prediction confidence. Let $\mathcal{B}_m$ denote the bin with prediction confidence bounded between $\ell_m$ and $u_{m}$ . Then, for any $p\in [\ell_m,u_m)$ , we define the empirical calibration error as:
+
+$$
+\widehat {\mathcal {E}} _ {p} = \widehat {\mathcal {E}} _ {m} = \frac {1}{| \mathcal {B} _ {m} |} \Big | \sum_ {i \in \mathcal {B} _ {m}} \left[ \mathbf {1} (\widehat {y} _ {i} = y _ {i}) - \widehat {p} _ {i} \right] \Big |, \tag {9}
+$$
+
+where $y_{i}$, $\widehat{y}_{i}$, and $\widehat{p}_{i}$ are the true label, predicted label, and confidence for sample $i$, respectively.
+
+Reliability Diagram is a bar plot that compares the per-bin accuracy against the bin confidence $p$. A perfectly calibrated model would have accuracy equal to $(\ell_m + u_m) / 2$ in each bin, i.e., $\widehat{\mathcal{E}}_p = 0$.
+
+Expected Calibration Error (ECE) is the weighted average of the calibration errors of all bins (Naeini et al., 2015) defined as:
+
+$$
+\mathrm {E C E} = \sum_ {m = 1} ^ {M} \frac {\left| \mathcal {B} _ {m} \right|}{n} \widehat {\mathcal {E}} _ {m}, \tag {10}
+$$
+
+where $n$ is the sample size.
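Eqs. 9 and 10 can be sketched as follows (a minimal sketch assuming equal-width confidence bins; the per-bin average confidence stands in for $p$):

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    # Partition predictions into equal-width confidence bins, then take the
    # weighted average of per-bin |accuracy - avg. confidence| (Eqs. 9-10).
    n = len(confidences)
    bins = [[] for _ in range(n_bins)]
    for p, c in zip(confidences, correct):
        idx = min(int(p * n_bins), n_bins - 1)  # confidence 1.0 -> last bin
        bins[idx].append((p, c))
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(p for p, _ in b) / len(b)
        accuracy = sum(c for _, c in b) / len(b)
        ece += (len(b) / n) * abs(accuracy - avg_conf)
    return ece
```

A well-calibrated model, whose accuracy matches its confidence in every bin, yields an ECE near zero.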
+
+We remark that the goal of calibration is to minimize the calibration error without significantly sacrificing prediction accuracy. Otherwise, a random guess classifier can achieve zero calibration error.
\ No newline at end of file
diff --git a/adversarialregularizationasstackelberggameanunrolledoptimizationapproach/images.zip b/adversarialregularizationasstackelberggameanunrolledoptimizationapproach/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..3f969b82d8ea6b6230f22d2ee0d5a5ca85af839b
--- /dev/null
+++ b/adversarialregularizationasstackelberggameanunrolledoptimizationapproach/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b40778268d561a9fd563aca690b9a09099738ecc2c32525a85520ecf9290dca5
+size 617269
diff --git a/adversarialregularizationasstackelberggameanunrolledoptimizationapproach/layout.json b/adversarialregularizationasstackelberggameanunrolledoptimizationapproach/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..c54712af3f3277040725b34bf039ce2cf0cd23c6
--- /dev/null
+++ b/adversarialregularizationasstackelberggameanunrolledoptimizationapproach/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f327cb8adc324b019446065725aabe6651fccf095587abc26ea1a83cee97cbd8
+size 633512
diff --git a/adversarialscrubbingofdemographicinformationfortextclassification/eb76a784-18e7-4f62-8910-c85b7d0ef99f_content_list.json b/adversarialscrubbingofdemographicinformationfortextclassification/eb76a784-18e7-4f62-8910-c85b7d0ef99f_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..5ae13df4682ced154399644cbaf1fe55a7bb1df5
--- /dev/null
+++ b/adversarialscrubbingofdemographicinformationfortextclassification/eb76a784-18e7-4f62-8910-c85b7d0ef99f_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0880758c6102c2d151a512fc7da3c0a5297939100b07ac823529b9e3377160aa
+size 101510
diff --git a/adversarialscrubbingofdemographicinformationfortextclassification/eb76a784-18e7-4f62-8910-c85b7d0ef99f_model.json b/adversarialscrubbingofdemographicinformationfortextclassification/eb76a784-18e7-4f62-8910-c85b7d0ef99f_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..da17c7a216f24ad93fc9bdf1e613c56677b5b460
--- /dev/null
+++ b/adversarialscrubbingofdemographicinformationfortextclassification/eb76a784-18e7-4f62-8910-c85b7d0ef99f_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9b2c08ad7c233fac8a15db627d5ae2436d499bb5e952ffd69efbdba512650437
+size 123175
diff --git a/adversarialscrubbingofdemographicinformationfortextclassification/eb76a784-18e7-4f62-8910-c85b7d0ef99f_origin.pdf b/adversarialscrubbingofdemographicinformationfortextclassification/eb76a784-18e7-4f62-8910-c85b7d0ef99f_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..14e91f2f6557fc7c15a180ee1e441dc70891a318
--- /dev/null
+++ b/adversarialscrubbingofdemographicinformationfortextclassification/eb76a784-18e7-4f62-8910-c85b7d0ef99f_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f7d147604e0372fb2c9313319afd28a3dec6a91c899586ab691b26c7f05f479e
+size 3267780
diff --git a/adversarialscrubbingofdemographicinformationfortextclassification/full.md b/adversarialscrubbingofdemographicinformationfortextclassification/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..961b0816951d2fd664c28ae7daa8799805f9f70a
--- /dev/null
+++ b/adversarialscrubbingofdemographicinformationfortextclassification/full.md
@@ -0,0 +1,435 @@
+# Adversarial Scrubbing of Demographic Information for Text Classification
+
+Somnath Basu Roy Chowdhury
+
+Sayan Ghosh
+
+Yiyuan Li
+
+Junier B. Oliva
+
+{somnath, sayghosh, yiyuanli, joliva}@cs.unc.edu
+
+Shashank Srivastava
+
+Snigdha Chaturvedi
+
+{ssrivastava, snigdha}@cs.unc.edu
+
+UNC Chapel Hill
+
+# Abstract
+
+Contextual representations learned by language models can often encode undesirable attributes, like demographic associations of the users, while being trained for an unrelated target task. We aim to scrub such undesirable attributes and learn fair representations while maintaining performance on the target task. In this paper, we present an adversarial learning framework "Adversarial Scrubber" (ADS), to debias contextual representations. We perform theoretical analysis to show that our framework converges without leaking demographic information under certain conditions. We extend previous evaluation techniques by evaluating debiasing performance using Minimum Description Length (MDL) probing. Experimental evaluations on 8 datasets show that ADS generates representations with minimal information about demographic attributes while being maximally informative about the target task.
+
+# 1 Introduction
+
+Automated systems are increasingly being used for real-world applications like filtering college applications (Basu et al., 2019), determining credit eligibility (Ghailan et al., 2016), making hiring decisions (Chalfin et al., 2016), etc. For such tasks, predictive models are trained on data coming from human decisions, which are often biased against certain demographic groups (Mehrabi et al., 2019; Blodgett et al., 2020; Shah et al., 2020). Biased decisions based on demographic attributes can have lasting economic, social and cultural consequences.
+
+Natural language text is highly indicative of demographic attributes of the author (Koppel et al., 2002; Burger et al., 2011; Nguyen et al., 2013; Verhoeven and Daelemans, 2014; Weren et al., 2014; Rangel et al., 2016; Verhoeven et al., 2016; Blodgett et al., 2016). Language models can often encode such demographic associations even without having direct access to them. Prior works have
+
+shown that intermediate representations in a deep learning model encode demographic associations of the author or person being spoken about (Blodgett et al., 2016; Elazar and Goldberg, 2018; Elazar et al., 2021). Therefore, it is important to ensure that decision functions do not make predictions based on such representations.
+
+In this work, we focus on removing demographic attributes encoded in data representations during training text classification systems. To this end, we present "Adversarial Scrubber" (ADS) to remove information pertaining to protected attributes (like gender or race) from intermediate representations during training for a target task (like hate speech detection). Removal of such features ensures that any prediction model built on top of those representations will be agnostic to demographic information during decision-making.
+
+ADS can be used as a plug-and-play module during training any text classification model to learn fair intermediate representations. The framework consists of 4 modules: Encoder, Scrubber, Bias discriminator and Target classifier. The Encoder generates contextual representation of an input text. Taking these encoded contextual representations as input, the Scrubber tries to produce fair representations for the target task. The Bias discriminator and Target classifier predict the protected attribute and target label respectively from the Scrubber's output. The framework is trained end-to-end in an adversarial manner (Goodfellow et al., 2014).
+
+We provide theoretical analysis to show that under certain conditions Encoder and Scrubber converge without leaking information about the protected attribute. We evaluate our framework on 5 dialogue datasets, 2 Twitter-based datasets and a Biographies dataset with different target task and protected attribute settings. We extend previous evaluation methodology for debiasing by measuring Minimum Description Length (MDL) (Voita and Titov, 2020) of labels given representations,
+
+instead of probing accuracy. MDL provides a finer-grained evaluation benchmark for measuring debiasing performance. We compute MDL using off-the-shelf classifiers, making it easier to reproduce. Upon training using the ADS framework, we observe a significant gain in MDL for protected attribute prediction as compared to fine-tuning for the target task. Our contributions are:
+
+- We present Adversarial Scrubber (ADS), an adversarial framework to learn fair representations for text classification.
+- We provide theoretical guarantees to show that Scrubber and Encoder converge without leaking demographic information.
+- We extend previous evaluation methodology for adversarial debiasing by framing performance in terms of MDL.
+- Experimental evaluations on 8 datasets show that models trained using ADS generate representations where probing networks achieve near random performance on protected attribute inference while performing similar to the baselines on target task.
+- We show that ADS is scalable and can be used to remove multiple protected attributes simultaneously.
+
+# 2 Related Work
+
+Contextual representations learned during training for a target task can be indicative of features unrelated to the task. Such representations can often encode undesirable demographic attributes, as observed in unsupervised word embeddings (Bolukbasi et al., 2016) and sentence embeddings (May et al., 2019). Prior work has analysed bias in different NLP systems like machine translation (Park et al., 2018; Stanovsky et al., 2019; Font and Costa-Jussa, 2019; Saunders and Byrne, 2020), NLI (Rudinger et al., 2017), text classification (Dixon et al., 2018; Kiritchenko and Mohammad, 2018; Sap et al., 2019; Liu et al., 2021), language generation (Sheng et al., 2019) among others.
+
+Debiasing sensitive attributes for fair classification was introduced as an optimization problem by Zemel et al. (2013). Since then, adversarial training (Goodfellow et al., 2014) frameworks have been explored for protecting sensitive attributes for NLP tasks (Zhang et al., 2018; Li et al., 2018; Elazar and Goldberg, 2018; Liu et al., 2020).
+
+
+Figure 1: Architecture of the Adversarial Scrubber (ADS). Encoder receives an input $x$ to produce $e$ . Scrubber uses $e$ to produce $u$ . Bias discriminator $d$ and Target classifier $c$ infer protected attribute $z$ and target task label $y$ from $u$ .
+
+Our work is most similar to Elazar and Goldberg (2018), which achieves fairness by blindness by learning intermediate representations which are oblivious to a protected attribute. We compare the performance of ADS with Elazar and Goldberg (2018) in our experiments.
+
+# 3 Adversarial Scrubber
+
+ADS takes text documents $\{x_{1}, x_{2}, \ldots, x_{n}\}$ as input from a dataset $\mathcal{D}$ with corresponding target labels $\{y_{1}, y_{2}, \ldots, y_{n}\}$. Every input $x_{i}$ is also associated with a protected attribute $z_{i} \in \{1, 2, \ldots, K\}$. Our goal is to construct a model $f(x)$ such that it does not rely on $z_{i}$ while making the prediction $y_{i} = f(x_{i})$. The framework consists of 4 modules: (i) Encoder $h(\cdot)$ with weights $\theta_{h}$, (ii) Scrubber $s(\cdot)$ with weights $\theta_{s}$, (iii) Bias discriminator $d(\cdot)$ with weights $\theta_{d}$, and (iv) Target classifier $c(\cdot)$ with weights $\theta_{c}$, as shown in Figure 1. The Encoder receives a text input $x_{i}$ and produces an embedding $e_{i} = h(x_{i})$, which is forwarded to the Scrubber. The goal of the Scrubber is to produce a representation $u_{i} = s(h(x_{i}))$ such that $y_{i}$ can be easily inferred from $u_{i}$ by the Target classifier $c$, but $u_{i}$ does not have the information required to predict the protected attribute $z_{i}$ by the Bias discriminator $d$. Our setup also includes a Probing network $q$, which helps in evaluating the fairness of the learned representations.
+
+# Algorithm 1 ADS Training algorithm
+
+1: for number of training iterations do
+2: Sample a minibatch $\{x_i, y_i, z_i\}_{i=1}^m \sim \mathcal{D}$
+3: Bias discriminator $d$ is updated using the gradients:
+
+$$
+\nabla_ {\theta_ {d}} \frac {1}{m} \sum_ {i = 1} ^ {m} \mathcal {L} _ {d} \left(d \left(u _ {i}\right), z _ {i}\right) \tag {1}
+$$
+
+4: Update the Encoder $h$ , Scrubber $s$ , and Task Classifier $c$ using the gradients:
+
+$$
+\nabla_ {\theta_ {c}, \theta_ {s}, \theta_ {h}} \frac {1}{m} \sum_ {i = 1} ^ {m} \left[ \mathcal {L} _ {c} \left(c \left(u _ {i}\right), y _ {i}\right) - \lambda_ {1} H \left(d \left(u _ {i}\right)\right) + \lambda_ {2} \delta \left(d \left(u _ {i}\right)\right) \right] \tag {2}
+$$
+
+In the rest of this section, we describe ADS assuming a single Bias discriminator. However, ADS can easily be extended to incorporate multiple discriminators for removing several protected attributes (discussed in Section 6.1).
+
+Scrubber: The Scrubber receives the input representation $h(x_{i})$ from Encoder and generates representation $u_{i} = s(h(x_{i}))$ . The goal of the Scrubber is to produce representations such that the Bias discriminator finds it difficult to predict the protected attribute $z_{i}$ . To this end, we consider two loss functions:
+
+Entropy loss: In the Entropy loss, the Encoder and Scrubber parameters are jointly optimized to increase the entropy of the prediction probability distribution, $H(d(u_i))$ .
+
+$\delta$ loss: The $\delta$-loss function penalizes the model if the discriminator assigns a high probability to the correct protected-attribute class. For every input instance, we form an output mask $m_{i}\in \mathbb{R}^{1\times K}$ where $K$ is the number of protected attribute classes, with $m_i^{(k)} = 1$ if $z_{i} = k$ and 0 otherwise. The Encoder and Scrubber minimize the $\delta$-loss defined as:
+
+$$
+\delta (d (u _ {i})) = m _ {i} ^ {T} \operatorname {softmax} _ {\text {Gumbel}} (d (u _ {i})) \tag {3}
+$$
+
+where $\mathrm{softmax}_{\mathrm{Gumbel}}(\cdot)$ is the Gumbel softmax function (Jang et al., 2017). In our experiments, we use a combination of the entropy and $\delta$ losses.
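A minimal sketch of the two Scrubber regularizers (a plain softmax stands in for the Gumbel softmax, which additionally injects noise for differentiable sampling; the logits and class index used in testing are made-up examples):

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def entropy_reward(logits):
    # H(d(u)): the Scrubber is rewarded for a high-entropy (uncertain)
    # discriminator output distribution.
    probs = softmax(logits)
    return -sum(p * math.log(p) for p in probs if p > 0)

def delta_loss(logits, z):
    # Eq. 3 with a plain softmax: the probability mass the discriminator
    # places on the true protected class z.
    return softmax(logits)[z]
```

Maximizing `entropy_reward` pushes the discriminator toward a uniform distribution, while minimizing `delta_loss` directly penalizes confidence on the true protected class.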
+
+Target classifier: The Target classifier predicts the target label $y_{i}$ from $u_{i}$ by optimizing the cross entropy loss: $\mathcal{L}_c(c(u_i),y_i)$ .
+
+The Scrubber, Target classifier, and Encoder parameters are updated simultaneously to minimize
+
+the following loss:
+
+$$
+\begin{array}{r l} \mathcal {L} _ {s} (e _ {i}, y _ {i}) = \mathcal {L} _ {c} (c (u _ {i}), y _ {i}) - \lambda_ {1} H (d (u _ {i})) & \\ + \lambda_ {2} \delta (d (u _ {i})) & \end{array} \tag {4}
+$$
+
+where $\lambda_{1}$ and $\lambda_{2}$ are positive hyperparameters.
+
+Bias discriminator: The Bias discriminator, which predicts the protected attribute $z_{i}$ , is trained to reduce the cross-entropy loss for predicting $z_{i}$ denoted as $\mathcal{L}_d(d(u_i),z_i)$ . The discriminator output is $d(u_{i})\in \mathbb{R}^{K}$ , where $K$ is the number of protected attribute classes.
+
+Training: The Bias discriminator and Scrubber (along with Target classifier and Encoder) are trained in an iterative manner as shown in Algorithm 1. First, the Bias discriminator is updated using gradients from the loss in Equation 1. Then, the Encoder, Scrubber and Target classifier are updated simultaneously using the gradients shown in Equation 2.
+
+Probing Network: Elazar and Goldberg (2018) showed that in an adversarial setup, even when the discriminator achieves only random performance for predicting $z$, it is still possible to retrieve $z$ using a separately trained classifier. Therefore, to evaluate the amount of information related to $y$ and $z$ present in the representations $u$, we use a probing network $q$. After ADS is trained, we train $q$ on the representations $h(x)$ and $s(h(x))$ to predict $y$ and $z$ ($q$ is trained to predict $y$ and $z$ separately). We consider a representation to leak information if $z$ can be predicted from it with above-random performance: if the prediction performance of $q$ for $z$ is significantly above the random baseline, information about the protected attribute has leaked and the attribute is not successfully guarded.
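The leakage criterion can be made concrete with a small helper; a sketch, with the probe's score supplied externally and the random baseline taken as the majority-class accuracy:

```python
def majority_baseline(labels):
    # Accuracy of always predicting the most frequent class.
    counts = {}
    for z in labels:
        counts[z] = counts.get(z, 0) + 1
    return max(counts.values()) / len(labels)

def leaks_attribute(probe_score, labels, margin=0.0):
    # A representation is considered to leak z if the probe beats the
    # majority (random) baseline by more than `margin`.
    return probe_score > majority_baseline(labels) + margin
```
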
+
+# 4 Theoretical Analysis
+
+Proposition 1. Minimizing $\mathcal{L}_s$ is equivalent to increasing Bias discriminator loss $\mathcal{L}_d$ .
+
+Proof: The entropy and $\delta$-loss components of $\mathcal{L}_s$ try to increase the Bias discriminator loss. The discriminator cross-entropy loss $\mathcal{L}_d$ can be written as:
+
+$$
+\begin{aligned}
+\mathcal{L}_d(v_i, o_i) &= H(v_i, o_i) \\
+&= D_{KL}(v_i \parallel o_i) + H(v_i)
+\end{aligned} \tag{5}
+$$
+
+where $o_i = d(u_i)$ is the Bias discriminator output probability distribution and $v_i \in \mathbb{R}^K$ is a one-hot target distribution with $v_i^{(k)} = 1$ if $z_i = k$ and 0 otherwise. As $H(o_i)$ increases (Equation 4), $D_{KL}(v_i \parallel o_i)$ also increases (since $v_i$ is a one-hot vector), thereby increasing $\mathcal{L}_d(v_i, o_i)$ in Equation 5. Therefore, $\mathcal{L}_d$ increases as we minimize the Scrubber loss component $-H(o_i)$.
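As a quick numeric check of this step (not part of the original proof): with a one-hot target, the cross-entropy reduces to $-\log o_i^{(k)}$, so flattening the discriminator output, i.e. raising its entropy, necessarily raises $\mathcal{L}_d$:

```python
import math

def ce_onehot(o, k):
    # Cross-entropy H(v, o) with a one-hot target v at class k reduces
    # to -log o[k], i.e. D_KL(v || o), since H(v) = 0 for one-hot v.
    return -math.log(o[k])

# Raising the entropy of o flattens it, moving probability mass off
# the true class, so the discriminator loss grows:
sharp = [0.90, 0.05, 0.05]  # low-entropy discriminator output
flat = [0.34, 0.33, 0.33]   # high-entropy discriminator output
```
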
+
+The same holds true for the $\delta$-loss component: $\delta(o_i)$ reduces the probability assigned to the true output class, which increases the cross-entropy loss $\mathcal{L}_d$ (a detailed proof is provided in Appendix A.2 due to space constraints). Minimizing the entropy and $\delta$-loss components of the Scrubber loss $\mathcal{L}_s$ therefore increases $\mathcal{L}_d$ for a fixed Bias discriminator. Assuming our framework converges to $(\theta_s^*, \theta_h^*, \theta_d^*)$ using gradient updates from $\mathcal{L}_s$, we have:
+
+$$
+\mathcal {L} _ {d} \left(\theta_ {s} ^ {*}, \theta_ {h} ^ {*}, \theta_ {d} ^ {*}\right) \geq \mathcal {L} _ {d} \left(\theta_ {s}, \theta_ {h}, \theta_ {d} ^ {*}\right) \tag {6}
+$$
+
+where $(\theta_s,\theta_h)$ can be any Scrubber and Encoder parameter setting.
+
+Proposition 2. Let the discriminator loss $\mathcal{L}_d$ be convex in $\theta_d$ , and continuous differentiable for all $\theta_d$ . Let us assume the following:
+
+(a) $\theta_h^{(0)}$ and $\theta_s^{(0)}$ are Encoder and Scrubber parameters for which the Scrubber output representation $s(h(x))$ does not carry any information about $z$ (one trivial case is $s(h(x)) = \vec{0}$, which holds if $\theta_s = \vec{0}$ or $\theta_h = \vec{0}$).
+(b) $\theta_d^{(0)}$ minimizes $\mathcal{L}_d$ when $s(h(x))$ does not have any information about $z$ (this is achieved when $d(\cdot)$ always predicts the majority baseline for $z$ ). $\forall (\theta_s, \theta_h)$ , the following holds true:
+
+$$
+\mathcal {L} _ {d} (\theta_ {s}, \theta_ {h}, \theta_ {d} ^ {(0)}) = \mathcal {L} _ {d} (\theta_ {s} ^ {(0)}, \theta_ {h} ^ {(0)}, \theta_ {d} ^ {(0)})
+$$
+
+(c) the adversarial framework converges with parameters $\theta_s^*$ , $\theta_h^*$ and $\theta_d^*$ .
+
+Then, $\mathcal{L}_d(\theta_s^*,\theta_h^*,\theta_d^*) = \mathcal{L}_d(\theta_s^*,\theta_h^*,\theta_d^{(0)})$, which implies that the Bias discriminator loss does not benefit from updates of $\theta_{s}$ and $\theta_{h}$.
+
+Proof: As the Bias discriminator converges to $\theta_d^*$ , we have:
+
+$$
+\mathcal {L} _ {d} \left(\theta_ {s} ^ {*}, \theta_ {h} ^ {*}, \theta_ {d} ^ {*}\right) \leq \mathcal {L} _ {d} \left(\theta_ {s} ^ {*}, \theta_ {h} ^ {*}, \theta_ {d} ^ {(0)}\right) \tag {7}
+$$
+
+$\theta_h$ and $\theta_s$ are updated using gradients from $\mathcal{L}_s$ (Equation 4). Since the Encoder and the Scrubber parameters converge to $\theta_h^*$ and $\theta_s^*$ respectively, from Proposition 1 (Equation 6) we have:
+
+$$
+\mathcal {L} _ {d} \left(\theta_ {s} ^ {*}, \theta_ {h} ^ {*}, \theta_ {d} ^ {*}\right) \geq \mathcal {L} _ {d} \left(\theta_ {s} ^ {(0)}, \theta_ {h} ^ {(0)}, \theta_ {d} ^ {*}\right) \tag {8}
+$$
+
+We can show that:
+
+$$
+\begin{aligned}
+\mathcal{L}_d(\theta_s^*, \theta_h^*, \theta_d^{(0)})
+&\geq \mathcal{L}_d(\theta_s^*, \theta_h^*, \theta_d^*) && \text{(Equation 7)} \\
+&\geq \mathcal{L}_d(\theta_s^{(0)}, \theta_h^{(0)}, \theta_d^*) && \text{(Equation 8)} \\
+&\geq \mathcal{L}_d(\theta_s^{(0)}, \theta_h^{(0)}, \theta_d^{(0)}) && \text{(Assumption 2b)} \\
+&= \mathcal{L}_d(\theta_s^*, \theta_h^*, \theta_d^{(0)}) && \text{(Assumption 2b)}
+\end{aligned} \tag{9}
+$$
+
+Therefore, $\mathcal{L}_d(\theta_s^*,\theta_h^*,\theta_d^*) = \mathcal{L}_d(\theta_s^*,\theta_h^*,\theta_d^{(0)})$
+
+Proposition 3. Let us assume that the Bias discriminator $d(\cdot)$ is strong enough to achieve optimal accuracy of predicting $z$ from $s(h(x))$ and assumptions in Proposition 2 hold true. Then, Encoder and Scrubber converge to $(\theta_h^*, \theta_s^*)$ without leaking information about the protected attribute $z$ .
+
+Proof: An optimal Bias discriminator $d(\cdot)$ minimizes the prediction entropy, thereby increasing the entropy and $\delta$ -loss. Given $(\theta_h^{(0)},\theta_s^{(0)})$ , the Scrubber loss $\mathcal{L}_s$ is maximized for an optimal $\theta_d^{(0)}$ (From Proposition 1, $\mathcal{L}_s(\theta_s^{(0)},\theta_h^{(0)},\theta_d^{(0)}) \geq \mathcal{L}_s(\theta_s^{(0)},\theta_h^{(0)},\theta_d)$ , since $\mathcal{L}_d$ is decreasing with $\delta(o_i)$ and $-H(o_i)$ ). Then, for any other discriminator $\theta_d^*$ we have:
+
+$$
+\mathcal {L} _ {s} \left(\theta_ {s} ^ {(0)}, \theta_ {h} ^ {(0)}, \theta_ {d} ^ {*}\right) \leq \mathcal {L} _ {s} \left(\theta_ {s} ^ {(0)}, \theta_ {h} ^ {(0)}, \theta_ {d} ^ {(0)}\right) \tag {10}
+$$
+
+Following Assumption 2b, where $\theta_d^{(0)}$ is the optimal Bias discriminator, we can show that:
+
+$$
+\begin{aligned}
+\mathcal{L}_s(\theta_s^{(0)}, \theta_h^{(0)}, \theta_d^{(0)})
+&\geq \mathcal{L}_s(\theta_s^{(0)}, \theta_h^{(0)}, \theta_d^*) && \text{(Equation 10)} \\
+&\geq \mathcal{L}_s(\theta_s^*, \theta_h^*, \theta_d^*) && (\mathcal{L}_s \text{ converges})
+\end{aligned} \tag{11}
+$$
+
+Therefore, $\mathcal{L}_s(\theta_s^*,\theta_h^*,\theta_d^*)\leq \mathcal{L}_s(\theta_s^{(0)},\theta_h^{(0)},\theta_d^{(0)})$
+
+From $(\theta_s^{(0)},\theta_h^{(0)},\theta_d^{(0)})$ , our framework converges to $(\theta_s^*,\theta_h^*,\theta_d^*)$ as the Scrubber loss $\mathcal{L}_s$ decreases
+
+| DATASET | Train | Dev | Test |
| Funpedia | 24K | 2.9K | 2.9K |
| Wizard | 3.5K | 0.1K | 0.1K |
| ConvAI2 | 69K | 4.5K | 4.5K |
| LIGHT | 38K | 2.2K | 4.5K |
| OpenSub | 210K | 25K | 29K |
| DIAL | 166K | - | 151K |
| PAN16 (gender) | 160K | - | 9K |
| PAN16 (age) | 160K | - | 10K |
| Biographies | 257K | 40K | 99K |
+
+(Equation 11). Then, from Proposition 2 we have
+
+$$
+\mathcal {L} _ {d} (\theta_ {s} ^ {*}, \theta_ {h} ^ {*}, \theta_ {d} ^ {*}) \geq \mathcal {L} _ {d} (\theta_ {s} ^ {(0)}, \theta_ {h} ^ {(0)}, \theta_ {d} ^ {(0)})
+$$
+
+As $\mathcal{L}_d$ does not decrease, and $d(\cdot)$ is optimal it shows that no additional information about $z$ is revealed which the Bias discriminator can leverage to reduce $\mathcal{L}_d$ . This shows that starting from $(\theta_s^{(0)},\theta_h^{(0)},\theta_d^{(0)})$ where assumptions in Proposition 2 hold, our framework converges to $(\theta_s^*,\theta_h^*,\theta_d^*)$ without revealing information about $z$ .
+
+# 5 Experiments
+
+In this section, we describe our experimental setup and evaluate ADS on several benchmark datasets.
+
+# 5.1 Dataset
+
+We evaluate ADS on 5 dialogue datasets, 2 Twitter-based datasets and a Biographies dataset.
+
+(a) Multi-dimensional bias in dialogue systems: We evaluate ADS on 5 dialogue datasets: Funpedia, ConvAI2, Wizard, LIGHT and OpenSub, introduced by Dinan et al. (2020). These datasets are annotated with multi-dimensional gender labels: the gender of the person being spoken about, the gender of the person being spoken to, and the gender of the speaker. We consider the gender of the person being spoken about as our protected attribute. The target task in our setup is sentiment classification. To obtain the target label, we label all instances with the rule-based sentiment classifier VADER (Hutto and Gilbert, 2014) into three classes: positive, negative and neutral. The dialogue datasets Funpedia, Wizard, ConvAI2, LIGHT and OpenSub were downloaded from the "md_gender" dataset in the HuggingFace library, and we use the data splits provided there.
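The three-class labeling can be sketched as a thresholding of VADER's compound score. The $\pm 0.05$ cut-offs below follow the thresholds recommended by the VADER authors; this is an assumption, as the paper does not state which cut-offs were used:

```python
def vader_label(compound):
    """Map VADER's compound score in [-1, 1] to the three sentiment
    classes used as target labels. The +/-0.05 cut-offs are the ones
    recommended by the VADER authors (assumed here, not stated in
    the paper)."""
    if compound >= 0.05:
        return "positive"
    if compound <= -0.05:
        return "negative"
    return "neutral"
```
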
+
+Table 1: Dataset statistics.
+
+| DATASET | z | y | Epoch | λ1 | λ2 |
| Funpedia | Gender (3) | Sentiment (3) | 2 | 1 | 1 |
| Wizard | Gender (2) | Sentiment (3) | 3 | 1 | 0 |
| ConvAI2 | Gender (2) | Sentiment (3) | 1 | 1 | 0 |
| LIGHT | Gender (2) | Sentiment (3) | 2 | 1 | 0 |
| OpenSub | Gender (2) | Sentiment (3) | 2 | 1 | 0 |
| DIAL | Race (2) | Sentiment (2) | 8 | 10 | 0 |
| PAN16 | Gender (2) | Mention (2) | 5 | 10 | 0 |
| PAN16 | Age (2) | Mention (2) | 3 | 10 | 0 |
| Biographies | Gender (2) | Occupation (28) | 2 | 10 | 0 |
+
+Table 2: Hyperparameter settings. Each entry for $z$/$y$ is shown in the format "Attribute Name (c)", where $c$ is the number of classes for that attribute.
+
+(b) Tweet classification: We experiment on two Twitter datasets. First, we consider the DIAL dataset (Blodgett et al., 2016), where each tweet is annotated with the "race" of the author, which is our protected attribute; the target task is sentiment classification. We consider two race categories: non-Hispanic blacks and whites. Second, we consider the PAN16 dataset (Rangel et al., 2016), where each tweet is annotated with the author's age and gender, both of which are protected attributes; the target task is mention detection. We use the implementation$^3$ of Elazar and Goldberg (2018) to annotate both datasets.
+(c) Biography classification: We evaluate ADS on the Biographies dataset (De-Arteaga et al., 2019). The target task is classification of biographies into 28 different profession categories, and the protected attribute is the gender of the person. The dataset was downloaded and processed from this open-sourced project.$^4$ We use the same train-dev-test split of 65:10:25 as the authors.
+
+All datasets used in our experiments are balanced. The dataset statistics are reported in Table 1.
+
+# 5.2 Implementation details
+
+We use a 2-layer feed-forward neural network with ReLU non-linearity as our Scrubber network $s$. We use BERT-base (Devlin et al., 2019) as our Encoder $h$. The Bias discriminator $d$ and Target classifier $c$ take the pooled BERT [CLS] representation followed by a single-layer neural network. All models were trained using the AdamW optimizer with a learning rate of $2 \times 10^{-5}$. Hyperparameter details for the different datasets are given in Table 2. The $z$ and $y$ columns of the table report the protected attribute and the target task for each dataset; for each task we also report the number of output classes in parentheses (e.g. Sentiment (3)). The implementation of this project is publicly available here: https://github.com/brcsomnath/AdS.
+
+Figure 2: Evaluation setup. We evaluate the performance of the probing network on 4 different representations: (a) Pre-trained $h(x)$, obtained using the pre-trained Encoder; (b) w/o adversary $h(x)$, where the Encoder $h$ was fine-tuned on the target task; (c) ADS $h(x)$, the Encoder embeddings from ADS; and (d) ADS $s(h(x))$, the Scrubber embeddings from ADS.
+
+# 5.3 Evaluation Framework
+
+In our experiments, we compare representations obtained from 4 different settings as shown in Figure 2. Figure 2(a), (b) and (c) are our baselines. In Figure 2(a), we retrieve $h(x)$ from pre-trained BERT model. In Figure 2(b), we retrieve $h(x)$ from BERT fine-tuned on the target task. In Figure 2(c), Encoder output $h(x)$ from ADS is evaluated. In Figure 2(d), Scrubber output, $s(h(x))$ is evaluated. This represents our final setup ADS - $s(h(x))$ .
+
+# 5.4 Metrics
+
+We report the F1-score (F1) of the probing network for each evaluation. However, previous work has shown that probing accuracy is not a reliable metric for evaluating the degree of information about an attribute encoded in representations (Hewitt and Liang, 2019). Therefore, we also report the Minimum Description Length (MDL) (Voita and Titov, 2020) of labels given representations. MDL captures the amount of effort required by a probing network to achieve a certain accuracy, providing a finer-grained evaluation that can even differentiate between probing models with comparable accuracies. We compute the online code (Rissanen, 1984) for MDL. In the online setting, blocks of labels are encoded by a probabilistic model iteratively trained on incremental blocks of data (further details about MDL are provided in Appendix A.1). We compute MDL using sklearn's MLPClassifier at timesteps corresponding to $0.1\%$, $0.2\%$, $0.4\%$, $0.8\%$, $1.6\%$, $3.2\%$, $6.25\%$, $12.5\%$, $25\%$, $50\%$ and $100\%$ of each dataset, as suggested by Voita and Titov (2020). A higher MDL signifies that more effort is required to achieve the probing performance. Hence, we expect the debiased representations to have higher MDL for predicting $z$ and lower MDL for predicting $y$.
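The online codelength can be sketched as follows; this is a simplified stand-in for the Voita and Titov (2020) procedure, assuming the per-block probabilities of the true labels have already been computed by the incrementally trained probe:

```python
import math

def online_codelength(first_block_size, num_classes, later_block_probs):
    """Online (prequential) codelength sketch of the MDL metric: the
    first data block is transmitted with a uniform code over the label
    classes; each later block is encoded with -log2 p(true label)
    under a model trained on all preceding blocks.
    `later_block_probs[i]` holds that model's probabilities of the
    true labels in block i."""
    total = first_block_size * math.log2(num_classes)
    for probs in later_block_probs:
        total += sum(-math.log2(p) for p in probs)
    return total
```

A less confident probe (lower probabilities on the true labels) yields a longer code, i.e. a higher MDL.
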
+
+# 6 Results
+
+The evaluation results for all datasets are reported in Table 3. For all datasets, we report performances in 4 settings described in Section 5.3.
+
+Dialogue and Biographies dataset: First, we focus on the results on the dialogue and biographies datasets reported in Table 3 (first two rows). We observe the following: (i) for pre-trained $h(x)$ , MDL of predicting $z$ is lower than $y$ for these datasets. This means that information regarding $z$ is better encoded in the pre-trained $h(x)$ , than the target label $y$ . (ii) In "w/o adversary $h(x)$ " setup, the Encoder is fine-tuned on the target task (without debiasing), upon which MDL for $y$ reduces significantly (lowest MDL achieved in this setting for all datasets) accompanied by a rise in MDL for $z$ . However, it is still possible to predict $z$ with a
+
+| Dataset → Setup ↓ | FUNPEDIA | WIZARD | CONVAI2 |
| Gender (z) | Sentiment (y) | Gender (z) | Sentiment (y) | Gender (z) | Sentiment (y) |
| F1 ↓ | MDL ↑ | F1 ↑ | MDL ↓ | F1 ↓ | MDL ↑ | F1 ↑ | MDL ↓ | F1 ↓ | MDL ↑ | F1 ↑ | MDL ↓ |
| Random | 33.3 | - | 33.3 | - | 50.0 | - | 33.3 | - | 50.0 | - | 33.3 | - |
| Pre-trained h(x) | 56.8 | 24.7 | 62.3 | 46.3 | 78.6 | 3.8 | 46.5 | 7.6 | 80.3 | 100.6 | 62.7 | 133.7 |
| w/o adversary h(x) | 51.0 | 30.9 | 92.8 | 2.8 | 67.4 | 5.2 | 85.1 | 0.2 | 72.8 | 109.0 | 95.6 | 6.5 |
| ADS - h(x) | 44.1 | 35.4 | 90.3 | 10.3 | 63.4 | 6.5 | 88.1 | 0.3 | 58.3 | 134.0 | 95.3 | 10.9 |
| ADS - s(h(x)) | 29.8 | 41.4 | 90.2 | 10.8 | 54.7 | 6.9 | 93.2 | 0.2 | 56.0 | 133.5 | 95.3 | 11.0 |
| Dataset → Setup ↓ | LIGHT | OPENSUB | BIOGRAPHIES |
| Gender (z) | Sentiment (y) | Gender (z) | Sentiment (y) | Gender (z) | Occupation (y) |
| F1 ↓ | MDL ↑ | F1 ↑ | MDL ↓ | F1 ↓ | MDL ↑ | F1 ↑ | MDL ↓ | F1 ↓ | MDL ↑ | F1 ↑ | MDL ↓ |
| Random | 50.0 | - | 33.3 | - | 50.0 | - | 33.3 | - | 50.0 | - | 3.6 | - |
| Pre-trained h(x) | 78.6 | 47.1 | 60.5 | 88.7 | 72.3 | 192.4 | 63.9 | 426.2 | 99.2 | 27.6 | 74.3 | 499.9 |
| w/o adversary h(x) | 75.3 | 55.9 | 91.4 | 8.2 | 70.2 | 311.9 | 97.5 | 25.1 | 62.3 | 448.9 | 99.9 | 2.2 |
| ADS - h(x) | 60.4 | 73.8 | 92.2 | 16.7 | 40.7 | 371.9 | 96.9 | 37.4 | 62.1 | 444.7 | 99.9 | 3.0 |
| ADS - s(h(x)) | 52.8 | 74.7 | 92.3 | 16.4 | 40.7 | 373.7 | 96.9 | 37.1 | 57.1 | 449.5 | 99.9 | 3.3 |
| Dataset → Setup ↓ | DIAL | PAN16 |
| Race (z) | Sentiment (y) | Gender (z) | Mention (y) | Age (z) | Mention (y) |
| F1 ↓ | MDL ↑ | F1 ↑ | MDL ↓ | F1 ↓ | MDL ↑ | F1 ↑ | MDL ↓ | F1 ↓ | MDL ↑ | F1 ↑ | MDL ↓ |
| Random | 50.0 | - | 50.0 | - | 50.0 | - | 50.0 | - | 50.0 | - | 50.0 | - |
| Pre-trained h(x) | 74.3 | 242.6 | 63.9 | 300.7 | 60.9 | 300.5 | 72.3 | 259.7 | 57.7 | 302.0 | 72.8 | 262.6 |
| w/o adversary h(x) | 81.7 | 176.2 | 76.9 | 99.0 | 68.6 | 267.6 | 89.7 | 4.0 | 59.0 | 295.4 | 89.3 | 4.8 |
| ADS - h(x) | 69.7 | 273.0 | 72.4 | 51.0 | 62.3 | 304.2 | 89.7 | 7.1 | 62.4 | 302.8 | 89.3 | 5.3 |
| ADS - s(h(x)) | 58.2 | 290.6 | 72.9 | 56.9 | 48.6 | 313.9 | 89.7 | 7.6 | 50.5 | 315.1 | 89.2 | 6.0 |
+
+Table 3: Evaluation results for all datasets. Expected trends for a metric are shown in $\uparrow$ - higher scores and $\downarrow$ - lower scores. Statistically significant best probing performances for $z$ (lowest F1/highest MDL) and $y$ (highest F1/lowest MDL) are in bold. $^{6}$ ADS - $s(h(x))$ performs the best in guarding information leak of $z$ for all datasets.
+
+| SETUP | DIAL | PAN16 |
| Race (z) | Gender (z) | Age (z) |
| Δz | Accy | Δz | Accy | Δz | Accy |
| w/o adversary LSTM | 14.5 | 67.4 | 10.1 | 77.5 | 9.4 | 74.7 |
| Elazar and Goldberg (2018) | 4.8 | 63.8 | 4.1 | 74.3 | 5.7 | 70.1 |
| w/o adversary BERT | 31.2 | 76.4 | 18.5 | 89.7 | 10.1 | 89.3 |
| ADS - s(h(x)) | 8.2 | 72.9 | 0.8 | 89.8 | 4.7 | 89.2 |
+
+Table 4: Comparing ADS with existing baselines. The best and second-best performances are in bold and underlined respectively. ADS - $s(h(x))$ achieves the best performance on both settings in the PAN16 dataset and reduces $\Delta_z$ more than the baseline on DIAL.
+
+F1-score significantly above the random baseline. (iii) The "ADS - $h(x)$" setup achieves a similar F1-score for predicting $y$, but still has an F1-score for $z$ significantly above the random baseline. (iv) "ADS - $s(h(x))$" performs the best in terms of guarding the protected attribute $z$ (lowest prediction F1-score and highest MDL), achieving a near-random F1-score across all datasets. It is also able to maintain performance on the target task, as we observe only a slight drop compared to the fine-tuning performance ("w/o adversary $h(x)$") for predicting $y$.
+
+DIAL & PAN16: Next, we focus on the Twitter-based datasets DIAL & PAN16, where the target task is sentiment classification/mention detection and the protected attribute is one of the demographic associations (race/gender/age) of the author. The evaluation results are reported in Table 3 (third row). For these datasets, we observe that (i) "w/o adversary $h(x)$ " representations have higher F1 and lower MDL for predicting $z$ , compared to "Pre-trained $h(x)$ ". This shows that fine-tuning on the target task $y$ encodes information about the protected attribute $z$ . (ii) "ADS - $h(x)$ " performs similar to "w/o adversary $h(x)$ " representations on the target task but still leaks significant information about $z$ , unlike the previous datasets. (iii) "ADS - $s(h(x))$ " achieves the best performance in terms of guarding the protected variable $z$ (achieves almost random performance in PAN16 dataset), without much performance drop in the target task.
+
+Comparison with Prior Work: We report two metrics following Elazar and Goldberg (2018): (i) $\Delta_z$ - which denotes the performance above the random baseline for $z$ (50% for both PAN16 and DIAL) (ii) $\mathrm{Acc}_y$ - is the probing accuracy on the
+
+| SETUP | PAN16 |
| Age (z1) | Gender (z2) | Mention (y) |
| F1↓ | MDL↑ | F1↓ | MDL↑ | F1↑ | MDL↓ |
| Random | 50.0 | - | 50.0 | - | 50.0 | - |
| w/o adversary h(x) | 66.5 | 196.4 | 69.3 | 192.0 | 88.6 | 6.8 |
| ADS s(h(x)) - (age) | 61.5 | 224.2 | 62.6 | 218.7 | 88.7 | 14.3 |
| ADS s(h(x)) - (gender) | 60.6 | 222.6 | 64.2 | 216.8 | 88.6 | 12.9 |
| ADS s(h(x)) - (both) | 53.8 | 231.5 | 54.4 | 230.9 | 88.6 | 5.5 |
+
+target task. Our framework cannot be directly compared with Elazar and Goldberg (2018) as they used an LSTM Encoder; therefore, we also report the baseline Encoder performances. In Table 4, we observe that it is possible to retrieve $z$ and $y$ from "w/o adversary BERT" with higher performance compared to "w/o adversary LSTM". This indicates that BERT encodes more information pertaining to both $y$ and $z$ than the LSTM. On the DIAL dataset, ADS reduces $\Delta_z$ by an absolute margin of $25\%$ compared to $9.7\%$ for Elazar and Goldberg (2018), while the absolute drop in $\mathrm{Acc}_y$ is $3.5\%$ compared to their $3.6\%$. On the PAN16 dataset, ADS achieves the best $\Delta_z$ and $\mathrm{Acc}_y$ performance for both setups, with protected attributes age and gender respectively. ADS - $s(h(x))$ also achieves performance comparable to the "w/o adversary BERT" setup, which is fine-tuned on the target task. Therefore, ADS successfully scrubs information about $z$ from the representations of a stronger encoder than that of Elazar and Goldberg (2018).
+
+# 6.1 Scrubbing multiple protected attributes
+
+In this experiment, we show that using ADS it is possible to guard information about multiple protected attributes. $\mathcal{L}_s$ in this setup is defined as:
+
+$$
+\mathcal{L}_s(e_i, y_i) = \mathcal{L}_c(c(u_i), y_i) - \lambda_1 \sum_{n=1}^{N} H(d_n(u_i)) + \lambda_2 \sum_{n=1}^{N} \delta(d_n(u_i))
+$$
+
+where $N$ is the number of protected attributes and $d_{n}(\cdot)$ is the Bias discriminator corresponding to the $n^{th}$ protected attribute $z_{n}$ .
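A sketch of the multi-attribute objective, mirroring the single-attribute version but summing the entropy and $\delta$ terms over the $N$ discriminators (function names are illustrative, and discriminator outputs are assumed to be probability distributions):

```python
import math

def entropy(probs):
    # Shannon entropy H(p) in nats.
    return -sum(p * math.log(p) for p in probs if p > 0)

def multi_attr_scrubber_loss(target_ce, disc_probs_list, z_list,
                             lam1=1.0, lam2=0.0):
    # Sum the entropy and delta terms over the N Bias discriminators,
    # one per protected attribute; `target_ce` is the precomputed
    # target-classifier cross-entropy L_c.
    loss = target_ce
    for probs, z in zip(disc_probs_list, z_list):
        loss -= lam1 * entropy(probs)      # maximize each entropy
        loss += lam2 * probs[z]            # delta term on the true class
    return loss
```
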
+
+We evaluate on PAN16 dataset considering two protected attributes $z_{1}$ (age) and $z_{2}$ (gender). The target task is mention prediction. We consider the
+
+Table 5: Evaluation results of protecting multiple attributes using ADS. Statistically significant best performances are in bold. Expected trends for a metric are shown in $\uparrow$ - higher scores and $\downarrow$ - lower scores. "ADS $s(h(x))$ - (both)" achieves the best performance. $^7$
+
+| Scrubber loss | Gender (z) | Sentiment (y) |
| F1↓ | P↓ | R↓ | F1↑ | P↑ | R↑ |
| Random | 33.3 | 33.3 | 33.3 | 33.3 | 33.3 | 33.3 |
| δ-loss (w/o entropy) | 49.5 | 47.7 | 53.9 | 91.2 | 91.2 | 91.2 |
| Entropy (w/o δ-loss) | 35.7 | 36.4 | 53.2 | 91.5 | 91.6 | 91.5 |
| Entropy + δ-loss | 29.8 | 33.3 | 27.0 | 90.2 | 90.5 | 89.9 |
+
+Table 6: Ablation experiments on Funpedia using F1-score (F1), Precision (P) and Recall (R). Expected trends for a metric are shown in $\uparrow$ - higher scores and $\downarrow$ - lower scores. ADS with both loss components performs the best in guarding $z$ .
+
+subset of PAN16 that contains samples with both gender and age labels. This subset has 120K training instances and 30K test instances. Evaluation results are reported in Table 5. Similar to previous experiments, we observe that "w/o adversary $h(x)$" (fine-tuned BERT) leaks information about both protected attributes, age and gender. We evaluate the information leak when "ADS $s(h(x))$" is retrieved from a setup with a single Bias discriminator (age/gender). We observe a significant gain in MDL for the corresponding $z_{n}$ in both cases, indicating that the respective $z_{n}$ is being protected. Finally, we train ADS using two Bias discriminators, and the "ADS - $s(h(x))$ (both)" representations achieve the best performance in guarding $z_{1}$ and $z_{2}$ while performing well on the target task. This shows that the ADS framework is scalable and can be leveraged to guard multiple protected attributes simultaneously.
+
+# 6.2 Efficacy of different losses
+
+We experiment with different configurations of the Scrubber loss $\mathcal{L}_s$ to determine the efficacy of its individual components. We report results on the Funpedia dataset in Table 6 (with $\lambda_1 = \lambda_2 = 1$). We observe that most leakage of $z$ (increase in prediction F1-score) occurs when the entropy loss is removed. Removing the $\delta$-loss also results in a slight increase in leakage, accompanied by a gain in performance for predicting $y$. This shows that both losses are important for guarding $z$.
+
+Empirically, we found that the $\delta$-loss is not suitable for binary protected attributes. During training, when the Scrubber is encouraged to learn representations that contain no information about $z$, it instead learns to encode representations such that the Bias discriminator predicts the opposite $z$ class. Hence, the information about $z$ is still present and is retrievable using a probing network $q$. For this reason, we use the $\delta$-loss only for Funpedia ($\lambda_2$ values in Table 2), where we considered 3 gender label classes.
+
+Figure 3: UMAP projection of Scrubber output representations $s(h(x))$ from the Biographies corpus with profession "professor". Blue and red labels indicate female and male biographies respectively. (a) Pre-trained BERT representations. (b) BERT representations after training in ADS.
+
+# 6.3 Visualization
+
+We visualize the UMAP (McInnes et al., 2018) projection of the Encoder output representations $h(x)$ in Figure 3. Blue and red labels indicate female and male biographies respectively. Figures 3a and 3b show representations before and after ADS training. In Figure 3a, male- and female-labeled instances are clearly separated in space, showing that text representations encode information related to gender. In Figure 3b, after training in our adversarial framework, male- and female-labeled instances are difficult to segregate. This indicates that after training in ADS, it is difficult to distinguish biography representations on the basis of gender.
+
+# 7 Conclusion
+
+In this work, we proposed Adversarial Scrubber (ADS) to remove demographic information from contextual representations. Theoretical analysis showed that under certain conditions, our framework converges without leaking information about protected attributes. We extend previous evaluation metrics to evaluate fairness of representations by using MDL. Experimental evaluations on 8 datasets show that ADS is better at protecting demographic attributes than baselines. We show that our approach is scalable and can be used to remove multiple protected attributes simultaneously. Future work can explore leveraging ADS towards learning fair representations in other NLP tasks.
+
+# 8 Acknowledgement
+
+This work was supported in part by grants NIH 1R01AA02687901A1 and NSF IIS2133595.
+
+# Ethical considerations
+
+We propose ADS, an adversarial framework to prevent text classification modules from making biased decisions. ADS is intended to be used in scenarios where the user is already aware of the input attributes they want to protect, and it can only be trained on data where protected attributes are annotated. It is possible that representations retrieved from ADS contain sensitive information that was not defined among the protected attributes. Even in such a scenario, ADS will not reveal more information than is already available in the dataset. One potential way of misusing ADS would be to define relevant features for a task (e.g. experience for a job application) as a protected attribute; the classification system may then be forced to rely on sensitive demographic information for its predictions. In such cases, it is possible to flag systems by evaluating the difference in True Positive Rate (TPR) when the protected attribute is changed (the $\mathrm{GAP}_{z,y}^{\mathrm{TPR}}$ metric (De-Arteaga et al., 2019)). All experiments were performed on publicly available data, where the identity of the author was anonymous. We did not perform any additional data annotation.
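The flagging metric mentioned above can be sketched as a TPR difference between the two protected groups; a minimal illustration (helper names are ours, not from De-Arteaga et al. (2019)):

```python
def tpr(preds, golds, positive):
    # True-positive rate for one class: fraction of gold positives
    # that were predicted as positive.
    hits = sum(1 for p, g in zip(preds, golds)
               if g == positive and p == positive)
    total = sum(1 for g in golds if g == positive)
    return hits / total if total else 0.0

def tpr_gap(preds, golds, groups, y_class, g0, g1):
    # GAP^TPR_{z,y}: difference in TPR for target class y between the
    # two protected-attribute groups g0 and g1; a large gap flags a
    # system whose errors depend on the protected attribute.
    sub0 = [(p, g) for p, g, z in zip(preds, golds, groups) if z == g0]
    sub1 = [(p, g) for p, g, z in zip(preds, golds, groups) if z == g1]
    return (tpr([p for p, _ in sub0], [g for _, g in sub0], y_class)
            - tpr([p for p, _ in sub1], [g for _, g in sub1], y_class))
```
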
+
+# References
+
+Kanadpriya Basu, Treena Basu, Ron Buckmire, and Nishu Lal. 2019. Predictive models of student college commitment decisions using machine learning. Data, 4(2):65.
+Su Lin Blodgett, Solon Barocas, Hal Daumé III, and Hanna Wallach. 2020. Language (technology) is power: A critical survey of "bias" in NLP. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5454-5476, Online. Association for Computational Linguistics.
+Su Lin Blodgett, Lisa Green, and Brendan O'Connor. 2016. Demographic dialectal variation in social media: A case study of African-American English. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1119-1130, Austin, Texas. Association for Computational Linguistics.
+Tolga Bolukbasi, Kai-Wei Chang, James Y. Zou, Venkatesh Saligrama, and Adam Tauman Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In
+
+Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pages 4349-4357.
+John D. Burger, John Henderson, George Kim, and Guido Zarrella. 2011. Discriminating gender on Twitter. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1301-1309, Edinburgh, Scotland, UK. Association for Computational Linguistics.
+Aaron Chalfin, Oren Danieli, Andrew Hillis, Zubin Jelveh, Michael Luca, Jens Ludwig, and Sendhil Mullainathan. 2016. Productivity and selection of human capital with machine learning. American Economic Review, 106(5):124-27.
+Maria De-Arteaga, Alexey Romanov, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnamaram Kenthapadi, and Adam Tauman Kalai. 2019. Bias in bios: A case study of semantic representation bias in a high-stakes setting. In proceedings of the Conference on Fairness, Accountability, and Transparency, pages 120-128.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
+Emily Dinan, Angela Fan, Ledell Wu, Jason Weston, Douwe Kiela, and Adina Williams. 2020. Multi-dimensional gender bias classification. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 314-331, Online. Association for Computational Linguistics.
+Lucas Dixon, John Li, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. 2018. Measuring and mitigating unintended bias in text classification. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, pages 67-73.
+Yanai Elazar and Yoav Goldberg. 2018. Adversarial removal of demographic attributes from text data. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 11-21, Brussels, Belgium. Association for Computational Linguistics.
+Yanai Elazar, Shauli Ravfogel, Alon Jacovi, and Yoav Goldberg. 2021. Amnesic probing: Behavioral explanation with amnesic counterfactuals. Transactions of the Association for Computational Linguistics, 9:160-175.
+
+Joel Escudé Font and Marta R Costa-Jussa. 2019. Equalizing gender biases in neural machine translation with word embeddings techniques. arXiv preprint arXiv:1901.03116.
+Omar Ghailan, Hoda MO Mokhtar, and Osman Hegazy. 2016. Improving credit scorecard modeling through applying text analysis. institutions, 7(4).
+Ian J Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial networks. arXiv preprint arXiv:1406.2661.
+John Hewitt and Percy Liang. 2019. Designing and interpreting probes with control tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2733-2743, Hong Kong, China. Association for Computational Linguistics.
+Clayton Hutto and Eric Gilbert. 2014. VADER: A parsimonious rule-based model for sentiment analysis of social media text. In Proceedings of the International AAAI Conference on Web and Social Media, volume 8.
+Eric Jang, Shixiang Gu, and Ben Poole. 2017. Categorical reparameterization with gumbel-softmax. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.
+Svetlana Kiritchenko and Saif Mohammad. 2018. Examining gender and race bias in two hundred sentiment analysis systems. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, pages 43-53, New Orleans, Louisiana. Association for Computational Linguistics.
+Moshe Koppel, Shlomo Argamon, and Anat Rachel Shimoni. 2002. Automatically categorizing written texts by author gender. Literary and Linguistic Computing, 17(4):401-412.
+Yitong Li, Timothy Baldwin, and Trevor Cohn. 2018. Towards robust and privacy-preserving text representations. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 25-30, Melbourne, Australia. Association for Computational Linguistics.
+Haochen Liu, Wei Jin, Hamid Karimi, Zitao Liu, and Jiliang Tang. 2021. The authors matter: Understanding and mitigating implicit bias in deep text classification. arXiv e-prints, pages arXiv-2105.
+Haochen Liu, Wentao Wang, Yiqi Wang, Hui Liu, Zitao Liu, and Jiliang Tang. 2020. Mitigating gender bias for neural dialogue generation with adversarial learning. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 893-903, Online. Association for Computational Linguistics.
+Chandler May, Alex Wang, Shikha Bordia, Samuel R. Bowman, and Rachel Rudinger. 2019. On measuring social biases in sentence encoders. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 622-628, Minneapolis, Minnesota. Association for Computational Linguistics.
+Leland McInnes, John Healy, and James Melville. 2018. Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426.
+Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. 2019. A survey on bias and fairness in machine learning. arXiv preprint arXiv:1908.09635.
+Dong Nguyen, Rilana Gravel, Dolf Trieschnigg, and Theo Meder. 2013. "how old do you think i am?" a study of language and age in twitter. In Proceedings of the International AAAI Conference on Web and Social Media, volume 7.
+Ji Ho Park, Jamin Shin, and Pascale Fung. 2018. Reducing gender bias in abusive language detection. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2799-2804, Brussels, Belgium. Association for Computational Linguistics.
+Francisco Rangel, Paolo Rosso, Ben Verhoeven, Walter Daelemans, Martin Potthast, and Benno Stein. 2016. Overview of the 4th author profiling task at pan 2016: cross-genre evaluations. Working Notes Papers of the CLEF, 2016:750-784.
+Jorma Rissanen. 1984. Universal coding, information, prediction, and estimation. IEEE Transactions on Information theory, 30(4):629-636.
+Rachel Rudinger, Chandler May, and Benjamin Van Durme. 2017. Social bias in elicited natural language inferences. In Proceedings of the First ACL Workshop on Ethics in Natural Language Processing, pages 74-79, Valencia, Spain. Association for Computational Linguistics.
+Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, and Noah A. Smith. 2019. The risk of racial bias in hate speech detection. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1668-1678, Florence, Italy. Association for Computational Linguistics.
+Danielle Saunders and Bill Byrne. 2020. Reducing gender bias in neural machine translation as a domain adaptation problem. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7724-7736, Online. Association for Computational Linguistics.
+Deven Santosh Shah, H. Andrew Schwartz, and Dirk Hovy. 2020. Predictive biases in natural language processing models: A conceptual framework and overview. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5248-5264, Online. Association for Computational Linguistics.
+Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng. 2019. The woman worked as a babysitter: On biases in language generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3407-3412, Hong Kong, China. Association for Computational Linguistics.
+Gabriel Stanovsky, Noah A. Smith, and Luke Zettlemoyer. 2019. Evaluating gender bias in machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1679-1684, Florence, Italy. Association for Computational Linguistics.
+Ben Verhoeven and Walter Daelemans. 2014. CLiPS stylometry investigation (CSI) corpus: A Dutch corpus for the detection of age, gender, personality, sentiment and deception in text. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 3081-3085, Reykjavik, Iceland. European Language Resources Association (ELRA).
+Ben Verhoeven, Walter Daelemans, and Barbara Plank. 2016. TwiSty: A multilingual Twitter stylometry corpus for gender and personality profiling. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 1632-1637, Portorož, Slovenia. European Language Resources Association (ELRA).
+Elena Voita and Ivan Titov. 2020. Information-theoretic probing with minimum description length. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 183-196, Online. Association for Computational Linguistics.
+Edson RD Weren, Anderson U Kauer, Lucas Mizusaki, Viviane P Moreira, J Palazzo M de Oliveira, and Alejandro K Wives. 2014. Examining multiple features for author profiling. Journal of information and data management, 5(3):266-266.
+Richard S. Zemel, Yu Wu, Kevin Swersky, Toniann Pitassi, and Cynthia Dwork. 2013. Learning fair representations. In Proceedings of the 30th International Conference on Machine Learning, ICML 2013, Atlanta, GA, USA, 16-21 June 2013, volume 28 of JMLR Workshop and Conference Proceedings, pages 325-333. JMLR.org.
+
+Brian Hu Zhang, Blake Lemoine, and Margaret Mitchell. 2018. Mitigating unwanted biases with adversarial learning. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, pages 335-340.
+
+# A Appendix
+
+# A.1 Minimum Description Length
+
+Minimum Description Length (MDL) measures the description length of labels given a set of representations. MDL captures the amount of effort required to achieve a given probing accuracy, characterizing either the complexity of the probing model or the amount of data required.
+
+Estimating MDL involves a dataset $\{(x_1, y_1), \ldots, (x_n, y_n)\}$, where the $x_i$'s are data representations from a model and the $y_i$'s are task labels. A sender, Alice, wants to transmit the labels $\{y_1, \ldots, y_n\}$ to a receiver, Bob, where both have access to the data representations $x_i$. To transmit the labels efficiently, Alice needs to encode the $y_i$'s optimally using a probabilistic model $p(y|x)$. The minimum codelength (Shannon-Huffman code) required to transmit the labels losslessly is: $\mathcal{L}_p(y_{1:n}|x_{1:n}) = -\sum_{i=1}^{n} \log_2 p(y_i|x_i)$.
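As a concrete illustration (not from the paper), the Shannon-Huffman codelength can be computed directly from the probabilities a model assigns to the true labels:

```python
import math

def codelength_bits(probs):
    """Shannon-Huffman codelength: -sum_i log2 p(y_i | x_i).

    `probs` holds the probability the model p(y|x) assigns to the
    *true* label y_i of each example.
    """
    return -sum(math.log2(p) for p in probs)

# A model that assigns probability 0.5 to each of 4 true labels
# needs 4 * 1 = 4 bits to transmit them.
print(codelength_bits([0.5, 0.5, 0.5, 0.5]))  # 4.0
```

A perfectly confident model (all probabilities 1) yields a codelength of 0, while less confident models pay more bits per label.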
+
+There are two ways of evaluating MDL for transmitting the labels $y_{1:n}$: (a) the variational code, which transmits $p(y|x)$ explicitly and then uses it to encode the labels; and (b) the online code, which encodes the model and labels without explicitly transmitting the model. In our experiments, we evaluate the online code for estimating MDL. In the online setting, the labels are transmitted in blocks over $n$ timesteps $\{t_0,\dots ,t_n\}$. Alice encodes the first block of labels $y_{1:t_1}$ using a uniform code. Bob learns a model $p_{\theta_1}(y|x)$ using the data $\{(x_i,y_i)\}_{i = 1}^{t_1}$; Alice then transmits the next block of labels $y_{t_1 + 1:t_2}$ using $p_{\theta_1}(y|x)$. In the next iteration, the receiver trains a new model on the larger chunk of data $\{(x_i,y_i)\}_{i = 1}^{t_2}$, which is then used to encode $y_{t_2 + 1:t_3}$. This continues until the whole set of labels $y_{1:n}$ is transmitted. The total codelength required for transmission in this setting is:
+
+$$
+\mathcal{L}_{\text{online}}\left(y_{1:n} \mid x_{1:n}\right) = t_1 \log_2 C - \sum_{i=1}^{n-1} \log_2 p_{\theta_i}\left(y_{t_i+1:t_{i+1}} \mid x_{t_i+1:t_{i+1}}\right) \tag{12}
+$$
+
+where $y_{i}\in \{1,2,\ldots ,C\}$ . The online codelength $\mathcal{L}_{online}(y_{1:n}|x_{1:n})$ is shorter if the probing model is able to perform well using fewer training instances, thereby capturing the effort needed to achieve a given prediction performance.
+
+| DATASET | Time/epoch (min.) |
+| --- | --- |
+| FUNPEDIA | 2 |
+| WIZARD | 1 |
+| CONVAI2 | 14 |
+| LIGHT | 4 |
+| OPENSUB | 15 |
+| BIOGRAPHIES | 260 |
+| DIAL | 16 |
+| PAN16 (gender) | 15 |
+| PAN16 (age) | 15 |
+
+Table 7: Runtime for each dataset.
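The online coding procedure of Equation 12 can be sketched as follows; this is a minimal illustration, with the model-training step abstracted behind a hypothetical `train_and_eval` callback rather than the paper's actual probing setup:

```python
import math

def online_codelength(blocks, n_classes, train_and_eval):
    """Online (prequential) MDL estimate, following Eq. 12.

    blocks: list of label blocks [y_{1:t1}, y_{t1+1:t2}, ...].
    train_and_eval(train_labels, eval_labels) -> list of probabilities
        that a model fit on `train_labels` (and the matching
        representations) assigns to the true labels in `eval_labels`.
    """
    # First block: uniform code, costing t1 * log2(C) bits.
    total = len(blocks[0]) * math.log2(n_classes)
    seen = list(blocks[0])
    for block in blocks[1:]:
        # Encode the next block with the model trained on all data so far.
        probs = train_and_eval(seen, block)
        total += -sum(math.log2(p) for p in probs)
        seen.extend(block)
    return total
```

A strong probe that becomes confident after few examples yields a short online codelength; a probe that never beats chance pays roughly `log2(C)` bits per label throughout.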
+
+# A.2 Theoretical Analysis
+
+Proposition. Minimizing $\delta$ -loss is equivalent to increasing the Bias discriminator loss $\mathcal{L}_d$ .
+
+Proof: The $\delta$ -loss function can be written as:
+
+$$
+\delta\left(o_i\right) = m_i^T \operatorname{softmax}_{\text{gumbel}}\left(o_i\right) = \frac{\exp\left(\frac{\log o_i^k + g_k}{\tau}\right)}{\sum_j \exp\left(\frac{\log o_i^j + g_j}{\tau}\right)} \tag{13}
+$$
+
+where $o_i^j$ is the raw logit assigned to the $j^{th}$ output class, the true output class is $k = z_{i}$, and $g_{j},g_{k}$ are i.i.d. samples from the Gumbel(0,1) distribution. The cross-entropy loss of the bias discriminator $\mathcal{L}_d$ can be written as:
+
+$$
+\mathcal{L}_d = -\log \frac{\exp\left(o_i^k\right)}{\sum_j \exp\left(o_i^j\right)} \tag{14}
+$$
+
+The Gumbel softmax generates a peaked version of the standard softmax distribution, but the individual Gumbel softmax values (Equation 13) are still proportional to the vanilla softmax values (Equation 14): $\delta(o_i) \propto \frac{\exp o_i^k}{\sum_j \exp o_i^j}$ . The bias discriminator loss $\mathcal{L}_d$ can then be written as:
+
+$$
+\mathcal{L}_d \propto -\log \delta\left(o_i\right) \tag{15}
+$$
+
+Therefore, minimizing $\delta (o_i)$ increases $\mathcal{L}_d$.
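For intuition, the Gumbel-softmax sampling underlying $\operatorname{softmax}_{\text{gumbel}}$ (Jang et al., 2017) can be sketched as below; this is a generic, pure-Python illustration of the trick, not the paper's implementation:

```python
import math, random

def gumbel_softmax(probs, tau=0.5, rng=random):
    """Draw a relaxed one-hot sample over classes using the
    Gumbel-softmax trick: softmax((log p + g) / tau), g ~ Gumbel(0,1)."""
    g = [-math.log(-math.log(rng.random())) for _ in probs]  # Gumbel(0,1)
    scores = [(math.log(p) + gi) / tau for p, gi in zip(probs, g)]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]  # numerically stable softmax
    z = sum(exps)
    return [e / z for e in exps]

# Lower tau gives a more peaked (closer to hard one-hot) sample.
sample = gumbel_softmax([0.7, 0.2, 0.1], tau=0.5)
```

The output is still a valid probability vector, just sharpened towards one class, which is why it remains proportional to the vanilla softmax in Equations 13-15.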
+
+# A.3 Implementation Details
+
+All experiments are conducted in the PyTorch framework using an Nvidia GeForce RTX 2080 GPU with 12 GB of memory. We use an off-the-shelf MLPClassifier from sklearn as our probing network $q$ . ADS has a total of 110M parameters (all 4 modules combined). The average runtime per epoch for each dataset is reported in Table 7.
+
+| DATASET | Pre-trained h(x) $\overrightarrow{MDL}_z$ | Pre-trained h(x) $\overrightarrow{MDL}_y$ | w/o adversary h(x) $\overrightarrow{MDL}_z$ | w/o adversary h(x) $\overrightarrow{MDL}_y$ | ADS h(x) $\overrightarrow{MDL}_z$ | ADS h(x) $\overrightarrow{MDL}_y$ | ADS s(h(x)) $\overrightarrow{MDL}_z$ | ADS s(h(x)) $\overrightarrow{MDL}_y$ |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| FUNPEDIA | 1.03 | 1.94 | 1.29 | 0.12 | 1.48 | 0.43 | 1.73 | 0.45 |
+| WIZARD | 1.08 | 2.15 | 1.47 | 0.06 | 1.84 | 0.09 | 1.95 | 0.06 |
+| CONVAI2 | 1.46 | 1.94 | 1.58 | 0.09 | 1.94 | 0.16 | 1.93 | 0.16 |
+| LIGHT | 1.21 | 2.28 | 1.44 | 0.21 | 1.89 | 0.43 | 1.92 | 0.42 |
+| OPENSUB | 0.92 | 2.03 | 1.49 | 0.12 | 1.77 | 0.18 | 1.78 | 0.18 |
+| BIOGRAPHIES | 0.11 | 1.94 | 1.74 | 0.01 | 1.73 | 0.01 | 1.74 | 0.01 |
+| DIAL | 1.46 | 1.81 | 1.06 | 0.60 | 1.65 | 0.31 | 1.75 | 0.34 |
+| PAN16 (gender) | 1.87 | 1.62 | 1.67 | 0.03 | 1.90 | 0.04 | 1.96 | 0.05 |
+| PAN16 (age) | 1.89 | 1.64 | 1.85 | 0.03 | 1.89 | 0.03 | 1.97 | 0.04 |
+
+Table 8: Probing performance of representations retrieved from different settings in terms of $\overrightarrow{\mathrm{MDL}}$ .
+
+# A.4 Measuring Fairness in Representations
+
+MDL scales linearly with the dataset size (Equation 12), which makes it hard to compare across different datasets. To make it comparable, we instead report a normalized description length: the codelength for transmitting 1000 labels:
+
+$$
+\overrightarrow{\mathrm{MDL}} = \frac{1000 \times \mathrm{MDL}}{|\mathcal{D}|} \tag{16}
+$$
+
+where $|\mathcal{D}|$ is the dataset size. Performance under this measure is reported in Table 8 for all datasets. In all experiments we report the MDL required for transmitting the labels in the training set.
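The normalization in Equation 16 is a one-liner; the numbers below are made up purely for illustration:

```python
def normalized_mdl(mdl_bits, dataset_size):
    """Normalized description length per 1000 labels (Eq. 16)."""
    return 1000 * mdl_bits / dataset_size

# e.g. a raw MDL of 5000 bits measured on a 40000-example training set
print(normalized_mdl(5000, 40000))  # 125.0
```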
\ No newline at end of file
diff --git a/adversarialscrubbingofdemographicinformationfortextclassification/images.zip b/adversarialscrubbingofdemographicinformationfortextclassification/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..4d04feb293a441c33550a5aefdfcf923e7d63bfc
--- /dev/null
+++ b/adversarialscrubbingofdemographicinformationfortextclassification/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:224eec6c80cdc61acbaea1c9d4996085a2ecfbb823a69449bfbd070e866c7486
+size 646873
diff --git a/adversarialscrubbingofdemographicinformationfortextclassification/layout.json b/adversarialscrubbingofdemographicinformationfortextclassification/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..c4a409537473ae37e0ec72f2fa742ae6544d7b98
--- /dev/null
+++ b/adversarialscrubbingofdemographicinformationfortextclassification/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:daa65476d063128595e4c68a1b867046bc17a5a365dc4dc4e24292c97da14a1f
+size 663489
diff --git a/aesopparaphrasegenerationwithadaptivesyntacticcontrol/01cf985f-1ea4-401f-8cae-ff50ce2a393d_content_list.json b/aesopparaphrasegenerationwithadaptivesyntacticcontrol/01cf985f-1ea4-401f-8cae-ff50ce2a393d_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..8f310eca7204cee789fe6b55df1698eeaeef298a
--- /dev/null
+++ b/aesopparaphrasegenerationwithadaptivesyntacticcontrol/01cf985f-1ea4-401f-8cae-ff50ce2a393d_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:74dce01bda1ccd68b685288645a495236ebab7e27192d14fe51df8fd0d5cccf2
+size 94501
diff --git a/aesopparaphrasegenerationwithadaptivesyntacticcontrol/01cf985f-1ea4-401f-8cae-ff50ce2a393d_model.json b/aesopparaphrasegenerationwithadaptivesyntacticcontrol/01cf985f-1ea4-401f-8cae-ff50ce2a393d_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..fb460119ad8fd2308eaddcadf64ad938b3f6843e
--- /dev/null
+++ b/aesopparaphrasegenerationwithadaptivesyntacticcontrol/01cf985f-1ea4-401f-8cae-ff50ce2a393d_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4ffedd8a6b9c005f33a6def5a64132f10dc02bdd8507b7c78c84d27f7b469566
+size 108746
diff --git a/aesopparaphrasegenerationwithadaptivesyntacticcontrol/01cf985f-1ea4-401f-8cae-ff50ce2a393d_origin.pdf b/aesopparaphrasegenerationwithadaptivesyntacticcontrol/01cf985f-1ea4-401f-8cae-ff50ce2a393d_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..7a49a56eba3251417a2dceb055298e0f92739986
--- /dev/null
+++ b/aesopparaphrasegenerationwithadaptivesyntacticcontrol/01cf985f-1ea4-401f-8cae-ff50ce2a393d_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:33451011de2461634808d4bc16a79c4353a29316b2cb2070b5686de656f80c25
+size 1612464
diff --git a/aesopparaphrasegenerationwithadaptivesyntacticcontrol/full.md b/aesopparaphrasegenerationwithadaptivesyntacticcontrol/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..ab3b01656b8c5db7b68190d9b6e0af66f28aad25
--- /dev/null
+++ b/aesopparaphrasegenerationwithadaptivesyntacticcontrol/full.md
@@ -0,0 +1,325 @@
+# AESOP: Paraphrase Generation with Adaptive Syntactic Control
+
+Jiao Sun $^{1,2}$ , Xuezhe Ma $^{1,2}$ and Nanyun Peng $^{1,2,3}$
+
+$^{1}$ Computer Science Department, University of Southern California
+
+$^{2}$ Information Sciences Institute, University of Southern California
+
+$^{3}$ Computer Science Department, University of California, Los Angeles
+
+jiaosun@usc.edu, xuezhema@usc.edu, violetpeng@cs.ucla.edu
+
+# Abstract
+
+We propose to control paraphrase generation with carefully chosen target syntactic structures to generate more proper and higher-quality paraphrases. Our model, AESOP, leverages a pretrained language model and purposefully selected syntactic control via a retrieval-based selection module to generate fluent paraphrases. Experiments show that AESOP achieves state-of-the-art performance on semantic preservation and syntactic conformation on two benchmark datasets with ground-truth syntactic control from human-annotated exemplars. Moreover, with the retrieval-based target syntax selection module, AESOP generates paraphrases of even better quality than the current best model using human-annotated target syntactic parses, according to human evaluation. We further demonstrate the effectiveness of AESOP in improving classification models' robustness to syntactic perturbation by data augmentation on two GLUE tasks.
+
+# 1 Introduction
+
+Syntactically-controlled paraphrase generation, which aims to generate paraphrases that conform with given syntactic structures, has drawn increasing attention in the community. On the one hand, paraphrase generation has benefited a wide range of NLP applications, such as neural machine translation (Yang et al., 2019), dialogue generation (Gao et al., 2020), as well as improving model robustness (Huang et al., 2021) and interpretability (Jiang et al., 2019). On the other hand, syntactically-controlled paraphrasing has been used for diverse question generation (Yu and Jiang, 2021), diversifying creative generation (Tian et al., 2021) and improving model robustness (Iyyer et al., 2018; Huang and Chang, 2021).
+
+However, selecting suitable target syntactic structures to control paraphrase generation for diverse and high-quality results is a lesser explored direction. Prior works usually use a fixed set of syntactic structures for all input sentences (Iyyer et al., 2018; Huang and Chang, 2021). A challenge with this method is that not all sentences can be paraphrased into the same set of syntactic structures. For example, it is impossible to turn a long sentence with multiple clauses into a noun phrase. Thus, Chen et al. (2019b) proposed to use crowd-sourcing to collect exemplars that can provide compatible syntax with the source sentence to guide generation. Disadvantages of this method are that the crowd-sourcing process is costly, and one exemplar sentence can only provide one specific syntactic guidance, while there are many syntactic parses that can properly guide the paraphrase generation (as shown in Figure 1).
+
+Figure 1: Given a source sentence, AESOP selects target syntactic parses adaptively to guide paraphrase generation. Paraphrases here are all generated by AESOP, which preserve the semantics from source sentences and conform with the selected syntactic parses.
+
+In contrast, we propose to automatically select multiple syntactic parse structures to control paraphrase generation for more diverse and higher-quality generation. Our first contribution is the proposal of AESOP (Adaptive Syntactically-Controlled Paraphrasing), a model that integrates pretrained Language Models (LMs) with a novel retrieval-based target syntactic parse selection module to control paraphrase generation. By leveraging the expressiveness of pretrained LMs and the adaptive selection module, AESOP is capable of generating fluent and syntactically-diverse paraphrases. With ground-truth target syntactic parses from human-annotated exemplars, AESOP achieves the state-of-the-art performance on both semantic preservation and syntactic conformation metrics. By human evaluation, we show that AESOP can generate paraphrases with even better quality than the current best model using human-annotated exemplars, which points out the importance of studying adaptive target parse selection in future work on controlled paraphrase generation.
+
+Figure 2: Pruning a constituency parse tree at height H.
+
+| H | Pruned parse |
+| --- | --- |
+| H=2 | (ROOT (S (NP) (VP) (.))) |
+| H=3 | (ROOT (S (NP (DT)) (VP (VBZ) (ADJP)) (.))) |
+| H=4 | (ROOT (S (NP (DT)) (VP (VBZ) (ADJP (JJ))) (.))) |
+
+Our second contribution is the construction of two datasets containing adversarial examples with syntactic perturbation generated by AESOP that are further validated and labeled by crowd workers. Experiments show that the two datasets are challenging to current classification models, and using AESOP to augment the training data can effectively improve classification models' robustness to syntactic attacks. $^{1}$
+
+# 2 Task Formulation
+
+We formulate the task of adaptive syntactically-controlled paraphrase generation as: given an input sentence $X$ , find a set of proper syntactic controls $Y$ to generate paraphrases $Z$ , such that $Z$ 's syntax conforms to $Y$ while retaining the semantics of $X$ .
+
+We use the term target syntactic parses to refer to the syntactic structure that guides the generation, which could be from exemplar sentences, a set of fixed templates, or our adaptive selection module.
+
+Algorithm 1 Adaptive Target Parse Selection
+Input: source parse at level $H$ : $T_s^H$ ; all (source parse, target parse) combinations in the training data $\{(T_{s1}^H, T_{t1}^H), \dots, (T_{sn}^H, T_{tn}^H)\}$ ; frequencies of each combination $\{F_1, \dots, F_n\}$ .
+Output: $k$ target parses $T_t^H$
+1: for $i \in \{1, 2, \dots, N\}$ do
+2:   calculate the similarity score $S$ of $(T_s^H, T_i^H)$
+3: end for
+4: keep the $m$ parses with the highest $S$ with $T_s^H$ : $\{T_{s1}'^H, \dots, T_{sm}'^H\}$
+5: for $T_{si}'^H \in \{T_{s1}'^H, \dots, T_{sm}'^H\}$ do
+6:   // freq. distribution of possible target parses for $T_{si}'^H$
+7:   sample $k/m$ target parses for $T_{si}'^H$ from the distribution
+8: end for
+
+# 3 AESOP: Adaptive Syntactically-Controlled Paraphrasing
+
+AESOP has two components: i) a retrieval-based module that adaptively selects a set of target syntactic parses to guide the paraphrase generation; ii) an encoder-decoder architecture that leverages BART (Lewis et al., 2020) to generate paraphrases.
+
+# 3.1 Adaptive Target Syntactic Parse Selection
+
+In AESOP, we propose a retrieval-based strategy to select target syntactic parses adaptively (i.e., Algorithm 1). For a given syntactic parse of the source sentence pruned at height $H$ (as shown in Figure 2), denoted as $T_{s}^{H}$ , we aim to find $k$ suitable target syntactic parses to guide the generation. First, we collect (source sentence $X$ , paraphrase $Z$ ) pairs from the training data. Then, we prune $X$ 's and $Z$ 's constituency parse trees at height $H$ simultaneously and get corresponding $(T_{s}^{H}, T_{t}^{H})$ pairs. By counting, we obtain the frequencies of all unique paired combinations of pruned source parses with the target syntactic parses of their paraphrases, $\{(T_{s1}^{H}, T_{t1}^{H}), ..., (T_{sn}^{H}, T_{tn}^{H})\}$ .
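The pruning operation of Figure 2 can be sketched in pure Python as below. This is an illustrative sketch, assuming parses are stored as linearized strings (the paper presumably obtains them from an external constituency parser); `parse_tree`, `prune`, and `linearize` are hypothetical helper names:

```python
def parse_tree(s):
    """Parse a linearized constituency parse like
    '(ROOT (S (NP (DT the)) ...))' into nested (label, children) tuples."""
    tokens = s.replace("(", " ( ").replace(")", " ) ").split()

    def helper(i):
        assert tokens[i] == "("
        label = tokens[i + 1]
        i += 2
        children = []
        while tokens[i] != ")":
            if tokens[i] == "(":
                child, i = helper(i)
                children.append(child)
            else:
                i += 1  # skip leaf word tokens
        return (label, children), i + 1

    tree, _ = helper(0)
    return tree

def prune(tree, height):
    """Keep nodes up to depth `height`, with ROOT at depth 0."""
    label, children = tree
    if height == 0:
        return (label, [])
    return (label, [prune(c, height - 1) for c in children])

def linearize(tree):
    label, children = tree
    if not children:
        return f"({label})"
    return f"({label} " + " ".join(linearize(c) for c in children) + ")"

full = parse_tree("(ROOT (S (NP (DT the)) (VP (VBZ is) (ADJP (JJ red))) (. .)))")
print(linearize(prune(full, 2)))  # (ROOT (S (NP) (VP) (.)))
```

With `height=3` the same tree yields the H=3 template of Figure 2, one level of syntactic detail deeper.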
+
+Ranker. For a pruned source parse $T_{s}^{H}$ , we calculate the similarity between $T_{s}^{H}$ and all other unique parses at height $H$ in the training data $\{T_{1}^{H},\dots,T_{i}^{H},\dots,T_{N}^{H}\}$ , where $N$ is the number of unique parses pruned at level $H$ . We linearize both $T_{s}^{H}$ and $T_{i}^{H}$ as constituency parse strings and compute their similarity score $S$ as a weighted combination of ROUGE scores (Lin, 2004) between the parse strings:
+
+$$
+S\left(T_s^H, T_i^H\right) = a \cdot \mathrm{ROUGE\text{-}1} + b \cdot \mathrm{ROUGE\text{-}2} + c \cdot \mathrm{ROUGE\text{-}L} \tag{1}
+$$
+
+Retriever. We rank and get the $m$ parses that have the highest similarity scores with $T_{s}^{H}$ , denoted as $\{T_{s1}^{\prime H}, \dots, T_{sm}^{\prime H}\}$ . Then, for each parse $T_{si}^{\prime H}$ , we retrieve all possible target syntactic parses from the pairwise parse combinations in the training data. For each combination, we count how many times it occurs in the training data. For a certain combination with occurrence frequency $\# (T_{si}^{\prime H}, T_t^H)$ , we divide its frequency by the sum of the frequencies of all possible target syntactic parses for $T_{si}^{\prime H}$ and get a list of frequency ratios. We use the ratio distribution as probabilities to select $k / m$ target syntactic parses $T_t^H$ for each of the $m$ parses $T_{si}^{\prime H}$ , as shown in Equation 2, which results in $k (= m * k / m)$ target syntactic parses in total.
+
+Figure 3: AESOP Framework. With a source sentence as input, AESOP has i) a retrieval-based selection module that adaptively chooses a set of target syntactic parses as control signals, together with ii) an encoder-decoder architecture to generate fluent paraphrases. With ground-truth target syntactic parses from exemplars, AESOP leverages the syntactic information at different heights from exemplars to guide the generation.
+
+$$
+T_t^H \sim P\left(T_t^H \mid T_{si}^{\prime H}\right) = \frac{\#\left(T_{si}^{\prime H}, T_t^H\right)}{\sum_{j=1}^{N} \#\left(T_{si}^{\prime H}, T_{tj}^{H}\right)} \tag{2}
+$$
+
+In our later experiments, we use the ranker in Equation 1 to retrieve top-ranked target syntactic parses and their corresponding paraphrases. By using this two-step strategy instead of ranking all syntactic parses by similarity, we aim to find diverse target syntactic parses suitable for the source sentence. We use the weighted sampling strategy rather than directly choosing the most frequent combinations to account for compatible combinations that occur less often in a specific dataset.
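The ranker-then-retriever procedure (Algorithm 1, Equation 2) can be sketched as below. This is a minimal, self-contained illustration, not the authors' code: `similarity` stands in for the weighted-ROUGE ranker of Equation 1, and `select_target_parses` is a hypothetical name:

```python
import random
from collections import Counter

def select_target_parses(source_parse, pair_counts, similarity, k=4, m=2,
                         rng=random):
    """Retrieve-and-sample selection of target parses.

    pair_counts: Counter over (source_parse, target_parse) pairs
        observed in the training data.
    similarity: callable scoring two linearized parse strings
        (the paper uses a weighted ROUGE combination, Eq. 1).
    """
    sources = {s for (s, _) in pair_counts}
    # Ranker: keep the m training-source parses most similar to the input.
    ranked = sorted(sources, key=lambda s: similarity(source_parse, s),
                    reverse=True)[:m]
    targets = []
    for s in ranked:
        # Retriever: sample k/m targets from the frequency
        # distribution over targets observed for s (Eq. 2).
        cands = [(t, c) for (src, t), c in pair_counts.items() if src == s]
        parses = [t for t, _ in cands]
        weights = [c for _, c in cands]
        targets.extend(rng.choices(parses, weights=weights, k=k // m))
    return targets
```

`random.choices` already normalizes the weights, so dividing each count by the total (the explicit ratio in Equation 2) is implicit here.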
+
+# 3.2 Architecture of AESOP
+
+AESOP takes as inputs the source sentence $X$ , its full syntactic parse $T_{S}$ and target syntactic parse(s) $Y$ , and generates as outputs a paraphrase $Z$ of $X$ together with a duplication of the target parse $Y$ . Specifically, given source sentences $X$ , we tokenize them and get their constituency-based parse trees, denoted as $T_{s}$ (shown as the source parse tree in Figure 3). Similar to previous works (Iyyer et al., 2018; Chen et al., 2019a; Kumar et al., 2020), we linearize the constituency parse tree into a sequence (shown as the source full syntactic parse in Figure 3).
+
+To utilize the encoder-decoder BART (Lewis et al., 2020) model for syntactically-controlled paraphrase generation, we propose an effective design that concatenates the source sentence, the source full syntactic parse, and the target syntactic parse as the input sequence for the encoder. The output sequence from the decoder is the target syntactic parse followed by the paraphrase. We will showcase the effectiveness of our model design in Section 4 and provide a visual interpretation showing that AESOP successfully disentangles the semantic and syntactic information in Section 5. During training, we get gold target syntactic parses directly from the parallel-annotated paraphrases.
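The input/output assembly amounts to simple string concatenation. The sketch below uses a hypothetical `<sep>` delimiter for illustration; the actual separator token used by the authors is not specified here and may differ:

```python
def build_bart_io(source, source_parse, target_parse, paraphrase,
                  sep=" <sep> "):
    """Assemble the encoder input and decoder output sequences.

    `<sep>` is a placeholder delimiter, not necessarily the one
    used in the paper.
    """
    # Encoder sees: source sentence, its full parse, the target parse.
    encoder_input = sep.join([source, source_parse, target_parse])
    # Decoder emits: the target parse again, then the paraphrase.
    decoder_output = sep.join([target_parse, paraphrase])
    return encoder_input, decoder_output
```

Having the decoder first repeat the target parse before producing the paraphrase is the design the paper describes for conditioning generation on the control signal.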
+
+| Dataset | Model | BLEU↑ | ROUGE-1↑ | ROUGE-2↑ | ROUGE-L↑ | METEOR↑ | TED-R↓ | TED-E↓ |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| QQP-Pos | source-as-output | 17.2 | 51.9 | 26.3 | 52.9 | 31.1 | 16.2 | 16.7 |
+| | exemplar-as-output | 16.8 | 38.2 | 20.5 | 43.2 | 17.6 | 4.8 | 0.0 |
+| | CGEN (Chen et al., 2019a) | 34.9 | 62.6 | 42.7 | 65.4 | 37.4 | 6.7 | 6.0 |
+| | SGCP-F (Kumar et al., 2020) | 36.7 | 66.9 | 45.0 | 69.6 | 39.8 | 4.8 | 1.8 |
+| | ⊕ SGCP-R (Kumar et al., 2020) | 38.0 | 67.6 | 45.3 | 70.0 | 24.8 | 6.6 | 5.7 |
+| | AESOP-H2 | 36.8 | 67.1 | 43.8 | 69.0 | 42.2 | 8.0 | 8.6 |
+| | AESOP-H3 | 43.4 | 71.3 | 50.9 | 73.1 | 46.5 | 6.7 | 7.0 |
+| | AESOP-H4 | 47.3 | 73.3 | 54.1 | 75.1 | 49.7 | 5.6 | 5.6 |
+| | AESOP-F | 40.5 | 69.6 | 49.3 | 72.0 | 43.8 | 4.8 | 1.9 |
+| ParaNMT-small | source-as-output | 18.8 | 50.6 | 23.2 | 47.7 | 28.8 | 12.0 | 13.1 |
+| | exemplar-as-output | 3.3 | 24.4 | 7.5 | 29.1 | 5.9 | 6.0 | 0.0 |
+| | CGEN (Chen et al., 2019a) | 13.6 | 44.8 | 21.0 | 48.3 | 24.8 | 6.7 | 3.3 |
+| | SGCP-F (Kumar et al., 2020) | 15.3 | 46.6 | 21.8 | 49.7 | 25.9 | 6.1 | 1.4 |
+| | ⊕ SGCP-R (Kumar et al., 2020) | 16.4 | 49.4 | 22.9 | 50.3 | 28.8 | 8.7 | 7.0 |
+| | AESOP-H2 | 20.7 | 51.4 | 27.1 | 53.1 | 30.6 | 8.7 | 9.5 |
+| | AESOP-H3 | 21.3 | 53.0 | 28.3 | 55.2 | 31.9 | 7.5 | 7.2 |
+| | AESOP-H4 | 22.9 | 54.4 | 29.8 | 56.4 | 32.7 | 6.9 | 5.7 |
+| | AESOP-F | 20.4 | 52.0 | 27.8 | 55.3 | 30.0 | 6.1 | 1.9 |
+
+Table 1: Performance comparison with ground-truth syntactic control. With coarse syntactic control from a shallow pruning height, $\spadesuit$ AESOP starts to outperform the current state-of-the-art model $\oplus$ SGCP. $\spadesuit$ AESOP-H4 outperforms $\oplus$ SGCP across all semantic preservation (BLEU, ROUGE scores and METEOR) and syntactic conformation metrics (TED-R and TED-E). $\uparrow$ means higher is better, while $\downarrow$ means lower is better. With the full syntactic parse (-F), AESOP achieves its best controllability, which is comparable to the previous best performance. source-as-output and exemplar-as-output are for quality-check purposes and not for comparison.
+
+In our setting, we train separate models using pruned trees of target parses at different heights $H$ . During inference, the target syntactic parses are either from exemplar sentences, fixed templates or our adaptive selection module.
+
+# 4 Paraphrase Generation with Syntactic Control
+
+We train and evaluate AESOP on ParaNMT-small (Chen et al., 2019b) and QQP-Pos (Kumar et al., 2020). Our train/dev/test split follows previous work (Kumar et al., 2020). During our experiments, we aim to answer three research questions:
+
+- Q1: Will AESOP conform with the syntactic control while preserving the semantics, given ground-truth target parses? (Section 4.1, Table 1)
+- Q2: Can AESOP generate fluent paraphrases with the adaptive target parse selection module when ground-truth target parses are unavailable? (Section 4.2, Table 2)
+- Q3: Does the adaptive selection module produce high-quality target parses? (Section 4.3, Table 3)
+
+Baselines. For supervised models that utilize exemplar sentences to get target parses, we compare with CGEN (Chen et al., 2019a) and two versions of SGCP (Kumar et al., 2020): SGCP-R and SGCP-F. SGCP prunes constituency parse trees of exemplar sentences from height 3 up to 10. During the evaluation, SGCP-R chooses the best paraphrase out of many, and SGCP-F uses the full parse tree. To the best of our knowledge, SGCP-R is the current state-of-the-art model under this setting. For models that utilize a fixed set of target syntactic parses, we compare with SCPN (Iyyer et al., 2018), which proposes 10 syntactic parses at height 2 to guide the generation.
+
+# 4.1 Ground-truth Syntactic Control
+
+To answer Q1, we evaluate AESOP on both datasets with ground-truth target syntactic parses from exemplar sentences.
+
+Experiment Setup. First, we obtain the constituency parse trees of exemplar sentences. Then, we remove all leaf nodes (i.e., tokens in the sentences) from the constituency parse trees to prevent any semantics from propagating from the exemplar sentences into the generation. We further prune the parse trees of exemplars at different heights to obtain different levels of syntactic specification. Technically, the deeper we prune the parse tree, the more fine-grained the syntactic information the model can use. Practically, however, fine-grained target syntactic parses are less likely to be available. For example, it is easy to provide a target syntactic parse at height 2 containing a verb phrase and a noun phrase, such as (ROOT (S (NP) (VP) (.))), but it is hard to provide more fine-grained syntactic information, even for experts. In AESOP, we try to keep the syntactic information from exemplar sentences as shallow as possible. We train separate models using target syntactic parses obtained by pruning the constituency parse trees of paraphrases at heights 2, 3, and 4, and denote them as AESOP-H2/H3/H4, respectively. During evaluation, we only use the target syntactic parse from the exemplar sentences at the corresponding height.
+
+| Dataset | Model | BLEU↑ | ROUGE-1↑ | ROUGE-2↑ | ROUGE-L↑ | METEOR↑ | TED-E@2↓ | Valid@100↑ | Votes↑ |
+|---|---|---|---|---|---|---|---|---|---|
+| QQP-Pos | ⊕ SGCP-R | 38.0 | 67.6 | 45.3 | 70.0 | 24.8 | 0.8 | 41.0 | 19.3 |
+| | SCPN | 14.9 | 45.9 | 20.9 | 48.1 | 25.4 | 0.7 | 32.0 | 15.3 |
+| | AESOP-static | 18.5 | 52.5 | 27.6 | 52.0 | 30.6 | 2.5 | 57.0 | 28.3 |
+| | AESOP | 24.6 | 56.2 | 31.5 | 57.6 | 32.8 | 1.1 | 61.0 | 37.0 |
+| ParaNMT-small | ⊕ SGCP-R | 16.4 | 49.4 | 22.9 | 50.3 | 28.8 | 0.7 | 30.0 | 12.0 |
+| | SCPN | 12.1 | 35.7 | 15.1 | 32.9 | 23.3 | 0.5 | 54.0 | 30.0 |
+| | AESOP-static | 14.4 | 46.0 | 20.5 | 46.5 | 25.5 | 2.9 | 62.0 | 22.0 |
+| | AESOP | 15.0 | 47.0 | 21.3 | 47.3 | 26.1 | 2.6 | 68.0 | 36.0 |
+
+Table 2: Performance of AESOP without ground-truth target parses. Valid@100 is the validity check for the best paraphrases of the first 100 test instances, and Votes is the percentage of votes a model received for producing the best paraphrase among the 4 models. Human evaluation indicates that AESOP generates even better-quality paraphrases than the current best model $\oplus$ SGCP, which uses human-annotated target syntactic parses from exemplars.
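The height-based pruning used in this setup can be sketched in a few lines. This is an illustrative implementation under our own naming (`parse_sexpr`, `prune`, `to_sexpr` are hypothetical helpers, not the authors' code); the bracketed parse format follows the height-2 example in the text.

```python
def parse_sexpr(s):
    """Parse a bracketed constituency parse such as
    "(ROOT (S (NP (PRP he)) (VP (VBZ runs)) (. .)))"
    into (label, children) tuples; bare word tokens become strings."""
    tokens = s.replace("(", " ( ").replace(")", " ) ").split()

    def helper(i):
        assert tokens[i] == "("
        label = tokens[i + 1]
        children, i = [], i + 2
        while tokens[i] != ")":
            if tokens[i] == "(":
                child, i = helper(i)
            else:
                child, i = tokens[i], i + 1  # leaf token (a word)
            children.append(child)
        return (label, children), i + 1

    tree, _ = helper(0)
    return tree


def prune(tree, height):
    """Keep constituent labels down to `height` levels below ROOT and
    drop all word tokens, mirroring the paper's height-2 example."""
    label, children = tree
    if height == 0:
        return (label, [])
    kept = [prune(c, height - 1) for c in children if isinstance(c, tuple)]
    return (label, kept)


def to_sexpr(tree):
    """Serialize a (label, children) tree back to bracketed form."""
    label, children = tree
    if not children:
        return "(" + label + ")"
    return "(" + label + " " + " ".join(to_sexpr(c) for c in children) + ")"
```

For instance, pruning the example parse at height 2 recovers exactly the shallow template `(ROOT (S (NP) (VP) (.)))` quoted above.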
+
+Evaluation Metrics. We evaluate the quality of paraphrases with: 1) alignment-based metrics that examine semantics preservation, including BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), and METEOR (Iyer et al., 2016) between the generated paraphrase and the gold paraphrase; and 2) syntactic conformation metrics: Tree-Edit Distance (TED) scores (Zhang and Shasha, 1989) between the constituency parse trees of generated paraphrases and those of the exemplar sentences (TED-E) and of the parallel-annotated paraphrases (TED-R).
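As an illustration of the alignment-based metrics, the following is a minimal sentence-level BLEU sketch (clipped n-gram precision up to bigrams with a brevity penalty). It is a simplification for exposition only; the actual evaluation should use standard implementations of BLEU, ROUGE, and METEOR.

```python
import math
from collections import Counter


def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]


def sentence_bleu(candidate, reference, max_n=2):
    """Geometric mean of clipped n-gram precisions times a brevity penalty."""
    cand, ref = candidate.split(), reference.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        c_counts, r_counts = Counter(ngrams(cand, n)), Counter(ngrams(ref, n))
        overlap = sum(min(c, r_counts[g]) for g, c in c_counts.items())
        if overlap == 0:
            return 0.0
        log_prec += math.log(overlap / sum(c_counts.values())) / max_n
    # Brevity penalty: punish candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(1, len(cand)))
    return bp * math.exp(log_prec)
```

An exact match scores 1.0 and a fully disjoint candidate scores 0.0, with partial overlaps in between.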
+
+Quality Check. We use the source sentences and exemplar sentences to check the quality of the datasets in Table 1. Using the source sentences as paraphrases leads to high semantic-preservation scores, but they have syntactic structures distinct from the paraphrases, so their TED-R scores are poor. On the other hand, exemplar sentences have semantics distinct from both the source sentences and the paraphrases, which leads to poor semantic-preservation metrics. From the TED-R scores, we can see that the tree edit distance between the parse trees of exemplar sentences and paraphrases is low but not 0. This indicates that the quality of these human-annotated exemplar sentences is good yet imperfect.
+
+Experiment Results. Table 1 shows the performance comparison. Unsurprisingly, the deeper we prune the target syntactic parse from exemplars, the more syntactic information AESOP receives and the better controllability it achieves. With the full target syntactic parse tree, AESOP achieves its best syntactic controllability, which is comparable to the previous best performance. On the other hand, AESOP outperforms SGCP-R on semantic-preservation metrics using only coarse syntactic information from height 2 (AESOP-H2) for ParaNMT-small and height 3 (AESOP-H3) for QQP-Pos. With more syntactic information, AESOP-H4 outperforms the current state-of-the-art SGCP-R on both semantics-preservation and syntactic-conformation metrics. This showcases AESOP's strong ability at syntactically controlled paraphrase generation.
+
+# 4.2 Adaptive Target Parse Selection
+
+To answer Q2, we evaluate AESOP without annotated exemplars. By including SGCP-R in these experiments, we evaluate whether AESOP can generate even better paraphrases than the current best model that uses human-annotated exemplars.
+
+Experiment Setup. How to select suitable target syntactic parses to guide the generation is still an open problem in the paraphrase-generation community. To compare fairly with SCPN, which proposes 10 syntactic templates at height 2, we also adopt AESOP trained at height 2 (shown as AESOP-H2 in Table 1). Unlike previous work, AESOP uses the adaptive selection module to decide on a set of target syntactic parses automatically. For a fair comparison, we also feed the same 10 syntactic target parses from SCPN to AESOP, denoted as AESOP-static. It is hard to evaluate retrieved target syntactic parses directly because paraphrasing is intrinsically diverse, so many target syntactic parses can be reasonable. Therefore, we use the quality of the generated paraphrases, which is our end goal, to reflect the quality of the retrieved target syntactic parses. For evaluation, we use automatic metrics together with extensive human evaluations.
+
+| Dataset | Model | top-1 | top-3 | top-5 | top-7 | top-10 |
+|---|---|---|---|---|---|---|
+| QQP-Pos | SCPN | 32.2 (±7.8) | 32.2 (±3.3) | 33.4 (±1.3) | 32.6 (±0.0) | 33.0 (±0.0) |
+| | AESOP-static | 58.6 (±4.5) | 58.7 (±1.8) | 57.5 (±2.1) | 57.9 (±0.9) | 58.0 (±0.0) |
+| | AESOP | 100.0 (±0.0) | 94.7 (±0.0) | 90.8 (±0.0) | 84.3 (±0.0) | 65.0 (±0.0) |
+| ParaNMT-small | SCPN | 16.2 (±4.1) | 16.9 (±1.5) | 18.0 (±1.1) | 17.2 (±0.9) | 17.4 (±0.0) |
+| | AESOP-static | 47.0 (±6.2) | 48.9 (±2.0) | 48.6 (±1.3) | 48.6 (±1.3) | 49.0 (±0.0) |
+| | AESOP | 90.0 (±0.0) | 86.7 (±0.0) | 84.4 (±0.0) | 80.0 (±0.0) | 70.6 (±0.0) |
+
+Table 3: Human validity check of top-$k$ selected target syntactic parses. All numbers are means over 10 rounds with standard deviations. For AESOP, we use the ranker in Equation 1 to sort and obtain the top-$k$ target parses, while the other models use random selection. The high validity rates of the paraphrases indicate the high quality of our retrieved target syntactic parses. The trend that higher-ranked syntactic parses have higher validity rates verifies the effectiveness of our ranker.
+
+Automatic Metrics. First, we generate 10 paraphrases from each model. To establish a strong baseline, we choose, for every model, the best paraphrase as the one with the highest BLEU score against the source sentence. As shown in Table 2, the improvement from AESOP-static to AESOP indicates the effectiveness of our adaptive selection strategy. SCPN performs better on the TED-E@2 metric on both datasets. After qualitative checks, we share the same finding as previous works (Kumar et al., 2020; Chen et al., 2019a) that SCPN tends to adhere strictly to syntactic parses at the cost of semantics. AESOP, on the other hand, leans towards generating fluent paraphrases and can compensate when the target syntactic parse is less reasonable: AESOP achieves better syntactic conformation when the syntactic control signal is more accurate, as indicated by the decrease in TED-E@2 scores in Table 2.
+
+Human Evaluation. We validate the chosen paraphrases for the first 100 instances in the test sets on Amazon Mechanical Turk, and report the results as Valid@100 in Table 2. In addition, we show workers the 4 paraphrases from all models and ask them to vote for the best one; we report the percentage of votes each model received as Votes. As a result, AESOP generates more valid paraphrases than all baselines and receives the most votes, even more than SGCP-R, which utilizes human-annotated exemplars. This finding demonstrates the effectiveness of AESOP and points out the importance of studying automatic target parse selection in paraphrase generation. $^{8}$
+
+# 4.3 Quality of Retrieved Syntactic Parses
+
+To answer Q3, we evaluate the quality of the retrieved top-$k$ target syntactic parses by checking the validity of their corresponding paraphrases. We generate 10 paraphrases for each of the first 50 test instances (500 in total) using SCPN, AESOP-static, and AESOP and ask workers to validate them. After annotation, we use the similarity ranker in Equation 1 to rank and obtain the top-$k$ target syntactic parses and their corresponding paraphrases for AESOP. For the other baselines, which use a fixed set of target syntactic parses and have no ranking mechanism, we randomly permute the target parses to obtain the top-$k$ paraphrases. We run the experiments for 10 rounds and report the validity rate of paraphrases for the top-$k$ target syntactic parses in Table 3. Compared to pre-designed syntactic parses, the higher validity rates of paraphrases from AESOP indicate the better quality of our retrieved target syntactic parses. The trend that higher-ranked syntactic parses have higher validity rates also verifies the effectiveness of our ranker.
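This evaluation protocol can be sketched as follows. `topk_validity` is our own illustrative function, assuming each candidate paraphrase carries a human valid/invalid judgment; a ranked model keeps its ordering fixed, while an unranked baseline is re-shuffled every round, mirroring the random permutation used for the fixed-template baselines.

```python
import random
import statistics


def topk_validity(valid_flags, k, ranked=True, rounds=10, seed=0):
    """Mean and std of the top-k validity rate (in %) over several rounds.

    valid_flags: one bool per candidate paraphrase, in the model's order.
    ranked:      True keeps the ordering fixed (a ranker was applied);
                 False shuffles each round (random-permutation baseline).
    """
    rng = random.Random(seed)
    rates = []
    for _ in range(rounds):
        order = list(valid_flags)
        if not ranked:
            rng.shuffle(order)
        top = order[:k]
        rates.append(100.0 * sum(top) / len(top))
    return statistics.mean(rates), statistics.pstdev(rates)
```

Under this scheme a ranked model has zero variance across rounds, which matches the (±0.0) deviations reported for AESOP in Table 3.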
+
+# 5 Model Analysis and Interpretation
+
+Ablation Studies. We remove each part of the sequence in both the encoder and the decoder and conduct several ablation studies on AESOP-H4 with exemplars. Table 4 shows how each part of the sequence influences AESOP's performance. The takeaways from our ablation studies are: 1) AESOP's performance plummets without any syntactic specification (row 1 vs. row 4). 2) Removing the target parse ($tp$) from the output sequence leads to worse performance in both semantic preservation and syntactic controllability (row 1 vs. row 2, row 3 vs. row 4); we visually interpret the benefit of this design later in this section. 3) Removing any part of the input sequence for the encoder leads to a significant performance drop for AESOP on the QQP-Pos dataset on both criteria (i.e., semantic preservation and syntactic controllability). The trend is the same for the ParaNMT-small dataset, except that removing only the full parse ($fp$) leads to an improvement of around $1\%$ on semantic-preservation metrics, while the syntactic controllability stays almost the same. Considering the much larger performance drops on both criteria, we settled on the current design of AESOP.
+
+| Dataset | Model | BLEU↑ | ROUGE-1↑ | ROUGE-2↑ | ROUGE-L↑ | METEOR↑ | TED-R↓ | TED-E↓ |
+|---|---|---|---|---|---|---|---|---|
+| QQP-Pos | 1 AESOP | 47.3 | 73.3 | 54.1 | 75.1 | 49.7 | 5.6 | 5.6 |
+| | 2 w/o tp in dec | 39.9 (+7.4) | 68.4 (+4.9) | 49.0 (+5.1) | 70.5 (+4.6) | 44.5 (+5.2) | 8.1 (+2.5) | 8.1 (+2.5) |
+| | 3 w/o fp in enc | 42.3 (+5.0) | 71.6 (+1.7) | 50.9 (+3.2) | 73.4 (+1.7) | 45.3 (+4.4) | 6.4 (+0.9) | 6.2 (+0.6) |
+| | 4 w/o fp, tp in enc, tp in dec | 23.9 (+23.4) | 56.2 (+17.1) | 32.2 (+21.9) | 57.6 (+17.5) | 34.0 (+15.7) | 12.9 (+7.3) | 13.4 (+7.8) |
+| | 5 w/o fp in enc, tp in dec | 38.2 (+9.1) | 67.7 (+5.6) | 47.5 (+6.6) | 70.0 (+5.1) | 42.4 (+7.3) | 8.0 (+2.4) | 7.9 (+2.3) |
+| ParaNMT-small | 1 AESOP | 22.9 | 54.4 | 29.8 | 56.4 | 32.7 | 6.9 | 5.7 |
+| | 2 w/o tp in dec | 19.2 (+3.7) | 51.3 (+3.1) | 27.3 (+2.5) | 53.5 (+2.9) | 30.8 (+1.9) | 9.7 (+1.8) | 8.8 (+2.9) |
+| | 3 w/o fp in enc | 24.0 (-1.1) | 54.8 (-0.4) | 30.5 (-0.7) | 57.1 (-0.7) | 33.4 (-0.7) | 6.8 (-0.1) | 5.7 (0.0) |
+| | 4 w/o fp, tp in enc, tp in dec | 16.7 (+6.2) | 49.8 (+4.6) | 25.2 (+4.6) | 50.4 (+6.0) | 29.1 (+3.6) | 11.7 (+4.8) | 12.8 (+7.1) |
+| | 5 w/o fp in enc, tp in dec | 20.0 (+2.9) | 53.7 (+0.7) | 29.3 (+0.5) | 55.7 (+0.7) | 31.6 (+1.1) | 8.7 (+1.8) | 7.7 (+2.0) |
+
+Table 4: Ablation studies that justify our model design. + shows how much better AESOP is compared to that design, while - shows how much worse (dec, enc: decoder, encoder; tp: target parse; fp: source full parse).
+
+Interpretation. In Figure 4, we visualize the cross attention between the encoder and the decoder for two designs, i.e., AESOP with (right) and without (left) the target syntactic parse in the decoder, on the test set of ParaNMT-small. Technically, we search for the final output with beam size 4 and average the cross-attention scores of the 12 attention heads in the last layer of the decoder. Finally, we sum the attention over all tokens within each component ($ss$, $fp$, and $tp$). To manifest the difference, we set the highest attention score to 100 and report all cross attention relative to this highest score.
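The aggregation just described can be sketched in pure Python under our own naming (`component_attention` is a hypothetical helper; `attn` stands for the head-averaged cross-attention scores of the last decoder layer):

```python
def component_attention(attn, spans):
    """Aggregate cross attention per input component.

    attn:  nested lists shaped [heads][tgt_len][src_len].
    spans: maps a component name (e.g. 'ss', 'fp', 'tp') to the
           (start, end) slice of source positions it occupies.
    Averages over heads, sums over all (target, source) token pairs
    inside each component, then rescales so the largest score is 100.
    """
    heads = len(attn)
    tgt_len, src_len = len(attn[0]), len(attn[0][0])
    avg = [[sum(attn[h][t][s] for h in range(heads)) / heads
            for s in range(src_len)] for t in range(tgt_len)]
    totals = {name: sum(avg[t][s] for t in range(tgt_len)
                        for s in range(start, end))
              for name, (start, end) in spans.items()}
    top = max(totals.values())
    return {name: 100.0 * v / top for name, v in totals.items()}
```

The returned dictionary gives the relative scores plotted in Figure 4, with the dominant component pinned at 100.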
+
+Compared to the design without the target syntactic parse in the decoder, the cross attention between paraphrases and source sentences stays the highest in AESOP. However, the ratio of the cross-attention scores of (paraphrases, target parses) and (paraphrases, full source parses) decreases. This decrease indicates that having target parses in the decoder helps to disentangle the semantic and syntactic information in the input sequence. Instead, AESOP learns the syntactic information from the target syntactic parses through self-attention in the decoder, which leads to the performance boost in Table 4. At the same time, target parses influence paraphrase generation directly during decoding through the decoder's self-attention, which leads to better controllability of AESOP. Take the example in Figure 4: without the target parse in the decoder, the model outputs "a large black dog sits in the corner beside him." as the paraphrase of "by his side crouched a huge black wolfish dog." After adding the target parse to the decoder, the model no longer generates the prepositional phrase "in the corner" and outputs "a large black dog sits beside him.", which matches the input target parse better.
+
+# 6 Improve Robustness
+
+Recent works show that powerful LMs (e.g., BERT (Devlin et al., 2019)) capture superficial lexical features (McCoy et al., 2019) and are vulnerable to simple perturbations (Jin et al., 2020). Motivated by this, we first test whether BERT is robust to syntactic perturbations introduced by paraphrasing.
+
+We fine-tune BERT models on two GLUE (Wang et al., 2018) tasks (SST-2 and RTE). Then, we generate 10 paraphrases with AESOP-H2 for each instance in the dev set and choose the top 5 to obtain 2 larger dev sets. We then run the trained BERT models on the new dev sets.
+
+Figure 4: Cross attention without and with $tp$ (target parse) in the decoder. Line thickness is proportional to the relative cross-attention scores. By duplicating $tp$ in the decoder, the relative cross-attention scores for both (paraphrases, full source parse) and (paraphrases, target parse) decrease, indicating that duplicating target syntactic parses in the decoder lets AESOP disentangle the semantic and syntactic information in the input sequence.
+
+Human Annotation. We collect the paraphrases where models fail but succeeded on the original sentences as adversarial examples. We then put all these examples on MTurk and ask workers to re-annotate them. $^{10}$ For SST-2, we ask workers to assign sentiment labels as positive, negative, or undecided (mixed sentiments). For RTE, a test instance has sentence1 and sentence2 with a label indicating whether sentence1 entails sentence2. We generate paraphrases for sentence2 and ask workers to make a binary decision on whether sentence1 entails the generated paraphrase. We show the statistics of the collected adversarial set and the original dev set in Table 6. Researchers can test their models' robustness to syntactic perturbations on our collected datasets.
+
+| Dataset | Model | Before (Orig. Dev) | After (Orig. Dev) | ParaGAP (Orig. Dev) | Before (Collected) | After (Collected) | ParaGAP (Collected) | Before (Combined) | After (Combined) | ParaGAP (Combined) |
+|---|---|---|---|---|---|---|---|---|---|---|
+| SST-2 | SCPN (Iyyer et al., 2018) | 91.9 | 89.7 | -2.2 | 18.6 | 46.5 | +27.9 | 68.0 | 76.1 | +8.1 |
+| | SynPG (Huang and Chang, 2021) | 91.9 | 85.3 | -6.6 | 18.6 | 47.0 | +28.7 | 68.0 | 73.3 | +5.3 |
+| | AESOP-tp | 91.9 | 88.9 | -3.0 | 18.6 | 49.5 | +30.9 | 68.0 | 76.4 | +8.4 |
+| | AESOP | 91.9 | 91.1 | -0.8 | 18.6 | 48.5 | +29.9 | 68.0 | 77.6 | +9.6 |
+| RTE | SCPN (Iyyer et al., 2018) | 62.8 | 68.6 | +5.8 | 46.9 | 49.6 | +2.7 | 56.0 | 58.1 | +2.1 |
+| | SynPG (Huang and Chang, 2021) | 62.8 | 61.7 | -1.1 | 46.9 | 49.0 | +2.1 | 56.0 | 56.6 | +0.6 |
+| | AESOP-tp | 62.8 | 60.3 | -2.5 | 46.9 | 55.7 | +8.8 | 56.0 | 57.8 | +1.8 |
+| | AESOP | 62.8 | 62.5 | -0.3 | 46.9 | 58.4 | +11.5 | 56.0 | 61.0 | +5.0 |
+
+Table 5: ParaGAP is the accuracy difference between BERT models after and before using paraphrases to augment the training data. Among the 4 models, AESOP improves BERT's robustness to syntactic perturbations the most.
+
+| Dataset | Original | Collected | Combined |
+|---|---|---|---|
+| SST-2 | 872 | 404 | 1276 |
+| RTE | 277 | 341 | 618 |
+
+Table 6: Dataset statistics. Combined is the combination of the original dev set and the collected data.
+
+Augmentation. We augment each training instance with the 5 best paraphrases from AESOP-H2. For SynPG and SCPN, since the pre-designed templates of SynPG are a subset of SCPN's, we generate 5 paraphrases using the templates selected in SynPG. Then, we retrain the BERT models with the augmented training data from each model and obtain their test accuracies. We define ParaGAP as the accuracy difference after versus before augmentation with each paraphrase-generation model. ParaGAP indicates how effective the augmentation is at improving the model's robustness to syntactic perturbations.
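ParaGAP as defined above reduces to a simple accuracy difference. The following is a minimal sketch with our own hypothetical `accuracy` helper:

```python
def accuracy(preds, labels):
    """Fraction of correct predictions, as a percentage."""
    assert len(preds) == len(labels)
    return 100.0 * sum(p == l for p, l in zip(preds, labels)) / len(labels)


def paragap(preds_before, preds_after, labels):
    """ParaGAP: accuracy (in points) after minus before augmentation,
    computed on the same evaluation set."""
    return accuracy(preds_after, labels) - accuracy(preds_before, labels)
```

A positive ParaGAP means the augmentation improved robustness on that set; the entries in Table 5 are exactly this after-minus-before difference per split.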
+
+Experiment Result. As shown in Table 5, the BERT models perform poorly on our collected datasets before augmentation, which indicates that the collected adversarial datasets are challenging and that BERT is vulnerable to syntactic perturbations. After using the 4 different paraphrasing models to augment the training data, the models' robustness to such perturbations improves in all cases. Among all models, AESOP yields the best ParaGAP on the combined dataset of the original dev sets and the collected datasets, which shows that AESOP improves the classification model's robustness to syntactic perturbations most effectively. $^{11}$
+
+# 7 Related Work
+
+Recent advances use neural models for syntactically controlled paraphrase generation. From the modeling perspective, there are roughly two categories: unsupervised and supervised methods. Unsupervised models do not use parallel paraphrases during training. Wieting and Gimpel (2018); Wieting et al. (2017) use back-translation to generate paraphrases. Huang and Chang (2021) propose a transformer-based model, SynPG, for paraphrase generation. AESOP is a supervised paraphrase-generation model, which means that we require parallel paraphrases during training. Previous supervised paraphrase models are mostly RNN-based, including SCPN (Iyyer et al., 2018), CGEN (Chen et al., 2019a), and SGCP (Kumar et al., 2020). Such models struggle to generate long sentences and do not utilize the power of recent pretrained language models. Goyal and Durrett (2020a) is concurrent work with ours that also builds on BART to generate paraphrases but has a different model design. For syntactic control, Goyal and Durrett (2020b) use target syntactic parses to reorder source sentences to guide the generation, while other works, including AESOP, use target syntactic parses to guide the generation directly. CGEN (Chen et al., 2019a) and SGCP (Kumar et al., 2020) use target syntactic parses from crowd-sourced exemplars, SCPN (Iyyer et al., 2018) and SynPG (Huang and Chang, 2021) use pre-designed templates, while AESOP retrieves target syntactic parses automatically.
+
+# 8 Conclusion and Future Works
+
+In this work, we propose AESOP for paraphrase generation with adaptive syntactic control. One interesting and surprising finding of this paper is that using automatically retrieved parses to control paraphrase generation can result in better quality than the current best model that uses human-annotated exemplars. This finding manifests the benefits of adaptive target parse selection for controlled paraphrase generation: it not only generates diverse paraphrases, but also higher-quality ones. This suggests that future work on syntactically controlled paraphrase generation should pay more attention to target parse selection, and we hope AESOP can serve as a strong baseline for this direction. In our work, we use generated paraphrases to reflect the quality of the automatically selected target parses; future work can design dedicated metrics to evaluate the quality of retrieved syntactic parses. In addition, we find that having the control signal in the decoder can lead to better controllability of AESOP; future work can test the generalizability of this modeling strategy in other controlled generation tasks. Finally, we show that AESOP can effectively attack classification models, and we contribute two datasets for testing models' robustness to syntactic perturbations. We find that using AESOP to augment training data can effectively improve classification models' robustness to syntactic perturbations.
+
+# Acknowledgments
+
+Many thanks to I-Hung Hsu for his constructive suggestions and fruitful discussions about AESOP. We thank Kuan-Hao Huang, Sarik Ghazarian, Yu Hou, and the anonymous reviewers for their great feedback that improved our work.
+
+# Ethical Consideration
+
+Our proposed model AESOP utilizes a pretrained language model to generate paraphrases. Trained on massive online text, such pretrained language models are known to capture the biases reflected in their training data. Therefore, AESOP could potentially generate offensive or biased content. We suggest that interested parties carefully check the generated content before deploying AESOP in any real-world application. Note that AESOP might be used for malicious purposes because it does not have a filtering mechanism that checks the toxicity, bias, or offensiveness of the source sentences in the input. Therefore, AESOP can generate paraphrases of harmful content that may offend certain groups or individuals.
+
+Our collected datasets are based on the development sets of two public classification tasks on GLUE, including SST-2 for sentiment analysis and RTE for textual entailment. These do not contain any explicit detail that leaks information about a user's name, health, racial or ethnic origin, religious or philosophical affiliation or beliefs.
+
+# References
+
+Mingda Chen, Qingming Tang, Sam Wiseman, and Kevin Gimpel. 2019a. Controllable paraphrase generation with a syntactic exemplar. Association for Computational Linguistics.
+Mingda Chen, Qingming Tang, Sam Wiseman, and Kevin Gimpel. 2019b. A multi-task approach for disentangling syntax and semantics in sentence representations. Association for Computational Linguistics.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Association for Computational Linguistics.
+Silin Gao, Yichi Zhang, Zhijian Ou, and Zhou Yu. 2020. Paraphrase augmented task-oriented dialog generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics.
+Tanya Goyal and Greg Durrett. 2020a. Neural syntactic preordering for controlled paraphrase generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics.
+Tanya Goyal and Greg Durrett. 2020b. Neural syntactic preordering for controlled paraphrase generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics.
+James Y. Huang, Kuan-Hao Huang, and Kai-Wei Chang. 2021. Disentangling semantics and syntax in sentence embeddings with pre-trained language models. In *NAACL (short)*.
+Kuan-Hao Huang and Kai-Wei Chang. 2021. Generating syntactically controlled paraphrases without using annotated parallel pairs. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume. Association for Computational Linguistics.
+Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, and Luke Zettlemoyer. 2016. Summarizing source code using a neural attention model. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics.
+Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. Adversarial example generation with syntactically controlled paraphrase networks. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers). Association for Computational Linguistics.
+
+Zhengbao Jiang, F. F. Xu, J. Araki, and Graham Neubig. 2019. How can we know what language models know? Transactions of the Association for Computational Linguistics, 8:423-438.
+Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2020. Is bert really robust? a strong baseline for natural language attack on text classification and entailment. In AAAI.
+A. Kumar, Kabir Ahuja, Raghuram Vadapalli, and P. Talukdar. 2020. Syntax-guided controlled generation of paraphrases. Transactions of the Association for Computational Linguistics, 8:330-345.
+Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics.
+Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out. Association for Computational Linguistics.
+Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations. Association for Computational Linguistics.
+Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics.
+M. McHugh. 2012. Interrater reliability: the kappa statistic. Biochemia Medica, 22:276 - 282.
+Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics.
+Yufei Tian, Arvind krishna Sridhar, and Nanyun Peng. 2021. Hypogen: Hyperbole generation with commonsense and counterfactual knowledge. In Findings of the Association for Computational Linguistics: EMNLP.
+Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018.
+
+GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP. Association for Computational Linguistics.
+John Wieting and Kevin Gimpel. 2018. ParaNMT-50M: Pushing the limits of paraphrastic sentence embeddings with millions of machine translations. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics.
+John Wieting, Jonathan Mallinson, and Kevin Gimpel. 2017. Learning paraphrastic sentence embeddings from back-translated bitext. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics.
+Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. Association for Computational Linguistics.
+Xuewen Yang, Yingru Liu, Dongliang Xie, Xin Wang, and Niranjan Balasubramanian. 2019. Latent part-of-speech sequences for neural machine translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Association for Computational Linguistics.
+Xiaojing Yu and Anxiao Jiang. 2021. Expanding, retrieving and infilling: Diversifying cross-domain question generation with flexible templates. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume. Association for Computational Linguistics.
+K. Zhang and D. Shasha. 1989. Simple fast algorithms for the editing distance between trees and related problems. SIAM J. Comput., 18:1245-1262.
+
+# A Appendix
+
+# A.1 Implementation Details
+
+Parameters. We use a learning rate of $3 \times 10^{-5}$ to train AESOP. We use 6 encoder layers and 6 decoder layers with a model dimension of 768 and 12 attention heads. We set the maximum input sequence length to 128 and the maximum output sequence length to 62. We train each model for 25 epochs. Training takes about one day for ParaNMT-small and about half a day for QQP-Pos on one NVIDIA GeForce RTX 2080.
+
+Optimization. We use Adam $(\beta_{1} = 0.9, \beta_{2} = 0.999)$ with a linear learning-rate decay schedule for optimization. All experiments are done using the Huggingface library (Wolf et al., 2020). $^{12}$
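The linear learning-rate decay can be written as a one-line schedule. This sketch assumes decay to zero over training with no warmup, which the text does not state explicitly:

```python
def linear_decay_lr(step, total_steps, base_lr=3e-5):
    """Linearly decay the learning rate from base_lr (the paper's 3e-5)
    to 0 over total_steps; clamp at 0 past the end of training."""
    return base_lr * max(0.0, 1.0 - step / total_steps)
```

At step 0 this returns the base rate, halfway through training half the base rate, and 0 at the final step.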
+
+# A.2 Diverse Syntax with Deeper Pruning
+
+Table 7 is supplementary to Table 2. Using AESOP-H2 yields better performance in terms of the semantic-preservation metrics. We share the same finding as Section 4.1: syntactic controllability improves when we use deeper syntactic parse trees. However, the semantic-preservation metrics get worse with more fine-grained syntactic control. We hypothesize that this is because deeper-level control signals can be misleading, yet they restrict models to generating paraphrases that conform to the provided misleading syntactic signals, which impairs the ability of pretrained language models to generate fluent text.
+
+# A.3 Validity Check on Paraphrases
+
+Section A.3.1 gives more details of the human validity check in Table 2, and Section A.3.2 gives more details of the human evaluation in Table 3.
+
+# A.3.1 Validity@100 and Votes
+
+We choose the best paraphrases among the 10 generated paraphrases from SCPN, AESOP-static, and AESOP for the first 100 test instances in both datasets. For SGCP, we take its output paraphrases that use the exemplar sentences. Then, we perform a human validity check of these 400 paraphrases on the Amazon Mturk platform. For each source sentence, we provide all 4 paraphrases from these four models to three workers. In our instructions, we ask them to annotate three levels of validity: invalid paraphrase, imperfect paraphrase that does not lose key information, and perfect paraphrase. We binarize the workers' labels, counting both imperfect and perfect paraphrases as valid instances and everything else as invalid. Then, we take the majority vote of the labels among the three workers as the final label. We calculate the ratio of valid instances over 100 and report this ratio as Validity@100 in Table 2. As a supplement, Table 8 shows the breakdown of the three-level validity annotation. In addition, we ask workers to vote for the best paraphrase among the four, and report the ratio of the total votes each model receives over all 300 votes as Votes in Table 2 to reduce the influence of personal preference. We use Fleiss' kappa (McHugh, 2012) to measure the Inter-Annotator Agreement (IAA). The IAA for Validity@100 is 0.63, which indicates substantial agreement among workers.
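The binarization and majority vote described above can be sketched as follows (function names are ours; the labels are the three annotation levels from the instructions):

```python
from collections import Counter


def binarize(label):
    """Collapse the three-level annotation to valid/invalid:
    both imperfect and perfect paraphrases count as valid."""
    return label in ("imperfect", "perfect")


def majority_valid(worker_labels):
    """Final label for one paraphrase: majority vote over the
    binarized labels of its (three) workers."""
    votes = Counter(binarize(l) for l in worker_labels)
    return votes[True] > votes[False]


def valid_at_n(all_labels):
    """Percentage of instances judged valid, e.g. Valid@100
    when `all_labels` covers 100 instances."""
    decisions = [majority_valid(ls) for ls in all_labels]
    return 100.0 * sum(decisions) / len(decisions)
```

With three workers per instance there are no ties, so the majority vote is always decisive.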
+
+Mturk Setup Details. We require workers to be located in the US, to have completed no fewer than 5000 HITs, and to have an approval rate of at least $98\%$. One HIT contains 10 instances, and three respondents (workers) complete each HIT. For payment, we pay workers \$0.8 per HIT with a potential bonus of \$1 if they participate in more than 5 HITs published by us.
+
+# A.3.2 Validity@500
+
+The annotators of the human evaluation in Section 4.3 are three graduate students from our institute. None of them is involved in this project. Two of them work on the validity checks for ParaNMT-small and QQP-Pos, and one student works on both. We check their understanding of paraphrases before the study and instruct them to label a paraphrase as valid only when it is natural, fluent, and preserves the semantics of the source sentence. To measure the Inter-Annotator Agreement (IAA), we randomly select 50 (source sentence, paraphrase) pairs and ask them to independently annotate whether each is a valid paraphrase. We count a pair as an agreement if the annotators assign it the same label (either valid or invalid). The average IAA among the three of them is 0.9, which indicates good agreement. Then, we have these three annotators label all sampled instances for Table 2. After annotation, we count a paraphrase as valid only if both of its two annotators consider it valid.
+
+| Dataset | Model | BLEU | ROUGE-1 | ROUGE-2 | ROUGE-L | METEOR | TED-E |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| ParaNMT-small | AESOP-H2 | 15.0 | 47.0 | 21.3 | 47.3 | 26.1 | 2.6 |
+| | AESOP-H3 | 12.1 | 41.5 | 16.7 | 42.6 | 22.8 | 2.3 |
+| | AESOP-H4 | 8.4 | 33.2 | 10.9 | 35.3 | 18.3 | 1.5 |
+| QQP-Pos | AESOP-H2 | 24.6 | 56.2 | 31.5 | 57.6 | 32.8 | 0.7 |
+| | AESOP-H3 | 22.5 | 54.8 | 29.7 | 56.1 | 31.5 | 0.8 |
+| | AESOP-H4 | 19.9 | 51.4 | 25.9 | 52.0 | 30.6 | 1.1 |
+
+Table 7: Supplementary results to Table 2. When we use deeper levels of the syntactic parse trees, the syntactic controllability of AESOP improves. However, the semantic preservation metrics get worse, because deeper control signals can be misleading while still restricting the model to generate paraphrases that conform to them.
+
+| Dataset | Model | Invalid | Imperfect | Perfect |
+| --- | --- | --- | --- | --- |
+| QQP-Pos | SGCP | 59 | 19 | 22 |
+| | SCPN | 68 | 12 | 20 |
+| | AESOP-static | 43 | 23 | 34 |
+| | AESOP | 39 | 24 | 37 |
+| ParaNMT-small | SGCP | 70 | 21 | 9 |
+| | SCPN | 46 | 24 | 30 |
+| | AESOP-static | 38 | 20 | 42 |
+| | AESOP | 32 | 27 | 41 |
+
+Table 8: Three-level validity annotation breakdowns for Validity@100.
+
+# A.4 Case Study with Invalid Target Syntax
+
+Strict conformity to an inappropriate target syntactic parse sometimes leads to loss of semantics and abrupt termination of sentences, which hurts the goal of generating fluent and natural paraphrases, as discussed in Section 4.2. For example, given the input sentence *i had a dream yesterday and it was about you* and a target syntactic parse of height 2, (ROOT(S(ADVP)(NP)(VP)(.))), SCPN generates *maybe it was about you .*, which has the same syntactic parse as the target, while AESOP generates *you were in my dream last night .*, whose syntactic parse at height 2 is (ROOT(S(NP)(VP)(.))).
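To make the height-2 templates above concrete: such a template can be obtained by pruning a full bracketed constituency parse to a fixed height. Below is a small self-contained sketch; the function names and the string-based tree handling are our own illustration, not the AESOP codebase:

```python
def parse_brackets(s):
    """Parse a bracketed constituency string into (label, children) tuples."""
    tokens = s.replace("(", " ( ").replace(")", " ) ").split()

    def helper(i):
        assert tokens[i] == "("
        label, i = tokens[i + 1], i + 2
        children = []
        while tokens[i] != ")":
            if tokens[i] == "(":
                child, i = helper(i)
                children.append(child)
            else:
                i += 1  # skip terminal words
        return (label, children), i + 1

    tree, _ = helper(0)
    return tree

def template(node, height):
    """Render only the top `height` levels of nonterminal labels."""
    label, children = node
    if height <= 1 or not children:
        return "(" + label + ")"
    return "(" + label + "".join(template(c, height - 1) for c in children) + ")"

def syntax_template(parse_str, h):
    # h counts levels below ROOT, matching the paper's "height" convention
    return template(parse_brackets(parse_str), h + 1)
```

For instance, pruning `"(ROOT (S (NP (PRP you)) (VP (VBD were)) (. .)))"` at height 2 yields `(ROOT(S(NP)(VP)(.)))`, the template quoted above.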
+
+# A.5 Qualitative Comparison
+
+We provide a qualitative comparison between AESOP and other competitive paraphrase generation models, both with and without exemplar sentences, in Table 9. With ground-truth syntactic control (Setting I), AESOP generates paraphrases that are closer to the ground-truth paraphrases. Without ground truth, AESOP generates diverse paraphrases that are more natural and better preserve the semantics than those of SCPN.
+
+# A.6 Adversarial Set Collection
+
+We contribute two datasets constructed from AESOP in Section 6 by crowd-sourcing. We collect all adversarial examples that successfully attacked the models, as shown in the *all* column of Table 5, and put them on Amazon MTurk to annotate whether the paraphrases are valid. We require that workers be located in the US, have completed no fewer than 5,000 HITs, and have an approval rate of at least $98\%$ . One HIT contains 12 instances, and 3 respondents (workers) work on each HIT. For payment, we pay workers $\$0.4$ per HIT as a qualification test. After selecting qualified workers, we pay them $\$1$ per HIT, with another potential bonus of $\$1$ if they participate in more than 5 HITs published by us. On average, experienced workers spent around 10 minutes completing one HIT, which means our payment is above the federal minimum wage in the US.
+
+Instruction and Annotation. As sentiment analysis on SST-2 is intuitive, we list examples as instructions to guide the annotation. We count an instance as an agreement if all three workers give it the same label (i.e., positive, negative or undecided), and we calculate IAA as the ratio of agreements over all instances for qualified workers. The average IAA of the three workers over all instances is 0.8, which indicates good agreement. During dataset collection, we use the majority vote to decide the final label of an instance. For textual entailment on the RTE dataset, we refer to the original guideline of RTE-4 $^{13}$ to explain the textual entailment task itself with examples. The IAA for the RTE annotation is 0.71.
+
+# A.7 AESOP Helps to Improve the Decision Boundary
+
+We conduct a study on how augmenting the training data would influence models' decision boundaries. More specifically, we test BERT models before and after augmentation with AESOP, on the combination of the original gold test set and our collected adversarial datasets on two downstream tasks. For
+
+| Dataset | Model | Exemplar/Selected Target Parses | Generated Paraphrases |
+| --- | --- | --- | --- |
+| QQP-Pos | | *source sentence: what is the best way to get manchester united tickets?* | |
+| | SGCP | (ROOT (FRAG (NP (NP (NNS )) (NP (DT ) (NN ))) (.))) | whats the way ? |
+| | AESOP-H4 | (ROOT (FRAG (NP (NP (NNS )) (NP (DT ) (NN ))) (.))) | how can i get free manchester united tickets ? |
+| | SCPN | (ROOT (S (NP ) (VP ) (.)) | that 's the best way to get manchester united tickets ? |
+| | | (ROOT (FRAG (SBAR ) (.)) | what 's the best way to get manchester ? |
+| | AESOP | (ROOT (SBARQ (WHADVP ) (SQ ) (.))) | how can i get free manchester united tickets ? |
+| | | (ROOT (SQ (VBZ ) (NP ) (VP ) (.))) | is there any way to get free manchester united tickets ? |
+| ParaNMT-small | | *source sentence: by his side crouched a huge black wolfish dog .* | |
+| | SGCP | (ROOT (S (NP (DT ) (JJ ) (JJ ) (NN )) (VP (VBZ ) (PP (IN ) (NP )) (.))) | his side waving a huge black dog . |
+| | AESOP-H4 | (ROOT (S (NP (DT ) (JJ ) (JJ ) (NN )) (VP (VBZ ) (PP (IN ) (NP )) (.))) | a large black dog sits beside him . |
+| | SCPN | (ROOT (S (NP ) (VP ) (.)) | his side was a huge black dog . |
+| | | (ROOT (NP (NP ) (.)) | a huge black dog on his side . |
+| | AESOP | (ROOT (S (S ) (NP ) (VP ) (.))) | there was a big black wolf lying next to him . |
+| | | (ROOT (NP (NP ) (.))) | a large , black , wolf like dog lay beside him . |
+
+Table 9: A qualitative comparison of paraphrases generated with or without exemplar sentences. SGCP and AESOP-H4 use target syntactic parses from exemplar sentences to guide generation. SCPN uses fixed target syntactic templates, while AESOP retrieves target syntactic parses automatically.
+
+
+Figure 5: AESOP helps to improve the model decision boundary: (a) SST before augmentation; (b) SST after augmentation; (c) RTE before augmentation; (d) RTE after augmentation. For visualization, we use t-SNE to reduce the dimension of the [CLS] token from the last layer of the BERT model, combining the collected data and the dev sets of SST-2 and RTE.
+
+visualization, we use t-SNE to reduce the dimension of the [CLS] token from the last layer of the BERT model. Figure 5 shows that AESOP helps BERT models learn a clearer decision boundary, which is also indicated by Table 5 in the main content.
\ No newline at end of file
diff --git a/aesopparaphrasegenerationwithadaptivesyntacticcontrol/images.zip b/aesopparaphrasegenerationwithadaptivesyntacticcontrol/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..4f1a09021d6e49b273c1b9a6984333521f255bb2
--- /dev/null
+++ b/aesopparaphrasegenerationwithadaptivesyntacticcontrol/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8bbc22db7ade5fefcad245b5f1af1ae7c0a9ae266c672bcf50ac8cc620a6b19e
+size 882263
diff --git a/aesopparaphrasegenerationwithadaptivesyntacticcontrol/layout.json b/aesopparaphrasegenerationwithadaptivesyntacticcontrol/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..c6c4e348bce8e33679589e034743f03c2c591d62
--- /dev/null
+++ b/aesopparaphrasegenerationwithadaptivesyntacticcontrol/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:519caa5a8abfbc032dbe1a741a5fa5e4709abf50e10989b54150e38062c8b2b1
+size 407166
diff --git a/afinegraineddomainadaptionmodelforjointwordsegmentationandpostagging/820849dc-81f0-468f-a78f-08c5b047087b_content_list.json b/afinegraineddomainadaptionmodelforjointwordsegmentationandpostagging/820849dc-81f0-468f-a78f-08c5b047087b_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..ac771f3dd045f7c51398d612089c3a97a4675ccd
--- /dev/null
+++ b/afinegraineddomainadaptionmodelforjointwordsegmentationandpostagging/820849dc-81f0-468f-a78f-08c5b047087b_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:491c33f42cee81803441d87d63713fda630bb4c17c880ab05e9d31f1be09218e
+size 86399
diff --git a/afinegraineddomainadaptionmodelforjointwordsegmentationandpostagging/820849dc-81f0-468f-a78f-08c5b047087b_model.json b/afinegraineddomainadaptionmodelforjointwordsegmentationandpostagging/820849dc-81f0-468f-a78f-08c5b047087b_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..5e4971fb555ba127f8f104085b8682c52faa8e89
--- /dev/null
+++ b/afinegraineddomainadaptionmodelforjointwordsegmentationandpostagging/820849dc-81f0-468f-a78f-08c5b047087b_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3d1f16a9f6e2226f7d93520c183c93a52f8ab01966cb04350d647a5728183802
+size 104328
diff --git a/afinegraineddomainadaptionmodelforjointwordsegmentationandpostagging/820849dc-81f0-468f-a78f-08c5b047087b_origin.pdf b/afinegraineddomainadaptionmodelforjointwordsegmentationandpostagging/820849dc-81f0-468f-a78f-08c5b047087b_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..5410c858af70a70dda299d9ebcb81742f5ce1a4c
--- /dev/null
+++ b/afinegraineddomainadaptionmodelforjointwordsegmentationandpostagging/820849dc-81f0-468f-a78f-08c5b047087b_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:35aa015e4623f5395f824ceeca5f61105d32be38b131e64764459aaed2d4a988
+size 424820
diff --git a/afinegraineddomainadaptionmodelforjointwordsegmentationandpostagging/full.md b/afinegraineddomainadaptionmodelforjointwordsegmentationandpostagging/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..f0e2a46e90ac9efaac843edb6c3e9cdc8c8db0af
--- /dev/null
+++ b/afinegraineddomainadaptionmodelforjointwordsegmentationandpostagging/full.md
@@ -0,0 +1,360 @@
+# A Fine-Grained Domain Adaption Model for Joint Word Segmentation and POS Tagging
+
+Peijie Jiang $^{1}$ Dingkun Long Yueheng Sun $^{2}$ Meishan Zhang $^{1*}$ Guangwei Xu Pengjun Xie
+
+$^{1}$ School of New Media and Communication, Tianjin University, China
+
+$^{2}$ College of Intelligence and Computing, Tianjin University, China
+
+{jzx555,yhs,zhangmeishan}@tju.edu.cn
+
+{longdingkun1993,ahxgwOnePiece,xpjandy}@gmail.com
+
+# Abstract
+
+Domain adaption for word segmentation and POS tagging is a challenging problem for Chinese lexical processing. Self-training is one promising solution, whose key lies in constructing a set of high-quality pseudo training instances for the target domain. Previous work usually assumes a universal source-to-target adaption to collect such a pseudo corpus, ignoring that different target sentences have different gaps to the source domain. In this work, we start from joint word segmentation and POS tagging, presenting a fine-grained domain adaption method to model the gaps accurately. We measure the gaps by one simple and intuitive metric, and adopt it to develop a pseudo target-domain corpus based on fine-grained subdomains incrementally. A novel domain-mixed representation learning model is proposed accordingly to encode the multiple subdomains effectively. The whole process is performed progressively for both corpus construction and model training. Experimental results on a benchmark dataset show that our method can achieve significant improvements over a variety of baselines. Extensive analyses are performed to show the advantages of our final domain adaption model as well.
+
+# 1 Introduction
+
+Chinese Word Segmentation (CWS) and Part-Of-Speech (POS) tagging are two fundamental tasks for natural language processing (NLP) in Chinese (Emerson, 2005; Jin and Chen, 2008), serving as backbones for a number of downstream NLP tasks. Joint models of the two tasks can lead to better performance because the tasks are closely related, and pipeline models suffer from the error propagation problem (Ng and Low, 2004; Zhang and Clark, 2008; Wang et al., 2011; Zeng et al., 2013; Zhang et al., 2018; Tian et al., 2020a), which can be alleviated in a joint architecture.
+
+
+Figure 1: The idea of fine-grained domain adaption.
+
+Currently, joint CWS and POS tagging has achieved great results with BERT inputs (Tian et al., 2020a,b). Our preliminary results show that the F1-score of joint POS tagging can be close to $95\%$ when the training and test corpora both belong to a standard newswire domain. Unfortunately, this is not always the case in real applications. The performance can degrade dramatically when the source and target domains are highly different. Taking ZhuXian (a novel from the Internet) as an example (Zhang et al., 2014), the same model can only obtain an F1-score of $89\%$ for POS tagging according to our results.
+
+It is a typical domain adaption problem targeted at joint CWS and POS tagging. Self-training could be one promising solution (Inoue et al., 2018; Zou et al., 2019; Saito et al., 2020), as it can accomplish the goal in a fully automatic manner without any human intervention (Liu and Zhang, 2012). By using a source model to automatically label a large-scale raw corpus of the target domain, and then selecting a set of high-confidence pseudo-labeled instances as additional training data, we can obtain boosted performance on the target domain. The quality of the pseudo corpus is the key to success. For target sentences which are far from the source domain, the corpus generated from them might be of extremely low quality (Shu et al., 2018; Zhao et al., 2019). Thus, these sentences are either filtered out, resulting in a corpus biased with respect to the target domain, or kept with heavy noise that degrades the overall target performance.
+
+In this work, we suggest a fine-grained domain
+
+adaption method to alleviate the above problem of self-training. We define a simple and intuitive metric to measure the distance (gap) of a target sentence to the source domain. Based on the metric, we create a set of high-quality training corpora incrementally according to the distances of the target sentences to the source domain. Figure 1 shows the main idea. The process is conducted over several iterations in a progressive manner, where at each new iteration we add a small set of high-quality instances that are only slightly more distant than those of the previous iteration. Finally, we arrive at a training corpus fully covering target sentences of various distances. At each iteration, we go only a little further by the distance, so the quality of the pseudo corpus can be largely ensured by the previous model.
+
+By the fine-grained domain adaption, we obtain a training corpus of multiple types from different iterations, where each type differs from the others in both quality and input distribution. In the early iterations, the produced instances tend to be of higher quality and close to the source domain, while in the later iterations the quality may be lower and the distance to the source domain larger. To make full use of this corpus together with the source training set, we present a domain-mixed model for sophisticated representation learning that captures domain-aware and domain-invariant features (Daumé III, 2007; Ganin et al., 2016; Tzeng et al., 2017), which is also strengthened progressively by the incremental style of the fine-grained domain adaption.
+
+We conduct experiments on the benchmark ZhuXian dataset (Zhang et al., 2014) to show the effectiveness of our method. In detail, the Penn Chinese Treebank (Xue et al., 2005) version 6.0 (CTB6) is used as the source corpus, belonging to the newswire domain, while the target ZhuXian corpus is from an Internet novel. Experimental results show that our fine-grained domain adaption is significantly better than previous self-training approaches. Moreover, we find that our domain-mixed representation learning model suits the fine-grained framework perfectly. We also conduct extensive analyses to understand our model comprehensively. We will release our code at github.com/JZX555/FGDA under the Apache License 2.0 to facilitate reproduction.
+
+# 2 Joint CWS and POS Tagging
+
+This section describes the basic model of our joint CWS and POS tagging. Concretely, we regard our
+
+joint task as a character-level sequence labeling problem following Tian et al. (2020a). Given an input character sequence $X = [x_{1},\dots,x_{n}]$ , the output labels $Y = [y_{1},\dots,y_{n}]$ are concatenations of word boundaries (i.e., BMES) and POS tags for all sentential characters. We exploit an ADBERT-BiLSTM-CRF model as our basic model, which is very strong in performance and highly parameter efficient. The model includes two parts sequentially: (1) ADBERT for character representation, (2) BiLSTM-CRF for feature extraction, label inference and training. Below, we introduce the ADBERT directly and the BiLSTM-CRF is exactly the same as Tian et al. (2020a) which can be referred to in their work for the details.
+
+Adapter $\circ$ BERT We exploit BERT (Devlin et al., 2019) to derive character representations for a given sentence $X = [x_{1},\dots,x_{n}]$ , as it brings state-of-the-art performance to a range of Chinese language processing tasks. In particular, we patch BERT with adapters (Houlsby et al., 2019) inside all the included transformer units. In this way, fine-tuning the BERT parameters is no longer necessary across different tasks, and we only need to tune the adapter parameters. More particularly, we let all adapters across different transformer units share a single set of parameters to reduce the scale of tunable model parameters for our joint task. We refer to this method as ADBERT:
+
+$$
+\boldsymbol {e} _ {1}, \dots , \boldsymbol {e} _ {n} = \mathrm {A D B E R T} (X = [ x _ {1}, \dots , x _ {n} ]), \quad (1)
+$$
+
+where the detailed network of transformer with adapters is illustrated in our Appendix A.
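As a rough illustration of what one adapter unit computes, here is a minimal bottleneck-adapter sketch in numpy; the dimensions, the ReLU activation, and the omission of layer normalization are our assumptions for illustration, not details from the paper:

```python
import numpy as np

def adapter(h, w_down, b_down, w_up, b_up):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual add.

    h:      (seq_len, d_model) hidden states from a transformer sublayer
    w_down: (d_model, d_bottleneck) down-projection, w_up: (d_bottleneck, d_model)
    """
    z = np.maximum(h @ w_down + b_down, 0.0)  # ReLU bottleneck
    return h + z @ w_up + b_up                # residual connection
```

With shared adapter parameters, the same `(w_down, w_up)` pair would be reused in every transformer unit, which is what keeps the number of tunable parameters small.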
+
+# 3 Our Method
+
+The above joint CWS and POS tagging model performs well in the standard setting, where the test domain is similar to the training domain (Tian et al., 2020a,b). However, the performance can degrade dramatically when the test (i.e., target) domain differs significantly from the training (i.e., source) domain. There have been two studies on cross-domain joint CWS and POS tagging (Liu and Zhang, 2012; Zhang et al., 2014), both of which exploit self-training due to its effectiveness as well as its simplicity for domain adaption. Self-training aims to produce a set of high-confidence training instances of the target domain, which are used to train a target model. Here we follow this line of work, presenting a novel fine-grained domain adaption strategy.
+
+The fine-grained domain adaption is an extension of the standard self-training, aiming to produce a helpful pseudo training corpus of the target domain. The line of work is essentially orthogonal to the representation learning methods which aim to learn sophisticated (e.g., domain-aware and domain-invariant) features for domain adaption. Thus, we also present a novel domain-mixed model based on the basic ADBERT-BiLSTM-CRF for effective exploration of our fine-grained domain adaption. In the following, we first describe the fine-grained domain adaption method in detail, and then introduce our representation learning model.
+
+# 3.1 Fine-Grained Domain Adaption
+
+The overall flow of self-training includes three steps: (1) first, we train an initial model on the source corpus; (2) second, we apply the source model to a large-scale raw corpus, obtaining auto-labeled pseudo instances of the target domain; (3) finally, we select a set of high-confidence instances from the pseudo corpus, which are added to train the target model. The flow can be conducted repeatedly over several iterations, where the step-1 model is retrained with the progressively added step-3 instances. However, according to our preliminary results, plain iterative self-training achieves only very marginal improvement.
+
+The reason may lie in that the above process cannot ensure the quality of the selected instances, especially when the input target sentences are very distant from the source domain (Sohn et al., 2020). The step-1 models do not perform well on these sentences without any specialization. If these sentences are excluded because of their low quality, the final target model is trained on a biased corpus; if they are added into the target training corpus, great noise is introduced which degrades the overall performance. To address this problem, we propose a fine-grained domain adaption strategy to alleviate the influence of the large gaps during automatic corpus construction.
+
+Concretely, we guide the iterative self-training by a specific explicit distance metric. At each iteration, we add a set of high-confidence pseudo instances whose distances are only a little larger than the previous iteration. The sentences during each selection can be regarded as from a special fine-grained subdomain of the target domain. By this way, the target model is gradually adapted to
+
+Algorithm 1: Fine-Grained Adaption
+Data: Source domain training dataset $S$ ; target domain raw corpus $D_{1}$
+Output: Latest model $M$
+1 Initial training dataset $\mathrm{T}_1 = S$
+2 for $i = 1,2,3,\dots$ until convergence do
+3 Model training: $M_{i} = \mathrm{Train}(\mathrm{T}_{i})$
+4 Data auto-labeling: $\hat{D}_i = M_i(D_i)$
+5 Lexicon update: $L_{\mathrm{tgt}} = L_{\mathrm{tgt}}\cup L_{\mathrm{top\text{-}K}}(\hat{D}_i)$
+6 Collect the $i$ th auto-labeled instances: $\mathrm{ST}_i = \{\}$
+7 foreach instance $(\hat{X},\hat{Y})$ in $\hat{D}_i$ do
+8 $\mathrm{C_{oov}}$ : numOOV $\leq i$
+9 $\mathrm{C_{lex}}$ : all OOVs in $L_{\mathrm{tgt}}$
+10 $\mathrm{C_{conf}}$ : $p(\hat{Y} \mid \hat{X})\geq p_{\mathrm{threshold}}$
+11 if $\mathrm{C_{oov}}$ && $\mathrm{C_{lex}}$ && $\mathrm{C_{conf}}$ then
+12 $\mathrm{ST}_i = \mathrm{ST}_i \cup \{(\hat{X},\hat{Y})\}$
+13 end
+14 end
+15 $\mathrm{T}_{i + 1} = \mathrm{T}_i \cup \mathrm{ST}_i$
+16 $D_{i + 1} = D_{i}\backslash \mathrm{ST}_{i}.X$
+17 end
+
+the distant sentences far from the source domain, producing a higher-quality corpus over various distances. In contrast to direct source-to-target adaption, we adopt the OOV number (i.e., the number of newly generated words that are out of the training vocabulary) as the distance measurement, which is highly simple and intuitive. We construct a set of high-quality automatic corpora by progressively moving from zero- or one-OOV target sentences to large-OOV-number target sentences.
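The distance metric itself fits in a few lines (our own illustration; the variable names are assumptions):

```python
def oov_distance(predicted_words, source_vocab):
    """Distance of a target sentence to the source domain: the number of
    predicted words that never appear in the source training vocabulary."""
    return sum(1 for w in predicted_words if w not in source_vocab)
```

At iteration `i`, only sentences with `oov_distance(...) <= i` pass the OOV condition, so the pseudo corpus drifts away from the source domain one OOV at a time.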
+
+Algorithm 1 shows the pseudo code of fine-grained domain adaption. Initially, we set the first-iteration training dataset to the source corpus $S$ , and then execute the pseudo code of lines 3-16 repeatedly. First, we train a model $M_{i}$ on the current-iteration training dataset $\mathrm{T}_i$ , and apply the model to the remaining raw corpus of the target domain, resulting in the auto-labeled corpus $\hat{D}_i$ , as shown by lines 3-4. Next, we conduct a lexicon-building process at line 5, which is used for quality assurance. At each iteration, we collect a set of top-K confident word-POS pairs $L_{\mathrm{top\text{-}K}}$ by their weighted frequencies in $\hat{D}_i$ , which are added to the target lexicon $L_{\mathrm{tgt}}$ . Then, the key arrives
+
+
+Figure 2: The structure of the domain-mixed model, where the four objectives are defined in Equation 4.
+
+at lines 6-15, where a new training dataset $\mathrm{ST}_i$ is selected, advancing the training corpus to $\mathrm{T}_{i + 1}$ . We traverse all instances in $\hat{D}_i$ and add those which simultaneously satisfy $\mathrm{C_{oov}}$ , $\mathrm{C_{lex}}$ and $\mathrm{C_{conf}}$ , where $\mathrm{C_{oov}}$ constrains the OOV number to control the distance to the source domain, and $\mathrm{C_{lex}}$ and $\mathrm{C_{conf}}$ ensure instance quality. Finally, at line 16, we remove the selected instances from the target-domain corpus and start the next iteration.
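The selection step of Algorithm 1 (lines 6-15) can be sketched as follows; the confidence scores and data structures are our own assumptions for illustration:

```python
def select_instances(labeled, source_vocab, target_lexicon, iteration,
                     conf_threshold=0.9):
    """One round of fine-grained selection.

    labeled: list of (words, tags, confidence) triples auto-labeled by M_i.
    Returns the instances satisfying C_oov, C_lex and C_conf.
    """
    selected = []
    for words, tags, confidence in labeled:
        oov = [w for w in words if w not in source_vocab]
        c_oov = len(oov) <= iteration                   # distance grows with i
        c_lex = all(w in target_lexicon for w in oov)   # OOVs vetted by lexicon
        c_conf = confidence >= conf_threshold           # model confidence p(Y|X)
        if c_oov and c_lex and c_conf:
            selected.append((words, tags))
    return selected
```

Because the OOV budget is `iteration`, each round admits sentences only slightly more distant than the previous one, which is exactly what keeps the pseudo corpus quality under control.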
+
+# 3.2 Our Domain-Mixed Model
+
+By fine-grained domain adaptation, we can obtain a training corpus of multiple types (i.e., $S$ , $\mathrm{ST}_1, \dots, \mathrm{ST}_n$ in Algorithm 1, where $n$ denotes the last iteration), where each type corresponds to a domain (i.e., $S$ ) or a subdomain (i.e., $\mathrm{ST}_n$ ). Thus, the exploration of the training corpus can be regarded as multi-source domain adaption (Zhang et al., 2015; Sun et al., 2015). To better explore the corpus, we propose a novel domain-mixed model to fully benefit from the fine-grained domain adaptation.
+
+Our domain-mixed model follows a standard representation learning framework of domain adaptation, which attempts to capture effective domain-aware and domain-invariant features. Figure 2 shows the overall architecture of the model, which includes two individual ADBERT-BiLSTM-CRF components used for domain-aware and domain-invariant feature learning, respectively. Both feature learning modules are adapted at the ADBERT, and a shared BiLSTM-CRF is exploited across the two components. Below, we introduce the (sub)domain-aware and (sub)domain-invariant components, respectively, and then describe the overall inference and training.
+
+The (Sub)Domain-Aware Component A major problem of our basic ADBERT-BiLSTM-CRF model is that it treats all (sub)domain types of our final training corpus equally. Here we take the (sub)domain types as inputs along with the sentences to derive domain-aware features. Concretely, we follow Jia et al. (2019) and Üstün et al. (2020), exploiting a Parameter Generator Network (PGN) on the adapter layers, which generates (sub)domain-aware parameters for the adapters inside the ADBERT.
+
+We pack all parameters of the adapter layers into a single vector $V$ by reshaping and concatenation, which can be unpacked exactly for adapter calculation. As shown in Figure 2(a), we refer to ADBERT with PGN as PGN-ADBERT. Denoting the input sentence and (sub)domain type pair by $(X,\mathrm{dt})$ , the overall calculation of the (sub)domain-aware character representations is formalized as follows:
+
+$$
+e_{1}^{\mathrm{dm}}, \dots, e_{n}^{\mathrm{dm}} = \mathrm{PGN\text{-}ADBERT}(X, \mathrm{dt}) = \mathrm{ADBERT}(X, V = \Theta e^{\mathrm{dt}}), \tag{2}
+$$
+
+where $\Theta$ is a learnable parameter of this component, $e^{\mathrm{dt}}$ is the (sub)domain type embedding, and PGN-ADBERT is a special case of ADBERT with the specified module parameters $V$ . The resulting representations are then fed into the BiLSTM-CRF for our joint task.
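A minimal numpy sketch of the parameter generation in Equation 2; the shapes and the unpacking order are our assumptions for illustration:

```python
import numpy as np

def generate_adapter_params(theta, domain_emb, shapes):
    """PGN: generate the flat adapter parameter vector V = theta @ e_dt,
    then unpack it back into the individual adapter weight matrices.

    theta:      (P, d_dt) learnable tensor, P = total adapter parameter count
    domain_emb: (d_dt,)   embedding of the (sub)domain type dt
    shapes:     list of adapter weight shapes, e.g. [(768, 64), (64, 768)]
    """
    v = theta @ domain_emb
    params, offset = [], 0
    for shape in shapes:
        size = int(np.prod(shape))
        params.append(v[offset:offset + size].reshape(shape))
        offset += size
    return params
```

Each (sub)domain thus gets its own adapter weights while sharing the single generator $\Theta$, which is what makes the component (sub)domain-aware without duplicating the whole model per subdomain.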
+
+The (Sub)Domain-Invariant Component The domain-invariant features have been extensively investigated because of their generalization capability across different domains (Daumé III, 2007). Here we present a (sub)domain-invariant component to learn these general features across our source domain and fine-grained target subdomains, parallel to the (sub)domain-aware component. Figure 2(b) shows the architecture of this part. Firstly, the character inputs $X$ go through ADBERT, deriving the domain-invariant features $e_1^{\mathrm{iv}}, \ldots, e_n^{\mathrm{iv}}$ , and then we reconstruct the domain-aware features $\bar{e}_1^{\mathrm{dm}}, \ldots, \bar{e}_n^{\mathrm{dm}}$ by specifying the input (sub)domain type dt, which are then fed into BiLSTM-CRF for our joint task following our basic model.
+
+The domain-invariant features $e_1^{\mathrm{iv}}, \ldots, e_n^{\mathrm{iv}}$ are learned in an adversarial manner (Ganin and Lempitsky, 2015; Ganin et al., 2016) with sentence-level (sub)domain type classification. We derive a sentence-level representation $v$ by average pooling over these features, and then determine the (sub)domain type of the input sentence with a simple linear classifier. Note that we intentionally cheat the classifier to make $v$ domain-irrelevant, aiming to obtain good domain-invariant features.
+
+Naturally, the domain-invariant component tries to reconstruct and approximate the domain-aware component, since they share the same decoding part. We unite the domain-invariant features $e_1^{\mathrm{iv}}, \ldots, e_n^{\mathrm{iv}}$ and the (sub)domain type dt to reconstruct the domain-aware features, which are then used for our joint task. The advantage of this design is that we can maximize the capacity of the domain-invariant features and further enhance the interaction between the domain-aware and domain-invariant features.
+
+Concretely, the reconstruction is implemented by a variational module with reparameterization (Kingma and Welling, 2014). Given the (sub)domain type dt and the character representation $e_i^{\mathrm{iv}}(i \in [1, n])$ , the domain-aware representation can be calculated by:
+
+$$
+\mu_{i} = \mathrm{BiAffine}_{\mathrm{mean}}\left(e_{i}^{\mathrm{iv}}, e^{\mathrm{dt}}\right),
+$$
+
+$$
+\log\left(\sigma_{i}^{2}\right) = \mathrm{BiAffine}_{\mathrm{var}}\left(e_{i}^{\mathrm{iv}}, e^{\mathrm{dt}}\right), \tag{3}
+$$
+
+$$
+\bar{e}_{i}^{\mathrm{dm}} \sim \mathcal{N}\left(\mu_{i}, \sigma_{i}^{2}\right),
+$$
+
+where we use BiAffine operations to generate a Gaussian distribution and then sample the domain-aware features $\overline{e}_i^{\mathrm{dm}}$ from the distribution.
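The sampling in Equation 3 is made differentiable via the reparameterization trick of Kingma and Welling (2014); a minimal numpy sketch (shapes are assumptions):

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Sample e ~ N(mu, sigma^2) as mu + sigma * eps with eps ~ N(0, I),
    so gradients can flow through mu and log_var during training."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps
```

Predicting `log_var` rather than the variance itself keeps the standard deviation positive without any explicit constraint, which is the usual design choice in variational modules.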
+
+# 3.3 Inference and Training
+
+We regard the (sub)domain-aware component as our major component, which outputs the final joint CWS and POS tagging results. The (sub)domain-invariant component is an auxiliary component to help the learning of the major one. Intuitively, through an alignment between the major and auxiliary components, the learned features of our major component can be naturally decomposed into domain-aware and domain-invariant features.
+
+Inference For inference, we use the (sub)domain types of $S$ and $\mathrm{ST}_n$ (i.e., the last fine-grained subdomain type) to perform decoding of the source and target domains, respectively.
+
+Training We exploit four optimization objectives for training, as shown in Figure 2:
+
+$$
+\mathcal{L}_{\mathrm{major}}(X, Y, \mathrm{dt}) = -\log p_{\mathrm{major}}(Y \mid X, \mathrm{dt}),
+$$
+
+$$
+\mathcal{L}_{\mathrm{aux}}(X, Y, \mathrm{dt}) = -\log p_{\mathrm{aux}}(Y \mid X, \mathrm{dt}), \tag{4}
+$$
+
+$$
+\mathcal{L}_{\mathrm{adv}}(X, \mathrm{dt}) = \log p_{\mathrm{adv}}(\mathrm{dt} \mid X),
+$$
+
+$$
+\mathcal{L}_{\mathrm{mse}}(X) = \left\| E^{\mathrm{dm}} - \bar{E}^{\mathrm{dm}} \right\|^{2},
+$$
+
+| Data Set | Split | #sents | #words | #chars |
+| --- | --- | --- | --- | --- |
+| CTB6 | Train | 23,401 | 641,372 | 1,055,586 |
+| | Devel | 2,078 | 59,929 | 100,276 |
+| | Test | 2,795 | 81,579 | 134,149 |
+| ZX | Test | 1,394 | 34,355 | 48,075 |
+| | Raw | 32,023 | N/A | 1,417,418 |
+
+Table 1: Data statistics of CTB6 and ZhuXian.
+
+where the first two are the losses of the two joint CWS and POS tagging components, the third is the adversarial loss that deceives the (sub)domain type classification, and the last minimizes the distance between the domain-aware features of the two components, leading to highly similar (aligned) character representations via variational reconstruction. Further, we sum the four objectives together:
+
+$$
+\begin{array}{l} \mathcal {L} = \mathcal {L} _ {\text {m a j o r}} (X, Y, \mathrm {d t}) + \mathcal {L} _ {\text {a u x}} (X, Y, \mathrm {d t}) \tag {5} \\ + \lambda_ {1} \mathcal {L} _ {\mathrm {a d v}} (X, \mathrm {d t}) + \lambda_ {2} \mathcal {L} _ {\mathrm {m s e}} (X), \\ \end{array}
+$$
+
+resulting in the final objective of our domain-mixed model, where $\lambda_{1}$ and $\lambda_{2}$ are two hyperparameters.
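Under the assumption of toy loss values (in the real model the first two terms come from the CRF decoders and the third from the domain classifier), the combination in Equation 5 can be sketched as:

```python
import numpy as np

def total_loss(nll_major, nll_aux, adv_logp, mse, lam1, lam2):
    """Eq. (5): L = L_major + L_aux + lam1 * L_adv + lam2 * L_mse."""
    return nll_major + nll_aux + lam1 * adv_logp + lam2 * mse

# Toy stand-ins: NLLs of the two tagging components, the adversarial
# log-probability of the (sub)domain type, and the feature-alignment MSE.
E_dm = np.ones((3, 4))       # domain-aware features of the major component
E_dm_bar = np.zeros((3, 4))  # sampled features of the auxiliary component
l_mse = float(np.sum((E_dm - E_dm_bar) ** 2))  # ||E^dm - E_bar^dm||^2

loss = total_loss(2.3, 2.5, np.log(0.25), l_mse, lam1=0.5, lam2=0.1)
```

Note that the adversarial term enters with a positive log-probability; in practice, maximizing classifier confusion is usually realized with a gradient reversal layer rather than a sign flip in the loss code.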
+
+# 4 Experiment
+
+# 4.1 Datasets
+
+We use the CTB6 dataset as the source domain (newswire), splitting it into training, development and test sections following Tian et al. (2020a). To verify the effectiveness of our proposed domain adaption method, we exploit the ZhuXian dataset (Zhang et al., 2014) as the target domain, which is drawn from an Internet novel and is the only benchmark dataset for domain adaption of joint CWS and POS tagging. We strictly follow unsupervised domain adaptation, where only a test corpus of the target domain is available. Table 1 shows the data statistics, reporting the detailed sentence, word and character counts. For the ZhuXian dataset, we use only the raw text and test corpus, which are available from Zhang et al. (2014).
+
+# 4.2 Setting
+
+Evaluation We adopt the standard word-level matching method to evaluate the performance of CWS and POS tagging. In particular, the joint strategy is used for POS tagging evaluation, considering word boundaries as well as POS tags as a whole. We calculate precision (P) and recall (R), and use their F1-score as the major evaluation metric.
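The word-level matching evaluation can be made concrete with a small self-contained sketch; the helper names and the toy sentence are ours, but the metric follows the standard span-matching definition described above (a predicted word counts as correct only if its character span, and for the joint metric also its tag, exactly matches a gold word):

```python
def to_spans(words, tags=None):
    """Turn a segmented sentence into character-offset spans (optionally tagged)."""
    spans, start = set(), 0
    for i, w in enumerate(words):
        end = start + len(w)
        spans.add((start, end) if tags is None else (start, end, tags[i]))
        start = end
    return spans

def prf(gold, pred):
    """Word-level precision, recall and F1 over span sets."""
    correct = len(gold & pred)
    p = correct / len(pred) if pred else 0.0
    r = correct / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

gold_w, gold_t = ["我", "喜欢", "北京"], ["PN", "VV", "NR"]
pred_w, pred_t = ["我", "喜", "欢", "北京"], ["PN", "VV", "VV", "NR"]

cws = prf(to_spans(gold_w), to_spans(pred_w))                  # boundaries only
pos = prf(to_spans(gold_w, gold_t), to_spans(pred_w, pred_t))  # boundary + tag
```

Here two of the four predicted words match gold spans, giving P = 0.5 and R = 2/3 for CWS.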
+
+| Model | CTB6 CWS P | CTB6 CWS R | CTB6 CWS F1 | CTB6 POS P | CTB6 POS R | CTB6 POS F1 | ZhuXian CWS P | ZhuXian CWS R | ZhuXian CWS F1 | ZhuXian POS P | ZhuXian POS R | ZhuXian POS F1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| (1) Baseline | | | | | | | | | | | | |
| Vanilla | 97.29 | 96.85 | 97.07 | 94.73 | 94.30 | 94.51 | 94.12 | 93.61 | 93.87 | 89.19 | 88.70 | 88.94 |
| (2) Self-Training | | | | | | | | | | | | |
| Vanilla | 97.18 | 96.76 | 96.97 | 94.31 | 93.91 | 94.11 | 94.23 | 93.94 | 94.08 | 89.24 | 88.96 | 89.10 |
| +Iterative | 97.17 | 96.85 | 97.01 | 94.44 | 94.13 | 94.29 | 94.30 | 93.89 | 94.10 | 89.36 | 88.96 | 89.16 |
| +Domain-PGN | 97.21 | 96.84 | 97.03 | 94.40 | 94.04 | 94.22 | 94.25 | 94.03 | 94.14 | 89.27 | 89.06 | 89.17 |
| +Domain-Mixed | 97.29 | 96.90 | 97.09 | 94.55 | 94.17 | 94.36 | 94.45 | 94.12 | 94.28 | 89.61 | 89.29 | 89.45 |
| (3) Fine-Grained Domain Adaption | | | | | | | | | | | | |
| Vanilla | 97.17 | 96.90 | 97.03 | 94.51 | 94.24 | 94.37 | 94.44 | 94.86 | 94.65 | 89.67 | 90.07 | 89.87 |
| +Domain-PGN | 97.33 | 97.04 | 97.19 | 94.57 | 94.29 | 94.43 | 94.74 | 94.71 | 94.72 | 90.07 | 90.04 | 90.06 |
| +Domain-Mixed | 97.44 | 97.18 | 97.31 | 94.83 | 94.58 | 94.71 | 94.99 | 95.14 | 95.07 | 90.51 | 90.65 | 90.58 |
+
+Table 2: Main results, where the instance selection of self-training is simply implemented by ranking the auto-labeled sentences according to their output probabilities during decoding, Vanilla refers to the ADBERT-BiLSTM-CRF model, Iterative indicates the vanilla model with iterative self-training, and Domain-PGN indicates the model with only the (sub)domain-aware component.
+
+Considering that there is no development corpus available for the target domain in a real scenario, we use the CTB6 development set to select the best-performing models.
+
+Hyperparameters All hyperparameters are set empirically according to the previous studies as well as our preliminary findings (Tian et al., 2020a,b). Most importantly, our fine-grained domain adaption consumes 12 iterations to reach the peak, and the values for all other hyperparameters are described in our Appendix B.
+
+# 4.3 Main Results
+
+Table 2 shows the main results on the test datasets of both CTB6 and ZhuXian. The CTB6 results are reported to show whether the domain-adapted models can still handle the source domain. First, we examine the F1 values of the baseline performances. Our vanilla (i.e., ADBERT-BiLSTM-CRF) model obtains performances on both CWS and POS tagging comparable to state-of-the-art models such as Tian et al. (2020a)${}^{2}$. We can see that model performance drops significantly on the ZhuXian domain, with decreases of ${97.07} - {93.87} = {3.20}$ and ${94.51} - {88.94} = {5.57}$ for CWS and POS tagging, respectively. This observation indicates that domain adaption is very important for our joint task.
+
+Next, we compare fine-grained domain adaption with various self-training methods. Based on the vanilla model, self-training obtains very small performance gains (including iterative self-training), i.e., only around 0.2, which is insignificant. This result is inconsistent with Zhang et al. (2014), who report large improvements from simple self-training. The main reason might be our strong baseline with BERT representations.
+
+With fine-grained domain adaption, we can generate a higher quality pseudo corpus. Therefore, the gains by the vanilla model are very significant over the baseline, where the improvements are 0.78 and 0.93 for CWS and POS tagging, respectively, significantly better than the vanilla self-training systems due to the quality differences of pseudo corpora. By using the final domain-mixed model, our fine-grained domain adaption can be improved further, leading to another improvement of 0.42 and 0.71 for CWS and POS tagging. The observations indicate that our method is highly effective for domain adaption of joint CWS and POS tagging.
+
+We can see that our domain-mixed model can help normal self-training as well, showing the effectiveness of representation learning for domain adaption. We also compare our proposed domain-mixed model with the major component alone (Domain-PGN for short), which has been demonstrated to be effective in a different scenario (Jia et al., 2019). According to the results, Domain-PGN gives slightly better performances on CWS and POS tagging for both self-training and fine-grained domain adaption compared with the counterpart baseline. Our final domain-mixed model is much better, leading to significant performance increases on both tasks, especially in fine-grained domain adaption.
+
+| Model | CTB6 CWS | CTB6 POS | ZhuXian CWS | ZhuXian POS | Trainable Params Size |
| --- | --- | --- | --- | --- | --- |
| Finetuning | 97.24 | 94.74 | 93.91 | 88.95 | 120M |
| Adapter | 97.36 | 94.81 | 93.81 | 88.98 | 35M |
| Adapter (shared) | 97.07 | 94.51 | 93.87 | 88.94 | 14M |
+
+Interestingly, we find that our final model brings better performances on the source CTB6 test dataset as well, unlike the self-training models, which can hurt the source performance to a certain extent. This finding indicates that our final model has strong practical value, since it enables one model to perform well on multiple domains.
+
+# 4.4 Analysis
+
+In this subsection, we conduct detailed experimental analyses for a comprehensive understanding of our method in-depth.
+
+The Exploration of BERT Our work exploits ADBERT instead of the standard practice of BERT finetuning. Here we examine the differences between them, considering both performance and the number of trainable model parameters. Since ADBERT freezes all parameters of BERT, the number of trainable model parameters is reduced greatly. Table 3 shows the comparison results, where Finetuning indicates the standard BERT-CRF model with tunable BERT parameters, Adapter denotes the ADBERT model in which all adapters have separate parameters, and Adapter (shared) indicates our final ADBERT in which all adapters across different transformer layers share the same parameters. As shown, our final choice achieves performance comparable to the others with far fewer trainable parameters, and thus our final ADBERT is highly parameter efficient.
+
+Table 3: Comparisons between BERT fine-tuning and ADBERT.
+
+| Model | P | R | F1 | ΔF1 |
| --- | --- | --- | --- | --- |
| Final | 90.51 | 90.65 | 90.58 | - |
| $-C_{\mathrm{oov}}$ | 90.20 | 90.16 | 90.18 | -0.40 |
| $-C_{\mathrm{lex}}$ | 90.39 | 90.25 | 90.32 | -0.26 |
| $-C_{\mathrm{conf}}$ | 90.28 | 90.38 | 90.33 | -0.25 |
| $-C_{\mathrm{oov}} - C_{\mathrm{lex}}$ | 90.02 | 89.99 | 90.00 | -0.58 |
| Self-Training | 89.61 | 89.29 | 89.45 | -1.13 |
+
+Table 4: Ablation study of the instance selection strategies of our final model (F1 values of POS are reported).
+
+Figure 3: The POS tagging performance with respect to the number of the pseudo training instances.
+
+The Instance Selection Strategy As mentioned in Algorithm 1, we include three conditions for instance selection at each iteration: $C_{\mathrm{oov}}$, $C_{\mathrm{lex}}$ and $C_{\mathrm{conf}}$. Here we conduct ablation experiments to check their necessity. Note that when $C_{\mathrm{oov}}$ is excluded, we select at most 2K instances at each iteration, ranked by output probability from high to low. Table 4 shows the results. As shown, all conditions are useful, and in addition, all results outperform the plain iterative self-training method. In particular, the model $-C_{\mathrm{oov}} - C_{\mathrm{lex}}$ degrades into self-training with iterative adaptation combined with the domain-mixed model. The comparison further demonstrates the advantage of our domain-mixed model.
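Algorithm 1 itself is not reproduced in this section, so the following is only a schematic of the generic selection pattern it describes: rank auto-labeled sentences by output probability, keep those passing every condition, and cap the count per iteration. The predicate implementations below are toy stand-ins, not the paper's actual $C_{\mathrm{conf}}$ and $C_{\mathrm{oov}}$ definitions.

```python
def select_instances(candidates, conditions, cap=2000):
    """Keep auto-labeled sentences passing every condition, best-first.

    candidates: (sentence, confidence) pairs from the decoder.
    conditions: predicates over a candidate; toy stand-ins for Algorithm 1.
    """
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    kept = [s for s, conf in ranked if all(cond(s, conf) for cond in conditions)]
    return kept[:cap]

c_conf = lambda s, conf: conf >= 0.8        # confidence threshold (toy C_conf)
c_oov = lambda s, conf: s.count("*") <= 2   # '*' marks an OOV word (toy C_oov)

pool = [("a", 0.90), ("b***", 0.95), ("c", 0.70), ("d", 0.85)]
chosen = select_instances(pool, [c_conf, c_oov], cap=2)  # -> ["a", "d"]
```

The ablation above corresponds to dropping predicates from the `conditions` list; removing all of them recovers plain probability-ranked self-training.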
+
+The Size of Pseudo Training Corpus It is interesting to compare fine-grained domain adaption with (one-iteration) self-training from the view of the pseudo training dataset size. We align the iterations of fine-grained domain adaption with self-training by the size of the added ZhuXian-domain training corpus. Figure 3 shows the comparison results. As shown, the performance of self-training hardly increases after 3K instances, while our fine-grained method gives continual significant improvements until iteration 12 (consuming a 20K corpus). The comparison shows that our fine-grained domain adaption is much more effective than self-training. However, our iterative fine-grained domain adaption needs more training time than non-iterative self-training${}^{4}$.
+
+| Model | | P | R | F1 |
| --- | --- | --- | --- | --- |
| Baseline | Vanilla | 93.99 | 93.55 | 93.77 |
| Self-Training | Vanilla | 94.01 | 94.08 | 94.04 |
| | +Iterative | 94.18 | 94.06 | 94.12 |
| | +Domain-PGN | 94.20 | 94.01 | 94.11 |
| | +Domain-Mixed | 94.47 | 94.07 | 94.27 |
| Fine-Grained Adaption | Vanilla | 94.65 | 94.47 | 94.51 |
| | +Domain-PGN | 94.70 | 94.51 | 94.60 |
| | +Domain-Mixed | 95.27 | 94.64 | 94.86 |
+
+Table 5: The results of the independent CWS task using our method on the ZhuXian dataset.
+
+The Independent CWS Task Our major goal is joint CWS and POS tagging, but it is worth examining our method on the CWS task alone. Here we also use the CTB6 dataset as the source corpus and the ZhuXian dataset as the target domain. The basic model can remain exactly the same. Table 5 shows the final results. Our method achieves significant improvements on CWS alone, with an increase of $94.86 - 93.77 = 1.09$, which means that our fine-grained domain adaption method is suitable for CWS as well. The other model tendencies are consistent with the joint task. Interestingly, we find that the independent CWS model has a lower improvement in recall. The reason may be that POS tagging provides several additional features, which lead the joint model to prefer more fine-grained segmentation, yielding a larger recall value.
+
+Domain-Aware vs. Domain-Invariant It is interesting to compare our (sub)domain-aware (PGN) and (sub)domain-invariant (VAR) components comprehensively. In fact, besides our integrated usage, each component alone can also serve for domain adaption. The PGN can be used directly for inference, while for VAR, we can perform decoding by setting $\bar{e}_i^{\mathrm{dm}} = \mu_i$ in Equation 3. Here we analyze four models: PGN and VAR alone, and the integrated model performing inference with PGN (Final-PGN) and VAR (Final-VAR), respectively. All four models are trained on the same full training corpus (i.e., $S + \mathrm{ST}_1, \dots, S + \mathrm{ST}_1 + \ldots + \mathrm{ST}_n$, added gradually). Figure 4 shows the results. As shown, PGN and VAR are actually comparable to each other, and in our final model, PGN is slightly better than VAR. We find that in our integrated model, both PGN and VAR are much better than when used alone, which shows the importance of the joint learning enabled by the carefully-designed $\mathcal{L}_{\mathrm{mse}}$.
+
+
+Figure 4: Comparisons between (sub)domain-aware (PGN) and (sub)domain-invariant (VAR) components, where PGN and VAR indicate that they are exploited separately for representation learning, and Final-PGN and Final-VAR denote our final model by using PGN/VAR for decoding, respectively.
+
+
+Figure 5: The results of the Domain-Mixed model on test data with different OOV distributions, using self-training and fine-grained adaption.
+
+The Sentential OOV Number Our fine-grained domain adaption is mainly driven by the sentential OOV numbers with respect to the source training dataset. Thus, it is meaningful to examine the model performance on sentences with different OOV numbers. We divide the ZhuXian test dataset into four categories according to the per-sentence OOV number, namely [0-1], [2-3], [4-5] and $\geq 6$. All categories include a sufficient number of sentences for statistical comparisons. Based on this division, we compare the performance of the fine-grained adaption, self-training and baseline models. Figure 5 shows the results. We can see that the model performance decreases overall as the OOV number increases, which is reasonable. In addition, our final model significantly improves the performance on sentences with higher OOV numbers.
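The bucketing used in this analysis is straightforward; a minimal sketch, with a toy vocabulary standing in for the source training lexicon:

```python
def oov_bucket(words, source_vocab):
    """Assign a sentence to an OOV-count bucket: [0-1], [2-3], [4-5], >=6."""
    n = sum(w not in source_vocab for w in words)
    if n <= 1:
        return "0-1"
    if n <= 3:
        return "2-3"
    if n <= 5:
        return "4-5"
    return ">=6"

vocab = {"的", "我", "他"}                          # toy source-domain vocabulary
bucket = oov_bucket(["我", "诛仙", "青云"], vocab)  # 2 OOV words -> "2-3"
```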
+
+The Subdomain Type of Our Final Inference For the training of our final model, we have several fine-grained subdomain types of the target domain, and we select the last subdomain type for the final inference, which might not match the real subdomain type. Here we analyze the input domain type selection in depth by comparing the model performance with the first $(\mathrm{ST}_1)$, median $(\mathrm{ST}_6)$ and last $(\mathrm{ST}_{11})$ subdomain types. Table 6 shows the results. As shown, there is almost no difference among the three selections for the ZhuXian domain, indicating that the selection of fine-grained subdomain types is not important in our final model. This observation is reasonable since the test corpus covers a range of the specified subdomains and any fixed selection faces the same mismatch; thus the final selection can be made empirically.
+
+| Domain Type | ZhuXian CWS P | ZhuXian CWS R | ZhuXian CWS F1 | ZhuXian POS P | ZhuXian POS R | ZhuXian POS F1 |
| --- | --- | --- | --- | --- | --- | --- |
| $\mathrm{ST}_1$ | 95.00 | 95.17 | 95.08 | 90.49 | 90.63 | 90.56 |
| $\mathrm{ST}_6$ | 94.98 | 95.16 | 95.07 | 90.49 | 90.65 | 90.57 |
| $\mathrm{ST}_{11}$ | 94.99 | 95.14 | 95.07 | 90.51 | 90.65 | 90.58 |
+
+Table 6: The influence of using different domain types.
+
+# 5 Related Work
+
+CWS and POS tagging are closely-related tasks for Chinese processing, which can be handled either jointly or in a pipeline way (Ng and Low, 2004; Shi and Wang, 2007; Zhang and Clark, 2008; Jiang et al., 2008; Kruengkrai et al., 2009; Jiang et al., 2009; Sun, 2011). The joint models are able to obtain better performances, as they can alleviate the error propagation problem between the two tasks (Ng and Low, 2004; Zhang and Clark, 2008; Jiang et al., 2009; Wang et al., 2011). Recently, neural models have achieved state-of-the-art results for joint CWS and POS tagging (Zheng et al., 2013; Shao et al., 2017; Zeng et al., 2013; Tian et al., 2020a). In particular, the BERT representations (Devlin et al., 2019) and the BiLSTM neural network (Graves et al., 2013; Huang et al., 2015) have shown impressive results for the joint task (Zhang et al., 2018; Diao et al., 2019; Tian et al., 2020a,b). In this work, we adopt both BERT and BiLSTM to reach a strong baseline for cross-domain adaption.
+
+Domain adaptation has been extensively studied in both the machine learning and NLP communities (Daumé III, 2007; Ben-David et al., 2007; Chen et al., 2011; Søgaard, 2013; Zou et al., 2019; Saito et al., 2020). The typical methods of domain adaptation can be divided mainly into two categories. The first category aims to create a set of pseudo training corpora for the target domain, while the second category attempts to learn transferable features from the source domain to the target. Self-training is one of the most representative methods of the first category
+
+(McClosky et al., 2006; Yu et al., 2015; Zou et al., 2019). For the second category, the representation learning of domain-specific and domain-invariant features has received the most attention recently (Glorot et al., 2011; Ganin et al., 2016; Tzeng et al., 2017; Long et al., 2017; Hoffman et al., 2018).
+
+For the joint CWS and POS tagging task, Liu and Zhang (2012) and Zhang et al. (2014) investigate the task under the cross-domain adaption setting, both exploiting self-training. In particular, Zhang et al. (2014) suggest a lexicon-based type-supervised model for further enhancement, and also publish a publicly available benchmark dataset for cross-domain adaption of joint CWS and POS tagging. Unfortunately, there has been no follow-up work on the joint task since then, while the majority of studies focus on cross-domain adaption of the two individual tasks (Liu et al., 2014; Schnabel and Schütze, 2014; Peng and Dredze, 2016; Huang et al., 2017; Zhou et al., 2017; Gui et al., 2017; Ding et al., 2020). We propose a novel fine-grained domain adaption method with a domain-mixed representation learning model for the joint task.
+
+# 6 Conclusion
+
+We suggested a novel fine-grained domain adaption method for joint word segmentation and POS tagging. We started from the self-training strategy, which exploits various transfers to generate pseudo training instances for the target domain, and argued that this strategy might produce low-quality auto-labeled instances when the target sentences are distant from the source domain. To address the problem, we proposed fine-grained domain adaption, regarding the OOV number with respect to the source training corpus as the main advancing indicator to construct a higher-quality corpus progressively. In addition, we combined our method with another line of domain-adaptation representation learning, presenting a domain-mixed model for full exploitation of the produced training instances. We evaluated our method on the benchmark ZhuXian dataset, using CTB6 as the source domain. The results showed that our method is highly effective, and our final model achieves significant improvements on the joint task.
+
+# Acknowledgments
+
+This work is supported by grants from the National Key Research and Development Program of China (No. 2018YFC0832101) and the National Natural Science Foundation of China (No. 62176180).
+
+# References
+
+Shai Ben-David, John Blitzer, Koby Crammer, Fernando Pereira, et al. 2007. Analysis of representations for domain adaptation. Advances in neural information processing systems, 19:137.
+Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew Dai, Rafal Jozefowicz, and Samy Bengio. 2016. Generating sentences from a continuous space. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 10-21.
+Minmin Chen, Kilian Q Weinberger, and John Blitzer. 2011. Co-training for domain adaptation. In Proceedings of NeurIPS.
+Hal Daumé III. 2007. Frustratingly easy domain adaptation. In Proceedings of the 45th ACL, pages 256-263.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the NAACL, pages 4171-4186.
+Shizhe Diao, Jiaxin Bai, Yan Song, Tong Zhang, and Yonggang Wang. 2019. ZEN: Pre-training Chinese text encoder enhanced by n-gram representations. arXiv preprint arXiv:1911.00720.
+Ning Ding, Dingkun Long, Guangwei Xu, Muhua Zhu, Pengjun Xie, Xiaobin Wang, and Haitao Zheng. 2020. Coupling distant annotation and adversarial training for cross-domain Chinese word segmentation. In Proceedings of the ACL, pages 6662-6671.
+Thomas Emerson. 2005. The second international chinese word segmentation bakeoff. In Proceedings of the fourth SIGHAN workshop on Chinese language Processing.
+Yaroslav Ganin and Victor S. Lempitsky. 2015. Unsupervised domain adaptation by backpropagation. In Proceedings of the ICML, pages 1180-1189.
+Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. 2016. Domain-adversarial training of neural networks. JMLR, 17(1):2096-2030.
+Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Domain adaptation for large-scale sentiment classification: A deep learning approach. In Proceedings of the ICML.
+Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. 2013. Speech recognition with deep recurrent neural networks. In 2013 IEEE international conference on acoustics, speech and signal processing, pages 6645–6649. IEEE.
+
+Tao Gui, Qi Zhang, Haoran Huang, Minlong Peng, and Xuan-Jing Huang. 2017. Part-of-speech tagging for twitter with adversarial neural networks. In Proceedings of the EMNLP, pages 2411-2420.
+Judy Hoffman, Eric Tzeng, Taesung Park, Jun-Yan Zhu, Phillip Isola, Kate Saenko, Alexei Efros, and Trevor Darrell. 2018. Cycada: Cycle-consistent adversarial domain adaptation. In Proceedings of the ICML, pages 1989-1998.
+Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for nlp. In Proceedings of the ICML, pages 2790-2799.
+Shen Huang, Xu Sun, and Houfeng Wang. 2017. Addressing domain adaptation for Chinese word segmentation with global recurrent structure. In Proceedings of the IJCNLP, pages 184-193.
+Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging. arXiv preprint arXiv:1508.01991.
+Naoto Inoue, Ryosuke Furuta, Toshihiko Yamasaki, and Kiyoharu Aizawa. 2018. Cross-domain weakly-supervised object detection through progressive domain adaptation. In Proceedings of the CVPR, pages 5001-5009.
+Chen Jia, Xiaobo Liang, and Yue Zhang. 2019. Cross-domain ner using cross-domain language modeling. In Proceedings of the ACL, pages 2464-2474.
+Wenbin Jiang, Liang Huang, and Qun Liu. 2009. Automatic adaptation of annotation standards: Chinese word segmentation and pos tagging-a case study. In Proceedings of the ACL-IJCNLP, pages 522-530.
+Wenbin Jiang, Liang Huang, Qun Liu, and Yajuan Lu. 2008. A cascaded linear model for joint chinese word segmentation and part-of-speech tagging. In Proceedings of ACL, pages 897-904.
+Guangjin Jin and Xiao Chen. 2008. The fourth international chinese language processing bakeoff: Chinese word segmentation, named entity recognition and chinese pos tagging. In Proceedings of the sixth SIGHAN workshop on Chinese language processing.
+Diederik P. Kingma and Max Welling. 2014. Autoencoding variational bayes. In Proceedings of the ICLR.
+Canasai Kruengkrai, Kiyotaka Uchimoto, Yiou Wang, Kentaro Torisawa, Hitoshi Isahara, et al. 2009. An error-driven word-character hybrid model for joint chinese word segmentation and pos tagging. In Proceedings of the ACL-IJCNLP, pages 513-521.
+Yang Liu and Yue Zhang. 2012. Unsupervised domain adaptation for joint segmentation and POS-tagging. In Proceedings of COLING 2012: Posters, pages 745-754.
+
+Yijia Liu, Yue Zhang, Wanxiang Che, Ting Liu, and Fan Wu. 2014. Domain adaptation for crf-based chinese word segmentation using free annotations. In Proceedings of the EMNLP, pages 864-874.
+Mingsheng Long, Han Zhu, Jianmin Wang, and Michael I Jordan. 2017. Deep transfer learning with joint adaptation networks. In Proceedings of the ICML, pages 2208-2217.
+David McClosky, Eugene Charniak, and Mark Johnson. 2006. Reranking and self-training for parser adaptation. In Proceedings of the ACL-COLING, pages 337-344.
+Hwee Tou Ng and Jin Kiat Low. 2004. Chinese part-of-speech tagging: One-at-a-time or all-at-once? word-based or character-based? In Proceedings of the EMNLP, pages 277-284.
+Nanyun Peng and Mark Dredze. 2016. Multi-task domain adaptation for sequence tagging. In Proceedings of the 2nd Workshop on Representation Learning for NLP.
+Kuniaki Saito, Donghyun Kim, Stan Sclaroff, and Kate Saenko. 2020. Universal domain adaptation through self supervision. arXiv preprint arXiv:2002.07953.
+Tobias Schnabel and Hinrich Schütze. 2014. Flors: Fast and simple domain adaptation for part-of-speech tagging. TACL, 2:15-26.
+Yan Shao, Christian Hardmeier, Jörg Tiedemann, and Joakim Nivre. 2017. Character-based joint segmentation and POS tagging for Chinese using bidirectional RNN-CRF. In Proceedings of the IJCNLP, pages 173-183.
+Yanxin Shi and Mengqiu Wang. 2007. A dual-layer crfs based joint decoding method for cascaded segmentation and labeling tasks. In Proceedings of the IJCAI, pages 1707-1712.
+Rui Shu, Hung H Bui, Hirokazu Narui, and Stefano Ermon. 2018. A DIRT-T approach to unsupervised domain adaptation. In Proceedings of the ICLR.
+Anders Søgaard. 2013. Semi-supervised learning and domain adaptation in natural language processing. Synthesis Lectures on Human Language Technologies, 6(2):1-103.
+Kihyuk Sohn, David Berthelot, Chun-Liang Li, Zizhao Zhang, Nicholas Carlini, Ekin D Cubuk, Alex Kurakin, Han Zhang, and Colin Raffel. 2020. Fixmatch: Simplifying semi-supervised learning with consistency and confidence. arXiv preprint arXiv:2001.07685.
+Shiliang Sun, Honglei Shi, and Yuanbin Wu. 2015. A survey of multi-source domain adaptation. Information Fusion, 24:84-92.
+Weiwei Sun. 2011. A stacked sub-word model for joint Chinese word segmentation and part-of-speech tagging. In Proceedings of the ACL, pages 1385-1394.
+
+Yuanhe Tian, Yan Song, Xiang Ao, Fei Xia, Xiaojun Quan, Tong Zhang, and Yonggang Wang. 2020a. Joint Chinese word segmentation and part-of-speech tagging via two-way attentions of auto-analyzed knowledge. In Proceedings of the ACL, pages 8286-8296.
+Yuanhe Tian, Yan Song, and Fei Xia. 2020b. Joint Chinese word segmentation and part-of-speech tagging via multi-channel attention of character n-grams. In Proceedings of the 28th COLING, pages 2073-2084.
+Eric Tzeng, Judy Hoffman, Kate Saenko, and Trevor Darrell. 2017. Adversarial discriminative domain adaptation. In Proceedings of the CVPR, pages 7167-7176.
+Ahmet Üstün, Arianna Bisazza, Gosse Bouma, and Gertjan van Noord. 2020. UDapter: Language adaptation for truly Universal Dependency parsing. In Proceedings of the EMNLP, pages 2302-2315.
+Yiou Wang, Yoshimasa Tsuruoka, Wenliang Chen, Yujie Zhang, Kentaro Torisawa, et al. 2011. Improving Chinese word segmentation and POS tagging with semi-supervised methods using large auto-analyzed data. In Proceedings of the 5th IJCNLP, pages 309-317.
+Naiwen Xue, Fei Xia, Fu-Dong Chiou, and Martha Palmer. 2005. The Penn Chinese treebank: Phrase structure annotation of a large corpus. Natural Language Engineering, 11(2):207.
+Juntao Yu, Mohab El-karef, and Bernd Bohnet. 2015. Domain adaptation for dependency parsing via self-training. In Proceedings of the 14th International Conference on Parsing Technologies, pages 1-10.
+Xiaodong Zeng, Derek F. Wong, Lidia S. Chao, and Isabel Trancoso. 2013. Graph-based semi-supervised model for joint Chinese word segmentation and part-of-speech tagging. In Proceedings of the ACL, pages 770-779.
+Kun Zhang, Mingming Gong, and Bernhard Scholkopf. 2015. Multi-source domain adaptation: A causal view. In Proceedings of the AAAI, volume 29.
+Meishan Zhang, Nan Yu, and Guohong Fu. 2018. A simple and effective neural model for joint word segmentation and pos tagging. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 26(9):1528-1538.
+Meishan Zhang, Yue Zhang, Wanxiang Che, and Ting Liu. 2014. Type-supervised domain adaptation for joint segmentation and POS-tagging. In Proceedings of the 14th EACL, pages 588-597.
+Yue Zhang and Stephen Clark. 2008. Joint word segmentation and pos tagging using a single perceptron. In Proceedings of the ACL-08: HLT, pages 888-896.
+
+Han Zhao, Remi Tachet Des Combes, Kun Zhang, and Geoffrey Gordon. 2019. On learning invariant representations for domain adaptation. In Proceedings of the ICML, pages 7523-7532.
+
+Xiaoqing Zheng, Hanyang Chen, and Tianyu Xu. 2013. Deep learning for Chinese word segmentation and POS tagging. In Proceedings of the EMNLP, pages 647-657.
+
+Hao Zhou, Zhenting Yu, Yue Zhang, Shujian Huang, Xinyu Dai, and Jiajun Chen. 2017. Word-context character embeddings for chinese word segmentation. In Proceedings of the EMNLP, pages 760-766.
+
+Yang Zou, Zhiding Yu, Xiaofeng Liu, BVK Kumar, and Jinsong Wang. 2019. Confidence regularized self-training. In Proceedings of the ICCV, pages 5982-5991.
+
+# A Transformer with Adapters
+
+Figure 6 illustrates the internal network structure of the transformer unit in ADBERT. As shown, we can see that two adapter layers are inserted inside each transformer unit:
+
+$$
+\boldsymbol{h}_{\text{mid}} = \operatorname{GELU}\left(\boldsymbol{W}_{1}^{\text{share}} \boldsymbol{h}_{\text{in}} + \boldsymbol{b}_{1}^{\text{share}}\right), \tag{6}
+$$
+
+$$
+\boldsymbol{h}_{\text{out}} = \boldsymbol{W}_{2}^{\text{share}} \boldsymbol{h}_{\text{mid}} + \boldsymbol{b}_{2}^{\text{share}} + \boldsymbol{h}_{\text{in}},
+$$
+
+where $W_1^{\mathrm{share}}$ , $W_2^{\mathrm{share}}$ , $b_1^{\mathrm{share}}$ , $b_2^{\mathrm{share}}$ are adapter parameters, which are much smaller than those of BERT in scale.
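A minimal numpy sketch of the adapter forward pass in Equation 6, using toy dimensions (the paper uses a 768-dimensional hidden size and 192-dimensional adapters) and the common tanh approximation of GELU:

```python
import numpy as np

def gelu(x):
    """tanh approximation of GELU."""
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x ** 3)))

def adapter(h_in, W1, b1, W2, b2):
    """Eq. (6): down-project, GELU, up-project, residual connection."""
    h_mid = gelu(W1 @ h_in + b1)
    return W2 @ h_mid + b2 + h_in

rng = np.random.default_rng(0)
d_model, d_adapter = 8, 3  # toy sizes; the paper uses 768 and 192
W1 = rng.standard_normal((d_adapter, d_model)) * 0.01
b1 = np.zeros(d_adapter)
W2 = rng.standard_normal((d_model, d_adapter)) * 0.01
b2 = np.zeros(d_model)

h_in = rng.standard_normal(d_model)
h_out = adapter(h_in, W1, b1, W2, b2)  # same shape as h_in via the residual
```

With near-zero initialization of the up-projection, the adapter starts close to an identity map, which is why it can be inserted into a frozen pretrained network without disrupting it.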
+
+Here we further emphasize that when BERT is equipped with adapters, it can be regarded as a source of static knowledge, since all pretrained parameters are frozen for downstream tasks and the BERT parameter values can thus be shared across these tasks.
+
+# B Hyperparameters
+
+For the model part, we set all the hidden sizes of BiLSTM to 400, and set the hidden sizes of all shared adapters to 192. We exploit the pretrained BERT-base-Chinese model for the character representations, so the output dimension of the character representation is 768. The embedding of the domain type has a dimension of 50. For fine-grained domain adaption, the number of high-confidence word-tag pairs in Top-K is set to 1000, and the probability threshold $p_{\text{threshold}}$ is 0.8.
+
+Figure 6: The structure of ADBERT.
+
+For training, we exploit online learning with a batch size of 16 to update the model parameters, and use the Adam algorithm with a constant learning rate of $2 \times 10^{-5}$ to optimize the parameters. Gradient clipping with a maximum value of 5.0 is adopted to avoid gradient explosion. We apply sequential-level dropout to the character representations to avoid overfitting, where the sequential hidden vectors are randomly set to zero with a probability of 0.2. In particular, the two hyperparameters $\lambda_{1}$ and $\lambda_{2}$ in our overall training objective are automatically adjusted during training from 0 to 1 by exponential annealing over the first 5,000 steps (Bowman et al., 2016).
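The exact annealing curve is not specified in the text; assuming a simple exponential schedule of the form $1 - e^{-r \cdot t / T}$ (one common choice, with the rate $r$ a hypothetical parameter), the warm-up can be sketched as:

```python
import math

def anneal(step, total=5000, rate=5.0):
    """Ramp a loss weight from 0 toward 1 over `total` steps, then hold at 1.

    The shape 1 - e^{-rate * step / total} is an assumption; the paper only
    states 'exponential annealing' over the first 5,000 steps.
    """
    if step >= total:
        return 1.0
    return 1.0 - math.exp(-rate * step / total)

lam1 = anneal(2500)  # partway through warm-up, strictly between 0 and 1
```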
\ No newline at end of file
diff --git a/afinegraineddomainadaptionmodelforjointwordsegmentationandpostagging/images.zip b/afinegraineddomainadaptionmodelforjointwordsegmentationandpostagging/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..da0e9a47129dfbbd34861c32841c2b13b46b2b62
--- /dev/null
+++ b/afinegraineddomainadaptionmodelforjointwordsegmentationandpostagging/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3b07878bd60c1f5561ac159c5f4c4678742c6835ad8d982e88a34c79032153ce
+size 498581
diff --git a/afinegraineddomainadaptionmodelforjointwordsegmentationandpostagging/layout.json b/afinegraineddomainadaptionmodelforjointwordsegmentationandpostagging/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..e5e2335b83a96e2d7ae8e5b8fb1967df96c9c707
--- /dev/null
+++ b/afinegraineddomainadaptionmodelforjointwordsegmentationandpostagging/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ae65c9f7aac0455264652d87e28447793dcf98a8c263dd053e7998bde2a75285
+size 402626
diff --git a/afromtpretrainingstrategiesandreproduciblebenchmarksfortranslationof8africanlanguages/3c25d021-abd5-474f-8646-5d9b0d2e9c58_content_list.json b/afromtpretrainingstrategiesandreproduciblebenchmarksfortranslationof8africanlanguages/3c25d021-abd5-474f-8646-5d9b0d2e9c58_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..6cf315bc868549c7adfb03eb09b55448c1004f87
--- /dev/null
+++ b/afromtpretrainingstrategiesandreproduciblebenchmarksfortranslationof8africanlanguages/3c25d021-abd5-474f-8646-5d9b0d2e9c58_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f9eb5a83ee078cd3f24f06213bba67c96613d6a6f38aaa38d51472e3b7b96760
+size 95937
diff --git a/afromtpretrainingstrategiesandreproduciblebenchmarksfortranslationof8africanlanguages/3c25d021-abd5-474f-8646-5d9b0d2e9c58_model.json b/afromtpretrainingstrategiesandreproduciblebenchmarksfortranslationof8africanlanguages/3c25d021-abd5-474f-8646-5d9b0d2e9c58_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..3d674d99f31213bee6a8f6edbb5d369b8511bba4
--- /dev/null
+++ b/afromtpretrainingstrategiesandreproduciblebenchmarksfortranslationof8africanlanguages/3c25d021-abd5-474f-8646-5d9b0d2e9c58_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5e94262212a82770c5b6e607e3514d164342974293fcfde202b1ae8267736264
+size 116688
diff --git a/afromtpretrainingstrategiesandreproduciblebenchmarksfortranslationof8africanlanguages/3c25d021-abd5-474f-8646-5d9b0d2e9c58_origin.pdf b/afromtpretrainingstrategiesandreproduciblebenchmarksfortranslationof8africanlanguages/3c25d021-abd5-474f-8646-5d9b0d2e9c58_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..f4c304f4b1e23a61308f204ee4ab191f053aa70e
--- /dev/null
+++ b/afromtpretrainingstrategiesandreproduciblebenchmarksfortranslationof8africanlanguages/3c25d021-abd5-474f-8646-5d9b0d2e9c58_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ba84bd8046fc88ea59b4bf6f950ba244e9b1e3926c389efa05472f96e31b5f84
+size 472323
diff --git a/afromtpretrainingstrategiesandreproduciblebenchmarksfortranslationof8africanlanguages/full.md b/afromtpretrainingstrategiesandreproduciblebenchmarksfortranslationof8africanlanguages/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..68d9b8ff7db9d3da477a95c54c650febf90f0ebb
--- /dev/null
+++ b/afromtpretrainingstrategiesandreproduciblebenchmarksfortranslationof8africanlanguages/full.md
@@ -0,0 +1,350 @@
+# AFROMT: Pretraining Strategies and Reproducible Benchmarks for Translation of 8 African Languages
+
+Machel Reid1, Junjie Hu2, Graham Neubig2, Yutaka Matsuo1
+
+1The University of Tokyo, 2Carnegie Mellon University
+
+{machelreid, matsuo}@weblab.t.u-tokyo.ac.jp
+
+{junjieh,gneubig}@cs.cmu.edu
+
+# Abstract
+
+Reproducible benchmarks are crucial in driving progress of machine translation research. However, existing machine translation benchmarks have been mostly limited to high-resource or well-represented languages. Despite an increasing interest in low-resource machine translation, there are no standardized reproducible benchmarks for many African languages, many of which are used by millions of speakers but have less digitized textual data. To tackle these challenges, we propose AFROMT, a standardized, clean, and reproducible machine translation benchmark for eight widely spoken African languages. We also develop a suite of analysis tools for system diagnosis taking into account unique properties of these languages. Furthermore, we explore the newly considered case of low-resource focused pretraining and develop two novel data augmentation-based strategies, leveraging word-level alignment information and pseudo-monolingual data for pretraining multilingual sequence-to-sequence models. We demonstrate significant improvements when pretraining on 11 languages, with gains of up to 2 BLEU points over strong baselines. We also show gains of up to 12 BLEU points over cross-lingual transfer baselines in data-constrained scenarios. All code and pretrained models will be released as further steps towards larger reproducible benchmarks for African languages. $^{1}$
+
+# 1 Introduction
+
+Accuracy of machine translation systems in many languages has improved greatly over the past several years due to the introduction of neural machine translation (NMT) techniques (Bahdanau et al., 2015; Sutskever et al., 2014; Vaswani et al., 2017), as well as scaling to larger models (Ott et al., 2018). However, many of these advances have
+
+been demonstrated in settings where very large parallel datasets are available (Meng et al., 2019; Arivazhagan et al., 2019), and NMT systems often underperform in low-resource settings when given small amounts of parallel corpora (Koehn and Knowles, 2017; Guzmán et al., 2019). One solution to this has been leveraging multilingual pretraining on large sets of monolingual data (Conneau and Lample, 2019; Song et al., 2019; Liu et al., 2020), leading to improvements even with smaller parallel corpora. However, this thread of work has focused on scenarios with the following two properties: (1) pretraining on a plurality of European languages and (2) cases in which the monolingual pretraining data greatly exceeds the parallel data used for finetuning (often by over 100 times) (Guzmán et al., 2019; Liu et al., 2020).
+
+However, in the case of many languages in the world, the above two properties are often not satisfied. In particular, taking the example of African languages (the focus of our work), existing (small) parallel corpora for English-to-African language pairs often comprise the majority of available monolingual data in the corresponding African languages. In addition, African languages are often morphologically rich and from completely different language families, being quite distant from European languages. Moreover, despite the importance of reproducible benchmarks to measuring progress on various tasks in an empirical setting, there exists no standardized machine translation benchmark for the majority of African languages.
+
+In this work, we introduce (1) a new machine translation benchmark for African languages, and (2) pretraining techniques to deal with the previously unexplored case where the size of monolingual data resources for pretraining is similar or equal to the size of parallel data resources for finetuning, and (3) evaluation tools designed for measuring qualities regarding the unique grammar of these languages in machine translation systems for
+
+better system evaluation.
+
+Our proposed benchmark, AFROMT, consists of translation tasks between English and 8 African languages — Afrikaans, Xhosa, Zulu, Rundi, Sesotho, Swahili, Bemba, and Lingala — four of which are not included in commercial translation systems such as Google Translate (as of Feb. 2021). In §2, we describe the detailed design of our benchmark, including the language selection criterion and the methodology to collect, clean and normalize the data for training and evaluation purposes. In §3, we provide a set of strong baselines for our benchmark, including denoising sequence-to-sequence pretraining (Lewis et al., 2020; Liu et al., 2020), transfer learning with similar languages (Zoph et al., 2016; Neubig and Hu, 2018), and our proposed data augmentation methods for pretraining on low-resource languages. Our first method leverages bilingual dictionaries to augment data in high-resource languages (HRL), and our second method iteratively creates pseudo-monolingual data in low-resource languages (LRL) for pretraining. Extensive experiments in §4 show that our proposed methods outperform our baselines by up to $\sim 2$ BLEU points over all language pairs and up to $\sim 15$ BLEU points in data-constrained scenarios.
+
+# 2 AFROMT benchmark
+
+In this section, we detail the construction of our new benchmark, AFROMT. We first introduce our criteria for selecting the languages (§2.1), and then describe the steps to prepare the dataset (§2.2, 2.3).
+
+# 2.1 Language Selection Criteria
+
+Given AFROMT's goal of providing a reproducible evaluation of African language translation, we select languages based on the following criteria:
+
+Coverage of Speakers & Language Representation We select languages largely based on the coverage of speakers as well as how represented they are in commercial translation systems. In total, the AFROMT benchmark covers 225 million L1 and L2 speakers combined, covering a large number of speakers within Sub-Saharan Africa.
+
+Linguistic Characteristics With the exception of English and Afrikaans, which belong to the Indo-European language family, all of the considered languages belong to the Niger-Congo family, which is Africa's largest language family in terms of geographical area and speaking population (see Appendix). Similar to English, the Niger-Congo family generally follows SVO word order. One characteristic feature of these languages is their morphosyntax, especially their system of noun classification, with the number of noun classes often exceeding 10, ranging over markers denoting male/female/animate/inanimate and more. These noun classes can be likened in some sense to the masculine/feminine designation found in Romance languages. However, in contrast with those languages, noun markers in Niger-Congo languages are often integrated within the word, usually as a prefix (Bendor-Samuel and Hartell, 1989). For example, in Zulu, isiZulu refers to the Zulu language, whereas amaZulu refers to the Zulu people. Additionally, these languages also use "verb extensions", verb suffixes used to modify the meaning of the verb. These qualities contribute to the morphological richness of these languages, a stark contrast with European languages.
+
+# 2.2 Data Sources
+
+For our benchmark, we leverage existing parallel data for each of our language pairs. This data is derived from two main sources: (1) the open-source repository of parallel corpora, OPUS (Tiedemann, 2012), and (2) ParaCrawl (Esplà et al., 2019). From OPUS, we use the JW300 corpus (Agić and Vulić, 2019), OpenSubtitles (Lison and Tiedemann, 2016), XhosaNavy, Memat, and QED (Abdelali et al., 2014). Despite the existence of this parallel data, these text datasets were often collected from large, relatively unclean multilingual corpora, e.g., JW300, which was extracted from Jehovah's Witnesses text, or QED, which was extracted from transcribed educational videos. This leads to many sentences with high lexical overlap, inconsistent tokenization, and other undesirable properties for a clean, reproducible benchmark.
+
+# 2.3 Data Preparation
+
+Training machine translation systems with small and noisy corpora for low-resource languages is challenging, and often leads to inaccurate translations. These noisy examples include sentences which contain only symbols and numbers, sentences which only consist of one token, sentences which are the same in both the source and target sides, etc. Furthermore, in these noisy extractions
+
+| Language | ISO Code | Lang. Family | # Noun Classes | Sources | Train (En→XX) | Valid | Test | Mono (Gold) | Mono (Pseudo) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Afrikaans | Af | Indo-European | — | J, O | 743K | 3000 | 3000 | 1.3G | — |
+| Bemba | Bem | Niger-Congo | 9/6/15 | J | 275K | 3000 | 3000 | 38M | 1.0G |
+| Lingala | Ln | Niger-Congo | 9/6/15 | J | 382K | 3000 | 3000 | 67M | 1.4G |
+| Rundi | Run | Niger-Congo | 9/7/16 | J | 253K | 3000 | 3000 | 26M | 1.1G |
+| Sesotho | St | Niger-Congo | 6/5/11 | J | 595K | 3000 | 3000 | 84M | 1.0G |
+| Swahili | Sw | Niger-Congo | 9/9/18 | J, P | 700K | 3000 | 3000 | 1.8G | 1.2G |
+| Xhosa | Xh | Niger-Congo | 8/7/15 | J, X, M, Q | 610K | 3000 | 3000 | 203M | 1.2G |
+| Zulu | Zu | Niger-Congo | 6/10/16 | J | 664K | 3000 | 3000 | 121M | 1.4G |
+Table 1: Language characteristics and dataset statistics for AFROMT. Statistics for AFROMT are measured in terms of sentences. Monolingual data sizes are measured on the raw, pretokenized corpora. We abbreviate the sources for our benchmark as follows: J=JW300, O=OpenSubtitles, P=ParaCrawl, X=XhosaNavy, M=Memat, Q=QED. The # Noun Classes column shows the number of singular/plural/total noun classes.
+
+from large multilingual corpora such as JW300, there is a key issue of large text overlap over sentences. Given the risk of data leakage, this prevents one from naively splitting the corpus into random train/validation/test splits.
+
+To mitigate these issues, when preparing our data, we use a combination of automatic filtering techniques and manual human verification at each step to produce clean parallel data for the construction of our benchmark. For consistency across language pairs, we perform cleaning mainly based on the English side of the noisy parallel corpora. We list the automatic filtering techniques below:
+
+Removal of extremely short sentences Since we focus on sentence-level machine translation, we remove sentences containing fewer than three whitespace-tokenized tokens, excluding numerical symbols and punctuation. Additionally, we remove pairs that are missing either the source or the target sentence.
+
+Removal of non-sentences We remove sentences containing no letters, i.e., pairs that contain only numbers and symbols.
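The two removal rules above can be sketched as a single filter. This is an illustrative Python helper (the name `keep_pair` and the exact regular expressions are assumptions; the paper applies the length rule to the English side):

```python
import re

def keep_pair(src: str, tgt: str) -> bool:
    """Apply the short-sentence and non-sentence filters to one pair.

    A pair is kept only if both sides are non-empty, each side contains
    at least one letter (dropping number/symbol-only lines), and the
    (English) source side has at least three whitespace tokens once
    purely numerical/punctuation tokens are excluded.
    """
    if not src.strip() or not tgt.strip():
        return False
    # at least one letter on each side ([^\W\d_] matches a letter)
    if not re.search(r"[^\W\d_]", src) or not re.search(r"[^\W\d_]", tgt):
        return False
    # >= 3 content tokens on the source side
    content = [t for t in src.split() if re.search(r"[^\W\d_]", t)]
    return len(content) >= 3
```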
+
+Tokenization normalization We normalize the tokenization of all corpora using the tokenizer script provided in the Moses toolkit (Koehn et al., 2007). Given that we collect data from various sources, this step is important to allow for consistent tokenization across corpora.
+
+Removal of sentences with high text overlap To prevent data leakage, we remove sentences with
+
+high text overlap. To do this, we use Levenshtein-based fuzzy string matching $^{6}$ and remove sentences that have a similarity score of over 60. Given that measuring this score against all sentences in a corpus grows quadratically with corpus length, we use the following two heuristics to remove sentences with high overlap efficiently: (1) scoring similarity against the previous 50 sentences in alphabetically sorted order, and (2) extracting the top 100K four-grams and computing similarity scores within each group of sentences containing at least one instance of a given four-gram.
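Heuristic (1) can be sketched as follows, using `difflib`'s ratio as a stand-in for the Levenshtein-based fuzzy matcher. The function names and the exact scoring are illustrative, not the authors' implementation:

```python
from difflib import SequenceMatcher

def fuzzy_score(a: str, b: str) -> float:
    """Similarity on a 0-100 scale (difflib stand-in for the
    Levenshtein-based fuzzy matcher referenced in the text)."""
    return 100.0 * SequenceMatcher(None, a, b).ratio()

def dedup_sorted_window(sentences, window=50, threshold=60.0):
    """Heuristic (1): sort alphabetically, then compare each sentence
    only against the previous `window` kept sentences, dropping any
    whose best score exceeds `threshold`."""
    kept = []
    for s in sorted(sentences):
        if any(fuzzy_score(s, k) > threshold for k in kept[-window:]):
            continue  # too similar to a nearby sentence: drop it
        kept.append(s)
    return kept
```

Sorting first means near-duplicates (which usually share a long prefix) land next to each other, so a small comparison window suffices instead of a quadratic all-pairs scan.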
+
+Data Split The resulting benchmark is constructed using the data that passes our automatic filtering checks, and we further split the data into train, validation, and test for each language pair. We select 3,000 sentences with the least four-gram overlap (with the corpus) for both validation and testing while leaving the rest of the corpus to be used for training. Validation and test sentences are all further verified for quality. The resulting dataset statistics for each language pair can be seen in Table 1.
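The least-overlap selection for the validation/test splits could be approximated with a scoring pass like the one below. This is an illustrative sketch (the averaging of four-gram frequencies is an assumption; the paper states only that the 3,000 sentences with the least four-gram overlap with the corpus are chosen):

```python
from collections import Counter

def fourgram_overlap_scores(sentences):
    """Score each sentence by how common its four-grams are across the
    corpus; low scores mark the sentences least similar to the rest,
    which would be preferred for validation/test splits."""
    def fourgrams(s):
        toks = s.split()
        return [tuple(toks[i:i + 4]) for i in range(len(toks) - 3)]

    counts = Counter(g for s in sentences for g in fourgrams(s))
    scores = []
    for s in sentences:
        grams = fourgrams(s)
        # average corpus frequency of this sentence's four-grams
        scores.append(sum(counts[g] for g in grams) / max(len(grams), 1))
    return scores
```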
+
+# 2.4 Impact of Cleaning Process
+
+Given the non-trivial cleaning process and the standardization of key components such as tokenization, splits, and data leakage, this cleaning yields a more representative corpus for the languages considered. We demonstrate this with an experiment comparing randomly initialized English-Zulu models: (a) trained on the original noisy data (including some test-data leakage), (b) trained on noisy data without data leakage, similar to the cleaning process used by Nekoto et al. (2020), and (c) trained on the AfroMT data. Scores for each setting are measured in BLEU on the clean test set: (a) 38.6, (b) 27.6, (c) 34.8.
+
+Comparing the noisy model (a) and the AfroMT model (c), we find that not filtering the data for leakage leads to misleading results and unreliable evaluation on these LRLs. Additionally, as shown by (b) vs. (c), not filtering out other artifacts hinders performance, leading to unrealistically weak results. Additional quantification of data leakage can be found in the Appendix.
+
+# 3 AfroBART
+
+Given that we aim to provide strong baselines for our benchmark, we resort to multilingual sequence-to-sequence training. However, existing pretraining techniques have often been focused on the situation where monolingual data can be found in a larger quantity than parallel data. In this section we describe our proposed multilingual sequence-to-sequence pretraining techniques developed for the novel scenario where even monolingual data is scarce.
+
+# 3.1 Existing Methods
+
+The most widely used methods for multilingual sequence-to-sequence pretraining (Song et al., 2019; Xue et al., 2020; Liu et al., 2020) make a core assumption that the amount of monolingual data in all languages exceeds the amount of parallel data. However, in the case of many African languages, digitized textual data is not widely available, leading this approach to be less effective in these scenarios as shown in Table 2. To mitigate this issue, we build on existing denoising pretraining techniques, particularly BART (Lewis et al., 2020; Liu et al., 2020) and propose two data augmentation methods using dictionaries to augment high-resource monolingual data ( $\S 3.2$ ), and leveraging pseudo monolingual data in low-resource languages ( $\S 3.3$ ). Finally, we iterate the data augmentation with the model training ( $\S 3.4$ ) as shown in Figure 2.
+
+# 3.2 Dictionary Augmentation
+
+Given that existing monolingual corpora in low-resource languages are small, we aim to increase the usage of words from the low-resource language in diverse contexts. To do so, we propose to take
+
+
+Figure 1: Transforming monolingual high-resource data to augmented code-switched data using an English-Swahili bilingual dictionary
+
+sentences from a high-resource language, and replace the words by their corresponding translations that are available in a dictionary extracted from our parallel corpora.
+
+Dictionary Extraction As our data augmentation technique requires a dictionary, we propose to extract the dictionary from parallel corpora using a statistical word aligner, eflomal (Östling and Tiedemann, 2016). Once we have produced word alignments between tokens in our parallel corpora, we simply keep word alignments that appear more than 20 times to produce our bilingual dictionary.
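The thresholded extraction step might look like the following sketch. The `aligned_pairs` data layout is an assumption (token lists plus (i, j) alignment links, as a statistical aligner such as eflomal would provide):

```python
from collections import Counter, defaultdict

def extract_dictionary(aligned_pairs, min_count=20):
    """Build a bilingual dictionary from word-aligned sentence pairs.

    `aligned_pairs` yields (src_tokens, tgt_tokens, links), where links
    are (i, j) index pairs. Alignment links observed more than
    `min_count` times become dictionary entries; a source word may keep
    several target translations.
    """
    counts = Counter()
    for src, tgt, links in aligned_pairs:
        for i, j in links:
            counts[(src[i], tgt[j])] += 1
    dictionary = defaultdict(list)
    for (s, t), c in counts.items():
        if c > min_count:
            dictionary[s].append(t)
    return dict(dictionary)
```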
+
+Monolingual Data Augmentation We assume access to three sources of data: a high-resource corpus $H = \{H_0,\dots ,H_T\}$ , a low-resource corpus $L = \{L_{0},\ldots ,L_{M}\}$ , and a bilingual dictionary $D = \{(D_0^h,D_0^l),\ldots ,(D_{N_d}^h,D_{N_d}^l)\}$ with $N_{d}$ pairs mapping each high-resource term $D_{i}^{h}$ to a low-resource term $D_{i}^{l}$ . Given this, for every high-resource sentence $H_{i}$ we replace $30\%$ of the tokens that match high-resource terms contained in $D$ with their respective low-resource terms. In the case that there exists more than one low-resource term for $D_{i}^{h}$ , we randomly select one to replace the high-resource term. Notably, under the assumption that high-resource monolingual data is more diverse in its content given its greater size, this augmentation technique is an effective way to increase the coverage of words from the low-resource lexicon in diverse contexts.
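A minimal sketch of the 30% replacement rule (the helper name and the sampling details are assumptions; the paper specifies only the replacement fraction and the random choice among multiple translations):

```python
import random

def augment_sentence(tokens, dictionary, replace_frac=0.3, rng=None):
    """Code-switch a high-resource sentence via a bilingual dictionary.

    Roughly `replace_frac` of the tokens that have a dictionary entry
    are swapped for a randomly chosen low-resource translation, as in
    the English->Swahili example of Figure 1. This is a sketch of the
    described procedure, not the authors' exact implementation.
    """
    rng = rng or random.Random(0)
    candidates = [i for i, t in enumerate(tokens) if t in dictionary]
    n_replace = round(replace_frac * len(candidates))
    out = list(tokens)
    for i in rng.sample(candidates, n_replace):
        out[i] = rng.choice(dictionary[tokens[i]])
    return out
```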
+
+
+Figure 2: Iterative approach to pretraining using pseudo monolingual data and dictionaries
+
+# 3.3 Leveraging Pseudo-Monolingual Data
+
+Although leveraging dictionaries to produce code-switched monolingual data is a useful technique to introduce low-resource words in a wider variety of contexts, the code-switched sentences still lack the fluency and consistency of pure monolingual data. To further mitigate these fluency and data scarcity issues in the LRL, we propose to create fluent pseudo-monolingual data by translating the HRL monolingual data to the low-resource language using a pretrained machine translation model.
+
+Specifically, given a pretrained sequence-to-sequence model $M$ , we finetune $M$ for the translation from HRL to LRL on a parallel corpus, i.e., $\mathcal{D}_{ft} = \{(\mathcal{D}_0^h,\mathcal{D}_0^l),\ldots ,(\mathcal{D}_{N_{ft}}^h,\mathcal{D}_{N_{ft}}^l)\}$ , and obtain a machine translation model $M_{ft}$ . With the pretrained translation model $M_{ft}$ , we then proceed to translate sentences from high-resource corpus $H$ to our low-resource language $l$ to produce pseudo LRL monolingual corpus $\tilde{L}$ :
+
+$$
+\tilde {L} = M _ {f t} (H; \Theta_ {f t}) \tag {1}
+$$
+
+Following this, we concatenate the existing low-resource corpus $L$ with $\tilde{L}$ and continue training our pretrained sequence-to-sequence model on this new pseudo-monolingual corpus.
+
+# 3.4 Iterative Multilingual Denoising Pretraining
+
+Given the pseudo-monolingual data synthesis step detailed in §3.3, we can transform it into an iterative pretraining procedure (Tran et al., 2020). That is, the synthesis procedure creates a cycle in which a pretrained model initializes an MT model that synthesizes pseudo-monolingual data, and the produced data is then used to further train the pretrained model (depicted in Figure 2).
+
+# 4 Experimental Setup
+
+In this section, we describe our experimental setup for both pretraining and finetuning strong baselines for our benchmark. Furthermore, we look to evaluate the efficacy of our proposed pretraining
+
+techniques and see whether they improve downstream performance on AFROMT.
+
+# 4.1 Pretraining
+
+Dataset We pretrain AfroBART on 11 languages: Afrikaans, English, French, Dutch, Bemba, Xhosa, Zulu, Rundi, Sesotho, Swahili, and Lingala. To construct the original monolingual corpora, we use a combination of the training sets in AFROMT and data derived from CC100 (Wenzek et al., 2020; Conneau et al., 2020). We only perform dictionary augmentation on our English monolingual data. We list monolingual and pseudo-monolingual corpora statistics in Table 1.
+
+Balancing data across languages As we are training on different languages with widely varying amounts of text, we use the exponential sampling technique used in Conneau and Lample (2019); Liu et al. (2020), where the text is re-sampled according to smoothing parameter $\alpha$ as shown below:
+
+$$
+q _ {k} = \frac {p _ {k} ^ {\alpha}}{\sum_ {j = 1} ^ {N} p _ {j} ^ {\alpha}} \tag {2}
+$$
+
+where $q_{k}$ is the re-sampling probability for language $k$ , giving a multinomial distribution $\{q_{k}\}_{k = 1\dots N}$ over the original sampling probabilities $p_{k}$ . As we work with many extremely low-resource languages, we choose the smoothing parameter $\alpha = 0.25$ (compared with the $\alpha = 0.7$ used in mBART) to alleviate model bias towards the overwhelmingly higher proportion of data in the higher-resource languages.
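Equation (2) can be computed directly. A small sketch, assuming `sizes` maps each language to its corpus size (from which the $p_{k}$ are derived):

```python
def resample_probs(sizes, alpha=0.25):
    """Temperature-based re-sampling of Eq. (2): p_k is each language's
    share of the corpus, and q_k = p_k^alpha / sum_j p_j^alpha.
    Smaller alpha up-weights low-resource languages."""
    total = sum(sizes.values())
    p = {k: v / total for k, v in sizes.items()}
    z = sum(v ** alpha for v in p.values())
    return {k: v ** alpha / z for k, v in p.items()}
```

With `alpha=0.25`, a language holding 1% of the raw data can end up with an order of magnitude more sampling mass than its raw share, which is the intended bias correction.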
+
+Hyperparameters We use the following setup to train our AfroBART models, utilizing the mBART implementation in the fairseq library (Ott et al., 2019). We tokenize the concatenated data using SentencePiece (Kudo and Richardson, 2018) with an 80K subword vocabulary. We use the Transformer-base architecture with a hidden dimension of 512, a feedforward size of 2048, and 6 layers each for the encoder and decoder. We set the maximum sequence length to 512 and use a batch size of 1024 for 100K iterations with 32 NVIDIA V100
+
+| Model | En-Run BLEU | En-Run chrF | En-Zu BLEU | En-Zu chrF | En-Af BLEU | En-Af chrF | En-Xh BLEU | En-Xh chrF |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Random | 22.92 | 51.89 | 34.84 | 65.54 | 48.33 | 68.11 | 24.36 | 52.91 |
+| mNMT | 21.53 | 50.62 | 31.53 | 62.95 | 43.39 | 64.73 | 22.28 | 54.81 |
+| AfroBART Baseline | 24.33 | 52.87 | 35.59 | 66.14 | 49.09 | 68.54 | 25.65 | 58.09 |
+| AfroBART-Dictionary | 24.42 | 53.22 | 35.48 | 66.16 | 49.25 | 68.75 | 25.77 | 58.15 |
+| AfroBART | 24.62 | 53.24 | 35.58 | 66.30 | 49.80 | 69.03 | 25.80 | 58.22 |
+
+| Model | En-Ln BLEU | En-Ln chrF | En-Bem BLEU | En-Bem chrF | En-St BLEU | En-St chrF | En-Sw BLEU | En-Sw chrF |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Random | 28.23 | 52.62 | 18.96 | 45.85 | 43.04 | 62.68 | 33.61 | 58.56 |
+| mNMT | 27.29 | 53.16 | 18.54 | 46.20 | 40.26 | 60.65 | 30.55 | 56.44 |
+| AfroBART Baseline | 29.12 | 54.31 | 20.07 | 47.50 | 43.79 | 63.22 | 34.19 | 59.08 |
+| AfroBART-Dictionary | 29.13 | 54.40 | 20.48 | 47.69 | 43.74 | 63.33 | 34.30 | 59.08 |
+| AfroBART | 29.46 | 54.68 | 20.60 | 48.00 | 43.87 | 63.42 | 34.36 | 59.11 |
+
+Table 2: Results on AFROMT's En-XX Machine Translation
+
+GPUs for one day. When we continue training using pseudo-monolingual data, we use a learning rate of $7 \times 10^{-5}$ , warm up over 5K iterations, and train for 35K iterations.
+
+# 4.2 Finetuning
+
+Baselines We use the following baselines for our benchmark:
+
+- AfroBART Baseline We pretrain a model using only the original monolingual corpora in a similar fashion to Liu et al. (2020).
+- AfroBART-Dictionary We pretrain a model using the original data in addition to English monolingual corpora augmented via dictionaries into Afrikaans, Bemba, Sesotho, Xhosa, Zulu, Lingala, and Swahili.
+- AfroBART We continue training the dictionary-augmented AfroBART model, using pseudo-monolingual data produced by its finetuned counterparts. Due to computational constraints we perform only one iteration of our iterative approach. Statistics for the pseudo-monolingual data can be seen in Table 1.
+- Cross-Lingual Transfer (CLT) When experimenting with the effect of pretraining under various amounts of finetuning data, we use strong cross-lingual transfer models, trained from scratch on a combination of our low-resource data and a similar, relatively high-resource language, following Neubig and Hu (2018).
+- Multilingual Neural Machine Translation (mNMT) We also experiment with a vanilla
+
+multilingual machine translation system (Dabre et al., 2020) trained on all En-XX directions.
+
+- Random As additional baselines, we also provide a comparison with a randomly initialized Transformer-base (Vaswani et al., 2017) models for each translation pair.
+
+Evaluation We evaluate our system outputs using two automatic evaluation metrics: detokenized BLEU (Papineni et al., 2002; Post, 2018) and chrF (Popović, 2015). Although BLEU is a standard metric for machine translation, being cognizant of the morphological richness of the languages in the AFROMT benchmark, we use chrF to measure performance at the character level. Both metrics are computed using the SacreBLEU library (Post, 2018).
+
+# 5 Results and Discussion
+
+# 5.1 Performance on En-XX Translation
+
+Table 2 shows the results on En-XX translation on the AFROMT benchmark, comparing random initialization with various pretrained AfroBART configurations. We find that initializing with pretrained AfroBART weights results in performance gains of $\sim 1$ BLEU across all language pairs. Furthermore, we observe that augmenting our pretraining data with a dictionary results in performance gains across all pairs in terms of chrF and on 6/8 pairs in terms of BLEU. The gain is especially clear on languages with smaller amounts of monolingual data
+
+
+Figure 3: Visualization of results using various amounts of parallel data on English-Xhosa and English-Zulu. We compare AfroBART, random initialization and cross-lingual transfer.
+
+
+
+such as Rundi and Bemba, demonstrating the effectiveness of our data augmentation techniques for low-resource translation. Moreover, we see further improvements when augmenting with pseudo-monolingual data, especially on pairs with less data, which validates the use of this technique.
+
+# 5.2 Performance vs Amount of Parallel Data
+
+We perform experiments to demonstrate the effect of pretraining with various amounts of parallel data (10k, 50k, and 100k pairs) on two related language pairs: English-Xhosa and English-Zulu. We compare AfroBART (with both dictionary augmentation and pseudo-monolingual data) with randomly initialized models and with cross-lingual transfer models (Neubig and Hu, 2018) jointly trained with a larger amount of parallel data (the full AFROMT data) in a related language.
+
+In Figure 3, a pretrained AfroBART model finetuned on 10K pairs almost doubles the performance of the other models (with a significant gain over random initialization of $15+$ BLEU on English-Zulu), outperforming both cross-lingual transfer and randomly initialized models trained on 5x the data. Furthermore, we notice that CLT performs worse than Random on English-Xhosa as the data size increases. Although we do not have an exact explanation for this, we believe the other languages' data adds noise rather than additional supervision as the data size increases. We detail these results in Table 3 of the Appendix.
+
+Comparison on convergence speed In contrast to the cross-lingual transfer baseline, which uses more data, and the random initialization baseline, which must learn from scratch, AfroBART is able to leverage the knowledge gained during pretraining for fast adaptation even with small amounts of data. For example, AfroBART converged within 1,000 iterations when finetuning on 10K English-Zulu pairs, whereas the random initialization and cross-lingual transfer baselines converged within 2.5K and 12K iterations, respectively. This is promising as it indicates that we can quickly leverage these models for other tasks with much less parallel data.
+
+# 5.3 Fine-grained Language Analysis
+
+We further provide a suite of fine-grained analysis tools to compare the baseline systems. In particular, we are interested in evaluating the translation accuracy of noun classes in the considered African languages of the Niger-Congo family, as these languages are morphologically rich and often have more than 10 noun classes distinguished by word prefixes. For example, kitabu and vitabu in Swahili refer to book and books in English, respectively. Based on this language characteristic, our fine-grained analysis tool calculates the translation accuracy of the nouns with the top 10 most frequent prefixes in the test data. One challenge here is identifying nouns in a sentence written in the target African language, since no part-of-speech (POS) tagger is available for these languages. To tackle this, we propose a label projection method based on word alignment. Specifically, we first leverage an existing English POS tagger in the spaCy library to annotate the English source sentences. We then use the fast_align tool (Dyer et al., 2013) to train a word alignment model on the training data for the En-XX language pair, and use the alignment model to obtain word-level alignments for the test data. We assign the POS tags of the source words in English to their aligned target words in the African language. We then measure the translation accuracy of nouns in the African language by checking whether the correct nouns are included in the sentences produced by each system in comparison. Notably, our analysis tool can also measure the translation accuracy of words with other POS tags (e.g., verbs, adjectives), which are often inflected to agree with different noun classes.
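The projection and prefix-bucketing steps could be sketched as follows. These are illustrative helpers (the real pipeline consumes spaCy tags and fast_align links; the function names and data layout here are assumptions):

```python
def project_pos(src_tags, alignment):
    """Project source-side POS tags onto target tokens via word
    alignment. `src_tags` maps source index -> POS tag; `alignment`
    is a list of (src_idx, tgt_idx) links. Returns tgt_idx -> tag;
    unaligned target tokens receive no tag."""
    tgt_tags = {}
    for i, j in alignment:
        tgt_tags[j] = src_tags[i]
    return tgt_tags

def noun_prefix_buckets(tgt_tokens, tgt_tags, prefix_len=2):
    """Group target-side nouns by their (noun-class) prefix, so that
    per-prefix translation accuracy can be tallied."""
    buckets = {}
    for j, tok in enumerate(tgt_tokens):
        if tgt_tags.get(j) == "NOUN":
            buckets.setdefault(tok[:prefix_len].lower(), []).append(tok)
    return buckets
```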
+
+Figure 4 compares the AfroBART and Random baselines in terms of the translation accuracy of nouns in Swahili. First, we find that both systems perform worse on translating nouns with the prefix "ku-", which usually represents the infinitive form of verbs, e.g., kula for eating. Secondly, we find that AfroBART significantly improves translation accuracy for nouns with the prefixes "ki-" (describing man-made tools/languages, e.g., kitabu for book) and "mw-" (describing a person, e.g., mwalimu for teacher). Finally, AfroBART improves the translation accuracy averaged over the ten noun classes by $1.08\%$ over the Random baseline.
+
+We also perform this analysis in our data-constrained scenario for English-Xhosa, shown in Figure 5. It can be seen that leveraging cross-lingual transfer (models trained on both Xhosa and Zulu) improves noun class accuracy on classes shared between the two languages, such as uku (the infinitive noun class), izi (plural for objects), and ama (plural for body parts). This can be contrasted with iin (plural for animals), which is used only in Xhosa and on which CLT decreases performance. Such analyses, which require knowledge of the grammar unique to these languages, can be used to diagnose cross-lingual transfer. We also note that AfroBART almost doubles the accuracy of the cross-lingual transfer baseline on these noun classes (an improvement of $16.33\%$).
+
+# 5.4 Shortcomings of AFROMT
+
+Although we believe AFROMT to be an important step in the right direction, we acknowledge it is far from a complete solution. Specifically, we note the following: (1) for many languages the data lacks domain diversity, being largely drawn from religiously oriented corpora, and (2) given its origin, the corpora may still contain some fine-grained forms of translation noise. Given this, in the future we look to include more diverse data
+
+
+Figure 4: Translation accuracy of the AfroBART and Random baseline systems on Swahili noun classes with top 10 most frequent 2-character prefixes.
+
+
+Figure 5: Translation accuracy of the AfroBART and Random baseline systems on Xhosa (10k pairs) noun classes with top 10 most frequent 3-character prefixes.
+
+sources and more languages and encourage the community to do so as well.
+
+# 6 Related Work
+
+Machine Translation Benchmarks Previous work in benchmarking includes the commonly used WMT (Bojar et al., 2017) and IWSLT (Federico et al., 2020) shared tasks. Recent work on MT benchmarks for low-resource languages, such as that of Guzmán et al. (2019), have been used for the purpose of studying current NMT techniques for low-resource languages.
+
+Multilingual Pretraining Multilingual encoder pretraining (Devlin et al., 2019; Conneau and Lample, 2019; Conneau et al., 2020) has been demonstrated to be an effective technique for cross-lingual transfer on a variety of classification tasks (Hu et al., 2020; Artetxe et al., 2020). More recently, sequence-to-sequence pretraining has emerged as a prevalent method for achieving better performance (Lewis et al., 2020; Song et al., 2019) on generation tasks. Liu et al. (2020) proposed a multilingual approach to BART (Lewis et al., 2020) and demonstrated increased performance on MT. Building on these works, we extend this to an LRL-focused setting, developing two new techniques for improved performance given monolingual data scarcity. In concurrent work, Liu et al. (2021); Reid and Artetxe (2021) also look at using code-switched corpora for sequence-to-sequence pretraining.
+
+NLP for African Languages Benchmarking machine translation for African languages was first done by Abbott and Martinus (2019) for southern African languages and Abate et al. (2018) for Ethiopian languages. Recent work in NLP for African languages has largely revolved around the grassroots translation initiative Masakhane (Orife et al., 2020; Nekoto et al., 2020). This bottom-up approach to dataset creation (Nekoto et al., 2020), while very valuable, has tended to result in datasets with somewhat disparate data splits and quality standards. In contrast, AFROMT provides a cleaner corpus for the 8 supported languages. We plan to open source the entire benchmark (splits included) to promote reproducible results in the community.
+
+# 7 Conclusion
+
+In this work we proposed a standardized, clean, and reproducible benchmark for 8 African languages, AFROMT, as well as novel pretraining strategies in the previously unexplored low-resource focused setting. Our benchmark and evaluation suite are a step towards larger, reproducible benchmarks in these languages, helping to provide insights on how current MT techniques work for these under-explored languages. We will release this benchmark, our pretrained AfroBART models, dictionaries, and pseudo monolingual data to the community to facilitate further work in this area.
+
+In future work we look to use similar methodology to advance in both of these directions. We look to increase the number of language pairs in AFROMT to be more representative of the African continent. Additionally, we look to scale up our pretraining approaches for increased performance.
+
+# Acknowledgements
+
+We thank Antonios Anastasopoulos and Edison Marrese-Taylor, and the anonymous reviewers for feedback and comments. We also thank Aditi Chaudhary and Kathleen Siminyu for helpful discussions in early stages of this work. MR is grateful to the Masason Foundation for their support.
+
+# References
+
+Solomon Teferra Abate, Michael Melese, Martha Yifiru Tachbelie, Million Meshesha, Solomon Atinafu, Wondwossen Mulugeta, Yaregal Assabie, Hafte Abera, Binyam Ephrem, Tewodros Abebe, Wondimagegnhue Tsegaye, Amanuel Lemma, Tsegaye Andargie, and Seifedin Shifaw. 2018. Parallel corpora for bi-lingual English-Ethiopian languages statistical machine translation. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3102-3111, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
+Jade Abbott and Laura Martinus. 2019. Benchmarking neural machine translation for Southern African languages. In Proceedings of the 2019 Workshop on Widening NLP, pages 98-101, Florence, Italy. Association for Computational Linguistics.
+Ahmed Abdelali, Francisco Guzman, Hassan Sajjad, and Stephan Vogel. 2014. The AMARA corpus: Building parallel language resources for the educational domain. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 1856-1862, Reykjavik, Iceland. European Language Resources Association (ELRA).
+Zeljko Agić and Ivan Vulić. 2019. JW300: A wide-coverage parallel corpus for low-resource languages. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3204–3210, Florence, Italy. Association for Computational Linguistics.
+Naveen Arivazhagan, Ankur Bapna, Orhan Firat, Dmitry Lepikhin, Melvin Johnson, Maxim Krikun, Mia Xu Chen, Yuan Cao, George Foster, Colin Cherry, et al. 2019. Massively multilingual neural machine translation in the wild: Findings and challenges. arXiv preprint arXiv:1907.05019.
+Mikel Artetxe, Sebastian Ruder, and Dani Yogatama. 2020. On the cross-lingual transferability of monolingual representations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4623-4637, Online. Association for Computational Linguistics.
+Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
+John T. Bendor-Samuel and Rhonda L. Hartell, editors. 1989. The Niger-Congo Languages: A classification and description of Africa's largest language family. University Press of America, Lanham, MD.
+Ondrej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Shujian Huang, Matthias Huck, Philipp Koehn, Qun Liu, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Raphael Rubino, Lucia Specia, and Marco Turchi. 2017. Findings of the 2017 conference on machine translation (WMT17). In Proceedings of the Second Conference on Machine Translation, pages 169-214, Copenhagen, Denmark. Association for Computational Linguistics.
+Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440-8451, Online. Association for Computational Linguistics.
+Alexis Conneau and Guillaume Lample. 2019. Cross-lingual language model pretraining. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 7057-7067.
+Raj Dabre, Chenhui Chu, and Anoop Kunchukuttan. 2020. A survey of multilingual neural machine translation. ACM Comput. Surv., 53(5).
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
+Chris Dyer, Victor Chahuneau, and Noah A. Smith. 2013. A simple, fast, and effective reparameterization of IBM model 2. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 644-648, Atlanta, Georgia. Association for Computational Linguistics.
+Miquel Esplà, Mikel Forcada, Gema Ramírez-Sánchez, and Hieu Hoang. 2019. ParaCrawl: Web-scale parallel corpora for the languages of the EU. In Proceedings of Machine Translation Summit XVII Volume 2: Translator, Project and User Tracks, pages 118-119, Dublin, Ireland. European Association for Machine Translation.
+Marcello Federico, Robert Enyedi, Roberto Barra-Chicote, Ritwik Giri, Umut Isik, Arvindh Krishnaswamy, and Hassan Sawaf. 2020. From speech-to-speech translation to automatic dubbing. In Proceedings of the 17th International Conference on Spoken Language Translation, pages 257-264, Online. Association for Computational Linguistics.
+Mitchell A Gordon, Kevin Duh, and Jared Kaplan. 2021. Data and parameter scaling laws for neural machine translation. In ACL Rolling Review - May 2021.
+Francisco Guzmán, Peng-Jen Chen, Myle Ott, Juan Pino, Guillaume Lample, Philipp Koehn, Vishrav Chaudhary, and Marc'Aurelio Ranzato. 2019. The FLORES evaluation datasets for low-resource machine translation: Nepali-English and Sinhala-English. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6098-6111, Hong Kong, China. Association for Computational Linguistics.
+Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson. 2020. XTREME: A massively multilingual multitask benchmark for evaluating cross-lingual generalisation. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 4411-4421. PMLR.
+Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
+Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions, pages 177-180, Prague, Czech Republic. Association for Computational Linguistics.
+Philipp Koehn and Rebecca Knowles. 2017. Six challenges for neural machine translation. In Proceedings of the First Workshop on Neural Machine Translation, pages 28-39, Vancouver. Association for Computational Linguistics.
+Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and tokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium. Association for Computational Linguistics.
+Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics.
+Pierre Lison and Jörg Tiedemann. 2016. OpenSubtitles2016: Extracting large parallel corpora from movie and TV subtitles. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 923-929, Portorož, Slovenia. European Language Resources Association (ELRA).
+Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pre-training for neural machine translation. Transactions of the Association for Computational Linguistics, 8(0):726-742.
+Zihan Liu, Genta Indra Winata, and Pascale Fung. 2021. Continual mixed-language pre-training for extremely low-resource neural machine translation. Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021.
+Yuxian Meng, Xiangyuan Ren, Zijun Sun, Xiaoya Li, Arianna Yuan, Fei Wu, and Jiwei Li. 2019. Large-scale pretraining for neural machine translation with tens of billions of sentence pairs. arXiv preprint arXiv:1909.11861.
+Wilhelmina Nekoto, Vukosi Marivate, Tshinondiwa Matsila, Timi Fasubaa, Taiwo Fagbohungbe, Solomon Oluwole Akinola, Shamsuddeen Muhammad, Salomon Kabongo Kabenamualu, Salomey Osei, Freshia Sackey, Rubungo Andre Niyongabo, Ricky Macharm, Perez Ogayo, Orevaoghene Ahia, Musie Meressa Berhe, Mofetoluwa Adeyemi, Masabata Mokgesi-Selinga, Lawrence Okegbemi, Laura Martinus, Kolawole Tajudeen, Kevin Degila, Kelechi Ogueji, Kathleen Siminyu, Julia Kreutzer, Jason Webster, Jamiil Toure Ali, Jade Abbott, Iroro Orife, Ignatius Ezeani, Idris Abdulkadir Dangana, Herman Kamper, Hady Elsahar, Goodness Duru, Ghollah Kioko, Murhabazi Espoir, Elan van Biljon, Daniel Whitenack, Christopher Onyefuluchi, Chris Chinenye Emezue, Bonaventure F. P. Dossou, Blessing Sibanda, Blessing Bassey, Ayodele Olabiyi, Arshath Ramkilowan, Alp Oktem, Adewale Akinfaderin, and Abdallah Bashir. 2020. Participatory research for low-resourced machine translation: A case study in African languages. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 2144-2160, Online. Association for Computational Linguistics.
+Graham Neubig and Junjie Hu. 2018. Rapid adaptation of neural machine translation to new languages. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 875-880, Brussels, Belgium. Association for Computational Linguistics.
+Iroro Orife, Julia Kreutzer, Blessing Sibanda, Daniel Whitenack, Kathleen Siminyu, Laura Martinus, Jamiil Toure Ali, Jade Abbott, Vukosi Marivate, Salomon Kabongo, Musie Meressa, Espoir Murhabazi, Orevaoghene Ahia, Elan van Biljon, Arshath Ramkilowan, Adewale Akinfaderin, Alp Oktem, Wole Akin, Ghollah Kioko, Kevin Degila, Herman Kamper, Bonaventure Dossou, Chris Emezue, Kelechi Ogueji, and Abdallah Bashir. 2020. Masakhane - machine translation for africa. arXiv preprint arXiv:2003.11529.
+Robert Östling and Jörg Tiedemann. 2016. Efficient word alignment with Markov Chain Monte Carlo. Prague Bulletin of Mathematical Linguistics, 106:125-146.
+Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 48-53, Minneapolis, Minnesota. Association for Computational Linguistics.
+Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018. Scaling neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 1-9, Brussels, Belgium. Association for Computational Linguistics.
+Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
+Maja Popović. 2015. chrF: character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 392-395, Lisbon, Portugal. Association for Computational Linguistics.
+Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186-191, Brussels, Belgium. Association for Computational Linguistics.
+Machel Reid and Mikel Artetxe. 2021. PARADISE: Exploiting parallel data for multilingual sequence-to-sequence pretraining. ArXiv, abs/2108.01887.
+Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. 2019. MASS: Masked sequence to sequence pre-training for language generation. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Proceedings of Machine Learning Research, pages 5926-5936. PMLR.
+Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 3104-3112.
+Jörg Tiedemann. 2012. Parallel data, tools and interfaces in OPUS. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), pages 2214-2218, Istanbul, Turkey. European Language Resources Association (ELRA).
+Chau Tran, Yuqing Tang, Xian Li, and Jiatao Gu. 2020. Cross-lingual retrieval for iterative self-supervised training. In Advances in Neural Information Processing Systems.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998-6008.
+Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, Armand Joulin, and Edouard Grave. 2020. CCNet: Extracting high quality monolingual datasets from web crawl data. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4003-4012, Marseille, France. European Language Resources Association.
+Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2020. mT5: A massively multilingual pre-trained text-to-text transformer. arXiv preprint arXiv:2010.11934.
+Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. 2016. Transfer learning for low-resource neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1568-1575, Austin, Texas. Association for Computational Linguistics.
+
+# A AFROMT
+
+We provide extra information — Script, Language Family, L1 and L2 speakers, Location as well as Word Order — in Table 4.
+
+We upload AFROMT as well as the data generated using the pseudo monolingual data synthesis $^{16}$ .
+
+# B Pretraining
+
+Data In addition to the monolingual data for the languages in AFROMT (shown in Table 1 of the main paper), we use 14 GB of English data, and 7 GB each of French and Dutch data.
+
+Additional Hyperparameters We optimize the model using Adam (Kingma and Ba, 2015) with hyperparameters $\beta = (0.9, 0.98)$ and $\epsilon = 10^{-6}$ . We warm up the learning rate to a peak of $3 \times 10^{-4}$ over 10K iterations and then decay it using a polynomial schedule over the remaining 90K iterations. For regularization, we use a dropout value of 0.1 and weight decay of 0.01.
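+
+The warmup-then-polynomial-decay schedule above can be written out explicitly. The sketch below is illustrative (a fairseq-style linear warmup followed by polynomial decay to zero), not the exact training code; `end_lr` and `power` are assumed defaults:
+
+```python
+def lr_at(step, peak=3e-4, warmup=10_000, total=100_000, end_lr=0.0, power=1.0):
+    """Learning rate at a given step: linear warmup to `peak` over `warmup`
+    steps, then polynomial decay toward `end_lr` until `total` steps."""
+    if step < warmup:
+        return peak * step / warmup
+    frac = (total - step) / (total - warmup)  # 1 -> 0 over the decay phase
+    return end_lr + (peak - end_lr) * max(frac, 0.0) ** power
+```
+
+With `power=1.0` this reduces to linear decay; the peak of $3 \times 10^{-4}$ is reached exactly at iteration 10K and the rate reaches zero at 100K.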
+
+# C Finetuning Hyperparameters
+
+Training from scratch When training using random initialization (or CLT), we use a batch size of $32\mathrm{K}$ tokens (or 64K in the case of CLT), warm up the learning rate to $5 \times 10^{-4}$ over 10K iterations, and decay it with the inverse square root schedule. We use a dropout value of 0.3, a weight decay value of 0.01, and a label smoothing value of $\epsilon = 0.1$ .
+
+Finetuning from AfroBART We train using a batch size of 32K tokens and a smaller learning rate of $3 \times 10^{-4}$ . We use a polynomial learning rate schedule, reaching the peak learning rate at 5K iterations and finishing training after 50K iterations. We perform early stopping, ending training if the best validation loss has not improved for over 10 epochs. We use a label smoothing value of $\epsilon = 0.2$ , a dropout value of 0.3, and weight decay of 0.01.
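+
+The early stopping criterion can be sketched as a simple patience check on the per-epoch validation losses (an assumed implementation; the paper does not give code):
+
+```python
+def should_stop(val_losses, patience=10):
+    """Early stopping on validation loss: stop once the best loss seen so far
+    has not been matched or improved within the last `patience` epochs."""
+    if len(val_losses) <= patience:
+        return False
+    best = min(val_losses)
+    # stop if no loss in the recent window is as good as the overall best
+    return all(v > best for v in val_losses[-patience:])
+```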
+
+# D Training Infrastructure
+
+For finetuning models on AFROMT we use between 1 and 8 NVIDIA V100 16GB GPUs on a DGX-1 machine running Ubuntu 16.04 on a Dual
+
+20-Core Intel Xeon E5-2698 v4 2.2 GHz. For pretraining we make use of a compute cluster using 8 nodes with 4 NVIDIA V100 16GB GPUs per node.
+
+# E Quantification of Potential Data Leakage
+
+In low-resource machine translation, data leakage is a key concern, as it can produce misleading results. We quantify data leakage for our benchmark by measuring target-side train-test overlap using 4-grams. We take the 100k most frequent 4-grams from the training set, compare them with all 4-grams in the test set, and obtain an average 4-gram overlap of $5.01 \pm 2.56\%$ (measured against all test-set 4-grams). To put this value in context, we ran the same measurement on other widely used low-resource datasets from IWSLT (En-Vi, Ja-En, Ar-En) and obtained $9.50\%$ , $5.49\%$ , and $5.53\%$ , respectively. We believe this to be reasonable evidence of the lack of train-test data leakage.
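+
+The 4-gram overlap measurement can be sketched as follows. This is an illustrative reimplementation (whitespace tokenization assumed), not the authors' exact script:
+
+```python
+from collections import Counter
+
+def ngrams(tokens, n=4):
+    """All contiguous n-grams of a token list, as tuples."""
+    return zip(*(tokens[i:] for i in range(n)))
+
+def leakage(train_sents, test_sents, n=4, top_k=100_000):
+    """Fraction of test-set n-grams that appear among the top_k most frequent
+    training-set n-grams (higher = more potential train-test leakage)."""
+    train_counts = Counter(g for s in train_sents for g in ngrams(s.split(), n))
+    top = {g for g, _ in train_counts.most_common(top_k)}
+    test_grams = [g for s in test_sents for g in ngrams(s.split(), n)]
+    if not test_grams:
+        return 0.0
+    return sum(g in top for g in test_grams) / len(test_grams)
+```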
+
+Furthermore, we also quantify source-target leakage: computing BLEU between the source and target sides over all training sets, we obtain an average of $4.5 \pm 1.3$ before cleaning (indicating heavy overlap in certain source-target pairs in the corpus) and $0.7 \pm 0.2$ after cleaning, indicating a significant decrease in such overlap.
+
+# F Parameter Count
+
+We keep the parameter count of 85M consistent throughout our experiments as we use the same model architecture. We ran experiments on scaling up randomly initialized models with a hidden size of 768 and feed forward dimension of 3072 with 6 layers in both the encoder and decoder on three language pairs. The results of these experiments can be seen in Table 3.
+
+| Lang. Pair | Model | Param. Count | BLEU | chrF |
+| --- | --- | --- | --- | --- |
+| En-Run | Random | 85M | 22.92 | 51.89 |
+| En-Run | Random | 160M | 22.12 | 51.22 |
+| En-Sw | Random | 85M | 33.61 | 58.56 |
+| En-Sw | Random | 160M | 33.62 | 58.65 |
+| En-Ln | Random | 85M | 28.37 | 53.65 |
+| En-Ln | Random | 160M | 27.58 | 53.29 |
+
+Table 3: Scalability comparison
+
+It can be seen that increasing the parameter count of randomly initialized models on AFROMT doesn't provide
+
+| Languages | ISO 639-2 code | Script | Language Family | L1 speakers | L2 speakers |
+| --- | --- | --- | --- | --- | --- |
+| Afrikaans | Afr | Latin, Arabic | Indo-European: Germanic | 7.2M | 10.3M |
+| Bemba | Bem | Latin | Niger-Congo: Bantu Zone M | 4M | 2M |
+| Lingala | Lin | Latin | Niger-Congo: Bantu Zone C | 20M | 25M |
+| Rundi | Run | Latin | Niger-Congo: Bantu Zone D | 11.9M | — |
+| Sotho | Sot | Latin | Niger-Congo: Bantu Zone S | 5.6M | 7.9M |
+| Swahili | Swa | Latin | Niger-Congo: Bantu Zone G | 150M | 90M |
+| Xhosa | Xho | Latin | Niger-Congo: Bantu Zone S | 8.2M | 11M |
+| Zulu | Zul | Latin | Niger-Congo: Bantu Zone S | 12M | 16M |
+
+| Languages | Location | Noun Classes (Singular/Plural/Total) | Word Order |
+| --- | --- | --- | --- |
+| Afrikaans | South Africa, Namibia | — | SVO |
+| Bemba | North-Eastern Zambia | 9/6/15 | SVO |
+| Lingala | DR Congo, Congo | 9/6/15 | SVO |
+| Rundi | Burundi | 9/7/16 | SVO |
+| Sotho | Lesotho, South Africa, Zimbabwe | 6/5/11 | SVO |
+| Swahili | African Great Lakes region, East/Southern Africa | 9/9/18 | SVO |
+| Xhosa | South Africa | 8/7/15 | SVO |
+| Zulu | South Africa, Lesotho, Eswatini | 6/10/16 | SVO |
+
+Table 4: Extra information on all the languages contained within AFROMT
+
+an effective performance/compute tradeoff, harming performance on English-Rundi and English-Lingala while minimally improving performance on English-Swahili. This being said, we believe that if we scale up AfroBART, given the insights from Liu et al. (2020); Gordon et al. (2021), we can provide a good initialization that allows us to scale to these model sizes for greater performance.
+
+# G Fine-grained morphological analysis in a data constrained regime
+
+
+Figure 6: Translation accuracy of the AfroBART and Random baseline systems on Zulu (10k pairs) noun classes with top 10 most frequent 2-character prefixes.
+
+We perform our fine-grained morphological analysis (described in Section 5.3 of the main paper) on the data-constrained scenario (described in Section 5.2 of the main paper). We perform the analysis on English-Xhosa and English-Zulu (10k parallel sentence pairs) side by side and visualize them in Figure 6 and Figure 7. It can be seen that cross-lingual transfer improves accuracy in this data-constrained scenario over a random baseline, which is in turn improved upon by AfroBART.
+
+Figure 7: Translation accuracy of the AfroBART and Random baseline systems on Xhosa (10k pairs) noun classes with top 10 most frequent 3-character prefixes. (Same as Figure 5 of the main paper)
+
+Additionally, we report the BLEU and chrF scores of the data constrained experiments (shown in Figure 3 of the main paper) in Table 5.
+
+| Lang. Pair | # Data | Model | BLEU | chrF |
+| --- | --- | --- | --- | --- |
+| En-Zu | 10k | Random | 4.06 | 28.26 |
+| En-Zu | 10k | CLT | 8.08 | 37.90 |
+| En-Zu | 10k | AfroBART | 20.44 | 51.35 |
+| En-Zu | 50k | Random | 18.01 | 50.55 |
+| En-Zu | 50k | CLT | 20.41 | 51.52 |
+| En-Zu | 50k | AfroBART | 26.95 | 58.56 |
+| En-Zu | 100k | Random | 23.09 | 55.63 |
+| En-Zu | 100k | CLT | 24.50 | 55.81 |
+| En-Zu | 100k | AfroBART | 29.41 | 60.81 |
+| En-Xh | 10k | Random | 2.82 | 26.29 |
+| En-Xh | 10k | CLT | 6.35 | 32.31 |
+| En-Xh | 10k | AfroBART | 13.98 | 43.19 |
+| En-Xh | 50k | Random | 11.94 | 42.62 |
+| En-Xh | 50k | CLT | 10.12 | 39.73 |
+| En-Xh | 50k | AfroBART | 18.54 | 49.70 |
+| En-Xh | 100k | Random | 16.00 | 47.92 |
+| En-Xh | 100k | CLT | 11.64 | 41.19 |
+| En-Xh | 100k | AfroBART | 20.45 | 52.35 |
+
+Table 5: Comparing performance with various amounts of parallel data
\ No newline at end of file
diff --git a/agenerativeframeworkforsimultaneousmachinetranslation/full.md b/agenerativeframeworkforsimultaneousmachinetranslation/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..2ece1afcfbb890acbefd74c3f7c62bd3035e3941
--- /dev/null
+++ b/agenerativeframeworkforsimultaneousmachinetranslation/full.md
@@ -0,0 +1,346 @@
+# A Generative Framework for Simultaneous Machine Translation
+
+Yishu Miao
+Imperial College London
+ByteDance
+
+Phil Blunsom
+University of Oxford
+DeepMind
+
+Lucia Specia
+Imperial College London
+University of Sheffield
+
+ym713@ic.ac.uk phil.blunsom@cs.ox.ac.uk l.specia@ic.ac.uk
+
+# Abstract
+
+We propose a generative framework for simultaneous machine translation. Conventional approaches use a fixed number of source words to translate or learn dynamic policies for the number of source words by reinforcement learning. Here we formulate simultaneous translation as a structural sequence-to-sequence learning problem. A latent variable is introduced to model read or translate actions at every time step, which is then integrated out to consider all the possible translation policies. A re-parameterised Poisson prior is used to regularise the policies which allows the model to explicitly balance translation quality and latency. The experiments demonstrate the effectiveness and robustness of the generative framework, which achieves the best BLEU scores given different average translation latencies on benchmark datasets.
+
+# 1 Introduction
+
+The fundamental challenge of simultaneous machine translation (SiMT) is the balance between the translation quality and the latency. It is non-trivial to find an optimal translation strategy, as there is generally a rivalry between the two objectives, i.e. reading more source words before translating leads to better translation quality, but it in turn results in higher latency due to the longer time for reading.
+
+Conventional Wait- $k$ policies (Ma et al., 2019) put a hard limitation over the buffer size $k^1$ , which guarantees low latency but weakens flexibility and scalability when handling long and complicated language pairs. Alternatively, reinforcement learning (RL) approaches (Gu et al., 2017; Satija and Pineau, 2016; Arthur et al., 2021) learn a dynamic policy using a combined reward of a quality metric like the BLEU score and AL (average lagging) $^2$ .
+
+
+
+
Figure 1: Examples of translation paths of different simultaneous translation models: (a) Wait-k ($k = 3$); (b) Adaptive wait-k ($k = 3$); (c) RL method; (d) GSiMT.
+
+However, the poor sample efficiency makes it very difficult to learn a robust SiMT model with RL.
+
In this paper we propose a generative framework with a latent variable that dynamically decides between the actions of read or translate at every time step, enabling the formulation of SiMT as a structural sequence-to-sequence learning task. Figure 1 depicts examples of possible translation paths of different models. Wait-$k$ only explores one hypothesis, while adaptive wait-$k$ ensembles the other hypotheses with lower $k$. However, hypotheses that read more than $k$ words before translating are not considered (e.g. inversion and reordering in long-sequence translations). RL models apply dynamic policies that can explore all possible hypotheses, but the gradient estimator conditioned on discrete samples has large variance, and this issue gets worse for long sequences. Instead, our proposed generative simultaneous machine translation model (GSiMT)
+
integrates out all the hypotheses via a dynamic programming algorithm (Algorithm 1) with the help of the introduced latent variable. It does not suffer from such a variance issue, and can be easily and efficiently learned by gradient backpropagation on GPU hardware.
+
The generative model can be modelled as a neural transducer (Graves, 2012; Yu et al., 2016). However, the vanilla neural transducer is not designed for SiMT: because it is optimised by the cross-entropy of target words, it naturally prefers read actions over translate actions in order to see more context before translating, which intuitively yields better translation quality but high latency.
+
Here, we propose to extend the neural transducer framework to modern Transformer-based translation models (Vaswani et al., 2017), and introduce a re-parameterised Poisson distribution to regularise the latency (i.e. how many source words are read before translating a target word). Inspired by the fast-alignment work of Dyer et al. (2013), which observes that translation models generally favor word alignments distributed close to the diagonal, we hypothesise that the optimal sequence of translate actions in SiMT is also located close to the diagonal. Thus the Poisson prior acts as context-independent regularisation on the buffer size, proportional to the distance between the current position and the diagonal. This ensures that the number of read source words will not grow indefinitely without translating any target words, while the soft boundary, due to the regularisation, still allows the model to consider complicated/long simultaneous translation cases.
+
To demonstrate the effectiveness of the proposed framework, we evaluate our generative models on two benchmark datasets: WMT15 (Bojar et al., 2015) for text-only SiMT and Multi30K (Elliott et al., 2016) for multimodal SiMT. Compared to a number of strong baseline models, Wait-$k$, Adaptive Wait-$k$ and an RL-trained policy, our proposed model achieves the best performance on both BLEU scores and average lagging (AL). Our contributions can be summarised as follows:
+
+- A Transformer-based neural transducer model for simultaneous machine translation.
+- Poisson prior for effectively balancing the translation quality and latency.
- State-of-the-art SiMT results (BLEU & AL) on benchmark datasets, with BLEU scores on par with consecutive MT models.
+
+# 2 Related Work
+
Conventional SiMT methods are based on heuristic waiting criteria (Cho and Esipova, 2016) or a fixed buffering strategy (Ma et al., 2019) to trade off translation quality for lower latency. Although the heuristic approaches are simple and straightforward, they lack scalability and cannot generalise well to longer sequences. There is also a bulk of work attempting to improve the attention mechanism (Arivazhagan et al., 2019) and re-translation strategies (Niehues et al., 2018) for better translation quality. Recently, Zheng et al. (2020) extended the fixed Wait-$k$ policies into an adaptive version that ensembles multiple models with lower latency to improve performance, but one still needs to choose a hard boundary on the maximum value of $k$. By contrast, our GSiMT model considers all possible paths with a soft boundary modelled by a Poisson distribution, which leads to a more flexible balance between quality and latency.
+
RL has been explored (Gu et al., 2017) to learn an agent that dynamically decides to read or translate conditioned on different translation contexts. Arthur et al. (2021) further apply extra knowledge of word alignments as an oracle to improve learning. However, the high variance of the estimator is still a bottleneck that hinders the applicability of RL to structural sequence-to-sequence learning. The proposed GSiMT model combines the merits of both the Wait-$k$ policies and RL.
+
Deep learning with structures has been explored in many NLP tasks, especially for sequence-to-sequence learning. Kim et al. (2017) implements structural dependencies on attention networks, which gives the ability to attend to partial segmentations or subtrees without changing the sequence-to-sequence structure. Tran et al. (2016) parameterises the transition and emission probabilities of an HMM with explicit neural components, and Jiang et al. (2016) applies deep structural latent variables to implement the dependency model with valence (Klein and Manning, 2004) and integrates out all the structures in end-to-end learning. Our GSiMT model is based on the neural transducer. Previously, Graves (2012) presents an RNN-based neural transducer for phoneme recognition, and Yu et al. (2016) explores an LSTM-based neural transducer for MT. The uni-directional variant model of Yu et al. (2016) is similar to our proposed GSiMT model; however, it is implemented as a vanilla neural transducer, which is not optimised for low latency and hence performs poorly on SiMT. Therefore, the Poisson prior for regularising the latency is the key component that enables neural transducer models to work on SiMT.

Figure 2: During training, all the contextualised representations $\mathbf{S}_{i,j}$ will be used to compute the translation distribution $p(y_{j}|X_{:i},Y_{:j-1})$ and the action distribution $p(a_{i,j}|X_{:i},Y_{:j-1})$, while in testing the model takes the inputs $X$ in real time and dynamically produces $y_{j}$ and $a_{i,j}$ until all the inputs have been read.
+
+# 3 Model
+
+# 3.1 Generative Model
+
We use $X_{:m}$ and $Y_{:n}$ to represent the source and target language sequences with lengths $m$ and $n$; $X_{:i}$ represents the sub-sequence $\{x_1, x_2, \ldots, x_i\}$. The structural latent variable $a_{i,j}$ (0 or 1) represents the action (read or translate): $a_{i,j} = 0$ means reading an extra source word and $a_{i,j} = 1$ means translating the target word $y_j$. The translation position $Z$ is introduced as an auxiliary variable to simplify the equations, where $z_j = i$ denotes that $i$ source words have been read when decoding the $j$-th word $y_j$. Similar to the neural transducer (Graves, 2012; Yu et al., 2016), the generative model can be formulated as:
+
$$
p(Y_{:j} \mid X) = \sum_{i=1}^{|X|} p(y_j \mid X_{:i}, Y_{:j-1}) \, p(z_j = i, Y_{:j-1} \mid X_{:i})
$$
+
Translation distribution. Given the contextualised representation $\mathbf{S}_{i,j}$, the translation distribution of $y_{j}$ is:
+
$$
p(y_j \mid X_{:i}, Y_{:j-1}) = \operatorname{softmax}\left(\mathbf{S}_{i,j} \cdot \mathbf{W}_y^T\right) \tag{1}
$$
+
+Specifically, $\mathbf{W}_y^T$ is the projection matrix for word prediction and we leave out the bias terms for simplicity. $\mathbf{S}_{i,j}$ is the state output conditioned on
+
+source words $X_{:i}$ and target words $Y_{:j-1}$ :
+
$$
\mathbf{S}_{i,j} = g\left(\operatorname{Enc}(X_{:i}), \operatorname{Dec}(Y_{:j-1})\right) \tag{2}
$$
+
where Enc and Dec are uni-directional Transformer-based encoder and decoder. Different from conventional consecutive NMT models, e.g. T5 (Raffel et al., 2020), where the encoder is a bi-directional Transformer, our model has no access to the full input stream when translating. Figure 2 shows the training process, where $\mathbf{S}_{i,j}$ is computed for all the sub-sequences at positions $i,j$.
+
Position distribution. The position distribution jointly models the translation position $z_{j}$ and the sub-sequence $Y_{:j-1}$:
+
$$
p(z_j = i, Y_{:j-1} \mid X_{:i}) = \sum_{i'=1}^{i} p(z_j = i \mid z_{j-1} = i', X_{:i}, Y_{:j-1}) \cdot p(Y_{:j-1} \mid X_{:i'}) \tag{3}
$$
+
Here, we recurrently decompose the position distribution into a sum, over all possible previous positions $i'$, of products of the sub-sequence probability $p(Y_{:j-1} \mid X_{:i'})$ and the transition from $z_{j-1} = i'$ to $z_j = i$, i.e. $i - i'$ source words are newly read before translating $y_j$.
+
+Switch distribution. To model all the possible transitions from $z_{j-1} = i'$ to $z_j = i$ , we employ the switch distribution:
+
$$
p(z_j = i \mid z_{j-1} = i', X_{:i}, Y_{:j-1}) = \begin{cases} 0 & \text{if } i < i' \\ \alpha_{i,j} & \text{if } i = i' \\ \alpha_{i,j} \cdot \prod_{k=i'}^{i-1} (1 - \alpha_{k,j}) & \text{if } i > i' \end{cases} \tag{4}
$$
+
+and
+
$$
\alpha_{i,j} = p(a_{i,j} = 1 \mid X_{:i}, Y_{:j-1}) = \operatorname{sigmoid}\left(\mathbf{S}_{i,j} \cdot \mathbf{W}_a^T\right)
$$
+
+
Figure 3: An example of decomposing a position distribution into a sum of products of switch distributions and sub-sequence generation probabilities:

$$
\underbrace{p(z_4 = 3, Y_{:3} \mid X_{:3})}_{\text{position distribution}} = \sum_{i'=1}^{3} \underbrace{p(z_4 = 3 \mid z_3 = i', X_{:3}, Y_{:3})}_{\text{switch distribution}} \cdot \underbrace{p(Y_{:3} \mid X_{:i'})}_{\text{sub-sequence}}
$$
+
where $\mathbf{W}_a^T$ is the linear projection to the action space. Since $Z$ is a monotonic sequence ($z_j \geq z_{j-1}$), the switch probability is zero for transitions with $i < i'$.
+
Figure 3 shows a simple example of decomposing $p(Y_{:4}|X_{:3})$ into switch distributions and sub-sequence translations. For the transitions $i > i'$, the path accumulates $i - i'$ read actions plus one translate action, so the switch probability is $\alpha_{i,j} \cdot \prod_{k=i'}^{i-1}(1 - \alpha_{k,j})$. Here, the read or translate actions are conditionally independent given the translation history.
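The switch distribution of Eq. 4 is easy to sketch in code. The following toy illustration is ours (the helper name `switch_prob` and the $\alpha$ values are assumptions, not from the paper); it also checks that, from a fixed $i'$, the probabilities of all possible landing positions plus the probability of never translating sum to one:

```python
import math

def switch_prob(alpha_j, i, i_prime):
    """Eq. 4 sketch: p(z_j = i | z_{j-1} = i'), where alpha_j[i-1] holds the
    translate-action probability alpha_{i,j} (1-indexed positions stored in a
    0-indexed list)."""
    if i < i_prime:
        return 0.0  # Z is monotonic: source words cannot be "un-read"
    prob = alpha_j[i - 1]             # translate at position i ...
    for k in range(i_prime, i):       # ... after i - i' extra read actions
        prob *= 1.0 - alpha_j[k - 1]
    return prob

# Toy check: the mass over all landing positions i = i', ..., m, plus the
# probability of reading to the end without ever translating, sums to 1.
alpha = [0.3, 0.5, 0.8]               # alpha_{1,j}, alpha_{2,j}, alpha_{3,j}
m, i_prime = 3, 1
total = sum(switch_prob(alpha, i, i_prime) for i in range(i_prime, m + 1))
never = math.prod(1.0 - a for a in alpha[i_prime - 1:])
assert abs(total + never - 1.0) < 1e-12
```

This normalisation check is exactly the geometric-like decomposition in the text: each extra read multiplies in a $(1 - \alpha_{k,j})$ factor before the final translate action contributes $\alpha_{i,j}$.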
+
Objective. In SiMT, we explicitly assume the last target word $y_{n}$ is translated after reading all source words; hence the final objective can be simplified as:
+
$$
\begin{aligned}
p(Y \mid X) &= p(y_n \mid X_{:m}, Y_{:n-1}) \cdot p(z_n = m, Y_{:n-1} \mid X_{:m}) \\
&= p(y_n \mid X_{:m}, Y_{:n-1}) \cdot \sum_{i=1}^{m} p(z_n = m \mid z_{n-1} = i, X_{:m}, Y_{:n-1}) \cdot p(Y_{:n-1} \mid X_{:i})
\end{aligned}
$$
+
One caveat is that this objective does not encourage low-latency translations when optimised by maximum log-likelihood, since the model can read as many source words as possible in order to achieve the best translation quality. Ideally, the lowest latency means that, for every target word $y_{j}$, the model reads one source word at each time step after translating a target word (i.e. the translation positions $z_{j} = i$ stay as close as possible to the diagonal of an $m \times n$ matrix). Therefore, we need extra regularisation to focus the probability mass of the translation positions along the diagonal.
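The marginalisation over all read/translate paths can be illustrated with a toy forward recurrence over Eqs. 3 and 4. This is our own sketch, not the paper's Algorithm 1: the base case (at least one source word is read before emitting $y_1$) is our convention, and the probability grids are random toy inputs. The dynamic programme is verified against brute-force enumeration of every monotonic path:

```python
import itertools
import random

def switch(alpha_col, i, ip):
    """p(z_j = i | z_{j-1} = i') per Eq. 4; alpha_col[i-1] = alpha_{i,j}."""
    if i < ip:
        return 0.0  # Z is monotonic
    p = alpha_col[i - 1]
    for k in range(ip, i):
        p *= 1.0 - alpha_col[k - 1]
    return p

def likelihood_dp(trans, alpha, m, n):
    """p(Y|X): forward recurrence over Eq. 3, integrating out all paths.
    trans[i-1][j-1] = p(y_j | X_{:i}, Y_{:j-1}); alpha[i-1][j-1] = alpha_{i,j}."""
    col = lambda j: [alpha[i][j - 1] for i in range(m)]
    # F[i-1] = p(z_j = i, Y_{:j-1} | X_{:i}); base case: z_1 starts from i' = 1
    F = [switch(col(1), i, 1) for i in range(1, m + 1)]
    for j in range(2, n + 1):
        F = [sum(switch(col(j), i, ip) * trans[ip - 1][j - 2] * F[ip - 1]
                 for ip in range(1, i + 1)) for i in range(1, m + 1)]
    return trans[m - 1][n - 1] * F[m - 1]  # constraint: z_n = m

def likelihood_brute(trans, alpha, m, n):
    """Explicit enumeration of every monotonic path z_1 <= ... <= z_n = m."""
    total = 0.0
    for z in itertools.product(range(1, m + 1), repeat=n):
        if any(z[j] < z[j - 1] for j in range(1, n)) or z[-1] != m:
            continue
        p = switch([alpha[i][0] for i in range(m)], z[0], 1)
        for j in range(2, n + 1):
            p *= switch([alpha[i][j - 1] for i in range(m)], z[j - 1], z[j - 2])
        for j in range(1, n + 1):
            p *= trans[z[j - 1] - 1][j - 1]
        total += p
    return total

random.seed(0)
m, n = 4, 3
trans = [[random.random() for _ in range(n)] for _ in range(m)]
alpha = [[random.random() for _ in range(n)] for _ in range(m)]
assert abs(likelihood_dp(trans, alpha, m, n)
           - likelihood_brute(trans, alpha, m, n)) < 1e-12
```

The recurrence costs $O(m^2 n)$ versus the exponential number of explicit paths, which is what makes learning by backpropagation through the exact marginal tractable.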
+
+# 3.2 Poisson Prior
+
Dyer et al. (2013) propose a log-linear diagonal reparameterisation for fast word alignment, which helps IBM Model 2 by encouraging the probability mass to concentrate around the diagonal. This in turn also notably improves efficiency over the vanilla IBM Model 2. Although SiMT is more complex than word alignment, the diagonal reparameterisation can act as a strong regularisation favoring translate actions around the diagonal, which yields balanced actions resulting in high quality and low latency.
+
Therefore, we introduce a prior distribution to regularise the maximum number of source words that can be stored ($b_{j}$) when decoding the $j$-th word ($y_{j}$). To that end, we apply a Poisson distribution, as it is generally used for modelling the number of events in specified intervals such as distance, area or volume. The distance between the absolute positions ($i$ and $j$) and the diagonal can be easily modelled as discrete values to be regularised by the Poisson distribution, where the probability decreases as the distance grows. Here we re-parameterise a Poisson distribution:
+
$$
p(b_j = i; m, n) = \begin{cases} 0 & \text{if } d(i,j) < 0 \\ \dfrac{e^{-\lambda} \lambda^{d(i,j)}}{d(i,j)!} & \text{if } d(i,j) \geq 0 \end{cases}
$$
+
+and
+
$$
d(i, j) = \left\lfloor i - j \cdot \frac{m}{n} - \zeta \right\rceil \tag{5}
$$
+
where $d(i,j)$ is the distance of the current position to the diagonal, rounded to the nearest integer for simplicity. The free parameter $\lambda$ is the mean of the Poisson distribution, and $\zeta$ is a free parameter denoting the default offset of the current position from the diagonal. Different from the translation positions $z_{j} = i$, which depend on the inputs $X_{:i}$ and $Y_{:j-1}$, $b_{j} = i$ is independent of the translation context and is only conditioned on the absolute positions $i$, $j$. Therefore, we modify the position distribution:
+
$$
p(z_j = i, Y_{:j-1} \mid X_{:i}) = \sum_{i''=1}^{m} p(z_j = i, Y_{:j-1} \mid X_{:i}, b_j = i'') \cdot p(b_j = i''; m, n) \tag{6}
$$
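The re-parameterised prior of Eq. 5 is cheap to compute; the sketch below is ours (function and argument names are assumptions). Note that on the offset diagonal $d(i,j) = 0$, so the prior there is exactly $e^{-\lambda}$, and all buffer sizes before the offset diagonal get zero mass:

```python
import math

def d(i, j, m, n, zeta):
    # Eq. 5: rounded distance of position (i, j) to the offset diagonal.
    # (Python's round() uses banker's rounding, close enough for a sketch.)
    return round(i - j * m / n - zeta)

def poisson_prior(i, j, m, n, lam=3.0, zeta=3.0):
    """Re-parameterised Poisson prior p(b_j = i; m, n): zero before the offset
    diagonal, Poisson mass in the distance d(i, j) beyond it."""
    dist = d(i, j, m, n, zeta)
    if dist < 0:
        return 0.0
    return math.exp(-lam) * lam ** dist / math.factorial(dist)

m = n = 10
# On the offset diagonal d(i, j) = 0, so the prior mass is exactly e^{-lambda}:
assert abs(poisson_prior(7, 4, m, n, lam=3.0, zeta=3.0) - math.exp(-3.0)) < 1e-12
# Mass decays for buffer sizes far beyond the diagonal:
assert poisson_prior(10, 2, m, n) < poisson_prior(6, 2, m, n)
```

Because the prior depends only on absolute positions (not on the translation context), it can be precomputed once per sentence-length pair $(m, n)$ and reused for every training step.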
+
+
+
+
+
+
+
+
Figure 4: Visualisations of the distributions in a generative SiMT model example: (a) $p(y_{j}|X_{:i},Y_{:j-1})$; (b) $p(a_{i,j}|X_{:i},Y_{:j-1})$; (c) $p(z_{j},Y_{:j-1}|X_{:i})$; (d) $p(b_{i,j};\lambda = 5,\zeta = 3)$; (e) $p(b_{i,j};\lambda = 3,\zeta = 3)$; (f) $p(b_{i,j};\lambda = 3,\zeta = 0)$; (g) $p(z_{j},Y_{:j-1}|X_{:i};\lambda = 5,\zeta = 3)$; (h) $p(z_{j},Y_{:j-1}|X_{:i};\lambda = 3,\zeta = 3)$; (i) $p(z_{j},Y_{:j-1}|X_{:i};\lambda = 3,\zeta = 0)$. The depth of color represents the probability mass in each slot. In sub-figures (c), (g), (h) and (i), the red rectangles highlight the argmax along the first dimension. In the first row, (a), (b) and (c) show the translation distribution, the translate-action probability and the position distribution respectively. The indices of each slot correspond to the actual positions of $i$ and $j$. Specifically, the position distribution (c) is generated by the vanilla GSiMT without the Poisson prior. In the second row, (d), (e) and (f) are the re-parameterised Poisson distribution under different $\lambda$ and $\zeta$. In the third row, (g), (h) and (i) are the position distribution after integrating out the Poisson priors (d), (e) and (f). Compared to the original position distribution (c), the generative models notably put more emphasis on the positions along the diagonal, which acts as a flexible regularisation to balance translation quality and latency.
+
Here, we make the assumption that the number of source words that have been read, $i$, cannot exceed the maximum size $b_{j}$. Hence, for all the cases $b_{j} < i$, the probability $p(z_{j} = i, Y_{:j-1} \mid X_{:i}, b_{j} = i'')$ is set to zero.
+
The checkpoints with the best performance over 5 runs on the development datasets are chosen for testing with BLEU (Papineni et al., 2002) and AL (average lagging) (Ma et al., 2019). For the GSiMT models, we empirically fix $\lambda = 3$ for all experiments, and use $\zeta$ as the free parameter to achieve different AL values.
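AL can be computed from the read schedule $g(t)$, the number of source words read when emitting target word $t$. The sketch below follows the formulation of Ma et al. (2019) as we understand it; the helper name is ours. A useful sanity check is that a wait-$k$ policy on equal-length sequences has AL equal to $k$:

```python
def average_lagging(g, src_len, tgt_len):
    """Average Lagging: g[t-1] = number of source words read when emitting
    target word t; r = tgt_len / src_len. Lower AL means less lag."""
    r = tgt_len / src_len
    # Cut-off tau: the first target step at which the full source is read,
    # so trailing target words (decoded with complete input) are not counted.
    tau = next(t for t, read in enumerate(g, start=1) if read >= src_len)
    return sum(g[t - 1] - (t - 1) / r for t in range(1, tau + 1)) / tau

# Sanity check: for a wait-k policy on equal-length sequences, AL equals k.
k, src_len = 3, 6
g = [min(t + k - 1, src_len) for t in range(1, src_len + 1)]
assert average_lagging(g, src_len, src_len) == k
```

Intuitively, AL measures how many source words the decoder lags behind an ideal translator that stays exactly on the diagonal.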
+
For Multi30K (Elliott et al., 2016), we use all three language pairs EN $\rightarrow$ FR, EN $\rightarrow$ DE and EN $\rightarrow$ CZ, with the image data from Flickr30k as an extra modality and flickr2016 as the test set. We build multimodal models with the goal of testing the generalisation ability of the generative models with extra modalities. To that end, we concatenate the object detection features used in Caglayan et al. (2020) into the state representation $\mathbf{S}_{i,j}$ and keep the rest of the neural network the same as the unimodal SiMT. The other models (RL, Wait-$k$ and Adaptive Wait-$k$) incorporate the same features as well. Here, as the dataset is small, we apply a smaller Transformer with 4 layers, 4 heads, model dimension 512 and feed-forward dimension 1024.
+
+# 4.2 Translation Quality & Latency
+
Table 1 shows the SiMT performance for the benchmark models and our proposed generative models on the WMT15 DE $\rightarrow$ EN dataset. RL is our implementation of Gu et al. (2017) with the policy gradient method. All the numbers for Wait-$k$ and Adaptive Wait-$k$ are quoted from Zheng et al. (2020).
+
| Model | BLEU ↑ | AL ↓ | BLEU ↑ | AL ↓ | BLEU ↑ | AL ↓ | BLEU ↑ | AL ↓ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| RL (Gu et al., 2017) | 22.12 | 3.16 | 23.81 | 4.66 | 24.31 | 5.52 | 25.22 | 6.71 |
| Wait-k (Ma et al., 2019) | 25.22 | 3.76 | 26.29 | 4.70 | 27.42 | 5.77 | 27.73 | 6.66 |
| Adaptive-Wait-k (Zheng et al., 2020) | 26.73 | 3.63 | 27.84 | 4.79 | 28.41 | 5.33 | 29.20 | 6.60 |
| GSiMT-Poisson-T5 | 28.31 | 3.79 | 29.18 | 4.61 | 29.59 | 5.41 | 29.30 | 6.25 |
| GSiMT-Poisson | 28.82 | 3.64 | 29.50 | 4.45 | 29.78 | 5.13 | 29.63 | 6.24 |
| GSiMT-NT | 29.79 | 9.75 | - | - | - | - | - | - |
| Consecutive NMT | 30.24 | 28.58 | - | - | - | - | - | - |
+
Table 1: SiMT performance on WMT15 DE→EN (German-English). The models in the first group are benchmark models for simultaneous machine translation. The second group contains the variants of our proposed GSiMT. The third group is the consecutive NMT model, which provides the upper bound on BLEU score as it has access to the entire source stream. To fairly compare BLEU under different AL, we use 4 column blocks that limit the AL to a similar range and compare BLEU scores within each block. The numbers for the Wait-k and Adaptive-Wait-k models are obtained by training different models with $k$ from 1 to 10 and $k_{min} = 1$, $k_{max} = 10$ (Zheng et al., 2020). For both GSiMT-Poisson-T5 and GSiMT-Poisson, we apply $\zeta = 4, 5, 6, 7$ respectively to achieve the corresponding AL scores in each block. We highlight the best BLEU score in each block in bold. The underlined results are from models that are not optimised for translation latency and are for reference only.
+
| Model | En-Fr BLEU ↑ | En-Fr AL ↓ | En-Cz BLEU ↑ | En-Cz AL ↓ | En-De BLEU ↑ | En-De AL ↓ |
| --- | --- | --- | --- | --- | --- | --- |
| RL (Gu et al., 2017) | 54.39 | 4.01 | 23.30 | 2.24 | 31.23 | 3.08 |
| Wait-k (Ma et al., 2019) | 56.20 | 3.38 | 23.31 | 3.54 | 33.75 | 3.47 |
| Adaptive-Wait-k (Zheng et al., 2020) | 57.16 | 3.32 | 26.9 | 3.11 | 33.68 | 2.99 |
| DEC-OD (Caglayan et al., 2020) | 57.90 | 3.65 | 28.13 | 2.83 | 34.40 | 2.37 |
| GSiMT-Poisson-T5 | 58.45 | 3.28 | 28.92 | 3.06 | 36.23 | 2.58 |
| GSiMT-Poisson | 58.89 | 3.17 | 29.93 | 2.71 | 36.11 | 2.65 |
| GSiMT-NT | 58.81 | 7.32 | 29.22 | 5.21 | 35.78 | 6.55 |
| Consecutive NMT | 59.29 | 13.10 | 30.65 | 13.10 | 36.84 | 13.10 |
+
Table 2: SiMT performance on the Multi30K dataset. The models in the first group are benchmark models for multimodal simultaneous machine translation. In addition to the models in Table 1, DEC-OD (Caglayan et al., 2020) is an RNN-based model with an extra attention layer that attends to object detection features while translating. The numbers of the other models in the first group are from our implementations, in which the state outputs are concatenated with the same visual features from Caglayan et al. (2020) for multimodal SiMT. For better comparison, we only report BLEU scores with AL around 3. As before, the underlined results are from models that are not optimised for translation latency and are for reference only. For both GSiMT-Poisson-T5 and GSiMT-Poisson, we apply $\zeta = 3$ for all language pairs.
+
+
+Figure 5: Overview of the performance of different models: BLEU scores versus average lagging (AL).
+
GSiMT-Poisson is our proposed generative model with the Poisson prior. GSiMT-Poisson-T5 is a variant of GSiMT-Poisson which takes only the top 5 history paths during dynamic programming when decoding a new target word. It is similar to having a sparse 'attention' over the previous histories, which in turn highlights the simultaneous translation paths with higher confidence. GSiMT-NT is the vanilla neural transducer model without the Poisson prior.
+
According to the experimental results in Table 1, GSiMT-Poisson obtains a good balance between translation quality and latency. More importantly, it achieves the best BLEU given different AL scores in the same range. Especially when the AL is very low, the GSiMT-Poisson model maintains high BLEU scores. Interestingly, the performance of GSiMT-Poisson-T5 is very similar to that of the GSiMT-Poisson model, which updates all the possible translation paths instead of the top 5. This shows that the model can be further
+
| Model | BLEU ↑ | AL ↓ | BLEU ↑ | AL ↓ | BLEU ↑ | AL ↓ | BLEU ↑ | AL ↓ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Wait-k (Ma et al., 2019), k = 2 / 3 / 4 / 5 | 22.64 | 3.95 | 22.96 | 4.73 | 23.60 | 5.56 | 24.48 | 6.41 |
| GSiMT-Poisson-T5, ζ = 0 / 1 / 2 / 3 | 27.14 | 3.88 | 27.88 | 4.36 | 28.00 | 5.43 | 27.83 | 6.15 |
| GSiMT-Poisson, ζ = 0 / 1 / 2 / 3 | 27.20 | 4.01 | 27.75 | 4.69 | 28.05 | 5.51 | 28.20 | 6.37 |
+
Table 3: Test-only performance on SiMT. For both the GSiMT-Poisson-T5 and GSiMT-Poisson models, we apply different offsets $\zeta$ to parameterise the prior distribution.
+
+
Figure 6: Visualisation of decoded sentences with different Poisson parameters: (a) a translation example of GSiMT with $\zeta = 0$; (b) a translation example of GSiMT with $\zeta = 3$. $\downarrow$ indicates the state was sampled with the read action, while $\rightarrow$ indicates the translate action. The decoding is carried out by the pretrained GSiMT-Poisson ($\zeta = 8$), applying $\zeta = 0$ and $\zeta = 3$ to generate the decoded sentences in the test-only setup.
+
optimised in terms of efficiency without much loss in performance. As expected, GSiMT-NT achieves high BLEU scores (close to the upper bound obtained by the consecutive NMT model) but suboptimal AL, because it is free to read as many source words as possible. Figure 5 further compares the overall performance on BLEU and AL on the test dataset.
+
Table 2 further demonstrates the strong performance of the proposed generative models on multimodal SiMT. For all three language pairs, the GSiMT-Poisson model maintains the best performance. More importantly, by simply concatenating the visual features, the GSiMT models outperform the state-of-the-art multimodal SiMT model DEC-OD (Caglayan et al., 2020).
+
+# 4.3 Generalisation Ability
+
To further verify the effectiveness of the soft boundary modelled by the Poisson distribution, we also test performance in a test-only setup. In this case, we first pretrain a GSiMT-Poisson and a GSiMT-Poisson-T5 with $\zeta = 8$ as base models. Then, we directly set different free parameters $\zeta$ to dynamically adjust the translation latency during testing. The test-only Wait-$k$ model (Zheng et al., 2020) pretrains a consecutive NMT model as the base and applies the Wait-$k$ policies during testing. Table 3 shows the results on the WMT15 dataset. Compared to Wait-$k$, both GSiMT-Poisson-T5 and GSiMT-Poisson have stronger generalisation ability in the test-only setup. This demonstrates the great potential of adjusting the translation latency on-the-fly, without much loss in translation quality, given a pretrained GSiMT model.
+
Figure 6 shows decoded sentences under different sets of parameters. As we can see, even in the test-only setup, the generative model can effectively adjust the translation latency when decoding the target sentences. Interestingly, the translation quality is not affected much when pursuing lower latency, and with a less restrictive latency ($\zeta = 3$ compared to $\zeta = 0$) the generative model is able to re-arrange the sub-sequences and produce a word order that is more natural in the target language.
+
+# 5 Conclusions
+
This paper proposes a generative framework for simultaneous MT, which we demonstrate achieves the best translation quality and latency to date on common benchmark datasets. The introduction of a Poisson prior over the buffer size fills the gap between simultaneous MT and structural sequence-to-sequence learning. More importantly, the overall algorithm is simple and easy to implement, which makes it readily applicable to various real-world tasks. It has the potential to become a standard framework for SiMT, and we will release the code to the public for future research.
+
+# References
+
+Naveen Arivazhagan, Colin Cherry, Wolfgang Macherey, Chung-Cheng Chiu, Semih Yavuz, Ruoming Pang, Wei Li, and Colin Raffel. 2019. Monotonic infinite lookback attention for simultaneous machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1313-1323, Florence, Italy. Association for Computational Linguistics.
+Philip Arthur, Trevor Cohn, and Gholamreza Haffari. 2021. Learning coupled policies for simultaneous machine translation using imitation learning. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2709-2719, Online. Association for Computational Linguistics.
+Ondrej Bojar, Rajen Chatterjee, Christian Federmann, Barry Haddow, Matthias Huck, Chris Hokamp, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Carolina Scarton, Lucia Specia, and Marco Turchi. 2015. Findings of the 2015 workshop on statistical machine translation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 1-46, Lisbon, Portugal. Association for Computational Linguistics.
+Ozan Caglayan, Julia Ive, Veneta Haralampieva, Pranava Madhyastha, Loic Barrault, and Lucia Specia. 2020. Simultaneous machine translation with visual context. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2350-2361, Online. Association for Computational Linguistics.
+Kyunghyun Cho and Masha Esipova. 2016. Can neural machine translation do simultaneous translation? arXiv preprint arXiv:1606.02012.
+Chris Dyer, Victor Chahuneau, and Noah A. Smith. 2013. A simple, fast, and effective reparameterization of IBM model 2. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 644-648, Atlanta, Georgia. Association for Computational Linguistics.
+Desmond Elliott, Stella Frank, Khalil Sima'an, and Lucia Specia. 2016. Multi30K: Multilingual English-German image descriptions. In Proceedings of the 5th Workshop on Vision and Language, pages 70-74, Berlin, Germany. Association for Computational Linguistics.
+Alex Graves. 2012. Sequence transduction with recurrent neural networks. arXiv preprint arXiv:1211.3711.
+Jiatao Gu, Graham Neubig, Kyunghyun Cho, and Victor O.K. Li. 2017. Learning to translate in real-time with neural machine translation. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume
+
+1, Long Papers, pages 1053-1062, Valencia, Spain.
+Association for Computational Linguistics.
+Yong Jiang, Wenjuan Han, and Kewei Tu. 2016. Unsupervised neural dependency parsing. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 763-771, Austin, Texas. Association for Computational Linguistics.
+Yoon Kim, Carl Denton, Luong Hoang, and Alexander M Rush. 2017. Structured attention networks. arXiv preprint arXiv:1702.00887.
+Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
+Dan Klein and Christopher Manning. 2004. Corpus-based induction of syntactic structure: Models of dependency and constituency. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04), pages 478-485, Barcelona, Spain.
+Mingbo Ma, Liang Huang, Hao Xiong, Renjie Zheng, Kaibo Liu, Baigong Zheng, Chuanqiang Zhang, Zhongjun He, Hairong Liu, Xing Li, Hua Wu, and Haifeng Wang. 2019. STACL: Simultaneous translation with implicit anticipation and controllable latency using prefix-to-prefix framework. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3025-3036, Florence, Italy. Association for Computational Linguistics.
+Jan Niehues, Ngoc-Quan Pham, Thanh-Le Ha, Matthias Sperber, and Alex Waibel. 2018. Low-latency neural speech translation. In Proc. Interspeech 2018, pages 1293-1297.
+Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
+Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. Pytorch: An imperative style, high-performance deep learning library. arXiv preprint arXiv:1912.01703.
+Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67.
+Harsh Satija and Joelle Pineau. 2016. Simultaneous machine translation using deep reinforcement learning. In ICML 2016 Workshop on Abstraction in Reinforcement Learning.
+
+Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Linguistics.
+Ke M. Tran, Yonatan Bisk, Ashish Vaswani, Daniel Marcu, and Kevin Knight. 2016. Unsupervised neural hidden Markov models. In Proceedings of the Workshop on Structured Prediction for NLP, pages 63-71, Austin, TX. Association for Computational Linguistics.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.
+Lei Yu, Phil Blunsom, Chris Dyer, Edward Grefenstette, and Tomas Kocisky. 2016. The neural noisy channel. arXiv preprint arXiv:1611.02554.
+Baigong Zheng, Kaibo Liu, Renjie Zheng, Mingbo Ma, Hairong Liu, and Liang Huang. 2020. Simultaneous translation policies: From fixed to adaptive. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2847-2853, Online. Association for Computational Linguistics.
+
+# A Top-k Strategy
+
+As an alternative to the full-paths dynamic programming computation, each step of generative simultaneous machine translation can also be formulated as a sum over the top-k sub-sequence generation probabilities. The original position distribution in Equation 3 can be reformulated as:
+
+$$
+\begin{array}{l} p(z_{j} = i, Y_{:j-1} \mid X_{:i}) \\ = \max_{|K| = k} \sum_{i^{\prime} \in K} p(z_{j} = i \mid z_{j-1} = i^{\prime}, X_{:i}, Y_{:j-1}) \cdot \\ \quad p\left(Y_{:j-1} \mid X_{:i^{\prime}}\right) \tag{8} \\ \end{array}
+$$
+
+where $K$ is a set of position indices that are equal to or lower than $i$. This yields a biased estimator of the log-likelihood of the target sequences. The difference is that it acts as an inductive bias, inducing a sparse 'attention' over the previous sub-sequence generation probabilities. According to the experiments, the top-k strategy achieves performance comparable to the full-paths strategy.
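As an illustrative sketch (not the authors' implementation), the per-position computation in Equation 8 amounts to a log-sum-exp over only the top-k joint path scores instead of all of them; array names and shapes here are assumptions:

```python
import numpy as np

def topk_position_prob(prev_logprobs, trans_logprobs, k):
    """Biased estimate of log p(z_j = i, Y_{:j-1} | X_{:i}):
    sum transition-weighted generation probabilities over only the
    top-k previous positions i' <= i.

    prev_logprobs:  (i,) array of log p(Y_{:j-1} | X_{:i'})
    trans_logprobs: (i,) array of log p(z_j = i | z_{j-1} = i', ...)
    """
    scores = prev_logprobs + trans_logprobs   # joint log-score of each path
    k = min(k, scores.shape[0])
    top = np.sort(scores)[-k:]                # keep only the k best paths
    m = top.max()
    return m + np.log(np.exp(top - m).sum())  # stable log-sum-exp over top-k
```

With k equal to the number of positions this recovers the full sum; smaller k lower-bounds it, which is the source of the bias mentioned above.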
\ No newline at end of file
diff --git a/agenerativeframeworkforsimultaneousmachinetranslation/images.zip b/agenerativeframeworkforsimultaneousmachinetranslation/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..18a2844c37f424a3abdacb2fc384c7a832ab64bc
--- /dev/null
+++ b/agenerativeframeworkforsimultaneousmachinetranslation/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7898b6a7aae30c26472957a085d865a929e39987e46cf5020443a8e25c4351f3
+size 595457
diff --git a/agenerativeframeworkforsimultaneousmachinetranslation/layout.json b/agenerativeframeworkforsimultaneousmachinetranslation/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..b3383e80101d3828e16def214c1b7207c1ae368d
--- /dev/null
+++ b/agenerativeframeworkforsimultaneousmachinetranslation/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ce43a80a90f5c8af4ac9039af90adf46a54f1eaa785c039504b11b6cdbfa5f69
+size 426082
diff --git a/agraphbasedneuralmodelforendtoendframesemanticparsing/299f61ce-6056-4d10-9ade-bfa97811e4f7_content_list.json b/agraphbasedneuralmodelforendtoendframesemanticparsing/299f61ce-6056-4d10-9ade-bfa97811e4f7_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..d6dfcd995c06213b9a52614a8bdae075e0ae60f3
--- /dev/null
+++ b/agraphbasedneuralmodelforendtoendframesemanticparsing/299f61ce-6056-4d10-9ade-bfa97811e4f7_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c0408ccb84a51dded662788f0689325b3e90beb16840c2b5b17c320996412800
+size 79885
diff --git a/agraphbasedneuralmodelforendtoendframesemanticparsing/299f61ce-6056-4d10-9ade-bfa97811e4f7_model.json b/agraphbasedneuralmodelforendtoendframesemanticparsing/299f61ce-6056-4d10-9ade-bfa97811e4f7_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..489a5f3d45a7ef15da2c3e73c3fd496887cf8b17
--- /dev/null
+++ b/agraphbasedneuralmodelforendtoendframesemanticparsing/299f61ce-6056-4d10-9ade-bfa97811e4f7_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fba4610737b025bd37d622404cf2e659abd952d2393fc23be8fa2b4a8f007bc2
+size 100121
diff --git a/agraphbasedneuralmodelforendtoendframesemanticparsing/299f61ce-6056-4d10-9ade-bfa97811e4f7_origin.pdf b/agraphbasedneuralmodelforendtoendframesemanticparsing/299f61ce-6056-4d10-9ade-bfa97811e4f7_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..bb336e22485f2f2b040c760fd66499c75c24fa02
--- /dev/null
+++ b/agraphbasedneuralmodelforendtoendframesemanticparsing/299f61ce-6056-4d10-9ade-bfa97811e4f7_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:23a64b28fd69d198274468d3581aba367f4a024185fc6d581a508cb7ac587f00
+size 435345
diff --git a/agraphbasedneuralmodelforendtoendframesemanticparsing/full.md b/agraphbasedneuralmodelforendtoendframesemanticparsing/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..4d105d75059e15acb938be938412306d8cbd2a40
--- /dev/null
+++ b/agraphbasedneuralmodelforendtoendframesemanticparsing/full.md
@@ -0,0 +1,351 @@
+# A Graph-Based Neural Model for End-to-End Frame Semantic Parsing
+
+Zhichao Lin $^{1}$ , Yueheng Sun $^{2}$ , Meishan Zhang $^{1*}$
+
+$^{1}$ School of New Media and Communication, Tianjin University, China
+
+$^{2}$ College of Intelligence and Computing, Tianjin University, China
+
+{chaosmyth,yhs,zhangmeishan}@tju.edu.cn
+
+# Abstract
+
+Frame semantic parsing is a semantic analysis task based on FrameNet which has received great attention recently. The task usually involves three subtasks performed sequentially: (1) target identification, (2) frame classification and (3) semantic role labeling. The three subtasks are closely related, yet previous studies model them individually, which ignores their inner connections and meanwhile induces the error propagation problem. In this work, we propose an end-to-end neural model to tackle the task jointly. Concretely, we exploit a graph-based method, regarding frame semantic parsing as a graph construction problem. All predicates and roles are treated as graph nodes, and their relations are taken as graph edges. Experimental results on two benchmark datasets for frame semantic parsing show that our method is highly competitive, resulting in better performance than pipeline models.
+
+# 1 Introduction
+
+Frame semantic parsing (Gildea and Jurafsky, 2002) aims to analyze all sentential predicates as well as their FrameNet roles as a whole, and has received great interest recently. It can be helpful for a number of downstream tasks, including information extraction (Surdeanu et al., 2003), question answering (Shen and Lapata, 2007), machine translation (Liu and Gildea, 2010) and others (Coyne et al., 2012; Chen et al., 2013; Agarwal et al., 2014). Figure 1 shows an example, where all predicates as well as their semantic frames and roles in the sentence are depicted.
+
+Previous studies (Das et al., 2014; Swayamdipta et al., 2017; Bastianelli et al., 2020) usually divide the task into three subtasks, namely target identification, frame classification and semantic role labeling. By performing the three subtasks sequentially, the whole frame semantic parsing task can be accomplished. The majority of
+
+
+Figure 1: An example involving frame semantic structures, taken from the FrameNet (Baker et al., 1998). Frame-evoking predicates are highlighted in the sentence, and corresponding frames are shown in colored blocks below. The frame-specific roles are underlined with their frames in the same row.
+
+works focus on either one or two of the three subtasks, treating them separately (Yang and Mitchell, 2017; Botschen et al., 2018; Swayamdipta et al., 2018; Peng et al., 2018).
+
+The above formalization has two weaknesses. First, modeling the three subtasks individually makes it difficult to utilize the relationships among them; apparently, the earlier subtasks cannot exploit information from the later subtasks. Second, the pipeline strategy can suffer from the error propagation problem, where errors occurring in the earlier subtasks influence the later subtasks as well. To address these two weaknesses, end-to-end modeling is a promising alternative, which has been widely adopted in natural language processing (NLP) (Cai et al., 2018; He et al., 2018; Sun et al., 2019; Fu et al., 2019; Fei et al., 2020).
+
+In this work, we propose a novel graph-based model to tackle frame semantic parsing in an end-to-end way, using a single model to perform the three subtasks jointly. We organize all predicates and their FrameNet semantics into a graph, and then design an end-to-end neural model to construct the graph incrementally. An encoder-decoder model is presented to achieve this graph-building goal, where the encoder is equipped with contextualized BERT representations (Devlin et al., 2019), and the decoder performs node generation and edge building sequentially. Our final model is elegant and easy to
+
+understand as a whole.
+
+We conduct experiments on two benchmark datasets to evaluate the effectiveness of our proposed model. First, we study our graph-based framework in two settings, the end-to-end scenario and the pipeline manner, where node building and edge building are trained separately. The results show that end-to-end modeling is much better. We also compare our model with several other pipelines, where similar findings can be observed. Second, we compare our graph-based framework with previous methods on the three subtasks individually, finding that the graph-based architecture is highly competitive: we obtain the best performance in the literature, leading to new state-of-the-art results. Further, we conduct extensive analyses to understand our method in depth.
+
+In summary, we make the following two major contributions in this work:
+
+(1) We propose a novel graph-based model for frame semantic parsing which can achieve competitive results for the end-to-end task as well as the individual subtasks.
+(2) To the best of our knowledge, we present the first work of end-to-end frame semantic parsing to solve all included subtasks together in a single model.
+
+We will release our code as well as our experimental settings publicly at https://github.com/Ch4osMy7h/FramenetParser to help result reproduction and facilitate future research.
+
+# 2 Related Work
+
+Frame-Semantic Parsing Frame-semantic parsing has received great interest since being released as an evaluation task at SemEval 2007 (Baker et al., 2007). The task attempts to predict the semantic frame structures defined in FrameNet (Baker et al., 1998), which are composed of frame-evoking predicates, their corresponding frames and semantic roles. Most previous works (Das et al., 2014; Swayamdipta et al., 2017; Bastianelli et al., 2020) adopt a pipeline framework, training target identification, frame classification and semantic role labeling models separately. In this work, to the best of our knowledge, we present the first end-to-end model that handles the task jointly.
+
+Among the three subtasks of frame semantic parsing, semantic role labeling has been researched
+
+most extensively (Kshirsagar et al., 2015; Yang and Mitchell, 2017; Peng et al., 2018; Swayamdipta et al., 2018; Marcheggiani and Titov, 2020). It is also highly related to PropBank-style semantic role labeling (Palmer et al., 2005), with the only difference lying in the frame definition; thus, models for the two types of semantic role labeling can be mutually borrowed. There are several end-to-end PropBank-style semantic role labeling models as well (Cai et al., 2018; He et al., 2018; Li et al., 2019; Fu et al., 2019). However, these models are difficult to apply directly to frame semantic parsing due to the additional frame classification subtask as well as discontinuous predicates. In this work, we present a totally different graph-construction-style model to solve end-to-end frame semantic parsing elegantly.
+
+Graph-Based Methods Recently, graph-based methods have been widely used in a range of other tasks, such as dependency parsing (Dozat and Manning, 2016; Kiperwasser and Goldberg, 2016; Ji et al., 2019), AMR parsing (Flanigan et al., 2014; Lyu and Titov, 2018; Zhang et al., 2019a,b) and relation extraction (Sun et al., 2019; Fu et al., 2019; Dixit and Al-Onaizan, 2019). In this work, we aim at frame semantic parsing, organizing the three included subtasks into a well-designed graph and thereby converting the task into a graph-based parsing problem naturally.
+
+# 3 Method
+
+# 3.1 Task Formulation
+
+The goal of frame-semantic parsing is to extract semantic predicate-argument structures from text, where each structure includes a predicate given by a span of words, a well-defined semantic frame expressing the key roles of the predicate, and the values of these roles given by word spans. Formally, given a sentence $X$ with $n$ words $w_{1},w_{2},\ldots ,w_{n}$, frame-semantic parsing outputs a set of tuples $\mathcal{V} = \{y_1, y_2, \dots, y_K\}$, where each $y_{i}$ consists of the following elements:
+
+- $p_i = (p_{i,1}, \ldots, p_{i,d_i})$ , where $p_{i,*}$ are word spans in $X$ and $d_i$ indicates the number of pieces of the predicate since it might be discontinuous.
+- $f_{i} \in \mathcal{F}$ , where $\mathcal{F}$ is the frame set which is well defined in FrameNet.
+
+
+Figure 2: The overall architecture of our graph-based end-to-end model.
+
+- $r_i = ([r_{i,1}, v_{i,1}], \ldots, [r_{i,m_i}, v_{i,m_i}])$ , where $r_{i,*}$ are frame roles derived from $f_i$ and $v_{i,*}$ are also word spans in $X$ .
+
+The full frame semantic parsing is usually divided into the following three subtasks:
+
+- Target Identification (also known as predicate identification), which identifies all valid frame-evoking predicates from $X$ , outputting $P = \{p_1, \dots, p_K\}$ .
+- Frame Classification, which predicts the concrete evoked frame $f_{i}$ of a given predicate $p_{i} \in P$ .
+- Semantic Role Labeling, which assigns concrete values to the roles $r_i$ given a predicate-frame pair $(p_i, f_i)$ .
+
+Previously, the majority of work on frame semantic parsing performs the three subtasks individually, ignoring their close connections and also being vulnerable to the error propagation problem. Thus, we present an end-to-end graph-based model to accomplish the three subtasks with a single model.
+
+# 3.2 The Graph-Based Methodology
+
+We formalize the frame-semantic parsing task as a graph construction problem, and further present an encoder-decoder model to perform the task in an end-to-end way. The encoder aims at representation learning for frame semantic parsing, and the decoder constructs the semantic graph incrementally. Concretely, for the encoder, we compute span representations, since the basic processing units of our model are word spans; for the decoder, we first generate all graph nodes and then build edges among them. Figure 2 shows
+
+the overall architecture of our method.
+
+# 3.2.1 Encoding
+
+Due to the strong capability of BERT (Devlin et al., 2019) for representation learning, we adopt it as the backbone of our model. Given a sentence $X = \{w_{1}, w_{2}, \dots, w_{n}\}$ , BERT converts each word $w_{i}$ into word pieces and feeds them into deep transformer encoders to obtain piece-level representations. To obtain word-level representations, we average all piece vectors of word $w_{i}$ as its final representation $\mathbf{e}_{i}$ .
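The piece-to-word averaging described above can be sketched as follows (a NumPy illustration rather than the paper's PyTorch code; the piece-range bookkeeping is an assumed input):

```python
import numpy as np

def word_vectors(piece_vectors, word_to_pieces):
    """Average word-piece vectors into one vector per word.

    piece_vectors:  (num_pieces, hidden) array from the BERT encoder
    word_to_pieces: list of (start, end) piece ranges per word (end exclusive)
    """
    return np.stack([piece_vectors[s:e].mean(axis=0)
                     for s, e in word_to_pieces])
```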
+
+For further feature abstraction, we exploit BiHLSTM (Srivastava et al., 2015) to compose high-level features based on word-level output $\mathbf{e}_1,\dots ,\mathbf{e}_n$ , following Swayamdipta et al. (2018):
+
+$$
+\mathbf{h}_{1}, \dots, \mathbf{h}_{n} = \operatorname{BiHLSTM}\left(\mathbf{e}_{1}, \dots, \mathbf{e}_{n}\right), \tag{1}
+$$
+
+where the gated highway connections are applied to BiLSTMs.
+
+Span Representation We enumerate all possible spans $S = \{s_1, s_2, \ldots, s_m\}$ in a sentence and limit the maximum span length to $L$ . Then, each span $s_i \in S$ is represented by:
+
+$$
+\mathbf{g}_{i} = \left[ \mathbf{h}_{\mathrm{START}(i)}; \mathbf{h}_{\mathrm{END}(i)}; \mathbf{h}_{\mathrm{ATTN}}; \phi\left(s_{i}\right) \right], \tag{2}
+$$
+
+where $\phi(s_i)$ represents the learned embedding of the span width feature, $\mathbf{h}_{\mathrm{ATTN}}$ is computed by a self-attention mechanism that weights the vector representations of the words in the span by normalized attention scores, and $\mathrm{START}(i)$ and $\mathrm{END}(i)$ denote the start and end indices of $s_i$ .
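A minimal sketch of span enumeration and of Equation 2, assuming per-token attention scores and a width-embedding table as inputs (illustrative only):

```python
import numpy as np

def enumerate_spans(n, max_len):
    """All (start, end) spans (end inclusive) of length at most max_len."""
    return [(i, j) for i in range(n) for j in range(i, min(i + max_len, n))]

def span_representation(h, start, end, width_emb, attn_scores):
    """Eq. 2: concatenate [h_start; h_end; h_attn; width embedding],
    where h_attn is an attention-weighted average of the span's vectors."""
    s = attn_scores[start:end + 1]
    w = np.exp(s - s.max()); w /= w.sum()          # softmax within the span
    h_attn = (w[:, None] * h[start:end + 1]).sum(axis=0)
    return np.concatenate([h[start], h[end], h_attn, width_emb[end - start]])
```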
+
+# 3.2.2 Node Building
+
+Node Generation We exploit a preliminary classification to achieve node generation. A span either is a graph node or not; further, a graph node can be a full or partial predicate node, and it can also be a role node. In total, we define four basic types for a given span:
+
+- FPRD: a full predicate span.
+- PPRD: a partial predicate span.
+- ROLE: a role span.
+- NULL: a span that is not a graph node.
+
+The type of a span can be any non-empty combination of elements in the set {FPRD, PPRD, ROLE}, or NULL. Thus, each span is classified into one of eight types (i.e., FPRD, PPRD, ROLE, FPRD-PPRD, FPRD-ROLE, PPRD-ROLE, FPRD-PPRD-ROLE, NULL).
+
+Given an input span $s_i$ with its vectorial representation as $\mathbf{g}_i$ , we exploit one MLP layer with softmax to classify the span type:
+
+$$
+\mathbf{p}_{n} = \operatorname{softmax}\left(\operatorname{MLP}_{n}\left(\mathbf{g}_{i}\right)\right), \tag{3}
+$$
+
+where $\mathbf{p}_n$ indicates the probabilities of span types. By this classification, all non-null type spans are graph nodes, reaching the goal of node generation.
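The eight-way label set above can be enumerated mechanically (a small sketch; the string labels are our own naming):

```python
from itertools import combinations

BASE_TYPES = ("FPRD", "PPRD", "ROLE")
# Every non-empty combination of the base types, plus NULL: eight labels.
NODE_TYPES = ["-".join(c) for r in (1, 2, 3)
              for c in combinations(BASE_TYPES, r)] + ["NULL"]
```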
+
+Frame Classification of Predicate Nodes Node generation detects all graph nodes roughly, assigning each node a single label indicating whether it serves as a predicate or a role. Here we go further and recognize the semantic frames of all predicate nodes, which can be regarded as an in-depth analysis of node attributes. This step corresponds to the frame classification subtask.
+
+Given an input span $s_i$ of a predicate node (FPRD or PPRD) with representation $\mathbf{g}_i$ , we use another MLP layer together with softmax to output the probability of each candidate frame for the predicate node:
+
+$$
+\mathbf{p}_{c} = \operatorname{softmax}\left(\operatorname{MLP}_{c}\left(\mathbf{g}_{i}\right)\right), \tag{4}
+$$
+
+where $\mathbf{p}_c$ gives the output probabilities over semantic frames. Specifically, frames are constrained by the lexical units defined in FrameNet. For example, a predicate with the lexical unit "meeting" can only evoke the frames Social_event and Discussion.
+
+We also adopt the pseudo lexical unit strategy following Swayamdipta et al. (2017) to optimize the classification. First, we use the spaCy lemmatizer (Honnibal et al., 2020) to translate an input sentence into lemmas. Then, if a word span is a predicate node, we treat the corresponding lemma span as the pseudo lexical unit and use it to index the corresponding semantic frame set. Finally, we reduce the search space by masking frames outside the set. In our experiments, we find this strategy practical.
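The frame-masking step can be sketched as a restricted softmax (illustrative NumPy code; the candidate-frame lookup from the lexical unit is an assumed input):

```python
import numpy as np

def masked_frame_probs(logits, candidate_ids):
    """Softmax over frames, restricted to those licensed by the predicate's
    (pseudo) lexical unit; all other frames receive zero probability."""
    masked = np.full_like(logits, -np.inf)
    masked[candidate_ids] = logits[candidate_ids]
    e = np.exp(masked - masked[candidate_ids].max())  # stable exponentiation
    return e / e.sum()
```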
+
+# 3.2.3 Edge Building
+
+After graph nodes are ready, we then build edges to accomplish frame semantic parsing accordingly. There are two types of edges in our model.
+
+Predicate-Predicate Edge For extracting discontinuous predicates, we build edges between nodes that are predicate fragments (i.e., PPRD nodes). In detail, we treat this as a binary classification problem deciding whether the two nodes alongside an edge form parts of the same predicate. Formally, given two PPRD nodes with corresponding spans $\mathbf{s}_i^p$ and $\mathbf{s}_j^p$ and their encoded representations $\mathbf{g}_i^p$ and $\mathbf{g}_j^p$ , we utilize one MLP layer to classify their edge type:
+
+$$
+\mathbf{p}_{pe} = \operatorname{softmax}\left(\operatorname{MLP}_{pe}\left(\left[ \mathbf{g}_{i}^{p}, \mathbf{g}_{j}^{p}, \mathbf{g}_{i}^{p} * \mathbf{g}_{j}^{p} \right]\right)\right), \tag{5}
+$$
+
+where $\mathbf{p}_{pe}$ indicates the probabilities of two types, namely Connected and NULL (i.e., cannot be connected), and the feature representation is borrowed from Zhao et al. (2020).
+
+Predicate-Role Edge For extracting frame-specific roles, we build edges between predicate nodes (i.e., nodes typed FPRD or PPRD) and role nodes (i.e., nodes typed ROLE). Given a predicate node $\mathbf{s}_i^p$ and a role node $\mathbf{s}_j^r$ with neural representations $\mathbf{g}_i^p$ and $\mathbf{g}_j^r$ , respectively, we utilize another MLP layer to determine their edge type by multi-class classification:
+
+$$
+\mathbf{p}_{re} = \operatorname{softmax}\left(\operatorname{MLP}_{re}\left(\left[ \mathbf{g}_{i}^{p}, \mathbf{g}_{j}^{r}, \mathbf{g}_{i}^{p} * \mathbf{g}_{j}^{r} \right]\right)\right), \tag{6}
+$$
+
+where $\mathbf{p}_{re}$ indicates the probabilities of predicate-role edge types (i.e., frame roles as well as a NULL label indicating no relation).
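Both edge scorers (Eqs. 5 and 6) share the same pair feature. A minimal sketch, with a plain linear layer standing in for the MLP (an assumption for brevity):

```python
import numpy as np

def edge_feature(g_i, g_j):
    """Pair feature [g_i; g_j; g_i * g_j] used by Eqs. 5 and 6."""
    return np.concatenate([g_i, g_j, g_i * g_j])

def edge_probs(g_i, g_j, W, b):
    """Score an edge: linear layer over the pair feature, then softmax."""
    z = W @ edge_feature(g_i, g_j) + b
    e = np.exp(z - z.max())
    return e / e.sum()
```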
+
+# 3.3 Joint Training
+
+To train the joint model, we employ the negative log-likelihood loss for both the node building and edge building steps:
+
+$$
+\mathcal {L} _ {n} = - \sum \log \mathbf {p} _ {n} (y _ {n}) - \sum \log \mathbf {p} _ {c} (y _ {c})
+$$
+
+$$
+\mathcal {L} _ {e} = - \sum \log \mathbf {p} _ {p e} \left(y _ {p e}\right) - \sum \log \mathbf {p} _ {r e} \left(y _ {r e}\right), \tag {7}
+$$
+
+where $y_{n}$ and $y_{c}$ are the gold labels for text spans and predicate nodes, and $y_{pe}$ and $y_{re}$ indicate the gold edge labels for predicate-predicate and predicate-role node pairs. The losses of the two steps are summed, leading to the final training objective of our model:
+
+$$
+\mathcal {L} = \mathcal {L} _ {n} + \mathcal {L} _ {e} \tag {8}
+$$
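The summed objective of Equations 7 and 8 can be sketched directly from predicted probability rows and gold labels (illustrative code, not the AllenNLP training loop):

```python
import numpy as np

def nll(prob_rows, gold):
    """Summed negative log-likelihood of the gold labels."""
    return -sum(np.log(p[y]) for p, y in zip(prob_rows, gold))

def joint_loss(node_p, node_y, frame_p, frame_y, pe_p, pe_y, re_p, re_y):
    """Eqs. 7-8: node-building and edge-building losses, summed."""
    l_n = nll(node_p, node_y) + nll(frame_p, frame_y)   # L_n
    l_e = nll(pe_p, pe_y) + nll(re_p, re_y)             # L_e
    return l_n + l_e                                    # L = L_n + L_e
```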
+
+# 3.4 Decoding
+
+The decoding aims to derive frame semantic parsing results by the graph-based model. Here we describe the concrete process by the three subtasks.
+
+Target Identification Target identification involves both the node building and edge building steps. First, all nodes with type FPRD are predicates on their own. Second, a small percentage of predicates are composed of multiple nodes with type PPRD; if two or more such nodes are connected by predicate-predicate edges, we regard them as one single valid predicate.
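The decoding of discontinuous predicates amounts to taking connected components over PPRD nodes; a sketch under the assumption that lone unconnected PPRD fragments are discarded:

```python
from collections import defaultdict

def recover_predicates(fprd_nodes, pprd_nodes, pprd_edges):
    """Each FPRD node is a predicate on its own; PPRD nodes linked by
    predicate-predicate edges merge into one (possibly discontinuous)
    predicate via connected components."""
    adj = defaultdict(set)
    for a, b in pprd_edges:
        adj[a].add(b); adj[b].add(a)
    seen, predicates = set(), [[n] for n in fprd_nodes]
    for n in pprd_nodes:
        if n in seen:
            continue
        comp, stack = [], [n]
        seen.add(n)
        while stack:                      # depth-first component traversal
            cur = stack.pop()
            comp.append(cur)
            for nxt in adj[cur] - seen:
                seen.add(nxt); stack.append(nxt)
        if len(comp) > 1:                 # a lone PPRD fragment is not valid
            predicates.append(sorted(comp))
    return predicates
```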
+
+Frame Classification Frame classification decoding is straightforward for single-node predicates. For multi-node predicates, there may be conflicts among the frame classifications of the different nodes; concretely, the max-scored frames evoked by the covered nodes might differ. To address this issue, we first sum the softmax distributions over all covered nodes and then fetch the max-scored frame.
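The sum-then-argmax conflict resolution is one line (illustrative sketch):

```python
import numpy as np

def decode_frame(node_frame_probs):
    """Resolve frame conflicts for a multi-node predicate: sum the softmax
    distributions of the covered nodes, then take the max-scored frame."""
    return int(np.argmax(np.asarray(node_frame_probs).sum(axis=0)))
```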
+
+Semantic Role Labeling The case of semantic role labeling is similar to frame classification. For single-node predicates, the semantic role labeling output is directly determined. For multi-node predicates, we assign values only to the candidate roles of the predicted frame, and further select for each role the node with the highest probability with respect to the covered predicate nodes.
+
+# 4 Experiments
+
+# 4.1 Setting
+
+Dataset We adopt FrameNet versions 1.5 and 1.7 (denoted FN1.5 and FN1.7 for short) as the benchmark datasets to evaluate our models. FN1.5 is the widely used dataset in
+
+| Dataset | Type | Train | Dev | Test |
+| --- | --- | --- | --- | --- |
+| FN1.5 | # Sentence | 2,713 | 326 | 982 |
+| | # Predicate | 16,618 | 2,282 | 4,427 |
+| | # Role | 29,449 | 4,039 | 7,146 |
+| FN1.7 | # Sentence | 3,413 | 326 | 1,354 |
+| | # Predicate | 19,384 | 2,270 | 6,714 |
+| | # Role | 34,385 | 4,024 | 11,303 |
+
+Table 1: Statistics of the datasets.
+
+previous work, and FN1.7 is the latest version, which involves more semantics. We follow previous studies (Das et al., 2014; Swayamdipta et al., 2017) in dividing the two datasets into training, validation and test sets. Table 1 shows the overall data statistics.
+
+Evaluation We measure the performance of frame semantic parsing on its three subtasks, respectively. For target identification, we treat a predicate as correct only when all its included word spans exactly match the gold-standard spans of the predicate. For frame classification, we use the joint performance for evaluation, regarding a classification as correct only when both the predicate and the frame are correct. For semantic role labeling, we also use the joint performance, regarding a role as correct only when the predicate, the role span (exact match) and the role type are all correct; this is treated as our major metric.
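The exact-match evaluation above reduces to set-based precision/recall/F1 over predicted structures; a small sketch (the tuple encoding of a structure is an assumption):

```python
def prf1(pred, gold):
    """Exact-match precision, recall and F1 over sets of predicted items,
    e.g. (predicate-spans, frame) or (predicate-spans, role-span, role)."""
    pred, gold = set(pred), set(gold)
    tp = len(pred & gold)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1
```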
+
+Derived Models Following previous studies and our graph-based method, we can derive a range of basic models for comparisons:
+
+- Node, the node building submodel, which is the first step of our decoder module described in Section 3.2.2.
+- Edge, the edge building submodel, which is the second step of our decoder module described in Section 3.2.3.
+- Predicate, a graph-based predicate identification model, which is implemented by keeping only the predicate node generation and predicate-predicate edge building in our final graph-based model.
+- Frame, our final graph-based model with only the frame classification submodel, assuming predicate nodes and their edges are given.
+- Role, our graph-based model with only role node generation and predicate-role edge building, assuming predicate nodes and their frames are given.
+- PredicateFrame, a joint model of Predicate
+
+| Data | Model | Target P | Target R | Target F1 | Frame P | Frame R | Frame F1 | Role P | Role R | Role F1 |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| FN1.5 | Predicate+Frame+Role | 73.17 | 75.47 | 74.30 | 66.08 | 68.15 | 67.06 | 45.91 | 46.70 | 46.30 |
+| | Predicate•Frame+Role | 74.32 | 76.16 | 75.23 | 67.91 | 68.78 | 68.34 | 46.85 | 47.89 | 47.36 |
+| | Predicate+Frame•Role | 73.17 | 75.47 | 74.30 | 67.73 | 68.56 | 68.14 | 46.51 | 49.01 | 47.72 |
+| | Predicate•Frame+Semi-CRF | 74.32 | 76.16 | 75.23 | 67.91 | 68.78 | 68.34 | 46.33 | 50.87 | 48.50 |
+| | Node+Edge | 74.99 | 75.85 | 75.42 | 68.01 | 68.79 | 68.40 | 46.64 | 49.35 | 47.96 |
+| | Ours-Joint | 75.81 | 76.17 | 75.99 | 68.72 | 69.05 | 68.89 | 47.79 | 50.60 | 49.16 |
+| FN1.7 | Predicate+Frame+Role | 78.62 | 69.92 | 74.02 | 71.05 | 63.06 | 66.82 | 49.32 | 45.60 | 47.38 |
+| | Predicate•Frame+Role | 76.79 | 72.92 | 74.81 | 69.29 | 66.16 | 67.69 | 49.21 | 46.03 | 47.57 |
+| | Predicate+Frame•Role | 78.62 | 69.92 | 74.02 | 69.79 | 65.06 | 67.34 | 49.46 | 46.01 | 47.67 |
+| | Predicate•Frame+Semi-CRF | 76.79 | 72.92 | 74.81 | 69.29 | 66.16 | 67.69 | 49.03 | 47.49 | 48.24 |
+| | Node+Edge | 76.81 | 72.54 | 74.62 | 68.99 | 66.33 | 67.63 | 49.55 | 46.28 | 47.86 |
+| | Ours-Joint | 76.16 | 74.98 | 75.56 | 69.39 | 68.30 | 68.84 | 49.09 | 48.81 | 48.95 |
+
+Table 2: Main results of frame-semantic parsing on FN1.5 and FN1.7, where the pipeline and end-to-end methods are compared thoroughly.
+
+and Frame, which is implemented by excluding the role node generation and predicate-role edge building in our final model.
+
+- FrameRole, a joint model of Frame and Role, which is implemented by excluding the predicate node generation and predicate-predicate edge building in our final model.
+- Semi-CRF, a span-level semi-Markov CRF (Sarawagi and Cohen, 2005) model for semantic role labeling which is borrowed from Swayamdipta et al. (2018), where the only difference is that we use BERT as the representation layer for fair comparisons. $^3$
+
+Note that the above derived models are trained individually. Based on these models, we can build five pipeline systems: (1) Predicate + Frame + Role, (2) Predicate $\circ$ Frame + Role, (3) Predicate + Frame $\circ$ Role, (4) Predicate $\circ$ Frame + Semi-CRF, and (5) Node + Edge, which are exploited for comparisons with our graph-based end-to-end model.
+
+Hyperparameters All our code is based on the AllenNLP library (Gardner et al., 2017) and trained on a single RTX-2080Ti GPU. We choose BERT-base-cased, which consists of 12 transformer layers with hidden size 768. We set the hidden size of the BiHLSTM to 200 and the number of layers to 6. The MLP layers have hidden size 150 and depth 1, with the ReLU activation. We apply dropouts of 0.4 to the BiHLSTM and 0.2 to the MLP layers. Following Swayamdipta
+
+et al. (2018), we also limit the maximum length of spans to 15 for efficiency, resulting in oracle recall of $95\%$ on the development set.
+
+For training, we exploit online batch learning with a batch size of 8 to update the model parameters, and use the BertAdamW algorithm with a learning rate of $1 \times 10^{-5}$ to fine-tune BERT and $1 \times 10^{-3}$ for the other parts of our model. Gradient clipping with a maximum value of 5.0 is applied to avoid gradient explosion. Training is stopped early if the performance does not improve for 20 epochs.
+
+# 4.2 Main Results
+
+Table 2 shows the main results on the test sets of the FN1.5 and FN1.7 datasets, where our end-to-end model is compared with the five strong pipeline methods described in Section 4.1. We can see that the end-to-end joint model leads to significantly better performance (p-value below $10^{-5}$ by pair-wise t-test) as a whole on both datasets. Concretely, compared with the best results of the pipeline systems, we obtain average improvements of $\frac{0.57 + 0.75}{2} = 0.66$ on target identification, $\frac{0.49 + 1.15}{2} = 0.82$ on frame classification, and $\frac{0.66 + 0.71}{2} = 0.69$ on semantic role labeling over the two datasets.
+
+Besides the overall advantage of the end-to-end joint model over the pipelines, we also find that jointly modeling two subtasks outperforms the corresponding pipeline baselines. Concretely, as shown in Table 2, PredicateFrame is better than Predicate + Frame, and FrameRole is better than Frame + Role. The results further indicate the effectiveness
+
+| Model | FN1.5 | FN1.7 |
+| --- | --- | --- |
+| Das et al. (2014) | 45.40 | - |
+| Swayamdipta et al. (2017) | 73.23 | 73.25 |
+| Bastianelli et al. (2020) (wo syntax) | 74.96 | - |
+| Bastianelli et al. (2020) (w syntax) | 76.80 | - |
+| Predicate | 76.09 | 75.34 |
+| PredicateFrame | 76.47 | 75.88 |
+| Ours-Joint | 76.90 | 76.27 |
+
+Table 3: Target Identification Results.
+
+| Model | FN1.5 | FN1.7 |
+| --- | --- | --- |
+| Das et al. (2014) | 83.60 | - |
+| Hermann et al. (2014) | 88.41 | - |
+| Hartmann et al. (2017) | 87.63 | - |
+| Yang and Mitchell (2017) | 88.20 | - |
+| Swayamdipta et al. (2017) | 86.40 | 86.55 |
+| Botschen et al. (2018) | 88.82 | - |
+| Peng et al. (2018) | 90.00 | 89.10 |
+| Bastianelli et al. (2020) (wo syntax) | 89.90 | - |
+| Bastianelli et al. (2020) (w syntax) | 89.83 | - |
+| Frame | 90.16 | 90.34 |
+| Ours-Joint | 90.62 | 90.64 |
+
+Table 4: The accuracy of the Frame Identification task based on the gold targets.
+
+| Model | FN1.5 | FN1.7 |
+| --- | --- | --- |
+| Das et al. (2014) | 59.10 | - |
+| Kshirsagar et al. (2015) | 63.10 | - |
+| Yang and Mitchell (2017) | 65.50 | - |
+| Swayamdipta et al. (2017) | 59.48 | 61.36 |
+| Swayamdipta et al. (2018) | 69.10 | - |
+| Marcheggiani and Titov (2020) | 69.30 | - |
+| Bastianelli et al. (2020) (wo syntax) | 72.85 | - |
+| Bastianelli et al. (2020) (w syntax) | 75.56 | - |
+| Semi-CRF | 73.56 | 72.22 |
+| Ours-Joint | 73.28 | 72.06 |
+
+Table 5: Pipeline Semantic Role Labeling results using gold targets and frames.
+
+of joint learning. Further, comparing our graph-based Role model with the Semi-CRF one, we see that Semi-CRF is better. The reason could be that the Semi-CRF model exploits higher-order features among different frame roles, which are ignored by our simple edge building module. As our edge building considers all predicates and all roles together, incorporating such features remains inconvenient.
+
+# 4.3 Individual Subtask Evaluation
+
+Previous studies commonly focus on individual subtasks of frame semantic parsing. To compare with them, we simulate these scenarios by imposing gold-standard input constraints on our joint model, showing the capability of our models on the individual tasks. In particular, Bastianelli et al. (2020) report the best performance among previous studies in the literature, based on BERT representations. They adopt constituency syntax, which boosts their individual model performances significantly. Since our final model uses no other knowledge except BERT, we report their model performances with syntax (denoted w syntax) and without syntax (denoted wo syntax) for careful comparison.
+
+Target Identification We show the performance of previous studies on target identification in Table 3, and also report the results of three related models derived from this work. First, by comparing our three models (i.e., predicate only, predicate with frame, and the full graph parsing), the results show that both frame classification and semantic role labeling can help target identification. Second, we can see that our final model achieves the best performance compared with all previous work.
+
+Frame Classification Table 4 shows the results of the individual frame classification task, where all systems assume gold-standard predicates as inputs. Similar to target identification, we achieve better performance than all previous studies. Peng et al. (2018) did not use BERT, but they use extra datasets from FrameNet (exemplar sentences) and semantic dependency parsing, which could also benefit our task greatly. As for the comparison between our two implemented models, Frame alone and our final joint model, the results show that semantic role labeling can benefit frame classification, which is reasonable.
+
+Semantic Role Labeling Table 5 shows the results of various models on the semantic role labeling task. By constraining gold-standard predicates and frames to the outputs, our model degenerates to a normal semantic role labeling model. We also give the result by using Semi-CRF. As shown, our final semantic role labeling model is highly competitive in comparison with previous studies, except
+
+
+(a) FN1.5
+
+
+(b) FN1.7
+Figure 3: F1 scores of the frame metric for different predicate types. Single: single-word predicate. Multi: multi-word predicate.
+
+| Model | Node | Frame | Edge |
| Predicate+Frame+Role | 64.17 | 68.39 | 50.14 |
| Predicate•Frame+Role | 64.32 | 68.89 | 50.97 |
| Predicate+Frame•Role | 64.24 | 68.97 | 51.09 |
| Node+Edge | 64.56 | 69.30 | 51.43 |
| Ours-Joint | 65.34 | 69.80 | 52.13 |
+
+the model of Bastianelli et al. (2020) with syntax. The exception is expected, since syntax has been demonstrated highly effective before for SRL (Swayamdipta et al., 2018; Peng et al., 2018; Bastianelli et al., 2020). In addition, the Semi-CRF model is better than our method, which is consistent with the results in Table 2.
+
+# 4.4 Discussion
+
+In this subsection, we conduct detailed experimental analyses for better understanding our graph-based methods. Note that if not specified, the analyses are based on the FN1.7 dataset, which has a larger scale of annotations for exploring.
+
+Effectiveness on recognizing different types of predicates For frame-semantic parsing, extracting correct frame-evoking predicates is the first step, which influences the later subtasks directly. Here we perform a fine-grained analysis of predicate identification, splitting the predicates into two categories, i.e., single-word predicates (Single) and multi-word predicates (Multi). As shown in Figure 3, our joint model achieves consistent improvements over the pipeline models for all kinds of predicates, indicating that the information from the frame and frame-specific roles is beneficial for target identification. In addition, the multi-word predicates are more difficult than the single-word predicates, leading to significant decreases as a whole.
+
+Table 6: F1 score results of the modules.
+
+| Model | Target | Frame | Role |
| Predicate+Frame+Role | 74.86 | 67.28 | 47.34 |
| Predicate•Frame+Role | 75.11 | 67.78 | 47.81 |
| Predicate+Frame•Role | 74.86 | 67.57 | 47.93 |
| Predicate•Frame+Semi-CRF | 75.11 | 67.78 | 48.26 |
| Ours-Joint | 75.72 | 68.86 | 48.89 |
+
+Table 7: F1 score results of our proposed method ignoring discontinuous predicates.
+
+
+Figure 4: F1 scores of roles in terms of the length.
+
+Performance by the role length Frame-specific roles are the core structures that frame-semantic parsing intends to obtain. Roles of different lengths would obviously affect the performance, and longer roles are much more difficult. Here we bucket the roles into seven categories by length and report the F1 score of our proposed methods on each. Figure 4 shows the results. The overall curve declines as the length increases, which is consistent with our intuition, and our graph-based end-to-end model is better than the pipeline methods at all lengths.
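The length-bucketing evaluation above can be sketched as follows; the span tuple format and the choice to pool all longer roles into the last bucket are our own illustrative assumptions, not the paper's exact protocol.

```python
from collections import defaultdict

def f1_by_length_bucket(gold, pred, max_bucket=7):
    """Compute per-bucket F1 over role spans, bucketed by token length.
    Spans are (sentence_id, start, end, role) tuples with inclusive
    boundaries; lengths >= max_bucket share the last bucket."""
    def bucket(span):
        return min(span[2] - span[1] + 1, max_bucket)

    gold_b, pred_b = defaultdict(set), defaultdict(set)
    for s in gold:
        gold_b[bucket(s)].add(s)
    for s in pred:
        pred_b[bucket(s)].add(s)

    scores = {}
    for b in range(1, max_bucket + 1):
        tp = len(gold_b[b] & pred_b[b])  # exact-match spans only
        prec = tp / len(pred_b[b]) if pred_b[b] else 0.0
        rec = tp / len(gold_b[b]) if gold_b[b] else 0.0
        scores[b] = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return scores
```

A curve like Figure 4 is then just `scores[1] ... scores[7]` plotted per system.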
+
+Performances of node, frame and edge Our graph-based model builds nodes, determines node attributes (frames), and builds edges sequentially, which is different from the standard pipelines based on target identification, frame classification and semantic role labeling. Thus, it is interesting to see the performance of node building, frame classification $^6$ and edge building, respectively. Table 6 shows the results, where the joint model as well as four pipeline models are included. As shown, the full joint model is better than the partial joint models, and the full pipeline model gives the worst results.
+
+Ignoring discontinuous predicates Although in both FN1.5 and FN1.7 datasets, discontinuous
+
+| Model | FN1.5 | FN1.7 |
| Semi-CRF | 1.94 sent/s | 1.72 sent/s |
| Ours-Joint | 15.72 sent/s | 16.51 sent/s |
+
+Table 8: Comparison on decoding speed (sentences per second) for the semantic role labeling subtask.
+
+| LU Lexicon | Text and Frames |
| ✓ | Up Tai Hang [Road]Roadways behind Causeway Bay is Aw Boon Haw (Tiger Balm) [Gardens]Locale_by_use. |
| ✗ | Up Tai Hang [Road]Roadways [behind]Locative_relation Causeway Bay is Aw Boon Haw (Tiger Balm) [Gardens]Locale_by_use. |
| Ground Truth | Up Tai Hang [Road]Roadways behind Causeway [Bay]Natural_features is Aw Boon Haw (Tiger Balm) [Gardens]Locale_by_use. |
+
+Table 9: An example of frame suggestion outside the scope of the predefined LU lexicon, where blue indicates the suggested frame outside the dictionary, and $\checkmark$ and $\times$ indicate inference with and without the dictionary, respectively.
+
+predicates are far fewer in number than other predicate types, we keep them in this work for a more comprehensive study, demonstrating that our model can process them as well. Here we also report results that ignore the discontinuous predicates (i.e., removing the predicate-predicate edges) to facilitate future studies. As shown in Table 7, our joint model performs better than the pipeline methods, which is consistent with the main results.
+
+Comparison on decoding speed Table 8 compares the computational efficiency of the strong Semi-CRF baseline and our joint model on the semantic role labeling subtask, which is also an essential measure of the proposed approach. All experimental results are obtained by running the models on a single 2080Ti GPU. We observe that our model is almost ten times faster than Semi-CRF. Even though the Semi-CRF implementation uses dynamic programming to optimize the time complexity, it still needs to iterate over the segments of each sentence in the batch one by one, which cannot take full advantage of the GPU's parallel capabilities. In contrast, our model as a whole adopts batch-based computation, which enables more efficient inference.
+
+Frame classification without dictionary Following Swayamdipta et al. (2017), we also adopt the Lexical Unit (LU) dictionary in our model empirically. However, according to Punyakanok et al. (2008), such a dictionary can be quite limited. Therefore, we offer one example in Table 9 to illustrate the capability of our model for frames not in the dictionary. As shown, our model can predict an appropriate frame outside the dictionary as well, and might additionally enrich the gold-standard annotations (i.e., the blue texts which do not appear in the Ground Truth).
+
+# 5 Conclusion
+
+In this paper, we proposed a novel graph-based model to address the end-to-end frame semantic parsing task. The full frame semantic parsing result of one sentence is organized as a graph, and then we suggest an end-to-end neural model for the graph building. Our model first encodes the input sentence for span representations with BERT, and then constructs the graph nodes and edges incrementally. To demonstrate the effectiveness of our method, we derived several pipeline methods and used them to conduct the experiments for comparisons. Experimental results showed that our graph-based model achieved significantly better performance than various pipeline methods. In addition, in order to compare our models with previous studies in the literature, we conducted experiments in the scenarios of the individual subtasks. The results showed that our proposed models are highly competitive.
+
+# Acknowledgements
+
+We thank all reviewers for their hard work. This work is supported by grants from the National Key Research and Development Program of China (No. 2018YFC0832101) and the National Natural Science Foundation of China (No. 62176180).
+
+# References
+
+Apoorv Agarwal, Sriramkumar Balasubramanian, Anup Kotalwar, Jiehan Zheng, and Owen Rambow. 2014. Frame semantic tree kernels for social network extraction from text. In Proceedings of the EACL, pages 211-219.
+Collin Baker, Michael Ellsworth, and Katrin Erk. 2007. SemEval-2007 task 19: Frame semantic structure extraction. In Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), pages 99–104.
+
+Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet project. In Proceedings of the ACL-COLING, pages 86-90.
+Emanuele Bastianelli, Andrea Vanzo, and Oliver Lemon. 2020. Encoding syntactic constituency paths for frame-semantic parsing with graph convolutional networks. CoRR, abs/2011.13210.
+Teresa Botschen, Iryna Gurevych, Jan-Christoph Klie, Hatem Mousselly-Sergieh, and Stefan Roth. 2018. Multimodal frame identification with multilingual evaluation. In Proceedings of the NAACL-HLT, pages 1481-1491.
+Jiaxun Cai, Shexia He, Zuchao Li, and Hai Zhao. 2018. A full end-to-end semantic role labeler, syntactic-agnostic over syntactic-aware? In Proceedings of the COLING, pages 2753-2765.
+Y. Chen, W. Y. Wang, and A. I. Rudnicky. 2013. Unsupervised induction and filling of semantic slots for spoken dialogue systems using frame-semantic parsing. In 2013 IEEE Workshop on Automatic Speech Recognition and Understanding, pages 120-125.
+Bob Coyne, Alex Klapheke, Masoud Rouhizadeh, Richard Sproat, and Daniel Bauer. 2012. Annotation tools and knowledge representation for a text-to-scene system. In Proceedings of the COLING 2012, pages 679-694.
+Dipanjan Das, Desai Chen, Andre F. T. Martins, Nathan Schneider, and Noah A. Smith. 2014. Frame-semantic parsing. Computational Linguistics, 40(1):9-56.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the NAACL-HLT.
+Kalpit Dixit and Yaser Al-Onaizan. 2019. Span-level model for relation extraction. In Proceedings of the ACL, pages 5308-5314.
+Timothy Dozat and Christopher D. Manning. 2016. Deep biaffine attention for neural dependency parsing. arXiv preprint arXiv:1611.01734.
+Hao Fei, Yafeng Ren, and Donghong Ji. 2020. High-order refining for end-to-end Chinese semantic role labeling. In Proceedings of the AACL, pages 100-105.
+Jeffrey Flanigan, Sam Thomson, Jaime Carbonell, Chris Dyer, and Noah A. Smith. 2014. A discriminative graph-based parser for the Abstract Meaning Representation. In Proceedings of the ACL, pages 1426-1436.
+Tsu-Jui Fu, Peng-Hsuan Li, and Wei-Yun Ma. 2019. GraphRel: Modeling text as relational graphs for joint entity and relation extraction. In Proceedings of the ACL, pages 1409-1418.
+
+Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke S. Zettlemoyer. 2017. AllenNLP: A deep semantic natural language processing platform.
+Daniel Gildea and Daniel Jurafsky. 2002. Automatic labeling of semantic roles. Comput. Linguist., 28(3):245-288.
+Silvana Hartmann, Ilia Kuznetsov, Teresa Martin, and Iryna Gurevych. 2017. Out-of-domain FrameNet semantic role labeling. In Proceedings of the 15th EACL, pages 471-482.
+Luheng He, Kenton Lee, Omer Levy, and Luke Zettlemoyer. 2018. Jointly predicting predicates and arguments in neural semantic role labeling. In Proceedings of the ACL, pages 364-369.
+Karl Moritz Hermann, Dipanjan Das, Jason Weston, and Kuzman Ganchev. 2014. Semantic frame identification with distributed word representations. In Proceedings of the ACL, pages 1448-1458.
+Matthew Honnibal, Ines Montani, Sofie Van Landeghem, and Adriane Boyd. 2020. spaCy: Industrial-strength Natural Language Processing in Python.
+Tao Ji, Yuanbin Wu, and Man Lan. 2019. Graph-based dependency parsing with graph neural networks. In Proceedings of the ACL, pages 2475-2485.
+Eliyahu Kiperwasser and Yoav Goldberg. 2016. Simple and accurate dependency parsing using bidirectional LSTM feature representations. Transactions of the Association for Computational Linguistics, 4:313-327.
+Meghana Kshirsagar, Sam Thomson, Nathan Schneider, Jaime Carbonell, Noah A. Smith, and Chris Dyer. 2015. Frame-semantic role labeling with heterogeneous annotations. In Proceedings of the ACL-IJCNLP, pages 218-224.
+Zuchao Li, Shexia He, Hai Zhao, Yiqing Zhang, Zhuosheng Zhang, Xi Zhou, and Xiang Zhou. 2019. Dependency or span, end-to-end uniform semantic role labeling. In Proceedings of the AAAI, volume 33, pages 6730-6737.
+Ding Liu and Daniel Gildea. 2010. Semantic role features for machine translation. In Proceedings of the COLING 2010, pages 716-724.
+Chunchuan Lyu and Ivan Titov. 2018. AMR parsing as graph prediction with latent alignment. In Proceedings of the ACL, pages 397-407.
+Diego Marcheggiani and Ivan Titov. 2020. Graph convolutions over constituent trees for syntax-aware semantic role labeling. In Proceedings of the EMNLP, pages 3915-3928.
+
+Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The Proposition Bank: An annotated corpus of semantic roles. Computational Linguistics, 31(1):71-106.
+Hao Peng, Sam Thomson, Swabha Swayamdipta, and Noah A. Smith. 2018. Learning joint semantic parsers from disjoint data. In Proceedings of the NAACL-HLT, pages 1492-1502.
+Vasin Punyakanok, Dan Roth, and Wen-tau Yih. 2008. The importance of syntactic parsing and inference in semantic role labeling. Computational Linguistics, 34(2):257-287.
+Sunita Sarawagi and William W. Cohen. 2005. Semi-Markov conditional random fields for information extraction. In Advances in Neural Information Processing Systems, volume 17. MIT Press.
+Dan Shen and Mirella Lapata. 2007. Using semantic roles to improve question answering. In Proceedings of the EMNLP-CoNLL, pages 12-21, Prague, Czech Republic. Association for Computational Linguistics.
+Rupesh K Srivastava, Klaus Greff, and Jürgen Schmidhuber. 2015. Training very deep networks. In Advances in Neural Information Processing Systems, volume 28. Curran Associates, Inc.
+Changzhi Sun, Yeyun Gong, Yuanbin Wu, Ming Gong, Daxin Jiang, Man Lan, Shiliang Sun, and Nan Duan. 2019. Joint type inference on entities and relations via graph convolutional networks. In Proceedings of the ACL, pages 1361-1370.
+Mihai Surdeanu, Sanda Harabagiu, John Williams, and Paul Aarseth. 2003. Using predicate-argument structures for information extraction. In Proceedings of the ACL, pages 8-15.
+Swabha Swayamdipta, Sam Thomson, Chris Dyer, and Noah A. Smith. 2017. Frame-semantic parsing with softmax-margin segmental RNNs and a syntactic scaffold. arXiv preprint arXiv:1706.09528.
+Swabha Swayamdipta, Sam Thomson, Kenton Lee, Luke Zettlemoyer, Chris Dyer, and Noah A. Smith. 2018. Syntactic scaffolds for semantic structures. In Proceedings of the EMNLP, pages 3772-3782.
+Bishan Yang and Tom Mitchell. 2017. A joint sequential and relational model for frame-semantic parsing. In Proceedings of the EMNLP, pages 1247-1256. Association for Computational Linguistics.
+Sheng Zhang, Xutai Ma, Kevin Duh, and Benjamin Van Durme. 2019a. AMR parsing as sequence-to-graph transduction. In Proceedings of the ACL, pages 80-94.
+Sheng Zhang, Xutai Ma, Kevin Duh, and Benjamin Van Durme. 2019b. Broad-coverage semantic parsing as transduction. In Proceedings of the EMNLP-IJCNLP, pages 3786-3798, Hong Kong, China. Association for Computational Linguistics.
+
+He Zhao, Longtao Huang, Rong Zhang, Quan Lu, and Hui Xue. 2020. SpanMlt: A span-based multi-task learning framework for pair-wise aspect and opinion terms extraction. In Proceedings of the ACL, pages 3239-3248.
\ No newline at end of file
diff --git a/agraphbasedneuralmodelforendtoendframesemanticparsing/images.zip b/agraphbasedneuralmodelforendtoendframesemanticparsing/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..e69c6311fa4f4fb8edb050aff32e55a460f5e2c7
--- /dev/null
+++ b/agraphbasedneuralmodelforendtoendframesemanticparsing/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c7cf118538472c2e01bf734735652e94b688851ed626d93290399d13102a8594
+size 570256
diff --git a/agraphbasedneuralmodelforendtoendframesemanticparsing/layout.json b/agraphbasedneuralmodelforendtoendframesemanticparsing/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..8fa8eebb19f889649127a3ef27bc555f55f4b7fd
--- /dev/null
+++ b/agraphbasedneuralmodelforendtoendframesemanticparsing/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bf7836768813dac3a06c7c5943a25eb4ce469c31e272ae28758584660f561f7d
+size 393372
diff --git a/agreeingtodisagreeannotatingoffensivelanguagedatasetswithannotatorsdisagreement/70ba5f43-fe50-4003-b363-8f7ae7e327af_content_list.json b/agreeingtodisagreeannotatingoffensivelanguagedatasetswithannotatorsdisagreement/70ba5f43-fe50-4003-b363-8f7ae7e327af_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..9585c392d36e01a3213961fc72ac42dc2fc17ea3
--- /dev/null
+++ b/agreeingtodisagreeannotatingoffensivelanguagedatasetswithannotatorsdisagreement/70ba5f43-fe50-4003-b363-8f7ae7e327af_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:861a986ce00aa703c48f0ec526e7931846947bb5bc740157fdba9d1633024080
+size 87298
diff --git a/agreeingtodisagreeannotatingoffensivelanguagedatasetswithannotatorsdisagreement/70ba5f43-fe50-4003-b363-8f7ae7e327af_model.json b/agreeingtodisagreeannotatingoffensivelanguagedatasetswithannotatorsdisagreement/70ba5f43-fe50-4003-b363-8f7ae7e327af_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..42a6bd4ca217f76206a18e174961a03dccf566d9
--- /dev/null
+++ b/agreeingtodisagreeannotatingoffensivelanguagedatasetswithannotatorsdisagreement/70ba5f43-fe50-4003-b363-8f7ae7e327af_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:14c29f606b21ceb402180aa516e879edf5beae1a39cee64ddb4b8025f17823a1
+size 101122
diff --git a/agreeingtodisagreeannotatingoffensivelanguagedatasetswithannotatorsdisagreement/70ba5f43-fe50-4003-b363-8f7ae7e327af_origin.pdf b/agreeingtodisagreeannotatingoffensivelanguagedatasetswithannotatorsdisagreement/70ba5f43-fe50-4003-b363-8f7ae7e327af_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..f376bab02ca25bd37a32a48b2e87904dcc445020
--- /dev/null
+++ b/agreeingtodisagreeannotatingoffensivelanguagedatasetswithannotatorsdisagreement/70ba5f43-fe50-4003-b363-8f7ae7e327af_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dd291a28022e0d813d707697212cd3110268eae020dab6687dfafa7e6b93489d
+size 554314
diff --git a/agreeingtodisagreeannotatingoffensivelanguagedatasetswithannotatorsdisagreement/full.md b/agreeingtodisagreeannotatingoffensivelanguagedatasetswithannotatorsdisagreement/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..673164d1d22a7e841884b2cd43d729268a4b685e
--- /dev/null
+++ b/agreeingtodisagreeannotatingoffensivelanguagedatasetswithannotatorsdisagreement/full.md
@@ -0,0 +1,285 @@
+# Agreeing to Disagree: Annotating Offensive Language Datasets with Annotators' Disagreement
+
+Elisa Leonardelli, Stefano Menini, Alessio Palmero Aprosio, Marco Guerini, Sara Tonelli
+
+Fondazione Bruno Kessler, Trento, Italy
+
+{eleonardelli,menini,aprosio,guerini,satonelli}@fbk.eu
+
+# Abstract
+
+Since state-of-the-art approaches to offensive language detection rely on supervised learning, it is crucial to quickly adapt them to the continuously evolving scenario of social media. While several approaches have been proposed to tackle the problem from an algorithmic perspective, so as to reduce the need for annotated data, less attention has been paid to the quality of these data. Following a trend that has emerged recently, we focus on the level of agreement among annotators while selecting data to create offensive language datasets, a task involving a high level of subjectivity. Our study comprises the creation of three novel datasets of English tweets covering different topics and having five crowd-sourced judgments each. We also present an extensive set of experiments showing that selecting training and test data according to different levels of annotators' agreement has a strong effect on classifiers' performance and robustness. Our findings are further validated in cross-domain experiments and studied using a popular benchmark dataset. We show that such hard cases, where low agreement is present, are not necessarily due to poor-quality annotation, and we advocate for a higher presence of ambiguous cases in future datasets, particularly in test sets, to better account for the different points of view expressed online.
+
+# 1 Introduction
+
+When creating benchmarks for NLP tasks through crowd-sourcing platforms, it is important to consider possible issues with inter-annotator agreement. Indeed, crowd-workers do not necessarily have a linguistic background and are not trained to perform complex tasks, thus jeopardizing benchmark quality. Furthermore, some crowd-workers try to maximize their pay by supplying quick answers that have nothing to do with the correct label. This issue has been tackled in the past by proposing approaches to control for annotators' expertise
+
+and reliability (Hovy et al., 2013), trying to identify spammers and mitigate their effect on annotation, or by repeating labeling on targeted examples (Sheng et al., 2008). However, not all tasks are the same: while in some cases, like for instance PoS-tagging or parsing, disagreement among annotators is more likely due to unclear annotation guidelines and can usually be reconciled through adjudication, full annotators' agreement should not be necessarily enforced in social computing tasks, whose goal is to study and manage social behavior and organizational dynamics, especially in virtual worlds built over the Internet (Wang, 2007). In these tasks – which include offensive language detection among others – subjectivity, bias and text ambiguity play an important role (Aroyo et al., 2019), and being an inherent component of the task they should be measured and analysed rather than discarded (Klenner et al., 2020; Basile, 2020). Indeed, instead of aiming for a global consensus on what constitutes verbal abuse on social media, we investigate the impact of different degrees of disagreement, how classifiers behave with ambiguous training and test data, and the role of disagreement in current shared tasks. More specifically, we first collect and annotate three datasets of English tweets covering different domains, to test if agreement among a pool of generic classifiers can be considered a proxy for annotator agreement. We then focus on how annotator agreement (both in training and test set) impacts classifiers' performance, considering domain-specific and generic classifiers as well as in-domain and out-of-domain experiments. We also show that low agreement examples – no matter how difficult they can be – still provide useful signal for training offensive language detection systems and do not represent random annotations. So "coin-flipping" or example removal seems not to be the right strategy to solve these disagreement cases. 
Then, we measure disagreement in the English test set of the last Offenseval shared task (Zampieri et al., 2020),
+
+and analyse to what extent the high performance achieved by most participating systems is related to high agreement in annotation.
+
+We release the new annotated datasets upon request, including more than 10k tweets covering three domains. The messages have been labeled with 50k crowd-worker judgements and annotated with agreement levels. To our knowledge, this represents the first dataset explicitly created to cover different agreement levels in a balanced way. We also advocate for the release of more datasets like the one we propose, especially for highly subjective tasks, where the need to include different points of view should be accounted for.
+
+NOTE: This paper contains examples of language which may be offensive to some readers. They do not represent the views of the authors.
+
+# 2 Related Work
+
+While there has been an extensive discussion on minimal standards for inter-annotator agreement to ensure data quality (Di Eugenio and Glass, 2004; Passonneau, 2004; Artstein and Poesio, 2008), an increasing number of recent works argue that disagreement is unavoidable because language is inherently ambiguous (Aroyo and Welty, 2015), proposing ways to tackle annotators' disagreement when building training sets (Dumitrache et al., 2019). Hsueh et al. (2009), for example, identify a set of criteria to select informative yet unambiguous examples for predictive modeling in a sentiment classification task. Rehbein and Ruppenhofer (2011) analyse the impact that annotation noise can have on active learning approaches. Other works along this line investigate the impact of uncertain or difficult instances on supervised classification (Peterson et al., 2019), while Beigman Klebanov and Beigman (2014) show that including hard cases in training data results in poorer classification of easy data in a word classification task. Along the same lines, Jamison and Gurevych (2015) show that filtering out instances with low agreement improves classifier performance in four out of five tasks. Both works observe that the presence of such instances leads to misclassifications.
+
+Several approaches have been presented that implement strategies to deal with disagreement when training classifiers for diverse tasks. In most cases, disagreement has been treated as a consequence of low annotation quality, and addressed through
+
+methodologies aimed at minimising the effects of noisy crowdsourced data. Simpson et al. (2020), for example, present a Bayesian sequence combination approach to train a model directly from crowdsourced labels rather than aggregating them. They test their approach on tasks such as NER where disagreement is mainly due to poor annotation quality. Other works have focused instead on uncertainty in PoS-tagging, integrating annotators' agreement in the modified loss function of a structured perceptron (Plank et al., 2014). Also Rodrigues and Pereira (2018) propose an approach to automatically distinguish the good and the unreliable annotators and capture their individual biases. They propose a novel crowd layer in deep learning classifiers to train neural networks directly from the noisy labels of multiple annotators, using only backpropagation.
+
+Other researchers have suggested to remove hard cases from the training set (Beigman Klebanov and Beigman, 2009) because they may potentially lead to poor classification of easy cases in the test set. We argue instead that disagreement is inherent to the kind of task we are going to address (i.e. offensive language detection) and, in line with recent works, we advocate against forced harmonisation of annotators' judgements for tasks involving high levels of subjectivity (Klenner et al., 2020; Basile, 2020). Among recent proposals to embrace the uncertainty exhibited by human annotators, Gordon et al. (2021) propose a novel metric to evaluate social computing tasks that disentangles stable opinions from noise in crowd-sourced datasets. Akhtar et al. (2020), instead, divide the annotators into groups based on their polarization, so that different gold standard datasets are compiled and each used to train a different classifier.
+
+Compared to existing works, our contribution is different in that we are interested mainly in the process of dataset creation rather in evaluation metrics or classification strategies. Indeed, our research is guided mainly by research questions concerning the data selection process, the composition of datasets and the evaluation using controlled levels of agreement. To this purpose, we create the first dataset for offensive language detection with three levels of agreement and balanced classes, encompassing three domains. This allows us to run comparative in-domain and out-of-domain evaluations, as well as to analyse existing benchmarks like the Offenseval dataset (Zampieri et al., 2020) using the
+
+same approach. While few crowd-sourced datasets for toxic and abusive language detection have been released with disaggregated labels (Davidson et al., 2017), they have not been created with the goal of analysing disagreement, therefore no attention has been paid to balance the number of judgments across different dimensions, like in our case.
+
+# 3 Data Selection and Annotation
+
+In our study, we focus on three different domains, which have been very popular in online conversations in 2020: Covid-19, US Presidential elections and the Black Lives Matter (BLM) movement. After an empirical analysis of online discussions, a set of hashtags and keywords for each domain are defined (e.g. #covid19, #election202, #blm). Then, using Twitter public APIs, tweets in English containing at least one of the above keywords are collected in a time span between January and November 2020 (for more details about data collection see Appendix D). From this data collection, we randomly select 400,000 tweets (around 130,000 for each domain), which we then pre-process by splitting hashtags into words using the Ekphrasis tool (Gimpel et al., 2010) and then replacing all mentions of users and urls with $\langle \text{user} \rangle$ and $\langle \text{url} \rangle$ respectively.
+
+# 3.1 Ensemble of classifiers to select data for annotation
+
+Since we do not know the real distribution of agreement levels in the data we collected, random sampling for annotation might be a sub-optimal choice. Thus, we developed a strategy to pre-evaluate the tweets, trying to optimize annotators' effort by having a balanced dataset (in fact, the data might be very skewed, leading to over-annotation of some classes and under-annotation of others). To pre-evaluate the tweets we use a heuristic approach, creating an ensemble of 5 different classifiers, all based on the same BERT configuration and fine-tuned starting from the same abusive language dataset (Founta et al., 2018). Since the original dataset contains four classes (Spam, Normal, Abusive and Hateful), we first remove the tweets from the Spam class and map the remaining ones into a binary offensive or non-offensive label, by merging Abusive and Hateful tweets into the offensive class and mapping the Normal class into the non-offensive one. We then select 15k tweets from the Founta dataset (~100k tweets) to speed up the process, as we are not
+
+interested in the overall performance of the different classifiers, but rather in their relative performances. Each classifier of the ensemble is trained using a different balance for the training and evaluation sets, so as to yield slightly different predictions. In particular, all five classifiers are trained with the BERT-Base uncased model $^2$ , a max seq length of 64, a batch size of 16 and 15 epochs. One classifier was trained using 12k tweets for training and 3k for validation; a second classifier was trained with the same training instances repeated twice (24k), while the validation set remained the same. In the third and fourth configurations, we repeat twice the offensive and the non-offensive training instances, respectively. Finally, in a fifth configuration we change the proportion between the training and validation sets (10k for training, 5k for validation).
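The five configurations described above can be summarized as follows; the field names and dict layout are hypothetical, since the paper does not release this code.

```python
# Hypothetical summary of the five ensemble configurations; field names
# are illustrative assumptions, not taken from released code.
SHARED_HPARAMS = {"model": "bert-base-uncased", "max_seq_length": 64,
                  "batch_size": 16, "epochs": 15}

ENSEMBLE_CONFIGS = [
    {"train": "12k", "val": "3k", "duplicated": None},            # baseline split
    {"train": "24k", "val": "3k", "duplicated": "all"},           # training set repeated twice
    {"train": "12k", "val": "3k", "duplicated": "offensive"},     # offensive instances x2
    {"train": "12k", "val": "3k", "duplicated": "non-offensive"}, # non-offensive instances x2
    {"train": "10k", "val": "5k", "duplicated": None},            # different train/val proportion
]
```

Five configurations yield five votes per tweet, mirroring the five crowd-sourced judgments collected later.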
+
+The rationale for this choice is twofold: (i) since we will collect 5 crowd-annotations for each tweet, we want an intuitive and direct comparison between ensemble agreement and annotators' agreement (i.e. five votes per tweet coming from the classifiers and five from crowd-workers); (ii) the dataset in Founta et al. (2018) has been specifically created to encompass several types of offensive language, so we can consider it as general prior knowledge about verbal abuse online before adapting our systems to the three domains of interest.
+
+In the following sections we denote unanimous agreement with $A^{++}$ (i.e. agreement between all 5 annotators or classifiers), mild agreement with $A^{+}$ (i.e. 4 out of 5 annotations agreeing on the same label), and weak agreement with $A^0$ (i.e. 3 annotations in agreement and 2 in disagreement). When also focusing on the label, we use the same notation, representing offensive tweets as $O^{++/+/0}$ and non-offensive ones as $N^{++/+/0}$ respectively.
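
This notation can be expressed as a small helper that maps five binary votes (from classifiers or crowd-workers) to a label plus an agreement level; the vote encoding is an assumption for illustration:

```python
from collections import Counter

def agreement_class(votes):
    """Map five binary votes ('O' = offensive, 'N' = non-offensive) to the
    paper's notation: majority label in {O, N} plus strength in {++, +, 0}."""
    assert len(votes) == 5
    label, count = Counter(votes).most_common(1)[0]
    strength = {5: "++", 4: "+", 3: "0"}[count]
    return label + strength
```

For example, five unanimous offensive votes yield `O++`, while a 3-vs-2 split on non-offensive yields `N0`.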
+
+The pre-evaluation through the classifier ensemble resulted in the following agreement distribution: about $92\%$ of the data was classified as $A^{++}$ , about $5\%$ as $A^{+}$ , and the remaining $3\%$ as $A^0$ .
+
+# 3.2 Data Annotation with AMT
+
+In order to analyse the relation between automated and manual annotation with respect to agreement and disagreement, we select an equal number of tweets from each class of agreement of the ensemble $(A^{++}, A^{+}, A^{0})$ to be manually annotated. For each domain and each agreement class we select 1,300 tweets - equally divided between offensive and non-offensive predictions - for a total of 3,900 tweets per domain.
+
+Every tweet is annotated via Amazon Mechanical Turk by 5 native speakers from the US, whom we expect to be familiar with the topics. We follow the same annotation guidelines for all domains, aimed at collecting crowd-workers' judgements on the offensiveness of the messages using the binary labels offensive and not offensive (see the guidelines in Appendix A).
+
+To ensure high-quality annotations, we select a pool of tweets from the three domains of interest and ask three expert linguists to annotate them. The tweets with perfect agreement are used as a gold standard, and one gold standard tweet is included in every HIT (a group of 5 tweets to be annotated). If a crowd-worker fails to evaluate the gold tweet, the HIT is discarded. Moreover, after task completion we remove all annotations by workers who did not reach a minimum overall accuracy of $70\%$ with respect to the gold standard. As a consequence of this quality control, we could not collect five annotations for some tweets, which therefore had to be removed from the final dataset. However, this process was crucial to minimise the possible impact of spam and low-quality annotations on disagreement – which is the focus of our analysis. The total number of tweets annotated using AMT is 10,753, including 3,472 for Covid-19, 3,490 for the US elections and 3,791 for BLM. Some (slightly modified) examples of tweets judged with different levels of agreement by crowd-annotators are reported in Table 1.
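
The two quality checks described above – discarding a HIT when its gold tweet is mislabelled, and dropping every annotation from workers below 70% gold accuracy – could be sketched as follows; the data schema (`worker`, `answers`) is hypothetical:

```python
def filter_annotations(hits, gold):
    """Apply the two quality checks: (1) discard a HIT when its gold-standard
    tweet is mislabelled; (2) drop every annotation from workers whose overall
    gold accuracy is below 0.70.
    `hits`: list of {"worker": id, "answers": {tweet_id: label}} dicts;
    `gold`: dict mapping gold tweet_ids to their expected label."""
    kept, right, total = [], {}, {}
    for hit in hits:
        w = hit["worker"]
        gold_items = [(t, l) for t, l in hit["answers"].items() if t in gold]
        # track per-worker accuracy on gold tweets across all their HITs
        right[w] = right.get(w, 0) + sum(l == gold[t] for t, l in gold_items)
        total[w] = total.get(w, 0) + len(gold_items)
        if all(l == gold[t] for t, l in gold_items):
            kept.append(hit)  # gold tweet answered correctly: HIT survives
    trusted = {w for w in total if total[w] and right[w] / total[w] >= 0.70}
    return [h for h in kept if h["worker"] in trusted]
```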
+
+# 3.3 Annotators and Ensemble Agreement
+
+If we use the majority vote on the crowd-annotated data, the datasets have an average distribution of $31\%$ offensive and $69\%$ non-offensive tweets, while it is $50\%$ each according to the ensemble annotation we used for sampling. This means that our classifiers tend to label more tweets as offensive than human annotators do, as shown in the confusion matrix in Fig. 1. It is interesting to note that, although the tweets to be annotated were selected evenly across the classifiers' agreement classes, the agreement between annotators is not uniformly distributed.
+
+As regards annotators' agreement, for about $43\%$ of the annotated tweets we have full consensus between annotators $(A^{++})$ . The vast majority of these tweets were judged unanimously as non-offensive ( $34.12\%$ $N^{++}$ ), and only $8.05\%$ of the data were judged unanimously offensive $(O^{++})$ , the least represented type of agreement. For the remaining data, $29.35\%$ has mild agreement ( $A^{+}$ , 4 out of 5 annotators agreed), with $19.00\%$ $N^{+}$ and $10.35\%$ $O^{+}$ , and another $28.28\%$ of the data falls in the class $A^0$ (3 vs 2 annotators), with $15.56\%$ $N^0$ and $12.92\%$ $O^0$ .
+
+| Crowd-annotators \ Classifiers ensemble | N++ | N+ | N0 | O0 | O+ | O++ | Total |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| N++ | 1434 (13.34%) | 695 (6.46%) | 542 (5.04%) | 485 (4.51%) | 369 (3.43%) | 144 (1.34%) | 3669 (34.12%) |
+| N+ | 299 (2.78%) | 390 (3.63%) | 395 (3.67%) | 387 (3.60%) | 373 (3.47%) | 199 (1.85%) | 2043 (19.00%) |
+| N0 | 118 (1.10%) | 284 (2.64%) | 325 (3.02%) | 328 (3.05%) | 313 (2.91%) | 305 (2.84%) | 1673 (15.56%) |
+| O0 | 40 (0.37%) | 173 (1.61%) | 226 (2.10%) | 262 (2.44%) | 294 (2.73%) | 394 (3.66%) | 1389 (12.92%) |
+| O+ | 19 (0.18%) | 101 (0.94%) | 133 (1.24%) | 139 (1.29%) | 212 (1.97%) | 509 (4.73%) | 1113 (10.35%) |
+| O++ | 15 (0.14%) | 37 (0.34%) | 68 (0.63%) | 81 (0.75%) | 119 (1.11%) | 546 (5.08%) | 866 (8.05%) |
+| Total | 1925 (17.90%) | 1680 (15.62%) | 1689 (15.71%) | 1682 (15.64%) | 1680 (15.62%) | 2097 (19.50%) | 10753 (100.00%) |
+
+Figure 1: Confusion matrix (raw number of tweets and percentage) between classifiers ensemble agreement (x-axis) and crowd-annotators agreement (y-axis) on offensive ("O") and non-offensive ("N") labels.
+
+We also compute Pearson's correlation coefficient between the agreement of the ensemble classifiers and that of the annotators. It shows a moderate correlation $(r = 0.51)$ , suggesting that training an ensemble of classifiers on generic data to pre-screen domain-specific tweets before manual annotation can help identify tweets that are either unambiguous or more challenging. A similar correlation $(r = 0.50)$ was obtained with an ensemble of BiLSTM classifiers trained on the same training and development sets as the five BERT-based classifiers, suggesting that the pre-screening approach could also be used with other classifiers.
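
Assuming agreement is encoded per tweet as the size of the majority (3, 4 or 5 votes), which is one plausible encoding the paper does not spell out, the correlation can be computed with a plain Pearson implementation:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences;
    here x and y could hold, per tweet, the majority size (3, 4 or 5) of
    the ensemble votes and of the crowd votes respectively."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

In practice `scipy.stats.pearsonr` would give the same coefficient plus a p-value.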
+
+# 3.4 Qualitative analysis of (dis)agreement
+
+| Agreement | Example |
+| --- | --- |
+| N++ | Stand for something or else fall for anything. #BlackLivesMatter |
+| N++ | Hello world! What a great day to be alive #Trump2020 #MAGA |
+| N+ | Come on man! Lock'em up!!! #maga |
+| N+ | Not the first time. You all misspelled #blacklivesmatter. Speak up! @user |
+| N0 | Set fire to Fox News (metaphorically) |
+| N0 | @user is outing #BLACK_LIVES_MATTER as a cult! HE IS CORRECT! |
+| O0 | #DISGUSTING #Democrats terrorize old folks just before #elections2020 |
+| O0 | I love this shit! #BlackLivesMatter |
+| O+ | @user You're a bumbling fool #elections2020 |
+| O+ | Elections 2020: Red Rapist v. Blue Racist |
+| O++ | Y'all trending about kpop stans instead of #BlackLivesMatter big fack you |
+| O++ | Crazy idiots. This is batshit bullshit. #elections2020 |
+
+Table 1: Examples of tweets with different degrees of crowd-workers' agreement. The messages have been created starting from real examples and slightly changing their wording, so as to make it impossible to retrieve the original ones on Twitter. $N =$ Not offensive, $O =$ Offensive; $++/+/0$ correspond to high, medium and low agreement respectively.
+
+Through a manual analysis of the tweets belonging to the $A^0$ class, we can identify a few phenomena that lead to disagreement in annotation. In many cases, tweets are ambiguous and more context would be needed to fully understand whether the user wanted to offend someone or not. These cases include the presence of deictic expressions or pronouns referring to previous tweets, see for example:
+
+(1) Shoulda thrown this clowns bike off the bridge!
+(2) Won't work. Gangs will terrorize the city. Murder at will and maybe they'll shoot the Mayor.
+
+Other cases include generic expressions of anger that are not targeted against a specific person or group, or expressions of negative feelings, see for example:
+
+(3) Amen! Enough of this crap!
+
+Finally, questions, and in particular rhetorical questions, are very frequent in the $A^0$ class and their interpretation seems to represent a challenging task for crowd-workers:
+
+(4) if George Floyd was white would the cop have acted in the same violent, murderous way?
+(5) What is it with these kids of leftist politicians?
+
+Overall, disagreement does not seem to stem from poor annotation by some crowd-workers, but rather from genuine differences in the interpretation of the tweets. Additionally, BLM and the US elections are recent events, and annotators may have been biased by their personal opinions on the topic during annotation, an effect that has already been highlighted in Sap et al. (2019, 2020).
+
+# 4 Classification experiments
+
+After collecting information on human agreement on tweets covering three different domains, we aim at assessing the impact of (dis)agreement on classifier behaviour.
+
+To this end, we create several balanced configurations of the datasets, so as to control for the effect of agreement level, label distribution and domain topic. We first split the data of each domain into a training set ( $75\%$ ) and a test set ( $25\%$ ). Then, to control for the effect of training data size, we further downsample all sets to the size of the smallest one, so that each agreement class ( $A^{++}$ , $A^{+}$ , $A^0$ ) is equally represented. In this way, we obtain 3 training sets, one per ambiguity level, containing 900 tweets each. Every set further contains 300 tweets from each domain, half with the offensive label and half with the non-offensive one, so as to control also for the effect of label distribution across domains and agreement levels.
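
The balancing described above amounts to sampling 150 tweets per (agreement level, domain, label) cell, i.e. 150 × 3 domains × 2 labels = 900 tweets per agreement split. A minimal sketch, with an illustrative record schema:

```python
import random
from collections import defaultdict

def balanced_training_sets(tweets, per_cell=150, seed=0):
    """Sample `per_cell` tweets for every (agreement, domain, label) cell,
    returning one balanced training set per agreement level.
    `tweets` is a list of dicts with keys: text, domain, label, agreement
    (the schema is illustrative, not the paper's actual format)."""
    rng = random.Random(seed)
    cells = defaultdict(list)
    for t in tweets:
        cells[(t["agreement"], t["domain"], t["label"])].append(t)
    splits = defaultdict(list)
    for (agreement, _, _), items in cells.items():
        splits[agreement].extend(rng.sample(items, per_cell))
    return splits  # e.g. splits["A++"] holds 900 balanced tweets
```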
+
+# 4.1 Impact of (dis)agreement in training data
+
+To assess the impact of the agreement level in training data, we run a series of experiments comparing two different classifiers: the first relies on BERT directly fine-tuned on domain data, while the second also foresees an intermediate fine-tuning step on the entire dataset in Founta et al. (2018), inspired by the supplementary training approach of Phang et al. (2018). BERT is used with the same parameters as the ensemble classifiers, reported in Section 3.1. The domain data used for fine-tuning are built from the training data described above, divided into different agreement levels ( $A^{++}$ , $A^{+}$ , $A^0$ and their combinations).
+
+| Training split | Training size | All domains | Founta + all domains |
+| --- | --- | --- | --- |
+| A++ | 900 | 0.746 | 0.757 |
+| A+ | 900 | 0.734 | 0.753 |
+| A0 | 900 | 0.639 | 0.683 |
+| A++/+ | 1800 | 0.755 | 0.756 |
+| A+/0 | 1800 | 0.728 | 0.724 |
+| A++/0 | 1800 | 0.723 | 0.730 |
+| A++/+/0 | 2700 | 0.745 | 0.752 |
+
+Table 2: Performance (F1) when training on data with different levels of human agreement (rows), fine-tuned either on domain data only or on the dataset from Founta et al. (2018) followed by domain data. Baseline, training only on Founta et al. data: 0.667 F1.
+
+Results are reported in Table 2. Note that, for training, the tweets in a given partition are merged across all domains, while testing is carried out on each domain separately. The reported F1 is the average of the three per-domain results (the individual results can be found in the Appendix and are consistent with those reported here). We observe that, when considering a single level of agreement, data with total agreement ( $A^{++}$ ) are the best for prediction, to the point that $A^{++}$ data alone yield better results than all the data available in the three splits combined, despite the difference in size (900 vs. 2,700 instances). Additionally, the combination of high and mild agreement data ( $A^{++/+}$ ) yields results in line with the best configuration obtained with two fine-tuning steps (0.755 vs. 0.757). This clearly indicates that for this kind of task it is not necessary to collect huge datasets for fine-tuning, since a small amount of data from the target domain may suffice if properly selected. Finally, using low agreement data for training is detrimental, in line with findings reported in past work (Reidsma and op den Akker, 2008; Jamison and Gurevych, 2015). This emerges from two results: using generic data alone, as in our baseline, is better than using low agreement in-domain data (0.667 vs. 0.639), and all configurations where $A^0$ is added to mild or high agreement data perform worse than those without $A^0$ (0.734 vs. 0.728 and 0.746 vs. 0.723).
+
+# 4.2 Impact of (dis)agreement in test data
+
+As a next step, we investigate how the classifier's performance varies as a function of annotators' agreement in the test data. To this end, we also divide our test set into subsets according to the same agreement levels ( $A^{++}$ , $A^{+}$ , $A^0$ ) and calculate a separate F1 on each of these splits. We run the 'all domains' classifier described in Section 4.1, i.e. trained on the three domains and tested on each of them; results, reported in Table 3, are obtained by averaging the F1 over the domains.
+
+We observe a dramatic drop in performance when agreement decreases in the test set, indicating that ambiguous data are the most challenging to classify. These results highlight the need to control for ambiguity also in the test set when creating offensive language benchmarks (for example in shared tasks), in order to avoid high system performance being due to a lack of challenging examples. The best performance on ambiguous data is obtained when training on unambiguous and mildly ambiguous data ( $A^{++/+}$ ). Interestingly, adding $A^+$ data to $A^{++}$ data leads to the highest increase in performance exactly on $A^0$ test data (from 0.552 to 0.574). This rules out the possibility that a given level of disagreement in the training set is more effective for classifying the same type of ambiguity in the test set (e.g. training and testing on $A^0$ data), and suggests that high or mild agreement training sets perform better in all cases.
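
The F1 averaging used throughout Tables 2 to 4 can be sketched as follows; we assume F1 is computed from raw true-positive/false-positive/false-negative counts on the offensive class, a detail the paper does not spell out:

```python
def f1(tp, fp, fn):
    """F1 score from raw counts on the positive (offensive) class."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

def average_f1(per_domain_counts):
    """Average the per-domain F1 scores, one (tp, fp, fn) triple per domain,
    as done for the three domains in the paper's tables."""
    scores = [f1(*counts) for counts in per_domain_counts]
    return sum(scores) / len(scores)
```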
+
+| Training split | Training size | Tested on | F1 |
+| --- | --- | --- | --- |
+| A++/+ | 1800 | A++ | 0.860 |
+| A++/+ | 1800 | A+ | 0.768 |
+| A++/+ | 1800 | A0 | 0.574 |
+| A++ | 900 | A++ | 0.847 |
+| A++ | 900 | A+ | 0.763 |
+| A++ | 900 | A0 | 0.552 |
+| A0 | 900 | A++ | 0.662 |
+| A0 | 900 | A+ | 0.639 |
+| A0 | 900 | A0 | 0.567 |
+
+Table 3: Performance (F1) of models trained on $A^{++/+}$ , $A^{++}$ and $A^0$ data and tested on each agreement level, using the "all domains" configuration in Table 2.
+
+# 4.3 Impact of (dis)agreement on out-of-domain data
+
+We then test the effect of cross-domain classification according to agreement levels, so as to minimise the impact of possible in-domain overfitting. We repeat the experiments described in the previous section using two domains for training and the third for testing. For example, a classifier was trained on $A^{++}$ data from Covid-19 and the US presidential campaign, and tested on $A^{++}$ data from BLM. This has been repeated for each domain and each agreement level. For conciseness, we report in Table 4 the F1 obtained by averaging the F1 on each domain (per-domain results can be found in the Appendix and are again consistent with those reported here). Results confirm that (i) the classifier yields good performance when the training data have high agreement, even in an out-of-domain scenario, and (ii) adding $A^0$ data to the training set has a detrimental effect on performance. Finally, comparing these results with Table 2, we observe that the effect of overfitting on in-domain data is very limited.
+
+| Training split | Training size | Out of domain | Founta + out of domain |
+| --- | --- | --- | --- |
+| A++ | 600 | 0.719 | 0.747 |
+| A+ | 600 | 0.677 | 0.716 |
+| A0 | 600 | 0.567 | 0.658 |
+| A++/+ | 1,200 | 0.732 | 0.748 |
+| A+/0 | 1,200 | 0.659 | 0.715 |
+| A++/0 | 1,200 | 0.714 | 0.722 |
+| A++/+/0 | 1,800 | 0.722 | 0.737 |
+
+Table 4: Performance (F1) in the out-of-domain setting. Results are the average F1 obtained by the classifier on each domain when trained on the other two.
+
+# 4.4 (Dis)agreement versus Randomness
+
+An additional question we want to address is whether low agreement data provide useful information for training offensive language detection systems, or whether their effect is no better than that of random annotation.
+
+We therefore replicate the experiments of Table 2, replacing the label of each $A^0$ instance with a random one. Since we want to keep the same controlled distribution, we assign equal probability to the $N$ and $O$ labels. Results are reported in Table 5. As can be seen, with $A^{0_{rand}}$ data the results worsen compared to $A^0$ , indicating that the labels in $A^0$ are not assigned by chance and can contain a useful, albeit challenging, signal for the classifier. Consistently with previous results, the more gold and high agreement data are added to the training, the smaller the effect of $A^{0_{rand}}$ . These results also show that coin-flipping, which has been suggested in past work to resolve hard disagreement cases (Beigman Klebanov and Beigman, 2009), may not be ideal because it leads to a loss of information.
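
The $A^{0_{rand}}$ condition can be reproduced with a simple relabelling step; the record schema is illustrative:

```python
import random

def randomize_labels(tweets, seed=0):
    """Build the A0_rand condition: replace each gold label with a coin flip,
    assigning equal probability to 'N' and 'O' so the 50/50 controlled
    distribution is preserved in expectation. Schema is illustrative."""
    rng = random.Random(seed)
    return [dict(t, label=rng.choice(["N", "O"])) for t in tweets]
```

The texts are untouched; only the supervision signal is destroyed, which is exactly what the comparison in Table 5 isolates.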
+
+| Training split | Training size | All domains | Founta + all domains |
+| --- | --- | --- | --- |
+| A0 | 900 | 0.639 | 0.683 |
+| A0_rand | 900 | 0.505 | 0.576 |
+| A+/0 | 1800 | 0.728 | 0.724 |
+| A+/0_rand | 1800 | 0.657 | 0.689 |
+| A++/0 | 1800 | 0.723 | 0.730 |
+| A++/0_rand | 1800 | 0.684 | 0.703 |
+| A++/+/0 | 2700 | 0.745 | 0.752 |
+| A++/+/0_rand | 2700 | 0.719 | 0.730 |
+
+Table 5: Performance (F1) when training on data with different levels of human agreement (rows), replacing $A^0$ labels with random ones ( $A^{0_{rand}}$ ). The first line of each pair is repeated from Table 2 for comparison.
+
+# 5 Experiments on Offenseval dataset
+
+Our experiments show that, when training and test data include tweets with different agreement levels, classification of offensive language is still a challenging task. Indeed, our classification results reported in Tables 2 and 4 suggest that on this kind of balanced data, F1 with Transformer-based models is $\approx 0.75$ . However, system results reported for the last Offenseval shared task on offensive language identification in English tweets (Zampieri et al., 2020) show that the majority of submissions achieved an F1 score $>0.90$ on the binary classification task.
+
+We hypothesize that this gap in performance may depend on a limited presence of low agreement instances in the Offenseval dataset used for evaluation (Zampieri et al., 2019). We therefore randomly sample 1,173 tweets from the task test data ( $30\%$ of the test set) and annotate them with Amazon Mechanical Turk using the same process described in the previous sections (5 annotations per tweet). We slightly modify our annotation guidelines to include the case of profanities, which were explicitly considered offensive in the Offenseval guidelines.
+
+Results, reported in Table 6 (left column), show that the outcome of the annotation is clear-cut: more than $90\%$ of the tweets in the sample have either a high $(A^{+})$ or very high $(A^{++})$ agreement level. Furthermore, only $6.4\%$ of the tweets (75) received a different label from the original Offenseval dataset, half of which are accounted for by the $A^0$ class alone. Our annotation is thus very consistent with the official one, and the distribution is heavily skewed towards high agreement levels, as initially hypothesized.
+
+To understand whether this skewness can be generalised, i.e. whether this sample distribution might be representative of a population distribution, we also estimate the distribution of agreement levels in the initial pool of data (around 400k tweets) we collected using US Election, BLM and Covid-related hashtags (Section 3). The estimated distribution for the $A^{++}$ , $A^{+}$ and $A^0$ classes is reported in Table 6 (right column). A comparison between the two columns shows that the disagreement distribution in the Offenseval sample is in line with the distribution in the data we initially collected before balancing, providing initial evidence that this distribution – with few disagreement cases – might be a 'natural' one for online conversations on Twitter.
+
+| Agreement | Offenseval | 400k Tweets |
+| --- | --- | --- |
+| A++ | 75.62% (887) | 68.52% (274,514) |
+| A+ | 14.75% (173) | 19.08% (76,457) |
+| A0 | 9.63% (113) | 12.40% (49,694) |
+| N++ | 64.36% (755) | 65.12% (260,925) |
+| N+ | 5.80% (68) | 15.50% (62,085) |
+| N0 | 4.60% (54) | 7.94% (31,813) |
+| O0 | 5.03% (59) | 4.46% (17,882) |
+| O+ | 8.95% (105) | 3.59% (14,372) |
+| O++ | 11.25% (132) | 3.39% (13,589) |
+
+Table 6: Comparison of the agreement distribution in the Offenseval sample and its projection on the 400k collected tweets.
+
+Differences emerge when considering the ratio of offensive tweets. In the Offenseval data, the percentage of offensive tweets is more than double that in our data (25.23% vs. 11.44%), because the authors adopted several strategies to over-represent offensive tweets (Zampieri et al., 2019).
+
+As a final analysis, we collect the runs submitted to Offenseval and compute the F1 score of each system over the three levels of agreement separately. Overall, we consider all runs that obtained $\mathrm{F}1 > 0.75$ in the task, i.e. 81 runs out of 85. Results are reported in Table 7 as the average of the F1 obtained by the different systems. This last evaluation confirms our previous findings, since F1 increases as the agreement level in the test data increases. This finding, together with the distribution of agreement levels, shows that the high performance obtained by the best systems in the shared task is most probably influenced by the prevalence of tweets with total agreement.
+
+| Offenseval 2020 test subset | F1 | StDev |
+| --- | --- | --- |
+| A++ (887 tweets) | 0.915 | ± 0.055 |
+| A+ (173 tweets) | 0.817 | ± 0.075 |
+| A0 (113 tweets) | 0.656 | ± 0.067 |
+
+Table 7: Average F1 ± standard deviation obtained by the best systems at Offenseval 2020.
+
+# 6 Discussion and Conclusions
+
+We have presented a data annotation process and a thorough set of experiments for assessing the effect of (dis)agreement in training and test data for offensive language detection. We showed that an ensemble of classifiers can be employed to preliminarily select potentially unambiguous or challenging tweets. By analysing these tweets we found that they represent real cases of difficult decisions, deriving from interesting linguistic phenomena, and are usually not due to low-quality annotations. We also found that such challenging data are minimally present in a popular benchmark dataset, which partly accounts for the high system performance reported on it. We believe that these hard cases should be better represented in the benchmark datasets used for evaluating hate speech detection systems, especially in the test sets, so as to develop more robust systems and avoid overestimating classification performance. This goal can be achieved by complementing the common practice of oversampling the minority offensive class with the oversampling of minority agreement classes.
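
The suggested practice of oversampling minority agreement classes, alongside the minority offensive class, could be sketched as sampling with replacement until every class matches the majority one; the record schema is illustrative:

```python
import random
from collections import defaultdict

def oversample_minority(tweets, key, seed=0):
    """Oversample with replacement so that every value of `key` (e.g. the
    agreement class, or the offensiveness label) appears as often as the
    majority one -- a sketch of the benchmark-building practice suggested
    above, not the paper's own procedure."""
    rng = random.Random(seed)
    groups = defaultdict(list)
    for t in tweets:
        groups[t[key]].append(t)
    target = max(len(g) for g in groups.values())
    out = []
    for g in groups.values():
        out.extend(g)
        out.extend(rng.choices(g, k=target - len(g)))  # pad minority groups
    return out
```

Calling it twice, once with `key="agreement"` and once with `key="label"`, would balance both dimensions in turn.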
+
+From a multilingual perspective, we also noted that at Offenseval 2020 the best performing systems scored 0.90 F1 on Arabic with a training set of 8k tweets, 0.85 on Greek with less than 9k tweets, and 0.82 on Turkish despite having more than 32k training examples. This shows that the amount of training data alone is not sufficient to ensure good classification quality, and that a study of disagreement levels could partly explain these differences too (further corroborated by the fact that the lowest overall inter-annotator agreement score was reported for Turkish).
+
+As future work, we plan to develop better approaches to classify (dis)agreement, in order to ease oversampling of low agreement classes. Preliminary experiments (not reported in this paper) show that the task is not trivial, since supervised learning with LMs such as BERT does not work properly when trying to discriminate between ambiguous and not ambiguous tweets. Indeed, BERT-based classification performed poorly both in the binary task (ambiguous vs. not ambiguous) and in the three-way one (offensive vs. not offensive vs. ambiguous). This suggests that ambiguity is a complex phenomenon where lexical, semantic and pragmatic aspects are involved, which are difficult to capture through a language model.
+
+This corpus, together with the experiments presented in this paper, will hopefully shed light on the important role played by annotators' disagreement, something that we need to understand better and to see as a novel perspective on data. Indeed, if we want to include diversity in the process of data creation and reduce both the exclusion of minorities' voices and demographic misrepresentation (Hovy and Spruit, 2016), disagreement should be seen as a signal and not as noise.
+
+# 7 Ethics Statement
+
+The tweets in this dataset have been annotated by crowd-workers on Amazon Mechanical Turk. All requirements introduced by the platform for tasks containing adult content were implemented, for example adding a warning in the task title. We further avoided putting any constraints on the minimum length of sessions or on the minimum amount of data to be labeled by each crowd-worker, so they were not forced into prolonged exposure to offensive content. Indeed, we observed that crowd-workers tended to annotate in short sessions, on average 20 minutes, which suggests that annotating was not their main occupation. Crowd-workers were compensated on average with 6 US$ per hour.
+
+Although we put strict quality control in place during data collection, we also compensated completed HITs whose annotations were eventually discarded for not reaching the minimum accuracy threshold of $70\%$ w.r.t. the gold standard. We also engaged in email conversations with crowd-workers when they were blocked because of mismatches with the gold standard tweets. In several cases, we clarified the issue with them and subsequently unlocked the task.
+
+Concerning the annotated dataset, we support scientific reproducibility and we would like to encourage other researchers to build upon our findings. However, we are aware that ethical issues may arise from the complexity and delicacy of offensiveness judgments in case they are made public. Therefore, in compliance with Twitter policy, we want to make sure that our dataset will be reused for non-commercial research only, avoiding any discriminatory purpose, event monitoring, profiling or targeting of individuals. The dataset, in the form of tweet IDs with accompanying annotation, can be obtained upon request following the process described at this link: https://github.com/dhfbk/annotators-agreement-dataset. Requesters will be asked to prove their compliance with Twitter policy concerning user protection and non-commercial purposes, as well as to declare that they will not use our dataset to collect any sensitive category of personal information. Also, releasing the tweet IDs instead of the text enforces users' right to be forgotten, since it makes it impossible to retrieve tweets whose authors have deleted them or closed their account. Although we are aware of the risks related to developing and releasing hate speech datasets, this research was carried out with the goal of improving conversational health on social media, partly by exposing the limitations of binary offensive language detection. We believe that our findings confirm the context- and perspective-dependent offensiveness of a message; we therefore avoid aggregated binary labels, stressing the importance of taking multiple points of view (in our case, five raters) into account. Following the same principle of avoiding profiling, crowd-workers' IDs are not included in the dataset, so that it will not be possible to infer annotator-based preferences or biases.
+
+# Acknowledgments
+
+Part of this work has been funded by the KID ACTIONS REC-AG project (n. 101005518) on "Kick-off preventIng and responDing to children and AdolesCenT cyberbullyIng through innovative mOnitoring and educatioNal technologieS", https://www.kidactions.eu/.
+
+# References
+
+Sohail Akhtar, Valerio Basile, and Viviana Patti. 2020. Modeling annotator perspective and polarized opinions to improve hate speech detection. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 8(1):151-154.
+Lora Aroyo, Lucas Dixon, Nithum Thain, Olivia Redfield, and Rachel Rosen. 2019. Crowdsourcing subjective tasks: The case study of understanding toxicity in online discussions. In *Companion Proceedings of The 2019 World Wide Web Conference*, WWW '19, page 1100-1105, New York, NY, USA. Association for Computing Machinery.
+Lora Aroyo and Chris Welty. 2015. Truth is a lie: Crowd truth and the seven myths of human annotation. AI Magazine, 36(1):15-24.
+Ron Artstein and Massimo Poesio. 2008. Survey article: Inter-coder agreement for computational linguistics. Computational Linguistics, 34(4):555-596.
+
+Valerio Basile. 2020. It's the end of the gold standard as we know it. on the impact of pre-aggregation on the evaluation of highly subjective tasks. In Proceedings of the AIxA 2020 Discussion Papers Workshop co-located with the 19th International Conference of the Italian Association for Artificial Intelligence (AIxA2020), Anywhere, November 27th, 2020, volume 2776 of CEUR Workshop Proceedings, pages 31-40. CEUR-WS.org.
+Beata Beigman Klebanov and Eyal Beigman. 2009. From annotator agreement to noise models. Computational Linguistics, 35(4):495-503.
+Beata Beigman Klebanov and Eyal Beigman. 2014. Difficult cases: From data to learning, and back. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 390-396, Baltimore, Maryland. Association for Computational Linguistics.
+Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. In Proceedings of the 11th International AAAI Conference on Web and Social Media, ICWSM '17.
+Barbara Di Eugenio and Michael Glass. 2004. Squibs and discussions: The kappa statistic: A second look. Computational Linguistics, 30(1):95-101.
+Anca Dumitrache, Lora Aroyo, and Chris Welty. 2019. A crowdsourced frame disambiguation corpus with ambiguity. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2164-2170, Minneapolis, Minnesota. Association for Computational Linguistics.
+Antigoni-Maria Founta, Constantinos Djouvas, Despoina Chatzakou, Ilias Leontiadis, Jeremy Blackburn, Gianluca Stringhini, Athena Vakali, Michael Sirivianos, and Nicolas Kourtellis. 2018. Large scale crowdsourcing and characterization of twitter abusive behavior. In 11th International Conference on Web and Social Media, ICWSM 2018.
+Kevin Gimpel, Nathan Schneider, Brendan O'Connor, Dipanjan Das, Daniel Mills, Jacob Eisenstein, Michael Heilman, Dani Yogatama, Jeffrey Flanigan, and Noah A Smith. 2010. Part-of-speech tagging for twitter: Annotation, features, and experiments. Technical report, Carnegie-Mellon University, School of Computer Science.
+Mitchell L. Gordon, Kaitlyn Zhou, Kayur Patel, Tatsunori Hashimoto, and Michael S. Bernstein. 2021. The disagreement deconvolution: Bringing machine learning performance metrics in line with reality. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, CHI '21, New York, NY, USA. Association for Computing Machinery.
+
+Dirk Hovy, Taylor Berg-Kirkpatrick, Ashish Vaswani, and Eduard Hovy. 2013. Learning whom to trust with MACE. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1120-1130, Atlanta, Georgia. Association for Computational Linguistics.
+Dirk Hovy and Shannon L. Spruit. 2016. The social impact of natural language processing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 591-598, Berlin, Germany. Association for Computational Linguistics.
+Pei-Yun Hsueh, Prem Melville, and Vikas Sindhwani. 2009. Data quality from crowdsourcing: A study of annotation selection criteria. In Proceedings of the NAACL HLT 2009 Workshop on Active Learning for Natural Language Processing, pages 27-35, Boulder, Colorado. Association for Computational Linguistics.
+Emily Jamison and Iryna Gurevych. 2015. Noise or additional information? leveraging crowdsourced annotation item agreement for natural language tasks. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 291-297, Lisbon, Portugal. Association for Computational Linguistics.
+Manfred Klenner, Anne Gohring, and Michael Amsler. 2020. Harmonization sometimes harms. In Proceedings of the 5th Swiss Text Analytics Conference and the 16th Conference on Natural Language Processing, SwissText/KONVENS 2020, Zurich, Switzerland, June 23-25, 2020 [online only], volume 2624 of CEUR Workshop Proceedings. CEUR-WS.org.
+Rebecca J. Passonneau. 2004. Computing reliability for coreference annotation. In Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC'04), Lisbon, Portugal. European Language Resources Association (ELRA).
+J. Peterson, R. Battleday, T. Griffiths, and O. Russakovsky. 2019. Human uncertainty makes classification more robust. In 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pages 9616-9625.
+Jason Phang, Thibault Févry, and Samuel R. Bowman. 2018. Sentence encoders on STILTs: Supplementary training on intermediate labeled-data tasks. arXiv preprint arXiv:1811.01088.
+Barbara Plank, Dirk Hovy, and Anders Søgaard. 2014. Learning part-of-speech taggers with inter-annotator agreement loss. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 742–751, Gothenburg, Sweden. Association for Computational Linguistics.
+
+Ines Rehbein and Josef Ruppenhofer. 2011. Evaluating the impact of coder errors on active learning. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 43-51, Portland, Oregon, USA. Association for Computational Linguistics.
+Dennis Reidsma and Rieks op den Akker. 2008. Exploiting 'subjective' annotations. In Coling 2008: Proceedings of the Workshop on Human Judgements in Computational Linguistics, pages 8-16, Manchester, UK. Coling 2008 Organizing Committee.
+Filipe Rodrigues and Francisco C. Pereira. 2018. Deep learning from crowds. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 1611-1618. AAAI Press.
+Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, and Noah A. Smith. 2019. The risk of racial bias in hate speech detection. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1668-1678, Florence, Italy. Association for Computational Linguistics.
+Maarten Sap, Saadia Gabriel, Lianhui Qin, Dan Jurafsky, Noah A. Smith, and Yejin Choi. 2020. Social bias frames: Reasoning about social and power implications of language. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 5477-5490. Association for Computational Linguistics.
+Victor S. Sheng, Foster Provost, and Panagiotis G. Ipeirotis. 2008. Get another label? improving data quality and data mining using multiple, noisy labelers. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '08, page 614-622, New York, NY, USA. Association for Computing Machinery.
+Edwin Simpson, Jonas Pfeiffer, and Iryna Gurevych. 2020. Low resource sequence tagging with weak labels. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05):8862-8869.
+Fei-Yue Wang. 2007. Toward a paradigm shift in social computing: The ACP approach. IEEE Intelligent Systems, 22(5):65-67.
+Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Sara Rosenthal, Noura Farra, and Ritesh Kumar. 2019. Predicting the Type and Target of Offensive Posts in Social Media. In Proceedings of NAACL.
+Marcos Zampieri, Preslav Nakov, Sara Rosenthal, Pepa Atanasova, Georgi Karadzhov, Hamdy Mubarak, Leon Derczynski, Zeses Pitenis, and Cagri Koltekin. 2020. SemEval-2020 Task 12: Multilingual Offensive Language Identification in Social Media (OffenseEval 2020). In Proceedings of SemEval.
+
+# A Annotation Guidelines for AMT
+
+This section contains the instructions provided to annotators on Amazon Mechanical Turk. The first part changes according to the domain:
+
+Covid-19: The tweets in this task have been collected during the pandemic. Would you find the content of the messages offensive? Try to judge the offensiveness of the tweets independently from your opinion but solely based on the abusive content that you may find.
+
+US Presidential campaign: The tweets in this task have been collected during the last US Presidential campaign. Would you find the content of the messages offensive? Try to judge the aggressiveness of the tweets independently from your political orientation but solely based on the abusive content that you may find.
+
+Black Lives Matter: These tweets are related to the Black Lives Matter protests. Would you find the content of the messages offensive? Try to judge the offensiveness of the tweets independently from your opinion but solely based on the abusive content that you may find.
+
+The second part of the task description, by contrast, is the same for all domains: it contains a definition of what is offensive and informs the workers that there is a quality check on the answers:
+
+Offensive: Profanity, strongly impolite, rude, violent or vulgar language expressed with angry, fighting or hurtful words in order to insult or debase a targeted individual or group. This language can be derogatory on the basis of attributes such as race, religion, ethnic origin, sexual orientation, disability, or gender. Also sarcastic or humorous expressions, if they are meant to offend or hurt one or more persons, are included in this category.
+
+Normal: tweets that do not fall in the previous category.
+
+Quality Check: the HIT may contain a gold standard sentence, manually annotated by three different researchers, whose outcome is in agreement. If that sentence is wrongly annotated by a worker, the HIT is automatically rejected.
+
+Asking annotators to label the tweets independently of their views, opinions or political orientation was inspired by recent work showing that making annotators' possible biases explicit contributes to reducing such bias (Sap et al., 2019).
+
+| Training | Training Size | BLM (all) | Covid (all) | Election (all) | BLM (Founta + all) | Covid (Founta + all) | Election (Founta + all) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| A++ | 900 | 0.756 | 0.752 | 0.730 | 0.768 | 0.752 | 0.752 |
+| A+ | 900 | 0.745 | 0.724 | 0.734 | 0.774 | 0.736 | 0.748 |
+| A0 | 900 | 0.647 | 0.644 | 0.626 | 0.689 | 0.652 | 0.707 |
+| A++/+ | 1800 | 0.776 | 0.756 | 0.732 | 0.779 | 0.738 | 0.750 |
+| A+/0 | 1800 | 0.738 | 0.738 | 0.707 | 0.744 | 0.698 | 0.729 |
+| A++/0 | 1800 | 0.732 | 0.733 | 0.704 | 0.746 | 0.723 | 0.721 |
+| A++/+/0 | 2700 | 0.758 | 0.736 | 0.742 | 0.766 | 0.748 | 0.742 |
+
+Table 8: Test results on single domains using a model trained on all domains.
+
+| Training | Training Size | BLM (out of domain) | Covid (out of domain) | Election (out of domain) | BLM (Founta + OOD) | Covid (Founta + OOD) | Election (Founta + OOD) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| A++ | 600 | 0.699 | 0.734 | 0.723 | 0.760 | 0.736 | 0.746 |
+| A+ | 600 | 0.681 | 0.720 | 0.631 | 0.718 | 0.706 | 0.725 |
+| A0 | 600 | 0.557 | 0.603 | 0.542 | 0.674 | 0.629 | 0.672 |
+| A++/+ | 1800 | 0.696 | 0.758 | 0.742 | 0.740 | 0.771 | 0.734 |
+| A+/0 | 1800 | 0.641 | 0.686 | 0.649 | 0.733 | 0.696 | 0.716 |
+| A++/0 | 1800 | 0.695 | 0.726 | 0.720 | 0.737 | 0.706 | 0.722 |
+| A++/+/0 | 2700 | 0.737 | 0.736 | 0.692 | 0.734 | 0.756 | 0.720 |
+
+Table 9: Results in the out-of-domain setting, testing the classifier on each domain when trained on the other two.
+
+# B Impact of (dis)agreement on classification - results in detail
+
+Table 8 displays the domain-specific results for the analysis shown in Section 4.1 of the main document, where for the sake of brevity we reported only an average over the three domains. The table confirms that, also on single domains, training data with a higher level of agreement improve predictions, while training data with a low level of agreement are detrimental. Classification took about 2 minutes on a Titan X for the runs using only domain-specific data. Adding the intermediate fine-tuning on data from Founta et al. (2018) increases the time to 1.5 hours.
+
+# C Impact of (dis)agreement on out-of-domain data - results in detail
+
+Similar to the previous table, Table 9 displays the out-of-domain results for the analysis shown in Section 4.3 of the main document, where we reported only an average over the three domains. The results are consistent with the average scores reported in the main document, i.e., training data with high agreement improve predictions, while training data with low agreement are detrimental. Classification took about the same time as the runs in the single-domain configuration.
+
+# D Twitter data collection
+
+Through its application programming interface (API), Twitter provides access to publicly available messages upon specific request. For each of the domains analysed, we identified a set of hashtags and keywords that unequivocally characterizes the domain and is collectively used. During a specific period of observation, all the tweets containing at least one item of this hashtag/keyword seed list were retrieved in real time (using "filter" as the query). The most relevant entries from the Covid-19 seed list are: covid-19, coronavirus, ncov, #Wuhan, covid19, sarscov2 and covid. Data were collected between 25 January and 09 November 2020. The most relevant entries from the BLM seed list are: george floyd, #blm, black lives matter. Tweets were collected between 24 May 2020 and 16 June 2020. The most relevant entries from the US Elections seed list are: #maga, #elections2020, Trump, Biden, Harris, Pence. These tweets were collected between 30 September 2020 and 04 November 2020.
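The seed-list retrieval can be mimicked with a simple membership check. This is only an illustrative sketch: the actual collection used Twitter's streaming API, and `matches_seed_list` is a hypothetical helper, not the authors' code.

```python
def matches_seed_list(tweet_text, seed_list):
    """Return True if the tweet contains at least one entry from the
    hashtag/keyword seed list, mirroring the real-time 'filter' query."""
    text = tweet_text.lower()
    return any(entry.lower() in text for entry in seed_list)

# the most relevant Covid-19 seed entries listed above
covid_seeds = ["covid-19", "coronavirus", "ncov", "#Wuhan",
               "covid19", "sarscov2", "covid"]
```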
+
+For each domain, a large bulk of data was collected in real time over the specific time span. From these, about 400,000 tweets were randomly selected and evaluated with the ensemble method described in Section 3 of the main paper.
\ No newline at end of file
diff --git a/agreeingtodisagreeannotatingoffensivelanguagedatasetswithannotatorsdisagreement/images.zip b/agreeingtodisagreeannotatingoffensivelanguagedatasetswithannotatorsdisagreement/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..44e48e3eaf3b6f1f87ba6ac82e29ea3995b5d6b2
--- /dev/null
+++ b/agreeingtodisagreeannotatingoffensivelanguagedatasetswithannotatorsdisagreement/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a25c6b15ed58f69668e2143a6527e248323bdac3550f60cfdf315cc799074aa4
+size 390414
diff --git a/agreeingtodisagreeannotatingoffensivelanguagedatasetswithannotatorsdisagreement/layout.json b/agreeingtodisagreeannotatingoffensivelanguagedatasetswithannotatorsdisagreement/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..94ea47c00e56fc44c71babebc030af3daa6416ca
--- /dev/null
+++ b/agreeingtodisagreeannotatingoffensivelanguagedatasetswithannotatorsdisagreement/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4facfc770a3162673d0a65c8a0e493e9bcdf5a822a1d51b58d7c5b8a3bb4d6e6
+size 350575
diff --git a/alabelawarebertattentionnetworkforzeroshotmultiintentdetectioninspokenlanguageunderstanding/c55859b7-448d-4ac1-8f84-3c2e31b38e7d_content_list.json b/alabelawarebertattentionnetworkforzeroshotmultiintentdetectioninspokenlanguageunderstanding/c55859b7-448d-4ac1-8f84-3c2e31b38e7d_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..a489acc97d29c2d8a24f612940a10a738d1f86eb
--- /dev/null
+++ b/alabelawarebertattentionnetworkforzeroshotmultiintentdetectioninspokenlanguageunderstanding/c55859b7-448d-4ac1-8f84-3c2e31b38e7d_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2cab24919178e61a3d05f7b644713ee3a0b8c2a7686cce291cbc10ddd1ca0b4a
+size 90509
diff --git a/alabelawarebertattentionnetworkforzeroshotmultiintentdetectioninspokenlanguageunderstanding/c55859b7-448d-4ac1-8f84-3c2e31b38e7d_model.json b/alabelawarebertattentionnetworkforzeroshotmultiintentdetectioninspokenlanguageunderstanding/c55859b7-448d-4ac1-8f84-3c2e31b38e7d_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..2f0b8670630cf1bf2520e292b7ab6aa4a6758c34
--- /dev/null
+++ b/alabelawarebertattentionnetworkforzeroshotmultiintentdetectioninspokenlanguageunderstanding/c55859b7-448d-4ac1-8f84-3c2e31b38e7d_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5cd7cd441c85c00d6fb04f17f04f4d7bc8acbf1c97262113d90108a17ac635ca
+size 108614
diff --git a/alabelawarebertattentionnetworkforzeroshotmultiintentdetectioninspokenlanguageunderstanding/c55859b7-448d-4ac1-8f84-3c2e31b38e7d_origin.pdf b/alabelawarebertattentionnetworkforzeroshotmultiintentdetectioninspokenlanguageunderstanding/c55859b7-448d-4ac1-8f84-3c2e31b38e7d_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..5e5e027f5c7c404e66ae671e71da39102130267a
--- /dev/null
+++ b/alabelawarebertattentionnetworkforzeroshotmultiintentdetectioninspokenlanguageunderstanding/c55859b7-448d-4ac1-8f84-3c2e31b38e7d_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4c617ca2b322e839a26be5e19d7607e98935df03937bd665d06f142a231bee36
+size 4891834
diff --git a/alabelawarebertattentionnetworkforzeroshotmultiintentdetectioninspokenlanguageunderstanding/full.md b/alabelawarebertattentionnetworkforzeroshotmultiintentdetectioninspokenlanguageunderstanding/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..07f1dac4a20cdbec36adf02076559c854ed45bdc
--- /dev/null
+++ b/alabelawarebertattentionnetworkforzeroshotmultiintentdetectioninspokenlanguageunderstanding/full.md
@@ -0,0 +1,374 @@
+# A Label-Aware BERT Attention Network for Zero-Shot Multi-Intent Detection in Spoken Language Understanding
+
+Ting-Wei Wu, Ruolin Su, Biing-Hwang Juang
+
+Department of Electrical and Computer Engineering
+
+Georgia Institute of Technology
+
+{waynewu, ruolinsu}@gatech.edu, juang@ece.gatech.edu
+
+# Abstract
+
+With the early success of query-answer assistants such as Alexa and Siri, research attempts to expand system capabilities of handling service automation are now abundant. However, preliminary systems have quickly found the inadequacy of relying on simple classification techniques to effectively accomplish the automation task. The main challenge is that the dialogue often involves complexity in the user's intents (or purposes), which are multi-pronged, subject to spontaneous change, and difficult to track. Furthermore, public datasets have not considered these complications, and general semantic annotations are lacking, which may result in the zero-shot problem. Motivated by the above, we propose a Label-Aware BERT Attention Network (LABAN) for zero-shot multi-intent detection. We first encode input utterances with BERT and construct a label embedding space by considering the embedded semantics in intent labels. An input utterance is then classified based on its projection weights onto each intent embedding in this embedded space. We show that this successfully extends to the few/zero-shot setting, where part of the intent labels are unseen in the training data, by also taking into account the semantics in these unseen intent labels. Experimental results show that our approach is capable of detecting many unseen intent labels correctly. It also achieves state-of-the-art performance on five multi-intent datasets in normal cases.
+
+# 1 Introduction
+
+In spoken language understanding (SLU) of task-oriented dialog systems, each utterance is often interpreted as a kind of action being performed by the speaker, which we call speech or dialog acts (Abbeduto, 1983). These acts may commit speakers to some course of actions, like asking or acknowledging, along with a series of distinctive semantic notions involved in a task. Usually the system forms the semantic frames by identifying intents and slots to express dialog acts. For instance,
+
+given a sample utterance, "Are there any accidents on my route to work at 10?", the intent detection task will first identify the intents, i.e., 'Get Info Traffic' and 'Get Location Work', and then the slot-filling task will predict a slot such as (time: 10). In such a case, an 'intent label' for an utterance is defined as a purpose or a goal that clearly states the user's act.
+
+Dominant SLU systems have adopted several techniques to predict single intents by treating the task as a multi-class classification problem (Gao et al., 2018; Goo et al., 2018; Qin et al., 2019). However, in real-world scenarios, many utterances may have multiple intents (Li et al., 2018b; Rastogi et al., 2019), like the above example. Multi-intent SLU often requires more sophisticated reasoning on given utterances to disambiguate the different intent natures. Gangadharaiah and Narayanaswamy (2019) first explored the joint multi-intent and slot-filling task by treating multiple intents as a single context vector, which does not scale to a large number of intents. Qin et al. (2020) further proposed a state-of-the-art model that considers each intent-slot interaction via adaptive graph attention. However, these approaches cannot successfully tackle more complex multi-intent scenarios where sentences may not have explicit conjunctions.
+
+The second challenge in SLU intent detection is intent fluidity variation, by which we refer to the extent of naturalness as a dialogue progresses. Less stylized conversations usually contain a less bounded set of intents, which may change with the dialog context/states. Thus, some utterances' intents may not be seen during training, and this problem deteriorates in the multi-intent scenario (Xia et al., 2020). Moreover, there is no rigorous definition of an intent annotation format or of how many intents should be defined. Therefore, conventional models trained on one dataset with a fixed set of intent labels may fail to detect a new in-domain intent. We refer to this as the zero-shot problem. Larson et al. (2019) suggest a two-stage
+
+process to first classify whether a query is in scope and then to assign intents. However, it cannot scale easily to unseen intents in the multi-intent scenario.
+
+To tackle the above two challenges, we find that leveraging the embedded semantics in intent labels can be useful. In conventional intent classification, systems usually classify an utterance into a label represented by an indexed ID such as 0 (i.e., one-hot encoding). However, representing intents with indexed IDs fails to consider the embedded semantics in the labels. For instance, we can use the words 'get' and 'direction' in an intent label 'get direction' to help identify semantically equivalent words in an utterance, i.e., I 'want direction' to SF. For a given set of intent labels within one domain, we can compare the semantic similarity between words in an utterance and words in these intents. Similarly, in the zero-shot setting, even if some intents are not visible during training, we can still compare the word semantics in these intents with a new utterance.
+
+In this paper, we propose our new framework, the Label-Aware BERT Attention Network (LABAN), shown in Figure 1. We first introduce BERT to capture the multi-intent nature of utterances that do not have explicit conjunctions. Then, instead of treating intent labels only as indexed IDs, we use the words in each intent label in the training data to construct a label embedding space. After encoding an utterance and all intents in a given training set into embeddings separately, a label-aware layer generates scores for how likely the utterance belongs to each intent. To accommodate the zero-shot case, we can additionally introduce unseen intents' embeddings to jointly construct the embedding space. In contrast with prior works, which can only predict seen intents, our model removes this constraint by considering the semantics in intent labels to deal with new unseen labels. The code and resources are released at https://github.com/waynewu6250/LABAN. The paper has the following contributions:
+
+1. We extend the first use of BERT to the multi-intent SLU scenario with a simple but powerful label-aware approach.
+2. We successfully demonstrate LABAN's effectiveness in dealing with unseen multiple intents and in quickly harnessing the intent detection task by training with little data on unseen intents.
+3. We compare LABAN's performance on five extended and complex multi-intent datasets, which shows significant improvement over previous methods and baselines by considering the contextualized information from BERT and label semantics.
+
+# 2 Related Work
+
+Multi-intent Detection Intent detection mainly aims to classify a given utterance with its intents from user inputs. Different approaches such as convolutional-LSTM and capsule networks have been proposed to solve the problem (Qian, 2017; Liu et al., 2017; Xia et al., 2018). Considering that intents are highly associated with slot-filling, many joint models (Goo et al., 2018; Li et al., 2018a; Qin et al., 2019; E et al., 2019; Liu et al., 2019b) utilize intent information such as gradients or cross-impact networks to further reinforce the slot-filling prediction. However, these methods do not consider multiple-intent cases. Therefore, Rychalska et al. (2018) first adopted hierarchical structures to identify multiple user intents. Gangadharaiah and Narayanaswamy (2019) and Qin et al. (2020) further exploited interactive relations between intents and slots. Wu et al. (2021) leveraged the dialog context to better harness the joint tasks. Our model follows these models' paradigm and focuses on more complex cases: 1) multiple intents no longer exist in separate parts of the sentence, where our introduction of BERT can be beneficial, and 2) some testing intents are not available during training.
+
+Zero-shot Learning Zero-shot learning (ZSL) aims to recognize objects whose instances may not be seen during training (Lampert, 2014). Early works usually focused on the field of computer vision (Lampert, 2014; Al-Halah et al., 2016; Norouzi et al., 2014). They adopted a two-stage approach that first identifies an object's attributes and then estimates class posteriors based on similarity, which often suffered from domain shift between intermediate and target tasks. Recent advances in ZSL directly learn a mapping between feature and semantic spaces (Palatucci et al., 2009; Akata et al., 2016; Frome et al., 2013) or build a common intermediate space (Zhang and Saligrama, 2015; Xian et al., 2017). Similar treatment can be applied in natural language. Chen et al. (2016) proposed CDSSM to consider the cosine similarity of deep semantics from utterances and intents. Xia et al. (2018) and Liu et al. (2019a) extended ZSL to user intent detection with capsule neural networks. Si
+
+
+Figure 1: This figure shows the overall LABAN framework. (a) During training phase, two BERT encoders will encode both the utterance and all seen intent labels. Then the utterance embedding will be projected onto a constructed semantic embedding space $\mathcal{T}$ with projected weights as scores. (b) During testing phase, new unseen intents will also be encoded and participate in constructing $\mathcal{T}'$ to generate scores based on a new utterance.
+
+et al. (2021) proposed disentangled intent representations for multi-task training. We follow these works and extend them to multi-intent detection with intent semantics and pretrained models.
+
+# 3 Problem Formulation
+
+In this section, we formally state the multi-intent detection problem in the normal and zero-shot cases. Multi-Intent Detection. Given a labeled training dataset where each sample has the format $(x,y)$, $x$ is an utterance and $y = (y_{1},\dots,y_{K})\in \{0,1\}^{K}$ is a set of multiple binary intent labels. Each $y_{i}$ corresponds to an intent in a set $Y^s$ of $K$ seen intents. We aim to classify an utterance $x_{\text{seen}}$ into the seen intent classes $Y^s$.
+
+Zero-shot Multi-Intent Detection. Given a labeled training dataset $(x,y)$ where $y\in Y^{s}$, in testing we aim to classify an utterance $x_{\text{unseen}}$ with its correct intent categories $y_{\text{unseen}} = (y_1,\dots,y_{K + L})\in \{0,1\}^{K + L}$ from the seen and unseen intent classes $Y = Y^{s}\cup Y^{u}$. Here $Y^{u}$ is a set of $L$ unseen intents, which is given along with $Y^{s}$ as domain ontology during testing but is not visible in training.
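Under this formulation, each label $y$ is a multi-hot vector over a fixed intent ontology. A minimal sketch (the `multi_hot` helper and the example label names are illustrative, not from the paper):

```python
def multi_hot(active_intents, ontology):
    """Encode a set of active intents as y in {0,1}^K over a fixed,
    ordered ontology of K intent labels."""
    return [1 if label in active_intents else 0 for label in ontology]

# hypothetical three-intent ontology for illustration
ontology = ["Get Info Traffic", "Get Location Work", "Book Flight"]
y = multi_hot({"Get Info Traffic", "Get Location Work"}, ontology)
```

For the zero-shot case, the ontology is simply extended with the $L$ unseen labels at test time while the encoding scheme stays the same.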
+
+# 4 Approach
+
+# 4.1 Utterance encoder
+
+BERT is a multi-layer transformer-based encoder containing multi-head self-attention layers (Devlin
+
+et al., 2019). Models fine-tuned on BERT have achieved several benchmark results on many natural language tasks (Sun et al., 2020). Therefore, we first adopt one BERT, $BERT_{u}$, to encode an input utterance $x = (w_{1},\dots,w_{T_{u}})$, padded up to a max sequence length $T_{u}$.
+
+$$
+h^{u} = BERT_{u}(x) \tag{1}
+$$
+
+where $h^u \in \mathbb{R}^{T_u \times H}$ is the token-level representation of $x$ and $H$ is the hidden size of BERT. Then, we adopt two methods to further encode it into a sentence embedding $r^u \in \mathbb{R}^H$. First, we can take the hidden state $h_1^u$ at the first time step, corresponding to [CLS], as $r^u = h_1^u$ (BERT-finetune). Alternatively, to better consider each word's importance to the overall sentence embedding, we follow the work of Lin et al. (2017) and use a self-attentive network.
+
+$$
+\bar{h}_{t}^{u} = W h_{t}^{u} + b_{w} \tag{2}
+$$
+
+$$
+\alpha_{t} = \frac{e^{\bar{h}_{t}^{u\top} u_{w}}}{\sum_{t'} e^{\bar{h}_{t'}^{u\top} u_{w}}} \tag{3}
+$$
+
+$$
+r^{u} = \sum_{t} \alpha_{t} h_{t}^{u} \tag{4}
+$$
+
+where each $h_t^u$ in $h^u$ is fed into an affine transformation $(W, b_w)$ to output $\bar{h}_t^u$. Then $\{\alpha_t\}$ are the similarity scores between each $\bar{h}_t^u$ and $K$ heads of learnable context vectors $u_w$, which serve as global sentence views; for each head, we obtain a sentence representation $r_h^u$. Finally, we concatenate all heads into the final representation $r^u$.
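As a sketch, Eqs. (2)-(4) for a single attention head can be written with NumPy (the parameter shapes are assumptions; in the model $W$, $b_w$, and $u_w$ are learned jointly with BERT):

```python
import numpy as np

def self_attentive_pooling(h, W, b_w, u_w):
    """Collapse token-level states h (T x H) into one sentence vector
    via the self-attentive mechanism: affine transform, softmax over
    tokens, then a weighted sum. Single attention head only."""
    h_bar = h @ W.T + b_w                 # Eq. (2): affine transform
    scores = h_bar @ u_w                  # similarity with context vector u_w
    alpha = np.exp(scores - scores.max())
    alpha = alpha / alpha.sum()           # Eq. (3): softmax over tokens
    return alpha @ h                      # Eq. (4): weighted sum of h_t
```

With a zero context vector the scores are uniform and the pooling degenerates to a plain mean over tokens, which is a handy sanity check.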
+
+# 4.2 Adaptive label-aware attentive layer
+
+Inspired by few-shot learning works (Snell et al., 2017; Reimers and Gurevych, 2019), instead of classifying an utterance into a predefined set of intents, we leverage the linear approximation idea (del Pino and Galaz, 1995) to determine the intents of an utterance. The linear approximation problem states: let $S$ be a Hilbert space and $\mathcal{T}$ a subspace of $S$; given a vector $z \in S$, we seek the closest point $\hat{z} \in \mathcal{T}$ to $z$. The solution $\hat{z} = \sum_{k=1}^{N} \beta_k v_k$ is a linear combination of a basis $v_1, \ldots, v_N$ of the $N$-dimensional subspace $\mathcal{T}$, with $\beta = \mathbf{G}^{-1}\mathbf{b}$, where the Gram matrix has elements $G_{k,n} = \langle v_n, v_k \rangle$ and $b_n = \langle z, v_n \rangle$.
+
+To transform the above idea into the multi-intent detection setting, we first construct an intent embedding subspace $\mathcal{T}$ with a basis $\{r_1^l,\dots,r_K^l\}$ given a set $Y^{s}$ of $K$ intents. To obtain $\{r_1^l,\dots,r_K^l\}$, we adopt another BERT, $BERT_{l}$, to encode the $K$ intents. Namely, every intent $y_{i}$ in the given set $Y^{s}$ can be expressed as a word sequence $(w_{1},\dots,w_{T_{l}})$, and we use $BERT_{l}$ with the self-attentive layer of Section 4.1 to encode it into an intent embedding $r_i^l$. The reason for using a BERT different from $BERT_{u}$ is that intents often have very different syntactic structures (e.g., no subjects) compared to utterances.
+
+With such intent encoding, we obtain $K$ intent embeddings as our basis $\{r_1^l,\dots,r_K^l\}$ to construct an intent embedding space $\mathcal{T}$. Then, as shown in Figure 1, for an utterance embedding $r^u$, we can project it onto $\mathcal{T}$ to obtain its linear approximation $\hat{r}^u = \sum_{i=1}^{K} w_i r_i^l$, where $\mathbf{w} \in \mathbb{R}^K$ is computed as $\mathbf{w} = \sqrt{H}\, \mathbf{G}^{-1} \mathbf{b}$. The Gram matrix $\mathbf{G}$ and the vector $\mathbf{b}$ are as follows:
+
+$$
+\mathbf{G} = \begin{bmatrix} \langle r_{1}^{l}, r_{1}^{l} \rangle & \dots & \langle r_{K}^{l}, r_{1}^{l} \rangle \\ \vdots & \ddots & \vdots \\ \langle r_{1}^{l}, r_{K}^{l} \rangle & \dots & \langle r_{K}^{l}, r_{K}^{l} \rangle \end{bmatrix} \tag{5}
+$$
+
+$$
+\mathbf{b} = \begin{bmatrix} \langle r^{u}, r_{1}^{l} \rangle \\ \vdots \\ \langle r^{u}, r_{K}^{l} \rangle \end{bmatrix} \tag{6}
+$$
+
+To note, we assume $\{r_1^l,\dots,r_K^l\}$ are linearly independent, since each vector represents the concept of an intent, which should not be a linear combination of other intent vectors. Hence, $\mathbf{G}$ is guaranteed to be positive definite and has an inverse. We further multiply by a scaling factor $\sqrt{H}$ when computing $\mathbf{w}$, for empirical reasons, since $\mathbf{G}^{-1}$ tends to shrink the overall product to small values.
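Putting Eqs. (5)-(6) together, the projection weights $\mathbf{w} = \sqrt{H}\,\mathbf{G}^{-1}\mathbf{b}$ can be sketched as follows (a hypothetical standalone helper, not the released code):

```python
import numpy as np

def label_projection_scores(r_u, R_l, scale=True):
    """Project an utterance embedding r_u (shape (H,)) onto the span of
    K intent embeddings R_l (shape (K, H)); returns w such that the
    linear approximation is r_u_hat = w @ R_l."""
    G = R_l @ R_l.T                 # Gram matrix, Eq. (5)
    b = R_l @ r_u                   # inner products <r^u, r_i^l>, Eq. (6)
    w = np.linalg.solve(G, b)       # w = G^{-1} b
    return np.sqrt(R_l.shape[1]) * w if scale else w
```

With an orthonormal basis the weights reduce to plain inner products, which makes the role of $\mathbf{G}^{-1}$ explicit: it corrects for non-orthogonal intent embeddings.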
+
+After obtaining $\mathbf{w}$, these projection weights can be viewed as scores of how likely an utterance $x$ belongs to each intent $y_{i}$. We follow Qin et al. (2020) in treating this as a multi-label classification task and generate the logits $\hat{y} = \sigma(\mathbf{w})$ by passing $\mathbf{w}$ through a sigmoid function $\sigma$. Finally, the intent detection objective is a binary cross-entropy loss, where $N$ is the number of samples:
+
+$$
+\mathcal{L} := -\sum_{i = 1}^{N} \sum_{j = 1}^{K} \left[ y_{j}^{(i)} \log \hat{y}_{j}^{(i)} + \left(1 - y_{j}^{(i)}\right) \log \left(1 - \hat{y}_{j}^{(i)}\right) \right] \tag{7}
+$$
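Equation (7) is the standard element-wise binary cross-entropy summed over samples and intents; a NumPy sketch:

```python
import numpy as np

def multi_intent_bce(y_true, y_hat, eps=1e-12):
    """Binary cross-entropy of Eq. (7). y_true and y_hat have shape
    (N, K); y_hat are sigmoid outputs; eps guards against log(0)."""
    y_hat = np.clip(y_hat, eps, 1.0 - eps)
    return -np.sum(y_true * np.log(y_hat)
                   + (1.0 - y_true) * np.log(1.0 - y_hat))
```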
+
+During testing, after obtaining $\hat{y} \in \mathbb{R}^K$, the probabilities of the utterance belonging to each intent, we set a threshold $t$ with $0 < t < 1$ as a hyperparameter to select the final predicted intents. For instance, if $\hat{y} = \{0.3, 0.6, 0.9, 0.1, 0.4\}$ and $t = 0.5$, the predicted intents are $\{2, 3\}$.
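The thresholding step, reproducing the worked example above (intent positions 1-indexed; `predict_intents` is an illustrative helper):

```python
def predict_intents(y_hat, t=0.5):
    """Select the 1-indexed intent positions whose score exceeds t."""
    return [i + 1 for i, p in enumerate(y_hat) if p > t]
```

Calling `predict_intents([0.3, 0.6, 0.9, 0.1, 0.4], t=0.5)` returns `[2, 3]`, matching the example.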
+
+# 4.3 Zero-shot setting
+
+For normal multi-intent detection, after training on a given set $Y^{s}$ of $K$ seen intents, we use the method of Section 4.2 to compute the scores of a new utterance $x_{\text{seen}}$ with respect to each intent. This extends easily to the zero-shot setting. We first train $BERT_{u}$ and $BERT_{l}$ on the training data of the $K$ seen intents $Y^{s}$. Then, during testing, given a set $Y^{u}$ of $L$ new unseen intents, we encode these intents into intent embeddings $\{r_{K+1}^l, \dots, r_{K+L}^l\}$ with the trained $BERT_{l}$. Finally, together with the seen intent set $Y^{s}$, we construct an extended intent subspace $\mathcal{T}'$ with the basis $\{r_1^l, \dots, r_K^l, r_{K+1}^l, \dots, r_{K+L}^l\}$ and similarly generate scores for each seen and unseen intent given a new utterance $x_{\text{unseen}}$.
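The zero-shot extension only changes the basis: seen and unseen intent embeddings are stacked into one matrix before solving the same projection problem. A sketch with made-up embeddings (not the released code):

```python
import numpy as np

def zero_shot_scores(r_u, R_seen, R_unseen):
    """Scores over K seen + L unseen intents: stack both embedding sets
    into the extended basis T' (shape (K+L, H)) and project r_u onto it."""
    R = np.vstack([R_seen, R_unseen])     # basis of T'
    G = R @ R.T                           # Gram matrix over K+L intents
    b = R @ r_u
    return np.sqrt(R.shape[1]) * np.linalg.solve(G, b)
```

An utterance whose embedding lies closest to an unseen intent's embedding receives its highest score there, even though that intent never appeared in training.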
+
+# 5 Experimental Setting
+
+# 5.1 Datasets
+
+We use three widely used public multi-intent single-sentence datasets, MixATIS, MixSNIPS (Qin et al., 2020; Hemphill et al., 1990; Coucke et al., 2018), and the Facebook Semantic Parsing System (FSPS) dataset (Gupta et al., 2018), and two multi-intent dialogue datasets, the Microsoft dialogue challenge dataset (MDC) (Li et al., 2018b) and the Schema-Guided Dialogue dataset (SGD) (Rastogi et al., 2019), for our experiments. For FSPS, we focus on predicting all intents for each utterance regardless of their positions. For MDC and SGD, we treat each utterance as an individual sample with its multiple user and system acts as intents. We use all datasets for normal and zero-shot multi-intent detection and also include single-intent detection results on the ATIS (Hemphill et al., 1990) and SNIPS (Coucke et al., 2018) datasets. The detailed data statistics are shown in Table 1.
+
+| Dataset | Data Type | train/val/test | Total Labels |
+| --- | --- | --- | --- |
+| MixATIS | single | 18k/1k/1k | 17 |
+| MixSNIPS | single | 45k/2.5k/2.5k | 7 |
+| FSPS | single | 31k/4.4k/9k | 24 |
+| MDC | dialogue | 45k/15k/15k | 11 |
+| SGD | dialogue | 198k/66k/66k | 18 |
+
+Table 1: Dataset statistics
+
+For the zero-shot task, we use the single-sentence datasets MixATIS, MixSNIPS, and FSPS. We subsample each dataset 5 times with the same train/valid/test sizes and report the average results over the 5 random splits. In each split, we simulate the situation where the training data contain only a subset of the intent labels while the test set has all intent labels. For instance, MixATIS has 17 labels in total; we keep $\mathrm{K} < 17$ possible intents seen in the training set while the test set covers all 17 intents. In the experiments, we set 4 possible values of $\mathrm{K}$ for each of the three datasets. For the few-shot task, we add $5\%$ and $10\%$ of the test data into the training data and evaluate on the remaining test data. We also replace BERT with two variants, ALBERT and TOD-BERT, as the utterance encoder for additional baselines.
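The split simulation can be sketched roughly as follows (our own simplification for illustration, not the authors' code): sample $K$ seen labels and keep only the training utterances whose intent sets are fully covered by them, leaving the test set untouched:

```python
import random

def make_zero_shot_split(train_data, all_labels, k, seed=0):
    """train_data: list of (utterance, set_of_intent_labels) pairs.
    Returns the sampled seen-label set and the filtered training data;
    samples containing any unseen label are dropped from training."""
    rng = random.Random(seed)
    seen = set(rng.sample(sorted(all_labels), k))
    filtered = [(u, y) for u, y in train_data if y <= seen]  # y subset of seen
    return seen, filtered
```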
+
+# 5.2 Baselines
+
+We compare the normal multi-intent detection results with three competitive baseline models:
+
+1. Stack-Prop, which uses two stacked encoder-decoder structures for the joint intent detection and slot filling tasks (Qin et al., 2019).
+2. Joint MID-SF, which first considers the multi-intent detection task using BiLSTMs (Gangadharaiah and Narayanaswamy, 2019).
+3. AGIF, which uses an adaptive graph-interactive framework to incorporate fine-grained intent information (Qin et al., 2020).
+
+We also compare zero-shot multi-intent detection results with seven competitive baselines:
+
+1. BERT-finetune uses BERT as the encoder and increases the total output size of the final fully-connected layer on top of it (Devlin et al., 2019).
+2. Zero-shot LSTM uses two LSTM encoders to encode utterances and intents, then acquires scores with a dot product (Kumar et al., 2017).
+3. CDSSM uses a convolutional deep structured semantic model to calculate cosine similarities between embeddings (Chen et al., 2016).
+4. Zero-shot BERT uses BERT as the encoder for Zero-shot LSTM (Kumar et al., 2017) instead.
+5. CDSSM BERT uses BERT as the encoder for CDSSM (Chen et al., 2016) instead.
+6. ALBERT-LA uses ALBERT as encoder along with our label-aware layer (Lan et al., 2020).
+7. TOD-BERT-LA uses TOD-BERT, a pretrained encoder for task-oriented dialogs, along with our label-aware attentive layer (Wu et al., 2020).
+
+# 5.3 Experimental setting
+
+We use the pretrained BERT with 12 hidden layers of 768 units and 12 self-attention heads. The model is trained for 50 epochs and we save the checkpoint with the best performance on the validation set. For the zero/few-shot setting, we randomly pick a number of intents to be unseen in the training set, run experiments over 5 different splits, and report the average. We set the threshold $t$ to 0.5 for multi-label classification. We follow the metrics used in Qin et al. (2020): intent accuracy and F1 score.
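The two metrics can be sketched as follows (our reading of the setup: exact-match accuracy over the full intent set of each utterance, and micro-averaged F1 over individual intents; the micro-averaging choice is our assumption, not stated in this section):

```python
def exact_match_accuracy(preds, golds):
    """Fraction of utterances whose predicted intent set matches the
    gold set exactly (all intents correct)."""
    return sum(p == g for p, g in zip(preds, golds)) / len(golds)

def micro_f1(preds, golds):
    """Micro-averaged F1 over individual intents; preds and golds are
    lists of intent-label sets, one per utterance."""
    tp = sum(len(p & g) for p, g in zip(preds, golds))
    fp = sum(len(p - g) for p, g in zip(preds, golds))
    fn = sum(len(g - p) for p, g in zip(preds, golds))
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0
```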
+
+# 6 Main Results
+
+# 6.1 Multi-intent detection
+
+Table 2 shows the normal multi-intent detection results on all five datasets. We observe that LABAN outperforms the baselines substantially in multi-intent detection, especially on MixATIS and FSPS. This demonstrates the benefit of fine-tuning BERT to capture more precise contextualized information for the downstream task. LABAN also exploits the semantics of the intent labels, and the improvement grows as the number of intents increases, i.e., a larger gain on MixATIS with 17 intents than on MixSNIPS with only 7 intents. For datasets that do not have explicit conjunction words between sentences, such as FSPS, MDC, and SGD, we observe a large increase in accuracy for our model. Beyond multi-intent detection, Table 4 shows that LABAN also outperforms the other baselines when dealing with just one intent.
+
+# 6.2 Zero-shot Multi-intent detection
+
+To further justify our model's main contribution in zero-shot cases, we compare LABAN with several competitive baselines. As shown in Table 3,
+
+| Model | MixATIS F1 | MixATIS Acc | MixSNIPS F1 | MixSNIPS Acc | FSPS F1 | FSPS Acc | MDC F1 | MDC Acc | SGD F1 | SGD Acc |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Stack-Prop | 0.790 | 0.719 | 0.976 | 0.946 | 0.911 | 0.723 | 0.877 | 0.780 | 0.919 | 0.891 |
+| Joint MID-SF | 0.806 | 0.731 | 0.980 | 0.951 | 0.877 | 0.780 | 0.855 | 0.754 | 0.907 | 0.850 |
+| AGIF | 0.812 | 0.758 | 0.985 | 0.961 | 0.914 | 0.749 | 0.907 | 0.741 | 0.924 | 0.761 |
+| LABAN | 0.958† | 0.889† | 0.985 | 0.963 | 0.948† | 0.913† | 0.898 | 0.814† | 0.950† | 0.928† |
+
+Table 2: Normal multi-intent detection results on five datasets. We report accuracy (Acc) for exact match over all intents and F1 scores computed over individual intents. $\dagger$ indicates a significant improvement ($p$-value $< 0.05$) over the previous state-of-the-art model AGIF.
+
+| Model | FSPS F1-a | FSPS F1-s | FSPS F1-u | MixATIS F1-a | MixATIS F1-s | MixATIS F1-u | MixSNIPS F1-a | MixSNIPS F1-s | MixSNIPS F1-u |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| BERT-finetune | 0.365 | 0.479 | 0.000 | 0.592 | 0.836 | 0.000 | 0.490 | 0.653 | 0.000 |
+| Zero-shot LSTM | 0.341 | 0.494 | 0.029 | 0.533 | 0.728 | 0.055 | 0.475 | 0.546 | 0.264 |
+| CDSSM | 0.496 | 0.440 | 0.394 | 0.592 | 0.827 | 0.060 | 0.591 | 0.659 | 0.432 |
+| Zero-shot BERT | 0.517 | 0.461 | 0.373 | 0.463 | 0.576 | 0.162 | 0.472 | 0.464 | 0.370 |
+| CDSSM BERT | 0.494 | 0.486 | 0.348 | 0.491 | 0.614 | 0.041 | 0.481 | 0.481 | 0.402 |
+| ALBERT-LA | 0.391 | 0.425 | 0.228 | 0.595 | 0.739 | 0.362 | 0.567 | 0.574 | 0.466 |
+| TOD-BERT-LA | 0.419 | 0.369 | 0.405 | 0.702 | 0.782 | 0.459 | 0.642 | 0.641 | 0.559† |
+| BERT-LA (LABAN) | 0.544 | 0.471 | 0.451† | 0.696 | 0.808 | 0.518† | 0.640 | 0.622 | 0.526 |
+
+Table 3: Performance on zero-shot multi-intent detection compared with several competitive baselines. Here we choose the train/test label ratio to be FSPS 17/24, MixATIS 14/17, MixSNIPS 5/7. F1-a, F1-s, F1-u are F1 scores evaluated on data with all/seen/unseen intent labels. $\dagger$ indicates a significant improvement ($p$-value $< 0.05$) on F1-u compared with CDSSM.
+
+| Model | ATIS | SNIPS |
+| --- | --- | --- |
+| Stack-Propagation | 0.969 | 0.980 |
+| Joint MID-SF | 0.954 | 0.972 |
+| AGIF | 0.971 | 0.981 |
+| LABAN | 0.978 | 0.982 |
+
+Table 4: Single intent detection accuracy results on two single-intent datasets compared with baseline models.
+
+BERT-finetune, which simply enlarges the output layer for unseen intents, is not capable of predicting any utterance with unseen intents, resulting in 0.000 F1-u scores. Non-BERT approaches such as Zero-shot LSTM and CDSSM, using dot products or cosine similarities, show improved but still limited unseen-intent predictability. By leveraging pretraining power, Zero-shot BERT can better associate unseen and seen intents with a higher F1 score, while the performance of CDSSM BERT, with its more complex structure, degrades due to overfitting. Finally, we find that on all three datasets (FSPS, MixATIS, MixSNIPS), the three models equipped with our label-aware attentive layer and strong pretrained encoders (ALBERT-LA, TOD-BERT-LA, LABAN) successfully outperform the baselines in predicting unseen labels by associating their relations with the input sequences, even though these intents are never seen during training.
+
+We also observe that ALBERT has relatively inferior performance among the BERT-based models, which possibly results from its being a lighter version of BERT with a pretraining objective different from the conversation-oriented TOD-BERT. Note that the original BERT model has a slightly better F1 score for seen intents. This is reasonable, since it avoids the error of predicting utterances with unseen labels by searching over only the seen intents. However, without sacrificing much, models with the label-aware attentive layer significantly boost the overall F1 scores on all three datasets.
+
+We then comprehensively evaluate LABAN's performance in the zero/few-shot setting with different seen/unseen intent ratios in Figure 2. We have four main findings. (1) LABAN can predict about half of the unseen intents correctly on average. (2) When the number of seen intents decreases, the F1 scores for both seen and unseen intent labels drop, as the model has poorer knowledge of the seen intents. (3) For utterances with both seen and unseen intents, the F1 score for seen intents is lower than for utterances with only seen intents; the fewer seen intents are trained on, the more the model is inclined to predict unseen intents. (4) In the few-shot setting, with only a little data for the unseen intents, both seen and unseen intent accuracy improve by a large margin, especially on MixSNIPS. This indicates that even with scarce training data for some unseen labels, LABAN can fully exploit pretrained linguistic knowledge of label semantics to match the most relevant intents.
+
+
+Figure 2: Zero-shot/few-shot results of LABAN on the FSPS, MixATIS, and MixSNIPS datasets with varying numbers of seen labels during training. FSPS, MixATIS, and MixSNIPS have 24, 17, and 7 intents in total. F1-a, F1-s, F1-u are F1 scores evaluated on data with all/seen/unseen intent labels.
+
+
+| Model | MixATIS F1 | MixATIS Acc | MixSNIPS F1 | MixSNIPS Acc | FSPS F1 | FSPS Acc | MDC F1 | MDC Acc | SGD F1 | SGD Acc |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| BERT-finetune | 0.952 | 0.879 | 0.982 | 0.954 | 0.938 | 0.901 | 0.897 | 0.814 | 0.949 | 0.926 |
+| BERT-attn | 0.963 | 0.893 | 0.984 | 0.961 | 0.942 | 0.903 | 0.897 | 0.816 | 0.950 | 0.927 |
+| LABAN | 0.958 | 0.889 | 0.985 | 0.963 | 0.948 | 0.913 | 0.898 | 0.814 | 0.950 | 0.928 |
+
+Table 5: Ablation analysis of the different components of LABAN for normal multi-intent detection on five datasets. We report accuracy (Acc) for exact match over all intents and F1 scores computed over individual intents.
+
+# 6.3 Ablation Analysis
+
+To better understand the effectiveness of LABAN's components in multi-intent detection, we conduct an ablation analysis by reporting two baseline variants of our model: BERT-finetune and BERT-attn. BERT-finetune uses the hidden state of the [CLS] head from BERT without the extra label-aware layer; BERT-attn adds a self-attentive layer to encode the sentence embeddings, also without the label-aware layer. Finally, LABAN refers to our full model: BERT with the self-attentive layer and the adaptive label-aware attentive layer.
+
+From the experimental results shown in Table 5, we first observe that BERT with the additional self-attentive layer improves performance on all five datasets, especially MixATIS and FSPS. When the number of total intents increases, the self-attentive layer helps in understanding each word's importance to the overall intent prediction. After introducing the label-aware layer, we see a further increase, especially on FSPS, which contains the largest number of intents (24). The layer helps LABAN better match the utterance against the different intent semantics, particularly when the intent options are more complicated. Although the increase seems subtle when labeled data are abundant, the layer provides substantial help in tackling unseen labels, without sacrificing much performance in the normal setting.
+
+# 6.4 Error Analysis
+
+We present a few cases in Table 6 to analyze some of LABAN's errors. For simplicity, we abbreviate each dataset as MixATIS: MA, MixSNIPS: MS, and FSPS: FS in the table.
+
+| MI ID | Sentence | Predicted labels | Real labels |
+| --- | --- | --- | --- |
+| MA1 | At the charlotte airport, how many different types of aircraft are there for US air and St. Paul to Kansas city friday night. | atis_quantity (7) | atis_aircraft (4), atis_airport (12) |
+| MS1 | Play the album Journeyman. | play_music (0) | search_creative_work (2) |
+| FS1 | Is traffic always heavy at this stretch of highway? | get_location (3), get_info_social (6) | unsupported_navigation (2) |
+| FS2 | How's the traffic ahead? | get_info_social (6) | get_info_social (6), get_location (3) |
+| ZS ID | Sentence | Predicted labels | Real labels |
+| MA2 | Show me the lowest priced fare from Dallas to Baltimore. | atis_airfare (16), atis_airport (9), atis_cheapest (14) | atis_airfare (16) |
+| MS2 | Play music from 2015 and then I am giving this current novel 1 out of 6 stars. | rate_book (4), search_screening_event (5), book_restaurant (6) | rate_book (4), play_music (3) |
+| FS3 | I want to be at my daughters by 8am what time should I leave? | get_location_home (4), get_estimated_arrival (2), get_directions (16), update_directions (19) | get_location_home (4), get_estimated_departure (15) |
+
+Table 6: Examples of multi-intent (MI) and zero-shot (ZS) prediction errors. Each example has an id referring to its dataset (MixATIS: MA, MixSNIPS: MS, FSPS: FS). An intent that appears in both the predicted and real labels is a correct prediction. The number after each intent is its corresponding label id.
+
+First, we find that some words in the utterances may confuse LABAN's predictions. For instance, in case MA1, LABAN may predict 'atis_quantity' based on the keyword 'how many' when comparing the sentence and label semantics. In case MS1, the keyword 'play' also induces the model to predict the intent 'play_music', whereas the utterance actually means to search for and play an album. In this sense, 'creative_work' may be less relevant to 'album' in our model's sentence-label pairing.
+
+For FSPS, we find that most errors occur when the real labels are 'unsupported_navigation', 'unsupported_event', or 'unsupported', as in case FS1. Without an external ontology, it is hard for the model to identify unsupported (out-of-scope) events. Therefore, in most cases the model simply identifies 'get_info_traffic' and 'get_location' as the closest intents. In case FS2, the model fails to predict 'get_location' correctly. Without including contexts, it may be hard for the model to associate 'ahead' with 'get_location'.
+
+Next, we show the errors in the zero-shot setting. Here, the model sees only 12/17 intents in MixATIS, 3/7 intents in MixSNIPS, and 14/24 intents in FSPS during training. We find two distinctive phenomena: (1) The model tends to predict more labels, as in case MA2, when it is uncertain about unseen intents, resulting in lower precision. (2) The model predicts seen intents well regardless of the existence of other unseen intents in the same sentence. For unseen-intent errors, the model tends to mis-categorize them into other unseen classes rather than seen classes, which indicates that the model has a basic knowledge of what the seen intents should be. The explicit semantic-pairing mechanism may be one reason, showing an ability to separate known and unknown classes confidently.
+
+In case MA2, 'atis_cheapest' and 'atis_airfare' are not seen in the training phase. However, the model is still capable of predicting 'atis_airfare' accurately. Moreover, the keyword 'lowest' is matched with the predicted label 'atis_cheapest', benefiting from our label-aware attentive layer. In case MS2, all of the predicted and real labels are unseen during training. The model still predicts 'rate_book' correctly based on the keyword 'stars'. The model also predicts 'search_screening_event' or 'search_creative_work' instead of 'play_music', which actually happens frequently in other predictions. In FSPS, as in case FS3, the model tends to predict many unseen intents without matching any of the true intents. In FS3, it has only seen the intent 'get_estimated_arrival' during training, which makes it erroneously map the sentence to 'arrival' rather than 'departure'. This effect could possibly be alleviated by introducing external knowledge embeddings linking the keyword 'leave' to 'departure', an association humans usually make.
+
+# 6.5 Visualization
+
+To better understand LABAN's classification results, we perform a t-SNE visualization (van der Maaten and Hinton, 2008), shown in Figure 3, of the projected embeddings $\hat{r}^u = \sum_{i=1}^{K} w_i r_i^l$ of each utterance on the intent subspace $\mathcal{T}$ . We also plot each intent embedding $r_i^l$ with its intent number. We observe that numerous clusters form with close semantic distances, and most intent embeddings, e.g., ids 0, 6, 9, 12, lie close to their respective clusters. This indicates that LABAN successfully constructs an intent embedding space
+
+
+Figure 3: t-SNE visualization of utterance embeddings with their intent labels (colors) on the FSPS test set. Each number $i$ indicates the location of intent embedding $r_i^{l}$ and its intent class.
+
+that illustrates the semantic relations among the intents and helps with the classification of a projected utterance embedding. Note that since some utterances have more than one intent, to simplify the graph we randomly pick one of the intents of these utterances for visualization. As a result, some clusters, such as id 8, actually have two dominant sub-clusters, and some utterances in the right sub-cluster have other intents such as ids 3, 4, 12, and 17; hence they may be semantically close to those intent embeddings (3, 4, 12, 17) on the graph.
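A sketch of how the projected embeddings could be computed before running t-SNE (our own reconstruction, assuming $\mathbf{w} = \sqrt{H}\,\mathbf{G}^{-1}\mathbf{R}\mathbf{z}$ with $\mathbf{G} = \mathbf{R}\mathbf{R}^{\top}$; the 2-D coordinates for the plot would then come from, e.g., scikit-learn's `TSNE.fit_transform`):

```python
import numpy as np

def project_onto_intent_subspace(Z, R):
    """Project each utterance embedding (row of Z, shape (n, H)) onto
    the intent subspace spanned by the intent embeddings (rows of R,
    shape (K, H)): r_hat = sum_i w_i r_i with w = sqrt(H) G^{-1} R z."""
    H = Z.shape[1]
    G = R @ R.T                                       # Gram matrix (K, K)
    W = np.sqrt(H) * np.linalg.solve(G, R @ Z.T).T    # weights (n, K)
    return W @ R                                      # projections (n, H)
```

With orthonormal intent embeddings this reduces to the orthogonal projection scaled by $\sqrt{H}$.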
+
+# 7 Conclusion
+
+In this paper, we propose extending fine-tuned BERT with label-aware semantic interactions for the multi-intent detection task in SLU. This provides a solution for the zero/few-shot setting, where new utterances carry unseen labels. By considering the label semantics, we can score how likely new utterances belong to these unseen intents. We compare the performance of our approach with previous methods and obtain significant improvements over the baselines. This suggests that constructing a label semantic space can help the model better distinguish seen and unseen intents in utterances. It also provides guidance for future work on improving zero-shot multi-intent detection in SLU by considering dialogue contexts and external knowledge, as well as the more challenging task of out-of-domain (OOD) detection, where unseen intents are not available in advance.
+
+# Acknowledgements
+
+We are grateful for the insightful comments from the anonymous reviewers and the computing resources supported from Department of Electrical and Computer Engineering at Georgia Institute of Technology.
+
+# Ethical Consideration and Impact
+
+This work aims to relax the limitation of the intent granularity defined in task-oriented dialogue training datasets, which is often ill-posed in the context of modeling precise and multiple intents in many previous works (Qin et al., 2019; Goo et al., 2018). Multi-intent detection could be applied to a wide range of applications in many industries where the scenario requires a broader understanding of user requests. For example, customer service automation often requires clear intent identification at each utterance for a flexible answer policy, where identifying only single intents may increase redundant and ambiguous dialogue turns. Second, zero-shot learning has long been studied to relax the requirement of deep learning models for large amounts of data. It could be applied to domains where intent labels are significantly lacking and labeling is time-consuming. By transferring knowledge from existing labels, the model becomes more robust in dealing with unseen labels, much as humans do when approaching new things, which is very beneficial in dialogue system design where much of the data is unlabeled.
+
+From an ethical perspective, the naturalness of the dialogue structure heavily defines the scope of intent detection and usually changes during dialogue state transitions. Capturing adequate intents from the user is critical in SLU and in downstream tasks such as dialogue state tracking. Misinterpreting intents may offend users and produce unsatisfactory answers. We should also avoid predicting sensitive labels concerning user privacy. For this reason, we test our model only on publicly released datasets, which have been widely vetted as unbiased across multiple domains and do not reveal specific user information.
+
+Overall, we see great opportunities for research applying LABAN to investigate the interactions between utterances and their latent intents. It gives good intuition about how the model understands the underlying human acts and improves transparency in decision-critical applications. To mitigate the risks associated with our model, we aim to anonymize sensitive user information in training data and focus on extracting domain-agnostic knowledge for better generalization and interpretability.
+
+# References
+
+Leonard Abbeduto. 1983. Linguistic communication and speech acts. Kent Bach, Robert M. Harnish. Cambridge: M.I.T. Press, 1979, pp. xvii + 327. Applied Psycholinguistics, 4(4):397-407.
+Zeynep Akata, Florent Perronnin, Zaid Harchaoui, and Cordelia Schmid. 2016. Label-embedding for image classification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(7):1425-1438.
+Ziad Al-Halah, Makarand Tapaswi, and Rainer Stiefelhagen. 2016. Recovering the missing link: Predicting class-attribute associations for unsupervised zero-shot learning. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pages 5975-5984. IEEE Computer Society.
+Yun-Nung Chen, Dilek Hakkani-Tür, and Xiaodong He. 2016. Zero-shot learning of intent embeddings for expansion by convolutional deep structured semantic models. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2016, Shanghai, China, March 20-25, 2016, pages 6045-6049. IEEE.
+Alice Coucke, Alaa Saade, Adrien Ball, Théodore Bluche, Alexandre Caulier, David Leroy, Clément Doumouro, Thibault Gisselbrecht, Francesco Caltagirone, Thibaut Lavril, Maël Primet, and Joseph Dureau. 2018. Snips voice platform: an embedded spoken language understanding system for private-by-design voice interfaces.
+Guido E. del Pino and Hector Galaz. 1995. Statistical applications of the inverse gram matrix: A revisitation. Brazilian Journal of Probability and Statistics, 9(2):177-196.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
+
+Haihong E, Peiqing Niu, Zhongfu Chen, and Meina Song. 2019. A novel bi-directional interrelated model for joint intent detection and slot filling. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5467-5471, Florence, Italy. Association for Computational Linguistics.
+Andrea Frome, Gregory S. Corrado, Jonathon Shlens, Samy Bengio, Jeffrey Dean, Marc'Aurelio Ranzato, and Tomás Mikolov. 2013. Devise: A deep visual-semantic embedding model. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States, pages 2121-2129.
+Rashmi Gangadharaiah and Balakrishnan Narayanaswamy. 2019. Joint multiple intent detection and slot labeling for goal-oriented dialog. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 564-569, Minneapolis, Minnesota. Association for Computational Linguistics.
+Jianfeng Gao, Michel Galley, and Lihong Li. 2018. Neural approaches to conversational AI. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, SIGIR 2018, Ann Arbor, MI, USA, July 08-12, 2018, pages 1371-1374. ACM.
+Chih-Wen Goo, Guang Gao, Yun-Kai Hsu, Chih-Li Huo, Tsung-Chieh Chen, Keng-Wei Hsu, and Yun-Nung Chen. 2018. Slot-gated modeling for joint slot filling and intent prediction. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 753–757, New Orleans, Louisiana. Association for Computational Linguistics.
+Sonal Gupta, Rushin Shah, Mrinal Mohit, Anuj Kumar, and Mike Lewis. 2018. Semantic parsing for task oriented dialog using hierarchical representations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2787-2792, Brussels, Belgium. Association for Computational Linguistics.
+Charles T. Hemphill, John J. Godfrey, and George R. Doddington. 1990. The ATIS spoken language systems pilot corpus. In Speech and Natural Language: Proceedings of a Workshop Held at Hidden Valley, Pennsylvania, June 24-27, 1990.
+Anjishnu Kumar, Pavankumar Reddy Muddireddy, Markus Dreyer, and Bjorn Hoffmeister. 2017. Zero-shot learning across heterogeneous overlapping domains. In INTERSPEECH.
+Christoph H Lampert. 2014. Attribute-based classification for zero-shot visual object categorization.
+
+Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: A lite BERT for self-supervised learning of language representations. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
+Stefan Larson, Anish Mahendran, Joseph J. Peper, Christopher Clarke, Andrew Lee, Parker Hill, Jonathan K. Kummerfeld, Kevin Leach, Michael A. Laurenzano, Lingjia Tang, and Jason Mars. 2019. An evaluation dataset for intent classification and out-of-scope prediction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1311-1316, Hong Kong, China. Association for Computational Linguistics.
+Changliang Li, Liang Li, and Ji Qi. 2018a. A self-attentive model with gate mechanism for spoken language understanding. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3824–3833, Brussels, Belgium. Association for Computational Linguistics.
+Xiujun Li, Sarah Panda, Jingjing Liu, and Jianfeng Gao. 2018b. Microsoft dialogue challenge: Building end-to-end task-completion dialogue systems. arXiv preprint arXiv:1807.11125.
+Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A structured self-attentive sentence embedding. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.
+Han Liu, Xiaotong Zhang, Lu Fan, Xuandi Fu, Qimai Li, Xiao-Ming Wu, and Albert Y.S. Lam. 2019a. Reconstructing capsule networks for zero-shot intent classification. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4799-4809, Hong Kong, China. Association for Computational Linguistics.
+Ting Liu, Xiao Ding, Yue Qian, and Yiheng Chen. 2017. Identification method of user's travel consumption intention in chatting robot. SCIENTIA SINICA Informationis, 47:997.
+Yijin Liu, Fandong Meng, Jinchao Zhang, Jie Zhou, Yufeng Chen, and Jinan Xu. 2019b. CM-net: A novel collaborative memory network for spoken language understanding. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1051-1060, Hong Kong, China. Association for Computational Linguistics.
+
+Mohammad Norouzi, Tomás Mikolov, Samy Bengio, Yoram Singer, Jonathon Shlens, Andrea Frome, Greg Corrado, and Jeffrey Dean. 2014. Zero-shot learning by convex combination of semantic embeddings. In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings.
+Mark Palatucci, Dean Pomerleau, Geoffrey E. Hinton, and Tom M. Mitchell. 2009. Zero-shot learning with semantic output codes. In Advances in Neural Information Processing Systems 22: 23rd Annual Conference on Neural Information Processing Systems 2009. Proceedings of a meeting held 7-10 December 2009, Vancouver, British Columbia, Canada, pages 1410-1418. Curran Associates, Inc.
+Yue Qian. 2017. Research on the identification method of users' travel consumption intent in chat robot.
+Libo Qin, Wanxiang Che, Yangming Li, Haoyang Wen, and Ting Liu. 2019. A stack-propagation framework with token-level intent detection for spoken language understanding. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2078-2087, Hong Kong, China. Association for Computational Linguistics.
+Libo Qin, Xiao Xu, Wanxiang Che, and Ting Liu. 2020. AGIF: An adaptive graph-interactive framework for joint multiple intent detection and slot filling. In *Findings of the Association for Computational Linguistics: EMNLP* 2020, pages 1807–1816, Online. Association for Computational Linguistics.
+Abhinav Rastogi, Xiaoxue Zang, Srinivas Sunkara, Raghav Gupta, and Pranav Khaitan. 2019. Towards scalable multi-domain conversational agents: The schema-guided dialogue dataset. arXiv preprint arXiv:1909.05855.
+Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using Siamese BERTnetworks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982-3992, Hong Kong, China. Association for Computational Linguistics.
+B. Rychalska, H. Glabska, and A. Wroblewska. 2018. Multi-intent hierarchical natural language understanding for chatbots. In 2018 Fifth International Conference on Social Networks Analysis, Management and Security (SNAMS), pages 256-259.
+Qingyi Si, Yuanxin Liu, Peng Fu, Zheng Lin, Jiangnan Li, and Weiping Wang. 2021. Learning class-transductive intent representations for zero-shot intent detection. In IJCAI.
+Jake Snell, Kevin Swersky, and Richard S. Zemel. 2017. Prototypical networks for few-shot learning.
+
+In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 4077-4087.
+Chi Sun, Xipeng Qiu, Yige Xu, and Xuanjing Huang. 2020. How to fine-tune bert for text classification?
+Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(86):2579-2605.
+Chien-Sheng Wu, Steven C.H. Hoi, Richard Socher, and Caiming Xiong. 2020. TOD-BERT: Pre-trained natural language understanding for task-oriented dialogue. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 917-929, Online. Association for Computational Linguistics.
+Ting-Wei Wu, Ruolin Su, and Biing-Hwang Juang. 2021. A Context-Aware Hierarchical BERT Fusion Network for Multi-Turn Dialog Act Detection. In Proc. Interspeech 2021, pages 1239-1243.
+Congying Xia, Caiming Xiong, Philip Yu, and Richard Socher. 2020. Composed variational natural language generation for few-shot intents. In *Findings of the Association for Computational Linguistics: EMNLP* 2020, pages 3379-3388, Online. Association for Computational Linguistics.
+Congying Xia, Chenwei Zhang, Xiaohui Yan, Yi Chang, and Philip Yu. 2018. Zero-shot user intent detection via capsule neural networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3090-3099, Brussels, Belgium. Association for Computational Linguistics.
+Yongqin Xian, Bernt Schiele, and Zeynep Akata. 2017. Zero-shot learning - the good, the bad and the ugly. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pages 3077-3086. IEEE Computer Society.
+Ziming Zhang and Venkatesh Saligrama. 2015. Zero-shot learning via semantic similarity embedding. In 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015, pages 4166-4174. IEEE Computer Society.
+
+# A Appendix
+
+# A.1 Linear Approximation in a Hilbert Space
+
+Let $S$ be a Hilbert space with inner product $\langle \cdot ,\cdot \rangle$ and induced norm $||\cdot ||$ , and let $\mathcal{T}$ be a subspace of $S$ . Given a vector $z\in S$ , we would like to find the closest point $\hat{z}\in \mathcal{T}$ to $z$ . Namely, we would like to solve the following optimization problem:
+
+$$
+\min _ {x \in \mathcal {T}} | | z - x | | \tag {8}
+$$
+
+Given an arbitrary $z \in S$ , we know there exists exactly one point $\hat{z} \in \mathcal{T}$ that obeys
+
+$$
+z - \hat {z} \perp \mathcal {T} \tag {9}
+$$
+
+meaning $\langle z - \hat{z},y\rangle = 0$ for all $y\in \mathcal{T}$ , and this point $\hat{z}$ is the unique minimizer of Equation 8. We can further construct $\hat{z}$ as follows:
+
+$$
+\hat {z} = \sum_ {k = 1} ^ {N} \beta_ {k} v _ {k} \tag {10}
+$$
+
+where $N$ is the dimension of $\mathcal{T}$ and $v_{1},\ldots ,v_{N}$ is a basis for $\mathcal{T}$ . Our problem then reduces to finding the coefficients $\beta_{1},\dots,\beta_{N}\in \mathbb{C}$ .
+
+From Equation 9, we know $\langle z - \hat{z}, v_n \rangle = 0$ for $n = 1, \dots, N$ . By plugging in Equation 10, the $\beta_n$ must obey $\langle z - \sum_{k=1}^{N} \beta_k v_k, v_n \rangle = 0$ for $n = 1, \dots, N$ . We can then obtain the following equation:
+
+$$
+\langle z, v _ {n} \rangle = \sum_ {k = 1} ^ {N} \beta_ {k} \langle v _ {k}, v _ {n} \rangle \tag {11}
+$$
+
+Since $z$ and the $\{v_{n}\}$ are given, we know both the $\langle z,v_n\rangle$ and $\langle v_k,v_n\rangle$ . We can write down the matrix form:
+
+$$
+\mathbf {G} \boldsymbol {\beta} = \mathbf {b} \tag {12}
+$$
+
+where $\boldsymbol{\beta} \in \mathbb{C}^N$ , $b_{n} = \langle z,v_{n}\rangle$ , and $G_{k,n} = \langle v_n,v_k\rangle$ , or, written out in full:
+
+$$
+\mathbf {G} = \left[ \begin{array}{c c c} \langle v _ {1}, v _ {1} \rangle & \dots & \langle v _ {N}, v _ {1} \rangle \\ \vdots & \ddots & \vdots \\ \langle v _ {1}, v _ {N} \rangle & \dots & \langle v _ {N}, v _ {N} \rangle \end{array} \right] \tag {13}
+$$
+
+$$
+\mathbf {b} = \left[ \begin{array}{c} \langle z, v _ {1} \rangle \\ \vdots \\ \langle z, v _ {N} \rangle \end{array} \right] \tag {14}
+$$
+
+We can then solve the problem by computing $\boldsymbol{\beta} = \mathbf{G}^{-1}\mathbf{b}$ , where $\mathbf{G}$ is guaranteed to be invertible since $\{v_{n}\}$ is linearly independent.
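The derivation can be checked numerically. The sketch below (a minimal NumPy illustration, not taken from the paper) uses $S = \mathbb{R}^5$ with the standard inner product, builds the Gram system of Equation 12 for a 3-dimensional subspace, and verifies the orthogonality condition of Equation 9:

```python
import numpy as np

rng = np.random.default_rng(0)
d, N = 5, 3
V = rng.standard_normal((d, N))   # columns v_1, ..., v_N: a basis for the subspace T
z = rng.standard_normal(d)

G = V.T @ V                       # Gram matrix: G[n, k] = <v_k, v_n> (Eq. 13)
b = V.T @ z                       # b[n] = <z, v_n> (Eq. 14)
beta = np.linalg.solve(G, b)      # solve G beta = b; G is invertible for a basis
z_hat = V @ beta                  # closest point in T to z (Eq. 10)

# Eq. 9: the residual z - z_hat is orthogonal to every basis vector of T
assert np.allclose(V.T @ (z - z_hat), 0.0)
```

For a well-conditioned basis, `np.linalg.solve` is preferable to forming $\mathbf{G}^{-1}$ explicitly.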
+
+# A.2 Dataset
+
+Here are more detailed descriptions of the datasets we used:
+
+MixATIS (Qin et al., 2020; Hemphill et al., 1990) The ATIS (Airline Travel Information System) dataset is a standard benchmark dataset in the airline domain, widely used for intent classification. MixATIS is synthesized from ATIS by concatenating single-intent utterances with the conjunction word 'and'.
+
+MixSNIPS (Qin et al., 2020; Coucke et al., 2018) The MixSNIPS dataset is collected from the SNIPS personal voice assistant; the ratio of sentences with 1–3 intents is [0.3, 0.5, 0.2]. It likewise concatenates SNIPS utterances with the conjunction word 'and'.
+
+FSPS (Gupta et al., 2018) The Facebook Semantic Parsing System (FSPS) dataset is a large dataset of 44k requests annotated with a hierarchical semantic representation for task-oriented dialog systems. Intents are prefixed with 'IN:' and slots with 'SL:'. Each utterance may contain one or more embedded intents and slots.
+
+MDC (Li et al., 2018b) The Microsoft Dialogue Challenge (MDC) dataset is a well-annotated dataset for three task-completion domains: movie-ticket booking, restaurant reservation, and taxi ordering. It was first released for an SLT 2018 special session and contains dialogue-act and slot annotations for each utterance.
+
+SGD (Rastogi et al., 2019) The Schema-Guided Dialogue (SGD) dataset is a large dialogue dataset with over 20k annotated multi-domain, task-oriented conversations between a human and a virtual assistant. These conversations involve interactions with services and APIs spanning 20 domains. It can be used for intent prediction, slot filling, dialogue state tracking, policy imitation learning, or language generation.
\ No newline at end of file
diff --git a/alabelawarebertattentionnetworkforzeroshotmultiintentdetectioninspokenlanguageunderstanding/images.zip b/alabelawarebertattentionnetworkforzeroshotmultiintentdetectioninspokenlanguageunderstanding/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..370f57ca372130eaa27581280d2a53d8b474a28a
--- /dev/null
+++ b/alabelawarebertattentionnetworkforzeroshotmultiintentdetectioninspokenlanguageunderstanding/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0f9a06704c4155568de4b5e17e8058c88f11e5ccea8944b4e290dbf39fdea512
+size 594931
diff --git a/alabelawarebertattentionnetworkforzeroshotmultiintentdetectioninspokenlanguageunderstanding/layout.json b/alabelawarebertattentionnetworkforzeroshotmultiintentdetectioninspokenlanguageunderstanding/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..b7717b8dbb968b0200f6562aca7b6d88103f626e
--- /dev/null
+++ b/alabelawarebertattentionnetworkforzeroshotmultiintentdetectioninspokenlanguageunderstanding/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e99b118fb53960a76d52c5d6bdcf7e32e691bb4671f5812c914a36b29363bfff
+size 472556
diff --git a/alanguagemodelbasedgenerativeclassifierforsentenceleveldiscourseparsing/e283716d-d1d4-4c2d-9557-d867a77a70e8_content_list.json b/alanguagemodelbasedgenerativeclassifierforsentenceleveldiscourseparsing/e283716d-d1d4-4c2d-9557-d867a77a70e8_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..97dd14293ca287d236283ff305ccbe1b47b0fb2f
--- /dev/null
+++ b/alanguagemodelbasedgenerativeclassifierforsentenceleveldiscourseparsing/e283716d-d1d4-4c2d-9557-d867a77a70e8_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fb6ff622ce857a7d3dbb87bb1e791ec5f0bb6776241f0edde2deb34952a37f95
+size 106728
diff --git a/alanguagemodelbasedgenerativeclassifierforsentenceleveldiscourseparsing/e283716d-d1d4-4c2d-9557-d867a77a70e8_model.json b/alanguagemodelbasedgenerativeclassifierforsentenceleveldiscourseparsing/e283716d-d1d4-4c2d-9557-d867a77a70e8_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..27680d1faa72236b7448371d71ec73421f559f98
--- /dev/null
+++ b/alanguagemodelbasedgenerativeclassifierforsentenceleveldiscourseparsing/e283716d-d1d4-4c2d-9557-d867a77a70e8_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:91028adfdae9c7917e783e51767044d77fef46096a0b0d6903175b5a97fa9af3
+size 125681
diff --git a/alanguagemodelbasedgenerativeclassifierforsentenceleveldiscourseparsing/e283716d-d1d4-4c2d-9557-d867a77a70e8_origin.pdf b/alanguagemodelbasedgenerativeclassifierforsentenceleveldiscourseparsing/e283716d-d1d4-4c2d-9557-d867a77a70e8_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..97c94fd5fd8f8d960ec66c42143db7a4115b2c91
--- /dev/null
+++ b/alanguagemodelbasedgenerativeclassifierforsentenceleveldiscourseparsing/e283716d-d1d4-4c2d-9557-d867a77a70e8_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8494a4047ebb021740ab3bf5eb20cbac8e9539b716b66375b6d01a3831340c4f
+size 536720
diff --git a/alanguagemodelbasedgenerativeclassifierforsentenceleveldiscourseparsing/full.md b/alanguagemodelbasedgenerativeclassifierforsentenceleveldiscourseparsing/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..9b313c4bb796cf63a93bf345d35ef84c847f3c37
--- /dev/null
+++ b/alanguagemodelbasedgenerativeclassifierforsentenceleveldiscourseparsing/full.md
@@ -0,0 +1,434 @@
+# A Language Model-based Generative Classifier for Sentence-level Discourse Parsing
+
+Ying Zhang, Hidetaka Kamigaito and Manabu Okumura
+
+Tokyo Institute of Technology
+
+{zhang, kamigaito, oku}@lr.pi.titech.ac.jp
+
+# Abstract
+
+Discourse segmentation and sentence-level discourse parsing play important roles in various NLP tasks that need to consider textual coherence. Despite recent achievements in both tasks, there is still room for improvement due to the scarcity of labeled data. To address this problem, we propose a language model-based generative classifier (LMGC) that uses more information from labels by treating the labels as an input, while enhancing label representations by embedding descriptions for each label. Moreover, since this prepares representations for labels that were unseen in the pre-training step, LMGC can effectively exploit a pre-trained language model. Experimental results on the RST-DT dataset show that our LMGC achieved the state-of-the-art $\mathrm{F}_1$ score of 96.72 in discourse segmentation. It further achieved the state-of-the-art relation $\mathrm{F}_1$ scores of 84.69 with gold EDU boundaries and 81.18 with automatically segmented boundaries, respectively, in sentence-level discourse parsing.
+
+# 1 Introduction
+
+Textual coherence is essential for writing a natural language text that is comprehensible to readers. To recognize the coherent structure of a natural language text, Rhetorical Structure Theory (RST) is applied to describe an internal discourse structure for the text as a constituent tree (Mann and Thompson, 1988). A discourse tree in RST consists of elementary discourse units (EDUs), spans that describe recursive connections between EDUs, and nuclearity and relation labels that describe relationships for each connection.
+
+Figure 1 (a) shows an example RST discourse tree. A span including one or more EDUs is a node of the tree. Given two adjacent non-overlapping spans, their nuclearity can be either nucleus or satellite, denoted by N and S, where the nucleus represents a more salient or essential piece of information than the satellite. Furthermore, a relation
+
+
+Figure 1: An example discourse tree structure: (a) the discourse tree and (b) the linearized discourse tree with nuclearity and relation labels, e.g., (Attribution (Elaboration (N We've got a lot )N (S to do, )S )Elaboration (S she acknowledged . )S )Attribution.
+
+label, such as Attribution and Elaboration, is used to describe the relation between the given spans (Mann and Thompson, 1988; Carlson and Marcu, 2001). To build such trees, RST parsing consists of discourse segmentation, a task to detect EDU boundaries in a given text, and discourse parsing, a task to link spans for detected EDUs.
+
+In this paper, we focus on discourse segmentation and sentence-level discourse parsing, which are indispensable in RST parsing (Joty et al., 2013; Feng and Hirst, 2014a; Joty et al., 2015; Wang et al., 2017; Kobayashi et al., 2020) and are applicable to many downstream tasks, such as machine translation (Guzmán et al., 2014; Joty et al., 2017) and sentence compression (Sporleder and Lapata, 2005).
+
+In discourse segmentation, Carlson et al. (2001) proposed a method for using lexical information and syntactic parsing results. Many researchers (Fisher and Roark, 2007; Xuan Bach et al., 2012; Feng and Hirst, 2014b) utilized these clues as features in a classifier although automatic parsing errors degraded segmentation performance. To avoid this problem, Wang et al. (2018b) used BiLSTM-CRF (Huang et al., 2015) to handle an input without these clues in an end-to-end manner. Lin et al. (2019) jointly performed discourse segmentation and sentence-level discourse parsing in their pointer-network-based model. They also introduced multi-task learning for both tasks and reported the state-of-the-art results for discourse segmentation and sentence-level discourse parsing in terms of $\mathrm{F}_1$ scores. Despite these achievements, there is still room for improvement for both tasks due to the scarcity of labeled data. It is important to extract more potential information from the current dataset for further performance improvement.
+
+Under this motivation, in this research, we propose a language model-based generative classifier (LMGC) as a reranker for both discourse segmentation and sentence-level discourse parsing. LMGC can jointly predict text and label probabilities by treating a text and labels as a single sequence, as in Figure 1 (b). Therefore, different from conventional methods, LMGC can use more information from labels by treating them as an input. Furthermore, LMGC can enhance label representations by embedding the descriptions of each label defined in the annotation manual (Carlson and Marcu, 2001), which allows us to use a pre-trained language model such as MPNet (Song et al., 2020) effectively, since the representations for labels that were unseen in the pre-training step are already available.
+
+Experimental results on the RST-DT dataset (Carlson et al., 2002) show that LMGC can achieve the state-of-the-art scores in both discourse segmentation and sentence-level discourse parsing. LMGC utilizing our enhanced label embeddings achieves the best $\mathrm{F}_1$ score of 96.72 in discourse segmentation. Furthermore, in sentence-level discourse parsing, LMGC utilizing our enhanced relation label embeddings achieves the best relation $\mathrm{F}_1$ scores of 84.69 with gold EDU boundaries and 81.18 with automatically segmented boundaries, respectively.
+
+# 2 Related Work
+
+Discourse segmentation is a fundamental task for building an RST discourse tree from a text. Carlson et al. (2001) proposed a method for using lexical information and syntactic parsing results for detecting EDU boundaries in a sentence. Fisher and Roark (2007); Xuan Bach et al. (2012); Feng and Hirst (2014b) utilized these clues as features in a classifier, while Wang et al. (2018b) utilized BiLSTM-CRF (Huang et al., 2015) in an end-to-end manner to avoid performance degradation caused by syntactic parsing errors.
+
+Sentence-level discourse parsing is also an important task for parsing an RST discourse tree, as used in many RST parsers (Joty et al., 2013; Feng and Hirst, 2014a; Joty et al., 2015; Wang et al., 2017; Kobayashi et al., 2020). Recently, Lin et al. (2019) tried to jointly perform discourse segmentation and sentence-level discourse parsing with pointer-networks and achieved the state-of-the-art $\mathrm{F}_1$ scores in both discourse segmentation and sentence-level discourse parsing.
+
+Despite the performance improvements of these models, the restricted number of labeled RST discourse trees is still a problem. In the discourse segmentation and parsing tasks, most prior work is based on discriminative models, which learn a mapping from input texts to predicted labels. Thus, there still remains room for improving model performance by considering the mapping from predictable labels to input texts to exploit more label information. To consider such information in a model, Mabona et al. (2019) introduced a generative model-based parser, RNNG (Dyer et al., 2016), to document-level RST discourse parsing. Different from our LMGC, this model unidirectionally predicts action sequences.
+
+In this research, we model LMGC for the discourse segmentation and sentence-level discourse parsing tasks. LMGC utilizes a BERT-style bidirectional Transformer encoder (Devlin et al., 2019) to avoid the prediction bias caused by using different decoding directions. Since LMGC is based on generative models, it can jointly consider an input text and its predictable labels, and map the embeddings of both input tokens and labels onto the same space. Due to this characteristic, LMGC can effectively use the label information by constructing label embeddings from the description of a label definition (Carlson and Marcu, 2001). Furthermore, recent strong pre-trained models such as MPNet (Song et al., 2020) are available for the input tokens in LMGC.
+
+# 3 Base Models
+
+Our LMGC reranks the results from a conventional discourse segmenter and parser, which can be constructed as discriminative models. In this section, we explain these base models and introduce our mathematical notations.
+
+# 3.1 Discourse Segmenter
+
+Figure 2: Overview of our Language Model-based Generative Classifier (LMGC).
+
+In discourse segmentation, given an input text $\pmb{x} = \{x_{1},\dots ,x_{n}\}$ , where $x_{i}$ is a word, a segmenter detects EDUs $\pmb {e} = \{e_1,\dots ,e_m\}$ from $\pmb{x}$ . Since there is no overlap or gap between EDUs, discourse segmentation can be considered as a kind of sequential labeling task, which assigns labels $\pmb{l} = \{l_1,\dots ,l_n\}$ , where each $l_{i}\in \{0,1\}$ indicates whether the word is the start of an EDU or not. By using a discriminative model, such as BiLSTM-CRF (Wang et al., 2018b) and pointer-networks (Lin et al., 2019), the probability of predicting EDUs from $\pmb{x}$ can be $P(\pmb{l}|\pmb{x})$ or $P(\pmb{e}|\pmb{x})$ . Because of its simple structure and extensibility, we choose BiLSTM-CRF as our base model for discourse segmentation. In BiLSTM-CRF, $P(\pmb{l}|\pmb{x})$ is formulated as follows:
+
+$$
+P (\boldsymbol {l} | \boldsymbol {x}) = \frac {\prod_ {t = 1} ^ {n} \psi_ {t} \left(l _ {t} , l _ {t - 1} , h\right)}{\sum_ {l ^ {\prime} \in Y} \prod_ {t = 1} ^ {n} \psi_ {t} \left(l _ {t} ^ {\prime} , l _ {t - 1} ^ {\prime} , h\right)}, \tag {1}
+$$
+
+where $\psi_t(l_t, l_{t-1}, h) = \exp(W^T h_t + b)$ is the potential function, $h_t$ is the hidden state at time step $t$ , $W$ is a weight matrix, $b$ is a bias term, and $Y$ is the set of possible label sequences.
+
+We inherit top- $k$ Viterbi results of BiLSTM-CRF, scored by Eq.(1), to our LMGC, as described in Section 4.
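Eq. (1) can be made concrete with a toy linear-chain CRF. The sketch below is an illustrative stand-in, not the authors' implementation: `emit` plays the role of the per-step scores $W^T h_t + b$, `trans` is an assumed label-transition potential, and the normalizer in the denominator is computed exactly by enumerating every label sequence (only feasible for tiny inputs; real models use the forward algorithm):

```python
import itertools
import numpy as np

def crf_log_prob(emit, trans, labels):
    """Log P(l|x) for a linear-chain CRF (Eq. 1) by brute-force enumeration.

    emit:   (n, L) per-step scores for each of L labels
    trans:  (L, L) transition scores between adjacent labels
    labels: length-n label sequence (ints in range(L))
    """
    n, L = emit.shape

    def score(seq):
        s = sum(emit[t, seq[t]] for t in range(n))
        s += sum(trans[seq[t - 1], seq[t]] for t in range(1, n))
        return s

    # log of the partition function: sum over all L**n label sequences
    log_Z = np.logaddexp.reduce(
        [score(seq) for seq in itertools.product(range(L), repeat=n)])
    return score(labels) - log_Z

rng = np.random.default_rng(1)
emit, trans = rng.standard_normal((4, 2)), rng.standard_normal((2, 2))
# the probabilities over all 2**4 label sequences sum to one
total = sum(np.exp(crf_log_prob(emit, trans, seq))
            for seq in itertools.product(range(2), repeat=4))
assert abs(total - 1.0) < 1e-9
```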
+
+# 3.2 Discourse Parser
+
+In discourse parsing, given an input text $\pmb{x}$ and its EDUs $e$ , we can build a binary tree $p = \{p_1, \dots, p_{2n-1}\}$ , where each node $p_i \in p$ has three kinds of labels: span $s_i$ , nuclearity $u_i$ , and relation $r_i$ . The sequences of span $s$ and nuclearity $u$ can be predicted simultaneously, as in 2-stage Parser (Wang et al., 2017), or span $s$ can be predicted in advance for labeling nuclearity $u$ and relation $r$ , as in pointer-networks (Lin et al., 2019) and span-based Parser (Kobayashi et al., 2020). Because of its better performance, we choose 2-stage Parser as our base model for sentence-level discourse parsing. 2-stage Parser extracts several features and performs classification with SVMs in two stages. In the first stage, it identifies the span and nuclearity simultaneously to construct a tree based on the transition-based system with four types of actions: Shift, Reduce-NN, Reduce-NS, and Reduce-SN. In the second stage, for a given node $p_i$ , $r_i$ is predicted as the relation between the left and right children nodes of $p_i$ by using features extracted from $p_i$ and its children nodes. In spite of its limited features, it achieves the best results compared with pointer-networks and span-based Parser. Since 2-stage Parser utilizes SVMs, we normalize the action scores and inherit the top- $k$ beam search results of 2-stage Parser for LMGC to perform discourse parsing.
+
+# 4 Language Model-based Generative Classifier (LMGC)
+
+In this section, we introduce our generative classifier, LMGC, which utilizes a masked and permuted language model to compute sequence probabilities in both the discourse segmentation and sentence-level discourse parsing tasks. More specifically, as we mention in Section 5, we can utilize our LMGC in three tasks: (a) discourse segmentation, (b) sentence-level discourse parsing with gold segmentation, and (c) sentence-level discourse parsing with automatic segmentation. Figure 2 shows an overview of our LMGC for the whole of task (c). As shown in the figure, the prediction process in LMGC is as follows. We assume that, in task (c), discourse segmentation and sentence-level discourse parsing are performed in a pipeline manner with models trained for tasks (a) and (b).
+
+1. Predict top- $k_{s}$ EDU segmentations $\{e_1,\dots ,e_{k_s}\}$ from a given sentence $\pmb{x}$ with the base discourse segmenter, described in Section 3.1.
+2. Compute joint probability $P(\pmb {x},\pmb {e}_i)$ and select the best segmentation $\pmb{e}$ from $\{\pmb {e}_1,\dots ,\pmb{e}_{k_s}\}$ with a language model, as we describe below.
+3. Parse and rank top- $k_{p}$ trees $\{\pmb{p}_{1},\dots ,\pmb{p}_{k_{p}}\}$ from $x$ and best segmentation $\pmb{e}$ with the base discourse parser, described in Section 3.2.
+4. Compute joint probability $P(\pmb{x}, \pmb{e}, \pmb{p}_j)$ to select the best tree from $\{\pmb{p}_1, \dots, \pmb{p}_{k_p}\}$ with a language model, as we describe below.
+
+In task (a), we apply Step 2 to predict the best segmentation after Step 1. In task (b), we skip Steps 1 and 2, and apply just Steps 3 and 4 for gold segmentation to yield the best parse tree.
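Steps 1–4 above amount to a small rerank-then-parse loop. The following schematic sketch is not a real API: `segmenter_topk`, `parser_topk`, and `lm_score` are placeholders for the base models of Section 3 and the MPNet-based scorer of Section 4.2:

```python
def rerank(candidates, score):
    """Steps 2 and 4: keep the candidate with the highest LM score."""
    return max(candidates, key=score)

def pipeline(x, segmenter_topk, parser_topk, lm_score, k_s=5, k_p=5):
    """Task (c): segmentation and parsing chained in a pipeline."""
    segs = segmenter_topk(x, k_s)                       # Step 1: top-k_s segmentations
    e = rerank(segs, lambda ei: lm_score((x, ei)))      # Step 2: best e by P(x, e_i)
    trees = parser_topk(x, e, k_p)                      # Step 3: top-k_p trees for e
    p = rerank(trees, lambda pj: lm_score((x, e, pj)))  # Step 4: best p by P(x, e, p_j)
    return e, p

# toy stand-ins for the base models; this "LM" just prefers the longer joint sequence
seg_topk = lambda x, k: [("ab", "c"), ("a", "b", "c")]
par_topk = lambda x, e, k: [("NN",), ("NS", "NN")]
toy_score = lambda seq: len(str(seq))
best_e, best_p = pipeline("a b c", seg_topk, par_topk, toy_score)
```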
+
+# 4.1 Tree Representations
+
+To calculate joint probabilities for a discourse tree with a language model, we need to represent a tree as a linear form, like Figure 1 (b). Since there are several predictable label sets in discourse segmentation and parsing tasks, as shown in Figure 3, we prepare linearized forms for each label set.
+
+In discourse segmentation, we can consider joint probability $P(\pmb{x}, \pmb{e})$ for a sequence by inserting a symbol, [EDU], at each EDU boundary (Figure 3 (a)). In discourse parsing, a discourse tree is represented as a sequence with several kinds of label sets: span labels $s$ , nuclearity labels $u$ including span labels, and relation labels $r$ including span and nuclearity labels (Figures 3 (b)-(d)). To investigate the effectiveness of each label set in the reranking step, we consider $P(\pmb{x}, \pmb{e}, s)$ , $P(\pmb{x}, \pmb{e}, \pmb{u})$ , and $P(\pmb{x}, \pmb{e}, r)$ for each label set to represent $P(\pmb{x}, \pmb{e}, \pmb{p})$ in this paper. To build a sequence, we combine each label in a tree with brackets to imply the boundary for the label. For example, "(N" and ")N" stand for the start and end of a nucleus EDU. For a node $p_i$ of the tree, $r_i$ describes the relation between its children nodes, leading to $r_i$ of leaf nodes being "Null". When the child nodes of $p_i$ are nucleus and satellite, we assign label "Span" to the nucleus child node of $p_i$ and label $r_i$ to the satellite child node of $p_i$ , respectively. When the child nodes of $p_i$ are both nuclei, we assign label $r_i$ to both child nodes of $p_i$ .
+
+For simpler illustration, in Figure 1 (b), we show the linearized discourse tree only with nuclearity and relation labels, since the nuclearity labels can also show span and EDU boundary labels. "Null" labels for leaf nodes are also omitted in the figure.
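A linearization with nuclearity labels only, as in Figure 3 (c), is a short recursion over the binary tree. This sketch assumes an invented encoding, not the paper's data format: a node is either a string (a leaf EDU) or a tuple (nuclearity, left child, right child) with nuclearity in {'NN', 'NS', 'SN'}:

```python
def linearize(node):
    """Linearize a binary discourse tree into a bracketed label sequence.

    Each child is wrapped as '(X ... )X' with X in {N, S}, taken from the
    parent's nuclearity string, as in Figure 3 (c).
    """
    def wrap(child, label):
        return f"({label} {linearize(child)} ){label}"
    if isinstance(node, str):
        return node
    nuc, left, right = node
    return wrap(left, nuc[0]) + " " + wrap(right, nuc[1])

tree = ("NS", ("NS", "We've got a lot", "to do,"), "she acknowledged.")
# -> "(N (N We've got a lot )N (S to do, )S )N (S she acknowledged. )S"
```

The inverse direction (recovering a tree from such a sequence) follows by matching each "(X" with its ")X".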
+
+# 4.2 Joint Probabilities
+
+To calculate joint probabilities in the last subsection with a language model, we consider probability $P(\pmb{z})$ for a sequence $\pmb{z} = (z_{1},\dots ,z_{a})$ , which corresponds to the probabilities for the sequential representations $P(\pmb{x},\pmb{e})$ , $P(\pmb{x},\pmb{e},\pmb{s})$ , $P(\pmb{x},\pmb{e},\pmb{u})$ , and $P(\pmb{x},\pmb{e},\pmb{r})$ .
+
+According to Song et al. (2020), masked and permuted language modeling (MPNet) takes advantage of both masked language modeling and permuted language modeling while overcoming their issues. Compared with BERT (Devlin et al., 2019) and XLNet (Yang et al., 2019), MPNet considers more information about tokens and positions, and achieves better results on several downstream tasks (GLUE, SQuAD, etc.). Taking into account its better performance, we choose pre-trained MPNet (Song et al., 2020) as our language model. Because considering all possible inter-dependence between the $z_{t}$ is intractable, we follow the decomposition of pseudo-log-likelihood scores (PLL) (Salazar et al., 2020) in the model. Thus, we decompose and calculate logarithmic $P(\pmb{z})$ as follows:
+
+$$
+\log P (\boldsymbol {z}; \theta) \approx P L L (\boldsymbol {z}; \theta) = \sum_ {t = 1} ^ {a} \log P \left(z _ {t} \mid z _ {< t}, z _ {> t}, M _ {t}; \theta\right) \tag {2}
+$$
+
+where $z_{< t}$ is the first sub-sequence $(z_{1},\dots ,z_{t - 1})$ in $\pmb{z}$ and $z_{>t}$ is the latter sub-sequence $(z_{t + 1},\dots ,z_{a})$ in $\pmb{z}$ . $M_t$ denotes the mask token [MASK] at position $t$ . $P(z_{t} \mid z_{< t},z_{>t},M_{t};\theta)$ is computed by two-stream self-attention (Yang et al., 2019). In inference, we select $\pmb{z}$ based on $\frac{1}{a} PLL(\pmb{z};\theta)$ .
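The PLL of Eq. 2 masks one position at a time and sums the resulting log-probabilities. The sketch below substitutes a toy unigram table for MPNet's masked prediction (the table and token strings are invented for illustration); in the real model, each call would run the bidirectional Transformer with a [MASK] at position t:

```python
import math

def pll(tokens, masked_logprob):
    """Eq. 2: sum log P(z_t | z_<t, z_>t, M_t) over all positions t."""
    return sum(masked_logprob(tokens, t) for t in range(len(tokens)))

def rerank_score(tokens, masked_logprob):
    """Length-normalized PLL, (1/a) * PLL, used to compare candidates."""
    return pll(tokens, masked_logprob) / len(tokens)

# toy stand-in for the masked LM: probability of the true token at position t
table = {"(N": 0.2, ")N": 0.2, "we": 0.3, "run": 0.3}
stub = lambda toks, t: math.log(table[toks[t]])
score = rerank_score(["(N", "we", "run", ")N"], stub)
```

The length normalization makes scores comparable across candidate sequences of different lengths.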
+
+This model converts $\mathbf{z}$ into continuous vectors $\mathbf{w} = \{w_1, \dots, w_a\}$ through the embedding layer. Multi-head attention layers further transform the vectors to predict each $z_t$ in the softmax layer.
+
+Since pre-trained MPNet does not consider EDU, span, nuclearity, and relation labels in the pretraining step, we need to construct vectors $\boldsymbol{w}$ for these labels from the pre-trained parameters to enhance the prediction performance. We describe the details of this method in the next subsection.
+
+# 4.3 Label Embeddings
+
+In LMGC, we embed input text tokens and labels in the same vector space (Wang et al., 2018a) of the embedding layer. Under this setting, to deal with labels unseen in the pre-trained model, we compute the label embeddings by utilizing the token embeddings of the pre-trained model.
+
+We try to combine the input text with four kinds of labels, EDU, span, nuclearity, and relation labels, which were defined and clearly described in the annotation document (Carlson and Marcu, 2001) (see Appendix B for the descriptions). To take the descriptions for the labels into account as additional information, we adopt two different methods, Average and Concatenate, for representing the label embeddings.
+
+(a) Sentence with EDU boundary labels:
+
+$e_1$ [EDU] $e_2$ [EDU] $e_3$ [EDU]
+
+(b) Sentence with span labels:
+
+(Span (Span $e_1$ )Span (Span $e_2$ )Span )Span (Span $e_3$ )Span
+
+(c) Sentence with nuclearity labels:
+
+(N (N $e_1$ )N (S $e_2$ )S )N (S $e_3$ )S
+
+(d) Sentence with relation labels:
+
+(Span (Span $e_1$ )Span (Elaboration $e_2$ )Elaboration )Span (Attribution $e_3$ )Attribution
+
+Figure 3: Example joint representations of an input text and labels for the sentence We've got a lot to do, he acknowledged. $e_i$ represents the corresponding EDU.
+
+Average: We average the embeddings of tokens that appear in the definition of a label and assign the averaged embedding to the label.
+
+Concatenate: We concatenate a label name with its definition and insert the concatenated text at the end of the sequence $z$ , so that the label embedding can be captured by self-attention mechanisms (Vaswani et al., 2017). Note that we do not try it in the parsing task, because the length of a sequence increases in proportion to the number of labels, which causes a shortage of memory space.
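The Average method can be sketched in a few lines. Both the embedding table and the definition text below are invented stand-ins (the real model averages MPNet token embeddings over the wording of the RST-DT annotation manual):

```python
import numpy as np

def average_label_embedding(definition, token_emb, dim):
    """Average method: embed an unseen label as the mean of the pre-trained
    token embeddings of the words in its definition."""
    vecs = [token_emb[w] for w in definition.lower().split() if w in token_emb]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

# toy one-hot table standing in for MPNet's embedding layer; the definition
# text is hypothetical, not the RST-DT manual's actual wording
words = ["satellite", "supports", "the", "nucleus"]
emb = {w: np.eye(4)[i] for i, w in enumerate(words)}
vec = average_label_embedding("the satellite supports the nucleus", emb, 4)
```

Because the result lies in the same space as the token embeddings, it can be dropped directly into the embedding layer's row for the new label.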
+
+# 4.4 Objective Function
+
+Because the search space for sequences of a text and its labels is exponentially large, instead of considering all possible sequences $Z(\pmb{x})$ for $\pmb{x}$ , we take $Z^{\prime}(\pmb{x})$ to be a subset of sequences based on the top- $k$ results from the base model. We denote $z_{g} \in Z(\pmb{x})$ as the correct label sequence of $\pmb{x}$ . To keep the pre-trained information in MPNet, we continue masking and permutation when training the model parameter $\theta$ . Assuming that $O_{a}$ lists all permutations of the set $\{1, 2, \dots, a\}$ , the number of elements in $O_{a}$ satisfies $|O_{a}| = a!$ . For $z \in Z^{\prime}(\pmb{x}) \cup \{z_{g}\}$ , we train the model parameter $\theta$ in LMGC by maximizing the following expectation over all permutations:
+
+$$
+\mathbb {E} _ {\boldsymbol {o} \in O _ {a}} \sum_ {t = c + 1} ^ {a} \Big[ I _ {\boldsymbol {z}} \log P \left(z _ {o _ {t}} \mid z _ {o _ {< t}}, M _ {o _ {> c}}; \theta\right) + \left(1 - I _ {\boldsymbol {z}}\right) \log \left(1 - P \left(z _ {o _ {t}} \mid z _ {o _ {< t}}, M _ {o _ {> c}}; \theta\right)\right) \Big], \tag {3}
+$$
+
+where $I_{z}$ is the indicator function, defined as follows:
+
+$$
+I _ {\boldsymbol {z}} := \begin{cases} 1 & \text {if } \boldsymbol {z} = \boldsymbol {z} _ {g} \\ 0 & \text {if } \boldsymbol {z} \neq \boldsymbol {z} _ {g} \end{cases} \tag {4}
+$$
+
+$c$ , denoting the number of non-predicted tokens $z_{o_{\leq c}}$ , is set manually. $M_{o_{>c}}$ denotes the mask tokens [MASK] at positions $o_{>c}$ . $P(z_{o_{t}} \mid z_{o_{<t}}, M_{o_{>c}}; \theta)$ is computed by two-stream self-attention (Yang et al., 2019).
+
+# 5 Experiments
+
+In this section, we present our experiments in three tasks, (a) discourse segmentation, (b) sentence-level discourse parsing with gold segmentation, and (c) sentence-level discourse parsing with automatic segmentation.
+
+# 5.1 Experimental Settings
+
+# 5.1.1 Datasets
+
+Following previous studies (Wang et al., 2017, 2018b; Lin et al., 2019), we used the RST Discourse Treebank (RST-DT) corpus (Carlson et al., 2002) as our dataset. This corpus contains 347 and 38 documents for training and test datasets, respectively. We divided the training dataset into two parts, following the module RSTFinder3 (Heilman and Sagae, 2015), where 307 documents were used to train models and the remaining 40 documents were used as the validation dataset.
+
+We split the documents into sentences while ignoring footnote sentences, as in Joty et al. (2012). Two problematic cases can occur for the split sentences: (1) the sentence consists of exactly one EDU, and so it has no tree structure; (2) the tree structure of the sentence crosses into other sentences. Following the setting of Lin et al. (2019), we did not filter any sentences in task (a). In task (b), we filtered sentences of both cases. In task (c), we filtered sentences of case (2). Table 1 shows the number of available sentences for the three different tasks.
+
+3 https://github.com/EducationalTestingService/rstfinder
+
+| Task | Train | Valid | Test |
+| --- | --- | --- | --- |
+| (a) Segmentation | 6,768 | 905 | 991 |
+| (b) Parsing w/ gold segmentation | 4,524 | 636 | 602 |
+| (c) Parsing w/ auto segmentation | - | 861 | 951 |
+
+Table 1: The number of sentences for each task.
+
+# 5.1.2 Evaluation Metrics
+
+In task (a), we evaluated the segmentation in micro-averaged precision, recall, and $\mathrm{F_1}$ score with respect to the start position of each EDU. The position at the beginning of a sentence was ignored. In task (b), we evaluated the parsing in micro-averaged $\mathrm{F_1}$ score with respect to span, nuclearity, and relation. In task (c), parsing with automatic segmentation, we evaluated both the segmentation and the parsing in micro-averaged $\mathrm{F_1}$ score.
+
+We used paired bootstrap resampling (Koehn, 2004) as the significance test in all tasks when comparing two systems.
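The significance test can be sketched as below — a minimal paired bootstrap over per-sentence scores. The interface and the use of sentence-level scores are illustrative assumptions, not the authors' implementation:

```python
import random

def paired_bootstrap(scores_a, scores_b, n_resamples=1000, seed=0):
    """Paired bootstrap resampling (Koehn, 2004), sketched.

    `scores_a`/`scores_b` are per-sentence scores of two systems on
    the same test set. Returns the fraction of resamples in which
    system A does NOT beat system B, an approximate p-value for the
    claim "A is better than B"."""
    assert len(scores_a) == len(scores_b)
    rng = random.Random(seed)
    n, wins = len(scores_a), 0
    for _ in range(n_resamples):
        # resample sentence indices with replacement, paired for A and B
        idx = [rng.randrange(n) for _ in range(n)]
        if sum(scores_a[i] for i in idx) > sum(scores_b[i] for i in idx):
            wins += 1
    return 1.0 - wins / n_resamples
```

Because the same resampled indices are used for both systems, the test controls for sentence difficulty, which is what makes the bootstrap "paired".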
+
+# 5.1.3 Compared Methods
+
+As our proposed methods, we used $\mathrm{LMGC}_e$ , $\mathrm{LMGC}_s$ , $\mathrm{LMGC}_u$ , and $\mathrm{LMGC}_r$ , which respectively model probability $P(\boldsymbol{x}, \boldsymbol{e})$ , $P(\boldsymbol{x}, \boldsymbol{e}, s)$ , $P(\boldsymbol{x}, \boldsymbol{e}, u)$ , and $P(\boldsymbol{x}, \boldsymbol{e}, r)$ with initialized label embeddings. We represent LMGC with Average and Concatenate label embeddings as Enhance and Extend, respectively.
+
+We used the base discourse segmenter and parser described in Section 3 as our baselines. We reproduced the base discourse segmenter BiLSTM-CRF$^4$ (Wang et al., 2018b). Because BiLSTM-CRF adopted the hidden states of ELMo (Peters et al., 2018) as word embeddings, for a fair comparison we also tried the last hidden state of MPNet as the word embeddings for BiLSTM-CRF. We retrained the segmenter in five runs; the experimental results are shown in Appendix C. The publicly shared BiLSTM-CRF by Wang et al. (2018b) is our base segmenter in the following experiments.
+
+As for the base parser, we retrained two models, 2-stage Parser$^5$ (Wang et al., 2017) and span-based Parser$^6$ (Kobayashi et al., 2020). Different from the setting of Lin et al. (2019), we retrained 2-stage Parser at the sentence level rather than at the document level. Since the experimental results show that our retrained 2-stage Parser achieved the highest $\mathrm{F}_1$ scores among several parsers (see Appendix C), we selected it as our base parser in the following experiments.
+
+Furthermore, to compare LMGC with a unidirectional generative model (Mabona et al., 2019), we constructed another baseline that utilizes a GPT-2-based (Radford et al., 2019) reranker. This method follows a unidirectional language-model-based generative parser (Choe and Charniak, 2016) and considers the top-$k$ results from the base model with an add-1 version of the infinilog loss (Ding et al., 2020) during training. We denote this baseline as GPT2LM hereafter. GPT2LM models $P(\boldsymbol{x}, \boldsymbol{e})$ for task (a) and $P(\boldsymbol{x}, \boldsymbol{e}, \boldsymbol{r})$ for tasks (b) and (c). Both LMGC and GPT2LM are ensembles of 5 models with different random seeds. See Appendix D for a complete list of hyperparameter settings.
+
+# 5.1.4 Number of Candidates
+
+As described in Section 4, LMGC requires parameters $k_{s}$ and $k_{p}$ for the number of candidates in the steps for different tasks. We tuned $k_{s}$ and $k_{p}$ based on the performance on the validation dataset.
+
+In task (a), $k_{s}$ was set to 20 and 5 for training and prediction, respectively. In task (b), $k_{p}$ was set to 20 and 5 for training and prediction, respectively. In task (c), $k_{s}$ and $k_{p}$ were both set to 5 for prediction. The parameters were similarly tuned for GPT2LM on the validation dataset. We list all of them in Appendix E.
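The overall reranking procedure can be sketched as follows; `base_topk` and `joint_log_prob` are hypothetical stand-ins for the base model's top-$k$ decoder and the generative model's joint scorer:

```python
def rerank(sentence, base_topk, joint_log_prob, k=5):
    """Select the best candidate by a generative scorer (a sketch).

    `base_topk(sentence, k)` -> list of candidate segmentations or
    parses from the base model; `joint_log_prob(sentence, cand)` ->
    the scorer's joint log-probability. Both names are hypothetical
    interfaces standing in for the base model and LMGC."""
    candidates = base_topk(sentence, k)
    # the reranker only chooses among candidates; it never decodes itself
    return max(candidates, key=lambda c: joint_log_prob(sentence, c))
```

This is why the Oracle score below is an upper bound: the reranker can do no better than the best candidate the base model proposes.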
+
+# 5.2 Results
+
+# 5.2.1 Discourse Segmentation
+
+Table 2 shows the experimental results for the discourse segmentation task. Oracle indicates the upper-bound score that can be achieved with the candidates generated by the base model. To compute the Oracle score, if the candidates from the base model include the correct answer, we assume the prediction is correct.
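The Oracle computation can be sketched as a simple membership check over the candidate lists (a minimal illustration, not the evaluation code used in the paper):

```python
def oracle_accuracy(gold_list, candidate_lists):
    """Upper-bound score of a reranker (a sketch): a prediction
    counts as correct whenever the gold answer appears anywhere
    among the base model's top-k candidates."""
    hits = sum(gold in cands
               for gold, cands in zip(gold_list, candidate_lists))
    return hits / len(gold_list)
```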
+
+$\mathrm{LMGC}_e$ significantly outperformed $\mathrm{GPT2LM}_e$. We think the reason is similar to what Zhu et al. (2020) reported: BERT-based bidirectional Transformer encoders encode more rhetorical features than GPT-2-based unidirectional Transformer encoders. Using Average label embeddings is more helpful than using Concatenate label embeddings for $\mathrm{LMGC}_e$. $\mathrm{Enhance}_e$ achieved the state-of-the-art $\mathrm{F}_1$ score of 96.72, which outperformed both the base segmenter and the pointer-networks.
+
+| Model | Precision | Recall | F1 |
| --- | --- | --- | --- |
| Oracle | 97.73 | 98.67 | 98.20 |
| Pointer-networks* | 93.34 | 97.88 | 95.55 |
| Base segmenter | 92.22 | 95.35 | 93.76 |
| $\mathrm{GPT2LM}_e$ | 94.05 | 95.72 | 94.88 |
| $\mathrm{LMGC}_e$ | 95.31 | 97.56 | 96.43† |
| $\mathrm{Enhance}_e$ | 95.54 | 97.93 | 96.72† |
| $\mathrm{Extend}_e$ | 95.05 | 97.86 | 96.44† |
+
+Table 2: Results for the discourse segmentation task. * indicates the score reported by Lin et al. (2019). The best score in each metric among the models is indicated in bold. † indicates that the score is significantly superior to GPT2LM with a p-value < 0.01.
+
+| Model | Span | Nuclearity | Relation |
| --- | --- | --- | --- |
| Oracle | 98.67 | 95.88 | 90.07 |
| Pointer-networks* | 97.44 | 91.34 | 81.70 |
| Base parser | 97.92 | 92.07 | 82.06 |
| $\mathrm{GPT2LM}_r$ | 96.35 | 88.11 | 77.86 |
| $\mathrm{LMGC}_s$ | 98.23‡ | 92.31 | 82.22 |
| $\mathrm{Enhance}_s$ | 98.27‡ | 92.39 | 82.42 |
| $\mathrm{LMGC}_u$ | 98.31‡ | 94.00† | 83.63† |
| $\mathrm{Enhance}_u$ | 98.31† | 93.88† | 83.56† |
| $\mathrm{LMGC}_r$ | 98.00 | 93.09† | 83.99† |
| $\mathrm{Enhance}_r$ | 98.12 | 93.13† | 84.69† |
+
+Table 3: Results for the sentence-level discourse parsing task with gold segmentation. * indicates the score reported by Lin et al. (2019). The best score in each metric among the models is indicated in bold. † and ‡ indicate that the score is significantly superior to the base parser with a p-value < 0.01 and < 0.05, respectively.
+
+# 5.2.2 Sentence-level Discourse Parsing
+
+Gold Segmentation: Table 3 and Figures 4 and 5 show the experimental results for the sentence-level discourse parsing task with gold segmentation. In Table 3, $\mathrm{LMGC}_u$ achieved the highest span and nuclearity $F_1$ scores of 98.31 and 94.00, respectively. $\mathrm{Enhance}_r$ achieved the state-of-the-art relation $F_1$ score of 84.69, which is significantly superior to the base parser. Although using Average label embeddings improved $\mathrm{LMGC}_r$, it provides no or only limited improvement for $\mathrm{LMGC}_u$ and $\mathrm{LMGC}_s$. We guess that this difference is caused by the number of different kinds of labels in span, nuclearity, and relation. The performance of $\mathrm{GPT2LM}_r$ is even worse than that of the base parser. We think this is because we added the relation labels to the vocabulary of GPT-2 and resized the pre-trained word embeddings.
+
+Figure 4: Performance of 2-stage Parser and $\mathrm{Enhance}_r$ in the sentence-level discourse parsing task with gold segmentation. The hollow bar denotes the number of different gold labels in the training dataset. Blue and red lines indicate the $\mathrm{F}_1$ scores of $\mathrm{Enhance}_r$ and 2-stage Parser, respectively, for each relation label.
+
+Figure 5: Confusion matrix for $\mathrm{Enhance}_r$ in the sentence-level discourse parsing task with gold segmentation. We show the ratio of the number of instances with predicted labels (for a column) to the number of instances with gold labels (for a row) in the corresponding cell.
+
+Figure 4 shows the comparison between the base parser and $\mathrm{Enhance}_r$ with respect to each relation label. $\mathrm{Enhance}_r$ outperformed 2-stage Parser for most relation labels, except Explanation, Evaluation, and Topic-Comment. 2-stage Parser achieved an $\mathrm{F_1}$ score of 17.14 for the label Temporal, while $\mathrm{Enhance}_r$ achieved an $\mathrm{F_1}$ score of 44.44 by reranking the parsing results from 2-stage Parser. Such a large improvement with $\mathrm{Enhance}_r$ can also be found for labels such as Contrast, Background, and Cause. Clearly, $\mathrm{Enhance}_r$ tends to improve the performance for labels whose training data is limited.
+
+(a) $\mathrm{LMGC}_r$
+
+(b) $\mathrm{Enhance}_r$
+
+Figure 6: t-SNE plot of relation label embeddings trained in $\mathrm{LMGC}_r$ and $\mathrm{Enhance}_r$.
+
+Figure 5 shows a confusion matrix of $\mathrm{Enhance}_r$ for each relation label. It shows that the relation labels Comparison, Cause, and Temporal were often wrongly predicted as Contrast, Joint, and Joint or Background, respectively, by $\mathrm{Enhance}_r$, even though these labels each have at least 100 training instances. We guess this might be due to similarities between those labels.
+
+By using the t-SNE plot (Van der Maaten and Hinton, 2008), we visualize the trained relation label embeddings of $\mathrm{LMGC}_r$ and $\mathrm{Enhance}_r$ in Figures 6a and 6b. Figure 6a shows a clearer diagonal that divides labels with parenthesis "(" from the ones with ")", while Figure 6b shows more distinct divisions between labels.
+
+| Model | Seg | Span | Nuclearity | Relation |
| --- | --- | --- | --- | --- |
| Pointer-networks* | - | 91.75 | 86.38 | 77.52 |
| $\mathrm{Oracle}_{seg}$ | 98.24 | - | - | - |
| Base segmenter | 93.92 | - | - | - |
| $\mathrm{GPT2LM}_e$ | 95.03 | - | - | - |
| $\mathrm{LMGC}_e$ | 96.51 | - | - | - |
| $\mathrm{Enhance}_e$ | 96.79 | - | - | - |
| $\mathrm{Extend}_e$ | 96.48 | - | - | - |
| Oracle | - | 93.95 | 91.25 | 85.93 |
| Base parser | - | 93.53 | 88.08 | 78.75 |
| $\mathrm{GPT2LM}_r$ | - | 92.02 | 84.20 | 74.49 |
| $\mathrm{LMGC}_s$ | - | 93.96‡ | 88.46 | 79.25 |
| $\mathrm{Enhance}_s$ | - | 94.00† | 88.50 | 79.33 |
| $\mathrm{LMGC}_u$ | - | 93.96† | 89.90† | 80.33† |
| $\mathrm{Enhance}_u$ | - | 93.92‡ | 89.74† | 80.22† |
| $\mathrm{LMGC}_r$ | - | 93.65 | 89.08† | 80.57† |
| $\mathrm{Enhance}_r$ | - | 93.73 | 89.16† | 81.18† |
+
+Table 4: Results for the sentence-level discourse parsing task with automatic segmentation. Seg is the segmentation $\mathrm{F_1}$ score; Span, Nuclearity, and Relation are parsing $\mathrm{F_1}$ scores. * indicates the score reported by Lin et al. (2019). The best score in each metric among the models in each block is indicated in bold. We used the discourse segmentation results of $\mathrm{Enhance}_e$ as the input of the discourse parsing stage for all models, for a fair comparison of sentence-level discourse parsing. † and ‡ indicate that the score is significantly superior to the base parser with a p-value < 0.01 and < 0.05, respectively.
+
+Automatic Segmentation: Table 4 shows the experimental results for the sentence-level discourse parsing task with automatic segmentation. The second and third blocks in the table show the results for the first stage (discourse segmentation) and the second stage (sentence-level discourse parsing), respectively.
+
+$\mathrm{Enhance}_r$ achieved the highest relation $\mathrm{F_1}$ score of 81.18, a significant improvement of 2.43 points over the base parser. $\mathrm{Enhance}_s$ and $\mathrm{LMGC}_u$ achieved the highest span and nuclearity $\mathrm{F_1}$ scores of 94.00 and 89.90, respectively. Since $\mathrm{LMGC}_*$ and $\mathrm{Enhance}_*$ are the models trained in task (b), and $\mathrm{Enhance}_e$ achieved an $\mathrm{F_1}$ score of 96.79 in discourse segmentation, it is not surprising that the tendency of these results is similar to that in sentence-level discourse parsing with gold segmentation.
+
+# 6 Conclusion
+
+In this research, we proposed a language model-based generative classifier, LMGC. Given the top-$k$ discourse segmentations or parses from the base model, LMGC, as a reranker, achieved state-of-the-art performance in both discourse segmentation and sentence-level discourse parsing. The experimental results also showed the potential of constructing label embeddings from token embeddings by using label descriptions in the manual. In the future, we plan to apply LMGC to other diverse classification tasks.
+
+# References
+
+Lynn Carlson and Daniel Marcu. 2001. Discourse tagging reference manual. *ISI Technical Report* ISI-TR-545.
+Lynn Carlson, Daniel Marcu, and Mary Ellen Okurowski. 2002. RST Discourse Treebank LDC2002T07. Philadelphia: Linguistic Data Consortium.
+Lynn Carlson, Daniel Marcu, and Mary Ellen Okurowski. 2001. Building a discourse-tagged corpus in the framework of Rhetorical Structure Theory. In Proceedings of the Second SIGdial Workshop on Discourse and Dialogue.
+Do Kook Choe and Eugene Charniak. 2016. Parsing as language modeling. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2331-2336, Austin, Texas. Association for Computational Linguistics.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
+Xiaoan Ding, Tianyu Liu, Baobao Chang, Zhifang Sui, and Kevin Gimpel. 2020. Discriminatively-Tuned Generative Classifiers for Robust Natural Language Inference. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8189-8202, Online. Association for Computational Linguistics.
+Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Recurrent neural network grammars. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 199-209, San Diego, California. Association for Computational Linguistics.
+Vanessa Wei Feng and Graeme Hirst. 2014a. A linear-time bottom-up discourse parser with constraints and post-editing. In Proceedings of the 52nd Annual Meeting of the Association for Computational
+
+Linguistics (Volume 1: Long Papers), pages 511-521, Baltimore, Maryland. Association for Computational Linguistics.
+Vanessa Wei Feng and Graeme Hirst. 2014b. Two-pass discourse segmentation with pairing and global features.
+Seeger Fisher and Brian Roark. 2007. The utility of parse-derived features for automatic discourse segmentation. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 488-495, Prague, Czech Republic. Association for Computational Linguistics.
+Francisco Guzmán, Shafiq Joty, Lluis Márquez, and Preslav Nakov. 2014. Using discourse structure improves machine translation evaluation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 687-698, Baltimore, Maryland. Association for Computational Linguistics.
+Michael Heilman and Kenji Sagae. 2015. Fast rhetorical structure theory discourse parsing. arXiv preprint arXiv:1505.02425.
+Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging. arXiv preprint arXiv:1508.01991.
+Shafiq Joty, Giuseppe Carenini, and Raymond Ng. 2012. A novel discriminative framework for sentence-level discourse analysis. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 904-915, Jeju Island, Korea. Association for Computational Linguistics.
+Shafiq Joty, Giuseppe Carenini, Raymond Ng, and Yashar Mehdad. 2013. Combining intra- and multisentential rhetorical parsing for document-level discourse analysis. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 486-496, Sofia, Bulgaria. Association for Computational Linguistics.
+Shafiq Joty, Giuseppe Carenini, and Raymond T. Ng. 2015. CODRA: A novel discriminative framework for rhetorical analysis. Computational Linguistics, 41(3):385-435.
+Shafiq Joty, Francisco Guzmán, Lluis Márquez, and Preslav Nakov. 2017. Discourse structure in machine translation evaluation. Computational Linguistics, 43(4):683-722.
+Dan Jurafsky. 2000. Speech & language processing. Pearson Education India.
+Naoki Kobayashi, Tsutomu Hirao, Hidetaka Kamigaito, Manabu Okumura, and Masaaki Nagata. 2020. Top-down RST parsing utilizing granularity levels in documents. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 8099-8106.
+
+Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 388-395, Barcelona, Spain. Association for Computational Linguistics.
+Xiang Lin, Shafiq Joty, Prathyusha Jwalapuram, and M Saiful Bari. 2019. A unified linear-time framework for sentence-level discourse parsing. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4190-4200, Florence, Italy. Association for Computational Linguistics.
+Amandla Mabona, Laura Rimell, Stephen Clark, and Andreas Vlachos. 2019. Neural generative rhetorical structure parsing. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2284-2295, Hong Kong, China. Association for Computational Linguistics.
+William C Mann and Sandra A Thompson. 1988. Rhetorical structure theory: Toward a functional theory of text organization.
+Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. *fairseq: A fast, extensible toolkit for sequence modeling.* In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)*, pages 48-53, Minneapolis, Minnesota. Association for Computational Linguistics.
+Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.
+Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
+Julian Salazar, Davis Liang, Toan Q. Nguyen, and Katrin Kirchhoff. 2020. Masked language model scoring. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2699-2712, Online. Association for Computational Linguistics.
+Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 464-468, New Orleans, Louisiana. Association for Computational Linguistics.
+
+Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. 2020. MPNet: Masked and permuted pre-training for language understanding. arXiv preprint arXiv:2004.09297.
+Caroline Sporleder and Mirella Lapata. 2005. *Discourse chunking and its application to sentence compression.* In *Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing*, pages 257-264, Vancouver, British Columbia, Canada. Association for Computational Linguistics.
+Laurens Van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. Journal of machine learning research, 9(11).
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS.
+Guoyin Wang, Chunyuan Li, Wenlin Wang, Yizhe Zhang, Dinghan Shen, Xinyuan Zhang, Ricardo Henao, and Lawrence Carin. 2018a. Joint embedding of words and labels for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2321-2331, Melbourne, Australia. Association for Computational Linguistics.
+Yizhong Wang, Sujian Li, and Houfeng Wang. 2017. A two-stage parsing method for text-level discourse analysis. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 184-188, Vancouver, Canada. Association for Computational Linguistics.
+Yizhong Wang, Sujian Li, and Jingfeng Yang. 2018b. Toward fast and accurate neural discourse segmentation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 962-967, Brussels, Belgium. Association for Computational Linguistics.
+Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
+Ngo Xuan Bach, Nguyen Le Minh, and Akira Shimazu. 2012. A reranking model for discourse segmentation using subtree features. In Proceedings of the 13th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 160-168, Seoul, South Korea. Association for Computational Linguistics.
+
+Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237.
+
+Zining Zhu, Chuer Pan, Mohamed Abdalla, and Frank Rudzicz. 2020. Examining the rhetorical capacities of neural language models. In Proceedings of the Third Blackbox NLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 16-32.
+
+# A Experimental Results of LMGC with Tree
+
+Since the raw s-expression-style tree is longer than our joint representations with span, nuclearity, and relation, we transformed the raw tree into a sequence as Figure 7 shows, where the nuclearity and relation labels are joined by colons. To construct the label embedding for $P(\boldsymbol{x}, \boldsymbol{e}, \boldsymbol{p})$, we combined the descriptions of the nuclearity and relation (see the descriptions in Appendix B) and assigned the combination to the corresponding node. For example, the description of "(Attribution:S)" is the start of a supporting or background piece of information attribution, attribution represents both direct and indirect instances of reported speech.
+
+`(Span:N_(Span:N_e1_)Span:N_(Elaboration:S_e2_)Elaboration:S_)Span:N_(Attribution:S_e3_)Attribution:S`
+
+Figure 7: Example joint representation of an input text with all tree labels for the sentence We've got a lot to do, he acknowledged. $e_i$ represents the corresponding EDU, and "_" denotes whitespace.
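The linearization in Figure 7 can be sketched as below, assuming a hypothetical nested-tuple tree encoding; the token format follows the figure, with nuclearity and relation labels joined by colons:

```python
def linearize(tree):
    """Linearize an RST subtree into the Figure 7 joint
    representation (an illustration; the nested-tuple input format
    is a hypothetical encoding, not the paper's data format).

    A tree is either an EDU string like "e1", or a tuple
    (relation, nuclearity, children), emitted as
    "(Relation:N ... )Relation:N"."""
    if isinstance(tree, str):               # leaf: an EDU token
        return [tree]
    relation, nuclearity, children = tree
    label = f"{relation}:{nuclearity}"
    tokens = [f"({label}"]                  # opening bracket carries the label
    for child in children:
        tokens.extend(linearize(child))
    tokens.append(f"){label}")              # closing bracket repeats the label
    return tokens
```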
+
+$\mathrm{LMGC}_p$ models the joint probability $P(x, e, p)$ with initialized label embeddings. The experimental results of $\mathrm{LMGC}_p$ and $\mathrm{Enhance}_p$ for the sentence-level discourse parsing task with gold segmentation are shown in Table 5. $\mathrm{LMGC}_p$ and $\mathrm{Enhance}_p$ are ensembles of 5 models with different random seeds, although the training loss of $\mathrm{Enhance}_p$ did not decrease in 2 of the 5 models.
+
+| Model | Span | Nuclearity | Relation |
| --- | --- | --- | --- |
| $\mathrm{LMGC}_p$ | 97.84 | 92.90 | 84.11 |
| $\mathrm{Enhance}_p$ | 98.04 | 92.74 | 84.18 |
+
+Table 5: Performance of $\mathrm{LMGC}_p$ and $\mathrm{Enhance}_p$ in the sentence-level discourse parsing task with gold segmentation.
+
+# B Label Descriptions
+
+We list our extracted label descriptions from Carlson and Marcu (2001) in Table 6. For parsing symbols with brackets "(" and ")" like "(N" and ")_N", we inserted the position phrases the start of and the end of at the beginning of their label definitions. So the description of ")_N" is the end of a more salient or essential piece of information.
+
+# C Experiment Results of Reproduced Base Model
+
+Table 7 shows the experimental results of BiLSTM-CRF in the discourse segmentation task, where the results of our reproduced BiLSTM-CRF are averaged over five runs. Table 8 shows the experimental results of different parsers in the sentence-level discourse parsing task with gold segmentation.
+
+# D Hyperparameters
+
+For LMGC, we used the source code shared in the public GitHub$^{10}$ of Song et al. (2020). We used the uploaded pre-trained MPNet and the same setup, as illustrated in Table 9. 15% of tokens were selected as predicted tokens and masked with an 8:1:1 replacement strategy. The relative positional embedding mechanism (Shaw et al., 2018) was utilized. Since the vocabulary we used is the same as that of BERT (Devlin et al., 2019), we used the symbol [SEP] to represent [EDU] and the symbols [unused#], starting from 0, to represent parsing labels such as "(N" and "Attribution".
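The 8:1:1 replacement strategy can be sketched as follows — a generic BERT-style masking routine, not MPNet's actual implementation; `mask_token` and `vocab` are illustrative parameters:

```python
import random

def mask_tokens(tokens, mask_token="[MASK]", vocab=None,
                mask_prob=0.15, seed=0):
    """BERT-style corruption (a sketch): select 15% of positions as
    prediction targets, then replace them 80% with the mask token,
    10% with a random vocabulary token, and 10% keep the original
    (the 8:1:1 strategy). Returns corrupted tokens and the target
    positions."""
    rng = random.Random(seed)
    vocab = vocab or tokens
    out, targets = list(tokens), []
    for i in range(len(tokens)):
        if rng.random() < mask_prob:        # chosen as a prediction target
            targets.append(i)
            r = rng.random()
            if r < 0.8:
                out[i] = mask_token         # 8 parts: mask
            elif r < 0.9:
                out[i] = rng.choice(vocab)  # 1 part: random token
            # else 1 part: keep the original token
    return out, targets
```

Keeping 10% of targets unchanged forces the model to produce useful representations even for positions that look intact.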
+
+For GPT2LM, we used the source code shared in the public GitHub$^{11}$ (Ott et al., 2019). Following the steps in Choe and Charniak (2016), we utilized Eq. (5) (Jurafsky, 2000) to compute the joint distribution,
+
+$$
+P(\boldsymbol{x}, \boldsymbol{y}) = P(\boldsymbol{z}) = P(z_1, \dots, z_a) = \prod_{t=1}^{a} P(z_t \mid z_1, \dots, z_{t-1}), \tag{5}
+$$
+
+where $P(z_{t} \mid z_{1},\ldots,z_{t-1})$ was computed by GPT-2 (Radford et al., 2019). In inference, we selected $\boldsymbol{z}$ based on $\frac{1}{a}\log P(\boldsymbol{z})$. An add-1 version of the infinilog loss (Ding et al., 2020) was utilized for training GPT2LM as follows:
+
+$$
+-\log f(\boldsymbol{z}) + \log\left[1 + \sum_{\boldsymbol{z}' \in Z'(\boldsymbol{x}),\, \boldsymbol{z}' \neq \boldsymbol{z}} f(\boldsymbol{z}')\right], \tag{6}
+$$
+
+| Label | Definition |
| --- | --- |
| [EOS] | elementary discourse units are the minimal building blocks of a discourse tree |
| Span | span |
| Nucleus | a more salient or essential piece of information |
| Satellite | a supporting or background piece of information |
| Attribution | attribution, attribution represents both direct and indirect instances of reported speech |
| Background | background or circumstance |
| Cause | cause or result |
| Comparison | comparison, preference, analogy or proportion |
| Condition | condition, hypothetical, contingency or otherwise |
| Contrast | contrast relation, spans contrast with each other along some dimension. Typically, it includes a contrastive discourse cue, such as but, however, while. |
| Elaboration | elaboration, elaboration provides specific information or details to help define a very general concept |
| Enablement | enablement, enablement presents action to increase the chances of the unrealized situation being realized. |
| Evaluation | evaluation, interpretation, conclusion or comment |
| Explanation | evidence, explanation or reason |
| Joint | list, list contains some sort of parallel structure or similar fashion between the units |
| Manner-Means | explaining or specifying a method, mechanism, instrument, channel or conduit for accomplishing some goal |
| Topic-Comment | problem solution, question answer, statement response, topic comment or rhetorical question |
| Summary | summary or restatement |
| Temporal | situations with temporal order, before, after or at the same time |
| Topic change | topic change |
| Textual-organization | links that are marked by schemata labels |
| Same-unit | links between two non-adjacent parts when separated by an intervening relative clause or parenthetical |
+
+Table 6: Extracted label definitions.
+
+| Model | Precision | Recall | F1 |
| --- | --- | --- | --- |
| Reported* | 92.04 | 94.41 | 93.21 |
| Shared | 92.22 | 95.35 | 93.76 |
| Reproduced (ELMo) | 93.16 | 96.26 | 94.68 |
| Reproduced (MPNet) | 92.84 | 95.63 | 94.21 |
+
+Table 7: Performances of BiLSTM-CRF (Wang et al., 2018b) in the discourse segmentation task. The best score in each metric among the models is indicated in bold. * indicates the reported score by Lin et al. (2019). Shared is the publicly shared model by Wang et al. (2018b). Reproduced (ELMo) and Reproduced (MPNet) are our reproduced models with different word embeddings.
+
+| Model | Span | Nuclearity | Relation |
| --- | --- | --- | --- |
| 2-Stage Parser* | 95.60 | 87.80 | 77.60 |
| Pointer-networks* | 97.44 | 91.34 | 81.70 |
| Span-based Parser | 96.67 | 90.23 | 74.76 |
| 2-Stage Parser | 97.92 | 92.07 | 82.06 |
+
+where
+
+$$
+f(\boldsymbol{z}) = \frac{\exp\left(\frac{1}{a} \log P(\boldsymbol{z})\right)}{\sum_{\boldsymbol{z}' \in Z'(\boldsymbol{x})} \exp\left(\frac{1}{a'} \log P(\boldsymbol{z}')\right)}. \tag{7}
+$$
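Eqs. (6) and (7) can be sketched as below, assuming the length-normalized log-probabilities $\frac{1}{a}\log P(\boldsymbol{z})$ of the candidates in $Z'(\boldsymbol{x})$ are precomputed; this is an illustration of the loss, not the fairseq training code:

```python
import math

def infinilog_loss(norm_logps, gold_idx):
    """Add-1 infinilog loss (Ding et al., 2020), Eqs. (6)-(7), as a
    sketch. `norm_logps[i]` is the length-normalized log-probability
    (1/a) * log P(z) of candidate i; `gold_idx` indexes the gold z."""
    # Eq. (7): f(z) is a softmax over the length-normalized log-probs
    m = max(norm_logps)                     # shift for numerical stability
    exps = [math.exp(s - m) for s in norm_logps]
    total = sum(exps)
    f = [e / total for e in exps]
    # Eq. (6): push f(gold) up while pushing the competitors down
    others = sum(f[i] for i in range(len(f)) if i != gold_idx)
    return -math.log(f[gold_idx]) + math.log(1.0 + others)
```

With a single candidate the loss is exactly zero, since $f(\boldsymbol{z}) = 1$ and the competitor sum vanishes.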
+
+We used the uploaded pre-trained "gpt2" model (Wolf et al., 2020) and the same setup, as illustrated in Table 10. We used the symbol $\equiv\equiv\equiv\equiv$ in the vocabulary to represent the symbol [EDU]. Because the vocabulary of GPT-2 has no available symbol for representing an unseen symbol, we added this symbol and our relation symbols to the vocabulary of GPT-2 and resized the pre-trained word embeddings.
+
+# E Setting of Candidates
+
+Table 11 shows the setting of candidates for the different tasks. As described in Section 4.4, since we perform data augmentation by using additional top-$k$ results generated by the base method, a larger $k$ during training is expected to bring more improvement for LMGC. However, a larger $k$ during the prediction step introduces more candidates and may make the prediction more difficult. Taking this into consideration, we tuned $k_{s}$ and $k_{p}$ for training and prediction separately based on the performance on the validation dataset.
+
+Table 8: Performance of retrained parsers in the sentence-level discourse parsing task with gold segmentation. The best score in each metric among the models is indicated in bold. * indicates the score reported by Lin et al. (2019).
+
+| Hyperparameter | Value |
| --- | --- |
| Optimizer | adam |
| Adam β1 | 0.9 |
| Adam β2 | 0.98 |
| Adam ε | 1e-6 |
| weight decay | 0.01 |
| Learning rate | 0.00009 |
| Batch size | 8192 tokens |
| Warm up steps | 2.4 epoch |
| Epoch | 30 |
| Attention layer | 12 |
| Attention head | 12 |
| dropout | 0.1 |
| attention dropout | 0.1 |
| Hidden size | 768 |
| Vocab size | 30527 |
| Tokenizer | Byte pair encoder |
| Max sentence length | 512 |
+
+Table 9: List of used hyperparameters for LMGC.
+
+| Hyperparameter | Value |
| --- | --- |
| Optimizer | adam |
| Adam β1 | 0.9 |
| Adam β2 | 0.98 |
| Adam ε | 1e-6 |
| weight decay | 0.01 |
| Learning rate | 0.0001 |
| Batch size | 512 gold tokens + candidate tokens |
| Warm up steps | 2.4 epoch |
| Epoch | 30 |
| Attention layer | 12 |
| Attention head | 12 |
| dropout | 0.1 |
| attention dropout | 0.1 |
| Hidden size | 768 |
| Vocab size | 50257+ added tokens |
| Tokenizer | Byte pair encoder |
| Max sentence length | 512 |
+
+Table 10: List of used hyperparameters for GPT2LM.
+
+In task (a), we used the Viterbi top-$k$ algorithm for the base segmenter to select the top-$k_{s}$ segmentations. We tuned $k_{s} \in \{0, 10, 20\}$ for training, while $k_{s}$ for prediction was fixed at 5. Note that we used only gold segmentations for training when $k_{s}$ was set to 0. Table 12 shows the experimental results, where both $\mathrm{LMGC}_{e}$ and $\mathrm{GPT2LM}_{e}$ are ensembles of 5 models. Then we tuned $k_{s} \in \{5, 10, 20\}$ for prediction by using the $\mathrm{LMGC}_{e}$ and $\mathrm{GPT2LM}_{e}$ trained with top-20 candidates; Table 13 shows the results.
+
+In task (b), we utilized beam search in each stage of the base parser, and after the two stages we computed the perplexity to keep the top-$k_{p}$ parsings. We tuned $k_{p} \in \{0, 10, 20\}$ for training, while $k_{p}$ for prediction was fixed at 5. Note that we used only gold parsings for training when $k_{p}$ was set to 0. Table 14 shows the experimental results, where both $\mathrm{LMGC}_r$ and $\mathrm{GPT2LM}_r$ are ensembles of 5 models. Then we tuned $k_{p} \in \{5, 10, 20\}$ for prediction by using the $\mathrm{LMGC}_r$ and $\mathrm{GPT2LM}_r$ trained with top-20 candidates; Table 15 shows the results.
+
+| Task | Data | Seg. $k_s$ | Parse 1st stage | Parse 2nd stage | Parse $k_p$ | # of data |
| --- | --- | --- | --- | --- | --- | --- |
| (a) | Training | 20 | - | - | - | 140924 |
| (a) | Prediction | 5 | - | - | - | - |
| (b) | Training w/ span or nuclearity | - | 20 | 1 | 20 | 60742 |
| (b) | Training w/ relation or all | - | 3 | 7 | 20 | 95004 |
| (b) | Prediction | - | 5 | 5 | 5 | - |
| (c) | Prediction | 5 | 5 | 5 | 5 | - |
+
+In task (c), as in task (a), we tuned $k_{s} \in \{5, 10, 20\}$ for predicting the discourse segmentation by using the $\mathrm{LMGC}_e$ and $\mathrm{GPT2LM}_e$ trained with top-20 candidates for task (a); Table 16 shows the results. We utilized $\mathrm{LMGC}_e$ to select the best segmentation from the top-5 segmentations for the following discourse parsing stage. Then, as in task (b), we tuned $k_{p} \in \{5, 10, 20\}$ for predicting the discourse parsing by using the $\mathrm{LMGC}_r$ and $\mathrm{GPT2LM}_r$ trained with top-20 candidates for task (b); Table 17 shows the results.
+
+In tasks (b) and (c), $\mathrm{LMGC}_s$ and $\mathrm{Enhance}_s$ cannot distinguish candidates with the same span labels but different nuclearity or relation labels, and $\mathrm{LMGC}_u$ and $\mathrm{Enhance}_u$ cannot distinguish candidates with the same nuclearity labels but different relation labels. In such cases, the indistinguishable parses are ranked by the base parser. In addition, in task (b), for training data with span or nuclearity labels, we used beam sizes of 20 and 1 in the first and second stages of the base parser, respectively.
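The candidate-selection step described above (score each candidate with a language model, keep the top-$k$, and fall back to the base model's ranking for indistinguishable candidates) can be sketched as follows. This is a minimal illustration: `token_count` is a toy stand-in for the GPT-2 perplexity scorer, not the paper's implementation.

```python
def rerank_candidates(candidates, lm_score, k):
    """Keep the top-k candidates under a language-model score (lower is
    better, e.g. perplexity). candidates is a list of (candidate, base_rank)
    pairs; base_rank is the order assigned by the base segmenter/parser and
    breaks score ties, mirroring the fallback to the base model's ranking."""
    scored = sorted((lm_score(c), rank, c) for c, rank in candidates)
    return [c for _, _, c in scored[:k]]

# Toy scorer, purely for illustration: fewer tokens -> lower "perplexity".
token_count = lambda cand: len(cand.split())

top2 = rerank_candidates([("a b c", 0), ("a b", 1), ("a", 2)], token_count, 2)
```

In practice `lm_score` would compute the perplexity of a linearized segmentation or parse under the fine-tuned language model.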
+
+Table 11: Settings of the top candidates for the different tasks. The Prediction rows denote the validation and test datasets.
+
+| Model | $k_s$ for training | Precision | Recall | F1 |
+| --- | --- | --- | --- | --- |
+| $\mathrm{LMGC}_e$ | 0 | 87.76 | 95.72 | 91.57 |
+| | 10 | 97.67 | 97.73 | 97.70 |
+| | 20 | 97.99 | 97.86 | 97.92 |
+| $\mathrm{GPT2LM}_e$ | 0 | 81.72 | 96.18 | 88.36 |
+| | 10 | 96.67 | 96.05 | 96.36 |
+| | 20 | 96.93 | 96.05 | 96.48 |
+
+Table 12: Results of tuning $k_{s}$ for training in task (a). The best score in each metric among different $k_{s}$ for training is indicated in bold.
+
+| Model | $k_s$ for prediction | Precision | Recall | F1 |
+| --- | --- | --- | --- | --- |
+| Oracle | 5 | 99.94 | 99.68 | 99.81 |
+| | 10 | 99.94 | 99.68 | 99.81 |
+| | 20 | 99.94 | 99.68 | 99.81 |
+| $\mathrm{LMGC}_e$ | 5 | 97.99 | 97.86 | 97.92 |
+| | 10 | 97.47 | 97.54 | 97.51 |
+| | 20 | 97.41 | 97.60 | 97.51 |
+| $\mathrm{GPT2LM}_e$ | 5 | 96.93 | 96.05 | 96.48 |
+| | 10 | 96.47 | 95.59 | 96.03 |
+| | 20 | 95.76 | 95.14 | 95.45 |
+
+Table 13: Results of tuning $k_{s}$ for prediction in task (a). The best score in each metric among different $k_{s}$ for prediction is indicated in bold.
+
+| Model | $k_p$ for training | Span | Nuclearity | Relation |
+| --- | --- | --- | --- | --- |
+| $\mathrm{LMGC}_r$ | 0 | 97.25 | 92.21 | 83.37 |
+| | 10 | 97.46 | 92.71 | 83.23 |
+| | 20 | 97.50 | 93.02 | 83.44 |
+| $\mathrm{GPT2LM}_r$ | 0 | 97.36 | 92.07 | 79.11 |
+| | 10 | 96.93 | 90.80 | 80.76 |
+| | 20 | 96.79 | 90.66 | 80.94 |
+
+Table 14: Results of tuning $k_{p}$ for training in task (b). The best score in each metric among different $k_{p}$ for training is indicated in bold.
+
+| Model | $k_p$ for prediction | Span | Nuclearity | Relation |
+| --- | --- | --- | --- | --- |
+| Oracle | 5 | 98.66 | 96.41 | 92.11 |
+| | 10 | 99.30 | 98.03 | 94.43 |
+| | 20 | 99.47 | 98.48 | 95.42 |
+| $\mathrm{LMGC}_r$ | 5 | 97.50 | 93.02 | 83.44 |
+| | 10 | 97.50 | 92.46 | 83.30 |
+| | 20 | 97.29 | 92.25 | 83.30 |
+| $\mathrm{GPT2LM}_r$ | 5 | 96.79 | 90.66 | 80.94 |
+| | 10 | 94.26 | 81.08 | 70.82 |
+| | 20 | 93.27 | 77.20 | 66.67 |
+
+Table 15: Results of tuning $k_{p}$ for prediction in task (b). The best score in each metric among different $k_{p}$ for prediction is indicated in bold.
+
+| Model | $k_s$ for prediction | Precision | Recall | F1 |
+| --- | --- | --- | --- | --- |
+| Oracle | 5 | 99.93 | 99.65 | 99.79 |
+| | 10 | 99.93 | 99.65 | 99.79 |
+| | 20 | 99.93 | 99.65 | 99.79 |
+| $\mathrm{LMGC}_e$ | 5 | 97.96 | 97.74 | 97.85 |
+| | 10 | 97.32 | 97.39 | 97.36 |
+| | 20 | 97.33 | 97.53 | 97.43 |
+| $\mathrm{GPT2LM}_e$ | 5 | 96.94 | 95.91 | 96.42 |
+| | 10 | 96.45 | 95.63 | 96.04 |
+| | 20 | 95.75 | 95.35 | 95.55 |
+
+Table 16: Results of tuning $k_{s}$ for prediction in task (c). The best score in each metric among different $k_{s}$ for prediction is indicated in bold.
+
+| Model | $k_p$ for prediction | Span | Nuclearity | Relation |
+| --- | --- | --- | --- | --- |
+| Oracle | 5 | 95.05 | 92.95 | 89.02 |
+| | 10 | 95.93 | 94.73 | 91.25 |
+| | 20 | 96.21 | 95.36 | 92.45 |
+| $\mathrm{LMGC}_r$ | 5 | 94.39 | 90.12 | 80.88 |
+| | 10 | 94.39 | 89.45 | 80.74 |
+| | 20 | 94.18 | 89.24 | 80.63 |
+| $\mathrm{GPT2LM}_r$ | 5 | 93.65 | 87.80 | 78.59 |
+| | 10 | 91.18 | 78.55 | 68.99 |
+| | 20 | 90.30 | 74.96 | 65.19 |
+
+Table 17: Results of tuning $k_{p}$ for prediction in task (c). The best score in each metric among different $k_{p}$ for prediction is indicated in bold.
\ No newline at end of file
diff --git a/alanguagemodelbasedgenerativeclassifierforsentenceleveldiscourseparsing/images.zip b/alanguagemodelbasedgenerativeclassifierforsentenceleveldiscourseparsing/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..11e02d17c15969df16335cbf4aeab8da468c31d5
--- /dev/null
+++ b/alanguagemodelbasedgenerativeclassifierforsentenceleveldiscourseparsing/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fb2aa97c2aefe873cae0b5a3d32d912561c23f03fec796d1c205e7f6bbacbdf9
+size 901790
diff --git a/alanguagemodelbasedgenerativeclassifierforsentenceleveldiscourseparsing/layout.json b/alanguagemodelbasedgenerativeclassifierforsentenceleveldiscourseparsing/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..360c336be4b40d332307f936c886654251a892e8
--- /dev/null
+++ b/alanguagemodelbasedgenerativeclassifierforsentenceleveldiscourseparsing/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7e4c28ef10177f85c6a323b0e57530fdc00a81667c8f15805105f4244553ce53
+size 596554
diff --git a/alargescaledatasetforempatheticresponsegeneration/141c351b-57c6-40c9-bfb0-b324edccde09_content_list.json b/alargescaledatasetforempatheticresponsegeneration/141c351b-57c6-40c9-bfb0-b324edccde09_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..f1bb9f2085791ced3a2347b4e86d89b839eafb13
--- /dev/null
+++ b/alargescaledatasetforempatheticresponsegeneration/141c351b-57c6-40c9-bfb0-b324edccde09_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3492ff0c6e996c6fbb964db60e347d32c387b2f8ee0594b439aba43ee708f1fe
+size 83880
diff --git a/alargescaledatasetforempatheticresponsegeneration/141c351b-57c6-40c9-bfb0-b324edccde09_model.json b/alargescaledatasetforempatheticresponsegeneration/141c351b-57c6-40c9-bfb0-b324edccde09_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..8979810c4fc4ee4a7802eb6930b997094462ef36
--- /dev/null
+++ b/alargescaledatasetforempatheticresponsegeneration/141c351b-57c6-40c9-bfb0-b324edccde09_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3928903f8dcc4bb55f94dedb2ac2b086e949b34aa5328557e72251618856349b
+size 97568
diff --git a/alargescaledatasetforempatheticresponsegeneration/141c351b-57c6-40c9-bfb0-b324edccde09_origin.pdf b/alargescaledatasetforempatheticresponsegeneration/141c351b-57c6-40c9-bfb0-b324edccde09_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..ee53e83f17d25f20703a41bae0a076e86b13122e
--- /dev/null
+++ b/alargescaledatasetforempatheticresponsegeneration/141c351b-57c6-40c9-bfb0-b324edccde09_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e7a2745b7118ab56a1f5c3633a7a151a103ac5c7486f5c3d019af851a65c1c02
+size 5757545
diff --git a/alargescaledatasetforempatheticresponsegeneration/full.md b/alargescaledatasetforempatheticresponsegeneration/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..0f533e3e72927cce04d275475605f7d51182725e
--- /dev/null
+++ b/alargescaledatasetforempatheticresponsegeneration/full.md
@@ -0,0 +1,279 @@
+# A Large-Scale Dataset for Empathetic Response Generation
+
+Anuradha Welivita, Yubo Xie and Pearl Pu
+School of Computer and Communication Sciences
+École Polytechnique Fédérale de Lausanne
+Switzerland
+{kalpani.welivita,yubo.xie,pearl.pu}@epfl.ch
+
+# Abstract
+
+Recent developments in NLP show a strong trend towards refining pre-trained models with domain-specific datasets. This is especially the case for response generation, where emotion plays an important role. However, existing empathetic datasets remain small, delaying research efforts in this area, for example, the development of emotion-aware chatbots. One main technical challenge has been the cost of manually annotating dialogues with the right emotion labels. In this paper, we describe a large-scale silver dataset consisting of 1M dialogues annotated with 32 fine-grained emotions, eight empathetic response intents, and the Neutral category. To achieve this goal, we developed a novel data curation pipeline that starts with a small seed of manually annotated data and eventually scales it to a satisfactory size. We compare its quality against a state-of-the-art gold dataset using offline experiments and visual validation methods. The resulting procedure can be used to create similar datasets in the same domain as well as in other domains.
+
+# 1 Introduction
+
+Researchers are increasingly inclined towards refining pre-trained language models with domain-specific datasets to achieve certain tasks (Devlin et al., 2019; Liu et al., 2019; Rashkin et al., 2018). One such area is the development of empathetic conversational agents that can understand human emotions and respond appropriately. The aim of the empathetic response generation task is to generate syntactically correct, contextually relevant, and more importantly emotionally appropriate responses following previous dialogue turns. Such tasks require the creation and availability of large dialogue datasets, in which each utterance is annotated with the correct intents and emotions. Though
+many such datasets have been developed in the past (Busso et al., 2008; Poria et al., 2019; Li et al., 2017; Rashkin et al., 2018), due to the cost of manual labor, they are limited in size, thus insufficient to train robust conversational agents. Since collecting and manually annotating such gold standard data is expensive, replacing them with automatically annotated silver standard data has become a rising interest (Filannino and Di Bari, 2015). We show how such a large-scale silver standard dataset with sufficient quality can be curated and used to fine-tune pre-trained language models for the generation of empathetic responses.
+
+Emotions revealed in social chitchat are rather complex, with many categories of emotion to distinguish due to the subtle variations present in human emotion. For example, Sadness and Disappointment are pursued and dealt with differently in human conversations even though both are negative emotions. Also, the listener's reaction to an emotion is not always a straightforward mirroring of the speaker's emotion. Rather, it can be more neutral and convey a specific intent, as is evident from the dialogue example in Table 1.
+
+| Role | Utterance (label) |
+| --- | --- |
+| Speaker | I've been hearing some strange noises around the house at night. (Afraid) |
+| Listener | oh no! That's scary! What do you think it is? (Neutral: Acknowledging; Questioning) |
+| Speaker | I don't know, that's what's making me anxious. (Anxious) |
+| Listener | I'm sorry to hear that. (Neutral: Sympathizing) |
+
+Table 1: An example showing the listener's reactions to emotions do not always mirror the speaker's emotions.
+
+Welivita and Pu (2020) have analyzed listener responses in the EmpatheticDialogues dataset (Rashkin et al., 2018) and discovered eight listener specific empathetic response intents contained in emotional dialogues: Questioning; Agreeing; Acknowledging; Sympathizing; Encouraging; Consoling; Suggesting; and Wishing. They have annotated the EmpatheticDialogues dataset with 32 fine-grained emotions, eight empathetic response
+
+
+Figure 1: Steps for curating the EDOS dataset.
+
+intents, and the Neutral category, and discovered frequent emotion-intent exchange patterns in empathetic conversations. They observe that this type of dataset, tagged with fine-grained emotions and intents, can be used to train neural chatbots to generate empathetically appropriate responses. But for this purpose, a large-scale emotion and intent labeled dataset is even more desirable. Curating such a dataset is technically challenging since 1) annotating such a large-scale dataset requires costly human labor, and 2) given the fine granularity of the emotion and intent labels, the human labeling task is more difficult and error-prone compared to the coarser-grained Angry-Happy-Sad emotion categories. As a result, existing manually labeled emotional dialogue datasets such as IEMOCAP (Busso et al., 2008), MELD (Poria et al., 2019), and DailyDialogue (Li et al., 2017) are smaller in scale and contain only a limited set of emotions (emotions derived from basic emotion models such as Ekman's). Most importantly, existing datasets fail to distinguish between Neutral and Questioning, or any of the other eight empathetic response intents. They combine everything into a single Neutral or Other label when the utterance is not emotional. But Questioning, Agreeing, Acknowledging, Sympathizing, Encouraging, Consoling, Suggesting, and Wishing are important details in constructing empathetic dialogues. These eight response intents, which we call the plus categories, are novel in our work and contribute to the model's learning of important response patterns in the data.
+
+To fill the above gap, we curate a novel large-scale silver dialogue dataset, EDOS (Emotional Dialogues in OpenSubtitles), containing 1M emotional dialogues from movie subtitles, in which each dialogue turn is automatically annotated with 32 fine-grained emotions, eight plus categories as
+well as the Neutral category. Movie subtitles are extensively used for emotion analysis in text in earlier and recent research (Kayhani et al., 2020; Merdivan et al., 2020; Giannakopoulos et al., 2009). The Nature article "How movies mirror our mimicry" (Ball, 2011) states "screenwriters mine everyday discourse to make dialogues appear authentic" and "audiences use language devices in movies to shape their own discourse". Hence, it can be one of the major sources to train chatbots and learn emotional variations and corresponding response strategies in dialogues. To reduce the cost of human labeling and the complexity of labeling dialogues with fine-grained emotions and intents, we devised a semi-automated human computation task to collect fine-grained emotion and intent labels for a small set of movie dialogues (9K). We then followed automatic data augmentation techniques to expand the labeled data and trained a dialogue emotion classifier to automatically annotate 1M emotional dialogues.
+
+The process of curating the dataset involved several stages. First, we applied automatic turn and dialogue segmentation methods, data cleaning and removal of duplicates on movie subtitles in the OpenSubtitles (OS) corpus (Lison et al., 2019) and obtained close to 4M dialogues. Then, we applied a weak labeler (a BERT-based sentence-level classifier) trained on the EmpatheticDialogues dataset (Rashkin et al., 2018), to label utterances in OS dialogues and filtered 1M emotional dialogues (EDOS initial). Thereafter, we applied data augmentation techniques on a small set of human-annotated data and used the manually annotated and extended labels to train a strong labeler that is used to annotate dialogues in EDOS initial and obtained the final 1M EDOS dataset. We evaluated the quality of the resultant dataset by comparing it against the
+
+| Dataset | Labels | No. of dialogues | No. of utterances | Publicly available |
+| --- | --- | --- | --- | --- |
+| IEMOCAP (Busso et al., 2008) | Joy, Sadness, Anger, Frustrated, Excited, and Neutral | 151 | 7,433 | ✓ |
+| MELD (Poria et al., 2019) | Joy, Surprise, Sadness, Anger, Disgust, Fear, and Neutral | 1,433 | 13,708 | ✓ |
+| DailyDialogue (Li et al., 2017) | Joy, Surprise, Sadness, Anger, Disgust, Fear, and Neutral | 12,218 | 103,607 | ✓ |
+| EmotionLines (Hsu et al., 2018) | Joy, Surprise, Sadness, Anger, Disgust, Fear, and Neutral | 1,000 | 14,503 | ✓ |
+| EmoContext (Chatterjee et al., 2019) | Joy, Sadness, Anger, and Other | 38,421 | 115,263 | ✓ |
+| Twitter customer support (Herzig et al., 2016) | Customer emotions: Confusion; Frustration; Anger; Sadness; Happiness; Hopefulness; Disappointment; Gratitude; Politeness; and agent emotional techniques: Empathy; Gratitude; Apology; Cheerfulness | 2,413 | ≈14,078 | ✗ |
+| EmpatheticDialogues (Rashkin et al., 2018; Welivita and Pu, 2020) | 32 fine-grained emotions (positive and negative), Neutral, and 8 empathetic response intents: Questioning; Agreeing; Acknowledging; Sympathizing; Encouraging; Consoling; Suggesting; and Wishing | 24,850 | 107,220 | ✓ |
+| EDOS | 32 fine-grained emotions, 8 empathetic response intents, and Neutral | 1M | 3,488,300 | ✓ |
+
+Table 2: Comparison of emotion annotated dialogue datasets available in the literature against EDOS.
+
+EmpatheticDialogues dataset by means of offline experiments and visual validation methods. Figure 1 summarizes the process of creating EDOS. The data curation pipeline we followed substantially reduced the cost of human labor while ensuring quality annotations.
+
+Our contributions in this paper are three-fold. 1) We curate a large-scale dialogue dataset, EDOS, containing 1M emotional dialogues labeled with 32 fine-grained emotions, eight empathetic response intents (the plus categories), and Neutral. Compared to existing dialogue datasets tagged with emotions, EDOS is significantly larger ( $\approx$ 40 times larger than EmpatheticDialogues), and contains more fine-grained emotions and empathetic response strategies. 2) We outline the complex pipeline used to derive this dataset. 3) We evaluate the quality of the dataset compared to a state-of-the-art gold standard dataset using offline experiments and visual validation methods.
+
+# 2 Literature review
+
+IEMOCAP (Busso et al., 2008), MELD (Poria et al., 2019), DailyDialogue (Li et al., 2017), EmotionLines (Hsu et al., 2018), and EmoContext (Chatterjee et al., 2019) are some existing state-of-the-art dialogue datasets with emotion labels. However, these datasets are limited in size and are labeled with only a small set of emotions without any response strategies. Table 2 shows a summary of the size and the labels in these datasets. All the datasets compared here are in the English language.
+
+Herzig et al. (2016) detected customer emotions and agent emotional techniques (e.g., Apology, Empathy) in customer support dialogues. They curated
+
+a dialogue dataset from two customer support Twitter accounts and manually annotated the customer turns with one of 9 emotions and the agent turns with one of 4 emotional techniques. But emotions expressed by customers in social media service dialogues are mainly negative (e.g. anger, frustration), and the customer service agents also respond in a restricted manner, which limits the utility of this dataset, in addition to its small size.
+
+The EmpatheticDialogues dataset (Rashkin et al., 2018) contains 25K open-domain dialogues grounded on 32 emotions. The 32 emotions range from basic emotions derived from biological responses (Ekman, 1992; Plutchik, 1984) to larger sets of subtle emotions derived from contextual situations (Skerry and Saxe, 2015). Welivita and Pu (2020) manually analyzed a subset of the listener turns in EmpatheticDialogues and identified eight listener-specific response intents. They developed a sentence-level weak labeler with which they annotated the entire dataset with 32 emotions, eight empathetic response intents, and the Neutral category. However, due to the limited size of EmpatheticDialogues, it is difficult to use for data-intensive applications. To address the above limitations, we curate EDOS, containing 1M movie dialogues. We label each dialogue turn with 32 emotions, eight empathetic response intents, and Neutral using our own dialogue emotion and intent classifier. Table 2 compares EDOS to state-of-the-art emotion annotated dialogue datasets.
+
+# 3 Methodology
+
+This section describes the dialogue selection process, the design of the human annotation task,
+
+the data augmentation techniques used to expand human-labeled dialogues, and the development of a strong labeler to annotate the dataset.
+
+# 3.1 Dialogue curation from movie subtitles
+
+The OpenSubtitles 2018 corpus consists of 3.7M movie and TV subtitles. It comprises 3.4B sentences and 22.2B tokens. It is an excellent source to learn emotional variations in dialogue and corresponding response mechanisms. But due to the absence of speaker markers, movie subtitles do not contain an explicit dialogue turn structure (who speaks what) or specific indicators of where one dialogue ends and the next begins. To overcome the first issue, we reproduced the work by Lison and Meena (2016) to build an SVM-based classifier that determines if two consecutive sentences are part of the same dialogue turn. Our classifier achieved a segmentation accuracy of $76.69\%$ , which is close to the accuracy of $78\%$ that the authors claim. The features that gave the best turn-segmentation accuracy are: 1) unigram and bigram features of adjacent sentences after lemmatization; 2) first and final tokens of adjacent sentences; 3) first and final bigrams of adjacent sentences; 4) whether the two sentences belong to the same subtitle block or not (boolean); 5) genre of the movie (Drama, Crime, Musical, etc.); 6) sentence density of the subtitles file (no. of sentences/subtitle duration); and 7) quadratic combinations of the above features with themselves and each other.
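Feature sets 1–4 above can be sketched as a simple feature map; the feature names below are illustrative assumptions, not the authors' exact scheme, and lemmatization is assumed to happen upstream. In practice such a dict would be vectorized and fed to a linear SVM.

```python
def turn_segmentation_features(prev_sent, curr_sent, same_block, genre, density):
    """Build a sparse feature dict for a pair of adjacent sentences, used to
    decide whether they belong to the same dialogue turn.

    prev_sent/curr_sent: lists of lowercased, lemmatized tokens."""
    feats = {}
    for name, toks in (("prev", prev_sent), ("curr", curr_sent)):
        for tok in toks:                       # unigrams
            feats[f"{name}_uni={tok}"] = 1.0
        for a, b in zip(toks, toks[1:]):       # bigrams
            feats[f"{name}_bi={a}_{b}"] = 1.0
        if toks:                               # first and final tokens
            feats[f"{name}_first={toks[0]}"] = 1.0
            feats[f"{name}_final={toks[-1]}"] = 1.0
    feats["same_block"] = 1.0 if same_block else 0.0
    feats[f"genre={genre}"] = 1.0
    feats["density"] = density
    return feats
```

The first and final bigrams (feature set 3) and the quadratic feature combinations (feature set 7) are omitted here for brevity; the latter is what an SVM with a degree-2 polynomial kernel computes implicitly.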
+
+After performing turn segmentation on the OpenSubtitles corpus, we divided the turns into separate dialogues based on a simple heuristic. If the difference between the end time of the previous turn and the start time of the current turn is more than 5 seconds, we take these two turns as belonging to two different dialogues. An exception occurs if this timestamp information is missing in at least one of the turns. In this case, we assume that the two turns appear in the same subtitle block and consider them as belonging to the same dialogue. This way, we formed 9M dialogues from the OpenSubtitles corpus altogether. The choice of 5 seconds to separate dialogues is explained in Appendix C.
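A minimal sketch of this heuristic, assuming turns arrive as (text, start, end) triples with timestamps in seconds (None when the subtitle block carries no timing information):

```python
MAX_GAP_SECONDS = 5.0

def split_into_dialogues(turns):
    """Group consecutive turns into dialogues: start a new dialogue when the
    gap between the previous turn's end time and the current turn's start
    time exceeds 5 seconds; missing timestamps keep the turns together."""
    dialogues = []
    current = []
    prev_end = None
    for text, start, end in turns:
        gap_known = prev_end is not None and start is not None
        if current and gap_known and start - prev_end > MAX_GAP_SECONDS:
            dialogues.append(current)
            current = []
        current.append(text)
        prev_end = end
    if current:
        dialogues.append(current)
    return dialogues
```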
+
+To further clean the dialogues, we removed character names, the repetitive dialogue turns, turns that start with "previous on..." (monologue at the beginning of TV episodes), turns with character length less than 2 or greater than 100, turns with
+an alphabetic proportion less than $60\%$ , and turns with a lot of repetitive tokens. When a dialogue turn was removed, all the turns following that turn were also removed from the dialogue to maintain consistency. After that, all the dialogues left with only one turn were removed from the corpus. We removed dialogues from movies of the genre 'Documentary' since they do not correspond to actual dialogues. This resulted in a cleaned OS dialogue dataset consisting of 4M dialogues.
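The per-turn cleaning rules above can be sketched as follows. The repetitive-token test is a simplified stand-in for whichever exact criterion the authors used, and character-name removal and the genre filter, which operate on subtitle metadata, are omitted.

```python
import re

def keep_turn(turn):
    """Apply the per-turn filters: drop 'previous on...' monologues, turns
    shorter than 2 or longer than 100 characters, turns with an alphabetic
    proportion below 60%, and turns dominated by one repeated token."""
    text = turn.strip()
    if text.lower().startswith("previous on"):
        return False
    if not (2 <= len(text) <= 100):
        return False
    letters = sum(ch.isalpha() for ch in text)
    if letters / max(len(text), 1) < 0.60:
        return False
    tokens = re.findall(r"\w+", text.lower())
    if tokens and max(tokens.count(t) for t in set(tokens)) > len(tokens) // 2:
        return False
    return True

def clean_dialogue(turns):
    """Truncate the dialogue at the first removed turn (to keep the context
    consistent) and drop dialogues left with a single turn."""
    kept = []
    for t in turns:
        if not keep_turn(t):
            break
        kept.append(t)
    return kept if len(kept) > 1 else None
```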
+
+To filter out dialogues containing emotional statements and empathetic responses from the cleaned OS dialogues dataset, we employed a weak labeler (a BERT transformer-based sentence-level classifier) trained on 25K situation descriptions from EmpatheticDialogues (Rashkin et al., 2018) tagged with 32 emotion classes, and 7K listener utterances tagged with eight empathetic response intents and the Neutral category (Welivita and Pu, 2020). The classifier had a high top-1 classification accuracy of $65.88\%$ . We call it a weak labeler since it predicts emotion or intent only at the sentence level and is trained on a dataset other than OS. We filtered the top 1M dialogues having the highest label confidence as predicted by this classifier to form the 1M EDOS (initial) dataset. The statistics of the EDOS dataset are given in Table 3. More detailed statistics, including the number of dialogues per emotion, are included in Appendix D.
+
+| Criteria | Statistics |
+| --- | --- |
+| Total no. of dialogues | 1,000,000 |
+| Total no. of turns | 2,829,426 |
+| Total no. of tokens | 39,469,825 |
+| Avg. no. of turns per dialogue | 2.83 |
+| Avg. no. of tokens per dialogue | 39.47 |
+| Avg. no. of tokens per turn | 13.95 |
+
+Table 3: Statistics of the EDOS dataset.
+
+# 3.2 Human computation
+
+To train a dialogue emotion classifier that can identify both fine-grained emotions and empathetic response intents, we devised an Amazon Mechanical Turk (AMT) experiment to collect an initial set of ground truth labels for OS dialogues. But annotating dialogue turns with one of 41 labels is a daunting task. To make the task less exhaustive, we devised a semi-automated approach using our weak labeler. By applying the weak labeler on each turn of the cleaned OS dialogue dataset, we selected the turns having prediction confidence $\geq 0.9$ along with their dialogue history. Next, we ranked these dialogues according to their readability and selected the most readable dialogues from each
+class to be labeled. This is to reduce the time spent by the workers in having to read long and complicated dialogues. The steps followed in computing dialogues' readability are included in Appendix A. Workers had to select a label from the top-3 predictions made by the weak labeler. If none of the top-3 predictions matched, they could manually specify the correct class. The main purpose of incorporating a weak labeler here was to make the task less daunting for the crowd worker. Otherwise, having to choose a label out of 41 labels may lead to even worse results due to the complicated nature of the task. The risk of reduced data reliability is avoided by taking only the labels with the majority vote. The AMT task's user interface design is included in Appendix B.
+
+After ranking the dialogues according to readability, we selected the top 250 dialogues in each category for the AMT task. We bundled 15 dialogues in a HIT with 5 quiz questions that served as checkpoints to evaluate the crowd workers' quality. Situation descriptions from the EmpatheticDialogues dataset for which we already knew the emotion labels were used to formulate the quiz questions. Finally, we retained the dialogues for which at least 2 of the 3 workers agreed on the label, which resulted in 8,913 dialogues altogether. Table 4 shows the results of the AMT task.
+
+| Description | Statistics |
+| --- | --- |
+| Total no. of dialogues | 10,250 |
+| # dialogues labeled with majority vote | 8,913 (86.96%) |
+| Inter-annotator agreement (Fleiss’ Kappa) | 0.46 (moderate agreement) |
+| % of times workers got 3/5 quiz questions correct | 77.75% |
+| # dialogues in which the workers manually specified the label | 425 |
+
+Table 4: AMT task results.
+
+# 3.3 Data augmentation and annotation
+
+To scale up the training data obtained from the AMT task, we utilized a distant learning technique based on dialogue embeddings (Reimers and Gurevych, 2019) and self-labeling (Triguero et al., 2015), a semi-supervised learning technique. The first approach uses Sentence-BERT (SBERT), proposed by Reimers and Gurevych (2019), which uses siamese and triplet network structures to derive semantically meaningful sentence embeddings that can be compared using cosine similarity. Using this approach, we retrieved dialogues semantically similar to those annotated
+by crowd workers and tagged them with the same class label. Among the several models the authors have proposed, we used the roberta-base-nli-stsb-mean-tokens model, fine-tuned on the NLI (Bowman et al., 2015) and STS benchmark (STSb) (Cer et al., 2017) datasets, since it reported a high Spearman's rank correlation of $84.79 \pm 0.38$ between the cosine similarity of the sentence embeddings and the gold labels on the STS benchmark test set, outperforming the existing state of the art. It is also more efficient to use than roberta-large. Before proceeding, we left out $20\%$ of the crowd-annotated dialogues, balanced across all class labels, as test data. Then, we followed these steps to extend the rest of the dialogues using SBERT.
+
+1) Using the SBERT model, we first computed dialogue-turn embeddings (each a 768-dimensional vector) for all the turns $(\approx 19\mathrm{M})$ in the cleaned OS dataset. 2) Then, we calculated dialogue embeddings for the human-annotated dialogues and for unlabeled dialogues from the cleaned OS dialogues dataset. For this, we applied a decaying weight starting from the last turn and took the weighted average of the turn embeddings of each dialogue. We used half-decaying weights, i.e., for a dialogue with turn embeddings $v_{1}, v_{2}$ , and $v_{3}$ , the final dialogue embedding is $(4/7)v_{3} + (2/7)v_{2} + (1/7)v_{1}$ . 3) Next, we calculated the cosine similarity between annotated and unlabeled dialogue embeddings and ranked the results. 4) Finally, we applied a similarity threshold, obtained all the unlabeled dialogues whose cosine similarity exceeds this threshold, and tagged them with the same crowd-annotated class label. Here, we used a threshold of 0.92 after manually inspecting a random subset of the results obtained for a range of thresholds (examples from this stage are given in Appendix C).
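Steps 2–4 above can be sketched with NumPy. The half-decaying weights reproduce the $(1/7, 2/7, 4/7)$ example, and 0.92 is the threshold reported above; assigning the label of the single most similar annotated dialogue is an assumption of this sketch.

```python
import numpy as np

def dialogue_embedding(turn_embeddings):
    """Weighted average of SBERT turn embeddings with half-decaying weights
    from the last turn backwards: for turns v1..vn, turn i gets weight
    2**(i-1) / (2**n - 1), e.g. (1/7, 2/7, 4/7) for n = 3."""
    n = len(turn_embeddings)
    weights = np.array([2.0 ** i for i in range(n)])
    weights /= weights.sum()  # sum of 2**0 .. 2**(n-1) is 2**n - 1
    return np.average(np.asarray(turn_embeddings), axis=0, weights=weights)

def propagate_labels(labeled_embs, labels, unlabeled_embs, threshold=0.92):
    """Tag each unlabeled dialogue with the label of its most similar
    annotated dialogue if their cosine similarity exceeds the threshold."""
    def normalize(m):
        m = np.asarray(m, dtype=float)
        return m / np.linalg.norm(m, axis=1, keepdims=True)
    sims = normalize(unlabeled_embs) @ normalize(labeled_embs).T
    best = sims.argmax(axis=1)
    return [labels[j] if sims[i, j] > threshold else None
            for i, j in enumerate(best)]
```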
+
+We extended the original crowd-annotated dialogue dataset by 3,196 more dialogues with distantly annotated class labels using the above method. Thereafter, using the crowd-annotated and extended labels, we trained an initial classifier, which we used to annotate the rest of the dialogues, adding to our dataset those labels with annotation confidence over 0.9. This method is termed self-labeling (Triguero et al., 2015), a semi-supervised learning technique that can be used to grow labeled data. With this, we were able to extend the labeled data by 4,100 more dialogues. Next, we again
+applied SBERT over the self-labeled data and extended it by 2,118 more dialogues. Finally, we had $\approx 14K$ labeled dialogues altogether. We used this data to train a final dialogue emotion classifier to annotate the rest of the unlabeled data. This resulted in a classifier with precision $64.11\%$ , recall $64.59\%$ , macro F1-score $63.86\%$ , and accuracy $65.00\%$ , which is comparable with state-of-the-art dialogue emotion classifiers (as shown in Table 5). The design of the dialogue emotion classifier we used to annotate the dataset is explained in Section 3.3.1.
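The self-labeling loop used above (train on the current labeled pool, adopt predictions whose confidence reaches 0.9, and grow the pool) can be sketched generically. `train_fn` stands in for training the classifier and is an assumption of this sketch, not the paper's BERT model.

```python
def self_label(labeled, unlabeled, train_fn, confidence=0.9, rounds=1):
    """Semi-supervised self-labeling (Triguero et al., 2015), sketched.

    labeled: list of (example, label) pairs; unlabeled: list of examples.
    train_fn: trains on labeled pairs and returns a predictor mapping an
    example to a (label, confidence) pair."""
    labeled = list(labeled)
    unlabeled = list(unlabeled)
    for _ in range(rounds):
        predict = train_fn(labeled)
        still_unlabeled = []
        for x in unlabeled:
            label, conf = predict(x)
            if conf >= confidence:
                labeled.append((x, label))   # adopt high-confidence labels
            else:
                still_unlabeled.append(x)
        unlabeled = still_unlabeled
    return labeled, unlabeled

def toy_train(pairs):
    """Toy trainer for illustration: nearest labeled number wins; the
    prediction is confident only when the distance is at most 1."""
    def predict(x):
        label, dist = min(((lab, abs(x - ex)) for ex, lab in pairs),
                          key=lambda t: t[1])
        return label, 1.0 if dist <= 1 else 0.0
    return predict
```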
+
+# 3.3.1 Design of the dialogue emotion classifier
+
+Our dialogue emotion classifier consists of a representation network that uses the BERT architecture, an attention layer that aggregates the hidden states at each time step, a hidden layer, and a softmax layer. We used the BERT-base architecture with 12 layers, 768 dimensions, 12 heads, and 110M parameters as the representation network, initialized with weights from RoBERTa (Liu et al., 2019). We fed a dialogue turn along with the preceding context, in reverse order, as input to the representation network. To give more importance to the dialogue turn for which the prediction is made and the turns that immediately precede it, we multiplied the token embeddings belonging to each turn by a decreasing weight factor. Each token's input representation is constructed by summing its token embedding, multiplied by the weighting factor, and its position embedding. More details, including the hyper-parameters used, are included in Appendix C.
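The weighted input construction can be sketched as follows; the 0.5 decay factor is an assumption for illustration, since the text only specifies a decreasing per-turn weight.

```python
import numpy as np

def build_input(turn_token_embs, position_embs, decay=0.5):
    """Build the weighted input representation described above. Turns are
    given most recent first (the reverse feeding order); every token
    embedding in the t-th preceding turn is scaled by decay**t, then the
    corresponding position embedding is added.

    turn_token_embs: list of (n_tokens_i, dim) arrays, most recent turn first.
    position_embs: (max_len, dim) array of position embeddings."""
    scaled = [decay ** t * np.asarray(tok_embs)
              for t, tok_embs in enumerate(turn_token_embs)]
    tokens = np.concatenate(scaled, axis=0)
    return tokens + np.asarray(position_embs)[: len(tokens)]
```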
+
+# 4 EDOS quality analysis and comparison with the state-of-the-art gold standard
+
+Table 6 shows some example dialogues taken from the EDOS dataset along with annotations and confidence scores. The examples show that even for less confident predictions, the label quite accurately describes the emotion or intent of the corresponding dialogue turn.
+
+We also conducted a qualitative comparison of the annotations in the EDOS dataset with EmpatheticDialogues (Rashkin et al., 2018; Welivita and Pu, 2020), a state-of-the-art gold standard dataset for empathetic conversations. Figure 2 compares the distributions of emotions and intents in the two datasets. It is observed that in both datasets, intent categories take prominence over individual emotion classes. This is in line with the observations of
+Welivita and Pu (2020), who notice that responses to emotions in dialogue mostly employ one or more intents from the taxonomy of empathetic intents, rather than similar or opposite emotions. In particular, the intent Questioning accounts for the highest percentage of the annotations in both EmpatheticDialogues and EDOS. We also computed the KL-divergence $(\geq 0)$ of the emotion and intent distribution of EDOS with respect to that of EmpatheticDialogues, which measures how one probability distribution differs from a second, reference probability distribution (Kullback and Leibler, 1951). This resulted in a KL-divergence value of 0.2447, which indicates a considerable similarity between the two distributions (the lower the KL-divergence, the more similar the distributions are).
+
+Figure 3 compares the emotion-intent flow patterns in EmpatheticDialogues and EDOS. In the visualization corresponding to EmpatheticDialogues, the $1^{\text{st}}$ and $3^{\text{rd}}$ dialogue turns correspond to the speaker and the $2^{\text{nd}}$ and $4^{\text{th}}$ dialogue turns correspond to the listener. In EDOS, however, we cannot distinguish speaker and listener turns due to the absence of speaker annotations. Even so, we can observe that some conversational dynamics present in EmpatheticDialogues are preserved in EDOS. For example, in both datasets, the speaker mostly starts the conversation with an emotional statement, and the response in the subsequent turn tends to be of the intent Questioning. In both datasets, the intents Agreeing and Acknowledging follow the emotions seen in the first turn, irrespective of whether those emotions are positive or negative. As the dialogues proceed, it can be seen in both datasets that the emotions de-escalate as more empathetic response intents emerge.
+
+# 5 Experimental baselines
+
+We propose some experimental baselines using the curated dataset for empathetic response generation and compare their performance against a dialogue model trained on the EmpatheticDialogues dataset. For this purpose, we trained a transformer (Vaswani et al., 2017) model under various training settings. Specifically, the following datasets were involved: 1) OS dialogues (as described in Section 3.1, these dialogues were obtained by segmenting the movie subtitles; note that for the purpose of pre-training, we excluded the EDOS dialogues, resulting in around 3M dialogues); 2) EDOS (1M dialogues); and 3) EmpatheticDialogues (25K dialogues). All
+
+
+(a) EmpatheticDialogues
+
+
+(b) EDOS
+
+
+Figure 2: Comparison of distribution of emotions and intents in the EmpatheticDialogues and EDOS datasets.
+
+
+(a) EmpatheticDialogues dataset
+(b) EDOS dataset
+Figure 3: Comparison of emotion-intent flow patterns in the EmpatheticDialogues and EDOS datasets. For simplicity, only the first four dialogue turns are visualized.
+
+| Classifier | Dataset | No. of labels | F1 | Acc. |
+| --- | --- | --- | --- | --- |
+| AR (Khosla, 2018) | EmotionLines dataset (Hsu et al., 2018) | 4 Emotion labels | - | Friends: 62.50; EmotionPush: 62.48 |
+| CMN (Hazarika et al., 2018b) | IEMOCAP dataset (Busso et al., 2008) | 6 Emotion labels | 56.13 | 56.56 |
+| ICON (Hazarika et al., 2018a) | IEMOCAP dataset (Busso et al., 2008) | 6 Emotion labels | 57.90 | 58.30 |
+| IAAN (Yeh et al., 2019) | IEMOCAP dataset (Busso et al., 2008) | 6 Emotion labels | - | 64.70 |
+| Dialog-RNN (Majumder et al., 2019) | IEMOCAP (Busso et al., 2008) and AVEC (Schuller et al., 2012) datasets | IEMOCAP: 4 Emotion labels; AVEC: 4 dimensional emotion labels | 62.75 | 63.40 |
+| Dialog-GCN (Ghosal et al., 2019) | IEMOCAP (Busso et al., 2008), AVEC (Schuller et al., 2012), and MELD (Poria et al., 2019) datasets | IEMOCAP: 4 Emotion labels; AVEC: 4 dimensional emotion labels; MELD: 7 Emotion labels | 64.18 | 65.25 |
+| Ours | OS dialogue dataset | 32 Emotions + 8 Intents + Neutral | 63.86 | 65.00 |
+
+Table 5: Comparison of the performance of the dialogue emotion classifier used for annotation with performance of the state-of-the-art dialogue emotion classifiers. F1-score reported here is the macro-F1 score.
+
+| Turn | (Label, confidence) Utterance |
+| --- | --- |
+| Dialogue #1 | |
+| Turn 1 | (Excited, 0.98) The concert will start soon. |
+| Turn 2 | (Questioning, 0.01) Are you excited? |
+| Turn 3 | (Proud, 0.99) I am. Because one of my friends made his efforts to make the concert happen. He wanted to fulfill a promise he made to his first love. |
+| Turn 4 | (Sentimental, 0.99) I like their story very much. I want to dedicate this concert to everyone who has truly loved someone. |
+| Dialogue #2 | |
+| Turn 1 | (Apprehensive, 0.89) Staying here might not be safe. |
+| Turn 2 | (Questioning, 0.41) Take the earliest flight tomorrow? |
+| Turn 3 | (Caring, 0.94) Take Josie to mother. My home is where you are. |
+| Turn 4 | (Faithful, 0.86) We're not leaving. |
+
+Table 6: Example dialogues from the EDOS dataset along with annotations and confidence scores.
+
+three datasets were split into training $(80\%)$, validation $(10\%)$, and test $(10\%)$ sets. Based on the training strategies, we have the following models: 1) Pre-trained—to take advantage of transfer learning, we pre-trained the transformer model on the 3M OS dialogues; the large scale of this training set is expected to provide a good starting point for fine-tuning; 2) Fine-tuned—we took the pre-trained transformer and fine-tuned it on the EDOS and EmpatheticDialogues datasets, respectively. All the models have 4 layers, 6 attention heads, and a hidden size of 300, and were trained until the minimum validation loss was reached. For inference, we used beam search with beam size 32 and 4-gram repeat blocking.
+
+To evaluate the performance of the dialogue models, we adopted the following metrics: 1) perplexity; 2) distinct-1 and -2 metrics (Li et al., 2016), which measure the diversity of the generated responses; 3) sentence embedding similarity—we used SBERT (Reimers and Gurevych, 2019) to obtain an embedding for the generated response as well as the ground-truth and then calculated the cosine similarity between the two embeddings. The performance of the dialogue models was tested in held-out and zero-shot settings. The evaluation results are shown in Table 7.
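The Distinct-1 and Distinct-2 metrics (Li et al., 2016) are the ratio of unique n-grams to total n-grams across the generated responses; a minimal sketch of the computation:

```python
def distinct_n(responses, n):
    """Distinct-n: unique n-grams divided by total n-grams over all responses.

    responses: list of whitespace-tokenizable generated strings.
    """
    ngrams = []
    for resp in responses:
        tokens = resp.split()
        # Collect every contiguous n-gram from this response.
        ngrams.extend(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return len(set(ngrams)) / len(ngrams) if ngrams else 0.0
```

A fully repetitive response like "a b a b" gives Distinct-1 of 0.5, while a response with no repeated bigrams gives Distinct-2 of 1.0.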
+
+In the held-out setting, where the model is evaluated on data from the same domain as the training data, all three models achieved good performance, and the perplexity values are much lower than in the zero-shot setting, where the model is evaluated on data from a different domain. We also observe that the models trained on the OS and EDOS dialogues achieve much higher Distinct-1 and -2 scores, even in the zero-shot setting when evaluated on EmpatheticDialogues. This indicates that training on our curated OpenSubtitles dialogues gives the model more diversity in its generated responses, likely because these larger datasets contain many diverse responses. Of the two, the model fine-tuned on EDOS performs best in terms of diversity, which reflects the quality of the dialogues filtered from OpenSubtitles.
+
+# 6 Discussion and conclusion
+
+In this work, we curated a large-scale dialogue dataset, EDOS, comprising 1M emotional dialogues from movie subtitles. This dataset is significantly larger and contains more fine-grained emotion categories and empathetic response intents than existing emotional dialogue datasets. To facilitate annotation, we utilized data augmentation techniques to extend a small set of manually annotated data and trained a dialogue emotion classifier with accuracy comparable to the state-of-the-art. The data augmentation and automatic annotation procedure we employed significantly reduced manual annotation cost and time.
+
+Obtaining a large dataset is important only if its quality can be assured. The qualitative comparison between EDOS and the state-of-the-art EmpatheticDialogues dataset by means of visual validation was one way to confirm this. The results of the comparison confirmed that most of the conversational dynamics present in EmpatheticDialogues were also observed in EDOS. We also proposed experimental baselines by training a transformer model for empathetic response generation on the OS, EDOS, and EmpatheticDialogues datasets and tested them in held-out and zero-shot settings. The results showed that the model fine-tuned on EDOS scored best on the diversity metrics. This dataset can be readily utilized to develop empathetic conversational agents and for fine-grained emotion analysis in dialogues. The pipeline we present can be reused to create similar large-scale datasets in the same or in different domains.
+
+| Model | PPL (OS) | D1 (OS) | D2 (OS) | SES (OS) | PPL (EDOS) | D1 (EDOS) | D2 (EDOS) | SES (EDOS) | PPL (ED) | D1 (ED) | D2 (ED) | SES (ED) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Pre-trained (OS) | 24.8 | .046 | .159 | .172 | 37.8 | .046 | .154 | .126 | 564.6 | .044 | .167 | .178 |
+| Fine-tuned (EDOS) | 26.9 | .044 | .139 | .162 | 32.3 | .056 | .165 | .137 | 452.6 | .031 | .107 | .176 |
+| Fine-tuned (ED) | 88.9 | .030 | .109 | .174 | 140.8 | .028 | .096 | .130 | 19.3 | .026 | .091 | .316 |
+
+Table 7: Dialogue model evaluation results on the OS, EDOS, and EmpatheticDialogues (ED) test sets. Here PPL denotes perplexity, D1 and D2 denote Distinct-1 and Distinct-2, and SES denotes the sentence embedding similarity. Entries where a model is evaluated on its own training domain are held-out; all other entries are zero-shot.
+
+As future work, we plan to utilize this dataset to further conduct experiments on empathetic response generation. Since it is annotated with emotions and intents, we will use it for experiments involving controllable and interpretable response generation. In particular, the emotion and intent categories present in the dataset can be utilized to condition the chatbot's response generation process, making it possible to control and interpret the generated responses. The dataset can also be used to train state-of-the-art dialogue emotion classifiers.
+
+# 7 Ethical considerations
+
+EDOS contains dialogues derived from the OpenSubtitles corpus (Lison et al., 2019), which is publicly available. It is part of OPUS (the Open Parallel corpUS), which is based on open-source products and is delivered as an open content package. The workers annotating the dataset were compensated with $0.4 per HIT, which takes 4.12 minutes on average to complete (excluding the time taken by workers who took an unusually long time to complete the task), plus a bonus of $0.1 if they answered at least 3 out of 5 quiz questions correctly. Fair compensation was determined based on the US federal minimum wage of $7.25 per hour. Since the dataset is in English, the annotators recruited from AMT were restricted to majority native English-speaking countries: the US, UK, Canada, Australia, and New Zealand. The fact that the dataset is English-only potentially perpetuates an English bias in NLP systems.
+
+Using this dataset to directly train end-to-end chatbot models involves certain risks. Though we have taken steps to remove profanity from the responses in the dataset, due to the lack of controllability and interpretability in end-to-end neural response generation models, there remains a risk of generating inappropriate or biased responses to certain emotional prompts. A recent example is Microsoft's Tay bot, which started producing unintended and offensive tweets denying the Holocaust as a result of learning from offensive content on Twitter (Lee, 2016). To mitigate this, researchers have recently focused on inducing controllability in end-to-end response generation models by jointly modeling dialogue intent selection and response generation (Wu et al., 2018; Sankar and Ravi, 2019; Hedayatnia et al., 2020; Santhanam et al., 2020; Ke et al., 2018; Lee et al., 2020). We encourage readers to look into these approaches when developing conversational agents using this dataset.
+
+Though human-like chatbots with emotion recognition and empathetic responding abilities can be beneficial in a number of situations, such as the medical domain, crisis management, customer service, and elderly care, the potential harms they pose should not be underestimated. For example, a chatbot can be used to impersonate a real human being for cybercrimes such as scamming and phishing. Users could also become emotionally attached to a bot, or even codependent, distracting themselves from relationships with humans and suffering distress if the chatbot becomes dysfunctional. Users may also reveal private and confidential information, such as health conditions and personal attributes, during such interactions, which could be misused if it falls into the wrong hands. Developers should take these risks into account when deploying such chatbots in the real world to ensure safe and ethical use.
+
+# References
+
+Philip Ball. 2011. How movies mirror our mimicry.
+Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632-642, Lisbon, Portugal. Association for Computational Linguistics.
+Carlos Busso, Murtaza Bulut, Chi-Chun Lee, Abe Kazemzadeh, Emily Mower, Samuel Kim, Jeannette N Chang, Sungbok Lee, and Shrikanth S Narayanan. 2008. Iemocap: Interactive emotional dyadic motion capture database. Language resources and evaluation, 42(4):335.
+Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez-Gazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1-14, Vancouver, Canada. Association for Computational Linguistics.
+Ankush Chatterjee, Umang Gupta, Manoj Kumar Chinnakotla, Radhakrishnan Srikanth, Michel Galley, and Puneet Agrawal. 2019. Understanding emotions in text using deep learning and big data. Computers in Human Behavior, 93:309-317.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
+Paul Ekman. 1992. An argument for basic emotions. Cognition & emotion, 6(3-4):169-200.
+Michele Filannino and Marilena Di Bari. 2015. Gold standard vs. silver standard: the case of dependency parsing for Italian. *CLiC it*, page 141.
+Theodoros Giannakopoulos, Aggelos Pikrakis, and Sergios Theodoridis. 2009. A dimensional approach to emotion recognition of speech from movies. In 2009 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 65-68. IEEE.
+Behnam Hedayatnia, Karthik Gopalakrishnan, Seokhwan Kim, Yang Liu, Mihail Eric, and Dilek Hakkani-Tur. 2020. Policy-driven neural response generation for knowledge-grounded dialog systems. In Proceedings of the 13th International Conference on Natural Language Generation, pages 412-421, Dublin, Ireland. Association for Computational Linguistics.
+
+Jonathan Herzig, Guy Feigenblat, Michal Shmueli-Scheuer, David Konopnicki, Anat Rafaeli, Daniel Altman, and David Spivak. 2016. Classifying emotions in customer support dialogues in social media. In Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 64-73.
+Chao-Chun Hsu, Sheng-Yeh Chen, Chuan-Chun Kuo, Ting-Hao Huang, and Lun-Wei Ku. 2018. Emotion-Lines: An emotion corpus of multi-party conversations. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).
+Amir Kazem Kayhani, Farid Meziane, and Raja Chiky. 2020. Movies emotional analysis using textual contents. In International Conference on Applications of Natural Language to Information Systems, pages 205-212. Springer.
+Pei Ke, Jian Guan, Minlie Huang, and Xiaoyan Zhu. 2018. Generating informative responses with controlled sentence function. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1499-1508, Melbourne, Australia. Association for Computational Linguistics.
+Solomon Kullback and Richard A Leibler. 1951. On information and sufficiency. The annals of mathematical statistics, 22(1):79-86.
+Dave Lee. 2016. Tay: Microsoft issues apology over racist chatbot fiasco.
+Hung-yi Lee, Cheng-Hao Ho, Chien-Fu Lin, Chiung-Chih Chang, Chih-Wei Lee, Yau-Shian Wang, Tsung-Yuan Hsu, and Kuan-Yu Chen. 2020. Investigation of sentiment controllable chatbot. arXiv preprint arXiv:2007.07196.
+Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In Proceedings of NAACL-HLT 2016, pages 110-119.
+Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. 2017. DailyDialog: A manually labelled multi-turn dialogue dataset. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 986-995, Taipei, Taiwan. Asian Federation of Natural Language Processing.
+Pierre Lison and Raveesh Meena. 2016. Automatic turn segmentation for movie & tv subtitles. In 2016 IEEE Spoken Language Technology Workshop (SLT), pages 245-252. IEEE.
+Pierre Lison, Jörg Tiedemann, Milen Kouylekov, et al. 2019. OpenSubtitles2018: Statistical rescoring of sentence alignments in large, noisy parallel corpora. In LREC 2018, Eleventh International Conference on Language Resources and Evaluation. European Language Resources Association (ELRA).
+
+Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.
+Erinc Merdivan, Deepika Singh, Sten Hanke, Johannes Kropf, Andreas Holzinger, and Matthieu Geist. 2020. Human annotated dialogues dataset for natural conversational agents. Applied Sciences, 10(3):762.
+Robert Plutchik. 1984. Emotions: A general psychoevolutionary theory. Approaches to emotion, 1984:197-219.
+Soujanya Poria, Devamanyu Hazarika, Navonil Majumder, Gautam Naik, Erik Cambria, and Rada Mihalcea. 2019. MELD: A multimodal multi-party dataset for emotion recognition in conversations. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 527-536, Florence, Italy. Association for Computational Linguistics.
+Hannah Rashkin, Eric Michael Smith, Margaret Li, and Y-Lan Boureau. 2018. Towards empathetic open-domain conversation models: A new benchmark and dataset. arXiv preprint arXiv:1811.00207.
+Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using Siamese BERTnetworks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982-3992, Hong Kong, China. Association for Computational Linguistics.
+Chinnadhurai Sankar and Sujith Ravi. 2019. Deep reinforcement learning for modeling chit-chat dialog with discrete attributes. In Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue, pages 1-10, Stockholm, Sweden. Association for Computational Linguistics.
+Sashank Santhanam, Zhuo Cheng, Brodie Mather, Bonnie Dorr, Archna Bhatia, Bryanna Hebenstreit, Alan Zemel, Adam Dalton, Tomek Strzalkowski, and Samira Shaikh. 2020. Learning to plan and realize separately for open-ended dialogue systems. In *Findings of the Association for Computational Linguistics: EMNLP* 2020, pages 2736-2750, Online. Association for Computational Linguistics.
+Amy E Skerry and Rebecca Saxe. 2015. Neural representations of emotion are organized around abstract event features. Current biology, 25(15):1945-1954.
+Isaac Triguero, Salvador Garcia, and Francisco Herrera. 2015. Self-labeled techniques for semi-supervised learning: taxonomy, software and empirical study. Knowledge and Information systems, 42(2):245-284.
+
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of NeurIPS 2017, pages 5998-6008.
+Anuradha Welivita and Pearl Pu. 2020. A taxonomy of empathetic response intents in human social conversations. In Proceedings of the 28th International Conference on Computational Linguistics, pages 4886-4899.
+Wei Wu, Can Xu, Yu Wu, and Zhoujun Li. 2018. Towards interpretable chit-chat: Open domain dialogue generation with dialogue acts.
+
+# A Computing the readability of OS dialogues
+
+We followed these steps to calculate the readability of the dialogues. Dialogues that scored high in readability were preferred for the crowd-annotation task, since they avoid the overhead of having to read long and complex dialogues that may exhaust the crowd-worker.
+
+1. Build a frequency vocabulary by calculating the token count for all the dialogues in the cleaned OS dataset.
+
+2. For each dialogue, aggregate the frequencies of all tokens and take the average using the following formula, in which $f_{sum}$ is the sum of the frequencies of all tokens, $n_{tokens}$ is the total number of tokens in the dialogue, and $\alpha$ is a constant (set to 87 in our case). The idea is that difficult-to-read dialogues contain less frequent words and should therefore receive a lower readability score.
+
+$$
+f = f_{sum} / (\alpha + n_{tokens})
+$$
+
+3. For each dialogue, also calculate the percentage of distinct words, $d$.
+4. Finally, compute the readability score for each dialogue by taking a weighted sum of $f$ and $d$. Experimental results showed that the combination $f + 0.04d$ gave the best results. We combine $f$ and $d$ because, if only $f$ were considered, dialogues containing many repetitive tokens could score high in readability, which is undesirable.
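The steps above can be combined into a single scoring function. This is a sketch under stated assumptions: `freq` stands for the frequency vocabulary built in step 1, the `beta` weight is the paper's 0.04, and $d$ is taken as a percentage as described in step 3.

```python
def readability(dialogue_tokens, freq, alpha=87, beta=0.04):
    """Readability score f + beta * d for one dialogue.

    dialogue_tokens: list of tokens in the dialogue.
    freq: token -> corpus-wide count (the frequency vocabulary from step 1).
    """
    # Step 2: average token frequency, smoothed by alpha.
    f_sum = sum(freq.get(tok, 0) for tok in dialogue_tokens)
    f = f_sum / (alpha + len(dialogue_tokens))
    # Step 3: percentage of distinct words.
    d = 100.0 * len(set(dialogue_tokens)) / len(dialogue_tokens)
    # Step 4: weighted combination.
    return f + beta * d
```

Dialogues built from frequent words with little repetition score highest, matching the intuition in steps 2 and 4.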
+
+# B AMT task interfaces
+
+The user interface used to collect labels from the AMT workers is shown in Figure 4.
+
+
+Figure 4: The user interface of the AMT crowd-annotation task.
+
+# C Choice of hyper-parameters and additional training details regarding the dialogue emotion classifier used for annotation
+
+The choice of 5 seconds to separate dialogues is based on a histogram of time intervals between adjacent subtitle blocks in the OpenSubtitles corpus, shown in Figure 5. As can be observed in the histogram, most time gaps fall below 3 seconds, and a clear drop in count occurs between 3 and 5 seconds. We therefore chose 5 seconds as the time interval for separating dialogues.
+
+
+Figure 5: Histogram of time intervals between adjacent subtitle blocks in the OpenSubtitles corpus.
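The 5-second segmentation rule can be sketched as follows; the tuple layout for subtitle blocks (start time, end time, text, in seconds) is an illustrative assumption, not the paper's data format.

```python
def segment_dialogues(blocks, max_gap=5.0):
    """Split subtitle blocks into dialogues wherever the silence between
    consecutive blocks exceeds max_gap seconds.

    blocks: list of (start_time, end_time, text) tuples in temporal order.
    Returns a list of dialogues, each a list of subtitle texts.
    """
    dialogues, current = [], []
    prev_end = None
    for start, end, text in blocks:
        # A gap longer than max_gap closes the current dialogue.
        if prev_end is not None and start - prev_end > max_gap:
            dialogues.append(current)
            current = []
        current.append(text)
        prev_end = end
    if current:
        dialogues.append(current)
    return dialogues
```

With the 5-second threshold from the histogram, blocks separated by short pauses stay in one dialogue while a long silence starts a new one.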
+
+The choice of a threshold of 0.92 to select dialogues similar to those already annotated was based on manually inspecting a random subset of the results obtained with a range of similarity thresholds. Table 8 shows some example dialogues discovered at this threshold.
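The threshold-based label propagation can be sketched as below. This is a simplified illustration: plain NumPy vectors stand in for the sentence embeddings actually used, and the nearest-neighbor-with-threshold policy is an assumption about how matches are assigned.

```python
import numpy as np

def propagate_labels(labeled, unlabeled, threshold=0.92):
    """Assign each unlabeled dialogue the label of its most similar labeled
    dialogue if their cosine similarity is at least `threshold`, else None.

    labeled: list of (embedding, label) pairs; unlabeled: list of embeddings.
    """
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    out = []
    for emb in unlabeled:
        # Score against every labeled dialogue and keep the best match.
        sims = [(cos(emb, lab_emb), lab) for lab_emb, lab in labeled]
        best_sim, best_lab = max(sims)
        out.append(best_lab if best_sim >= threshold else None)
    return out
```

Dialogues whose best match falls below 0.92 stay unlabeled, mirroring how only sufficiently similar dialogues were added to EDOS.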
+
+Using decreasing weights for context utterances is based on the intuition that in human dialogues, more attention is paid to the most recent utterances in the dialogue history. This idea is backed up by the time-decay functions used in neural dialogue understanding approaches (See et al., 2019). We conducted an ablation study with and without decreasing weights in the model. The unweighted model performed worse than the weighted model, yielding final F1 scores of 63.44 and 64.86, respectively.
+
+We used the same hyper-parameter settings as RoBERTa (Liu et al., 2019) when training the dialogue emotion classifier used for annotation. We used the Adam optimizer with a $\beta_{1}$ of 0.9, a $\beta_{2}$ of 0.98, an $\epsilon$ value of $1\times 10^{-6}$, and a learning rate of $2\times 10^{-5}$. A dropout of 0.1 was used on all layers and attention weights, together with a GELU activation function (Hendrycks and Gimpel, 2016). We limited the maximum number of input tokens to 100 and used a batch size of 256. All the experiments were conducted on a machine with 2×12 cores @ 2.5 GHz, 256 GB RAM, 2×240 GB SSD, and 2 NVIDIA Titan X Maxwell GPUs. Training the final emotion classifier took 546.84 seconds in total. The optimal model was selected based on the average cross-entropy loss between the ground-truth and predicted labels of the validation set.
+
+# D EDOS statistics
+
+Table 9 shows more descriptive statistics of the EDOS dataset: the number of dialogues and the number of dialogue turns per emotion and intent category. A dialogue is counted under an emotion or intent if its beginning dialogue prompt is annotated with that emotion or intent.
+
+# E Additional training details about the experimental baselines
+
+Here we summarize some parameters of the model implementation. We used the RoBERTa tokenizer to tokenize the input utterances, with a vocabulary size of 50,265. We allow a maximum of 100 tokens as input to the model. We used 4 sub-layers in the encoder and the decoder, with 6 heads in the multi-head attention. The dimension of the hidden units is 300.
+
+| Manually annotated dialogues | Dialogues discovered using similarity matching (similarity ≥ 0.92) |
+| --- | --- |
+| - That's beautiful! (Acknowledging) | - Now, let's take a look at this beautiful piece of work<br>- Oh, my God. It's beautiful.<br>- Oh. That's beautiful. |
+| - I thought the coils were closer to me.<br>- Oh, well ... It was a good one nonetheless.<br>- I'm so happy! (Joyful) | - Actually, I just wanted to say I love you. And I'm sorry if I'm a bit edgy about my book, but all that counts for me is you. You becoming my wife.<br>- That's what really matters.<br>- I'm very happy. |
+| - Hey! Don't eat at my house anymore.<br>- You're disgusting. (Disgusted) | - I thought I told you to stay the fuck away from me if you were back on that shit.<br>- You're disgusting. |
+| - Was the team mad, then?<br>- I wasn't happy!<br>- That's pretty bad. (Acknowledging) | - It's starting to hurt so bad.<br>- Really? That bad?<br>- Really bad. |
+
+Table 8: Examples of similar dialogues discovered above a cosine similarity threshold of 0.92. The last turn in each discovered dialogue was labeled with the emotion or intent of the last turn of the corresponding manually labeled dialogue.
+
+| Emotion or Intent | No. of dialogues | No. of turns |
+| --- | --- | --- |
| Prepared | 21,178 | 48,883 |
| Anticipating | 27,256 | 100,433 |
| Hopeful | 21,328 | 54,012 |
| Proud | 13,910 | 33,365 |
| Excited | 22,118 | 53,756 |
| Joyful | 6,586 | 24,282 |
| Content | 20,688 | 64,569 |
| Caring | 13,599 | 42,806 |
| Grateful | 15,416 | 42,222 |
| Trusting | 41,650 | 134,197 |
| Confident | 26,199 | 84,918 |
| Faithful | 8,095 | 25,029 |
| Impressed | 12,867 | 25,045 |
| Surprised | 16,658 | 46,022 |
| Terrified | 9,449 | 28,730 |
| Afraid | 15,964 | 49,285 |
| Apprehensive | 8,634 | 46,727 |
| Anxious | 2,376 | 8,578 |
| Embarrassed | 11,541 | 32,338 |
| Ashamed | 3,401 | 14,797 |
| Devastated | 6,245 | 17,539 |
| Sad | 23,023 | 66,262 |
| Disappointed | 5,234 | 18,298 |
| Lonely | 3,662 | 16,396 |
| Sentimental | 7,104 | 20,715 |
| Nostalgic | 7,880 | 20,461 |
| Guilty | 9,632 | 30,043 |
| Disgusted | 5,546 | 15,070 |
| Furious | 54,647 | 169,917 |
| Angry | 13,228 | 34,924 |
| Annoyed | 6,637 | 30,072 |
| Jealous | 5,766 | 20,902 |
| Agreeing | 20,173 | 96,562 |
| Acknowledging | 39,781 | 138,165 |
| Encouraging | 3,024 | 10,329 |
| Consoling | 3,785 | 17,256 |
| Sympathizing | 15,557 | 38,774 |
| Suggesting | 42,470 | 101,591 |
| Questioning | 357,255 | 841,556 |
| Wishing | 42,789 | 108,668 |
| Neutral | 7,649 | 55,932 |
| Total | 1,000,000 | 2,829,426 |
+
+Table 9: Descriptive statistics of the EDOS dataset pertaining to each emotion and intent category.
+
+The dimension of the pointwise feed-forward layers is 1200. We use a dropout rate of 0.1 and the GELU (Hendrycks and Gimpel, 2016) activation function for the hidden layers. The loss function was optimized with the Adam optimizer (Kingma and Ba, 2015) with an initial learning rate of $5 \times 10^{-5}$. For inference, we use beam search with a beam size of 32. To prevent the models from generating repetitive tokens or n-grams, we modified the beam search algorithm so that at each time step, if any branch contains a repeated 4-gram, we set that branch's log probability to negative infinity to stop it from being expanded further. All the models were trained with a batch size of 512, on machines with 4 Nvidia Titan X Pascal GPUs, 2 Intel Xeon E5-2680 v3 CPUs, and 256 GB RAM. Table 10 lists the training details as well as the validation performance for all the models.
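The repeated-4-gram test that triggers pruning of a beam branch can be sketched as follows; the surrounding beam-search bookkeeping (scores, expansion, pruning) is omitted.

```python
def has_repeated_ngram(tokens, n=4):
    """True if any n-gram occurs more than once in the token sequence.

    In the modified beam search, a branch for which this returns True has its
    log probability set to negative infinity so it is never expanded further.
    """
    seen = set()
    for i in range(len(tokens) - n + 1):
        gram = tuple(tokens[i:i + n])
        if gram in seen:
            return True
        seen.add(gram)
    return False
```

A branch that loops back to an earlier 4-gram (e.g. "a b c d a b c d") is flagged, while any sequence without repeats passes.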
+
+| Model | # Parameters | # Training Epochs | Training Time | Validation PPL |
+| --- | --- | --- | --- | --- |
+| Pre-trained (OS) | 121M | 50 | 171.00 hr | 24.51 |
+| Fine-tuned (EDOS) | 121M | 5 | 4.23 hr | 31.78 |
+| Fine-tuned (ED) | 121M | 9 | 19.50 min | 21.04 |
+
+Table 10: Training details and validation performance of each model configuration.
\ No newline at end of file
diff --git a/alargescaledatasetforempatheticresponsegeneration/images.zip b/alargescaledatasetforempatheticresponsegeneration/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..1a45189c0567f495ce8a2f76340cb334ba2b3d07
--- /dev/null
+++ b/alargescaledatasetforempatheticresponsegeneration/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:39e76f524f4e599cca46c5983902ed682110632f6c10c03fa0a77bf892c676ed
+size 784449
diff --git a/alargescaledatasetforempatheticresponsegeneration/layout.json b/alargescaledatasetforempatheticresponsegeneration/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..0e52c9acc03e0ec1bfd165794ce7b6af15dbf7b0
--- /dev/null
+++ b/alargescaledatasetforempatheticresponsegeneration/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3497fea378128b5e4ecd0134a3fbfbbab98a853259df75cc5252404eecca7bf2
+size 316319
diff --git a/alargescalestudyofmachinetranslationinturkiclanguages/ce0f2bee-b8cd-4af7-b517-b2986dc3dafe_content_list.json b/alargescalestudyofmachinetranslationinturkiclanguages/ce0f2bee-b8cd-4af7-b517-b2986dc3dafe_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..125ceb205ceb6bd2397d7723c456ec61f2efd645
--- /dev/null
+++ b/alargescalestudyofmachinetranslationinturkiclanguages/ce0f2bee-b8cd-4af7-b517-b2986dc3dafe_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5ee2e99a0f62297c7dca6b4b80ef3bea40a800fd52411b106be9126694d9ff3a
+size 102286
diff --git a/alargescalestudyofmachinetranslationinturkiclanguages/ce0f2bee-b8cd-4af7-b517-b2986dc3dafe_model.json b/alargescalestudyofmachinetranslationinturkiclanguages/ce0f2bee-b8cd-4af7-b517-b2986dc3dafe_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..3bcc6f607d8600ea2ea1fe98161dfca013500405
--- /dev/null
+++ b/alargescalestudyofmachinetranslationinturkiclanguages/ce0f2bee-b8cd-4af7-b517-b2986dc3dafe_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:17f57c100d699fdedd4025b2df39fed15dabc1f0a7751c1982bb8adba6a398a4
+size 127759
diff --git a/alargescalestudyofmachinetranslationinturkiclanguages/ce0f2bee-b8cd-4af7-b517-b2986dc3dafe_origin.pdf b/alargescalestudyofmachinetranslationinturkiclanguages/ce0f2bee-b8cd-4af7-b517-b2986dc3dafe_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..2c4f7987d3601eac42dd231a28319dc2ae448485
--- /dev/null
+++ b/alargescalestudyofmachinetranslationinturkiclanguages/ce0f2bee-b8cd-4af7-b517-b2986dc3dafe_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:00d96de8a6477bcbe9a32028c5fc902b8623b83770cbf58ebe7303288d0865d9
+size 334468
diff --git a/alargescalestudyofmachinetranslationinturkiclanguages/full.md b/alargescalestudyofmachinetranslationinturkiclanguages/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..dd4941a7bccfa8c9afbe5ed7878641a578690d6e
--- /dev/null
+++ b/alargescalestudyofmachinetranslationinturkiclanguages/full.md
@@ -0,0 +1,288 @@
+# A Large-Scale Study of Machine Translation in the Turkic Languages
+
+Jamshidbek Mirzakhalov $^{a,b}$ , Anoop Babu $^{a,b}$ , Duygu Ataman $^{a,c}$ , Sherzod Kariev $^{a,b}$ , Francis Tyers $^{a,d}$ , Otabek Abduraufov $^{a,b}$ , Mammad Hajili $^{a,e}$ , Sardana Ivanova $^{a,f}$ , Abror Khaytbaev $^{a,b}$ , Antonio Laverghetta Jr. $^{a,b}$ , Behzodbek Moydinboyev $^{a,b}$ , Esra Onal $^{a,d}$ , Shaxnoza Pulatova $^{a,g}$ , Ahsan Wahab $^{a}$ , Orhan Firat $^{a,h}$ , Sriram Chellappan $^{a,b}$
+
+$^{a}$ Turkic Interlingua, $^{b}$ University of South Florida, $^{c}$ NYU, $^{d}$ Indiana University, $^{e}$ EPFL, $^{f}$ University of Helsinki, $^{g}$ Namangan State University, $^{h}$ Google Research
+
+# Abstract
+
+Recent advances in neural machine translation (NMT) have pushed the quality of machine translation systems to the point where they are becoming widely adopted for building competitive systems. However, there is still a large number of languages that are yet to reap the benefits of NMT. In this paper, we provide the first large-scale case study of the practical application of MT in the Turkic language family in order to realize the gains of NMT for Turkic languages under high-resource to extremely low-resource scenarios. In addition to presenting an extensive analysis that identifies the bottlenecks towards building competitive systems to ameliorate data scarcity, our study has several key contributions, including, i) a large parallel corpus covering 22 Turkic languages consisting of common public datasets in combination with new datasets of approximately 2 million parallel sentences, ii) bilingual baselines for 26 language pairs, iii) novel high-quality test sets in three different translation domains and iv) human evaluation scores. All of our data, software and models are publicly available. $^{1}$
+
+# 1 Introduction
+
+Having been studied widely over the last few decades, machine translation (MT) evaluation has traditionally focused on European languages, due to limitations of the available technology as well as resources. Although low-resource MT has recently started to gain more attention and new evaluation benchmarks are becoming available (Guzmán et al., 2019; Ojha et al., 2020; Fraser, 2020; Ansari et al., 2020), there is still a large number of underrepresented languages excluded from MT evaluation. In addition to the cost of preparing such labor-intensive annotations, the lack of training resources also limits the evaluation of MT models in
+
+| Name | Codes | Articles | Speakers | MT? |
| English | en, eng | 6,237,470 | 400M | ✓ |
| Russian | ru, rus | 1,694,280 | 258M | ✓ |
| Turkish | tr, tur | 388,641 | 85.0M | ✓ |
| Uzbek | uz, uzb | 139,635 | 27.0M | ✓ |
| Azerbaijani | az, aze | 177,536 | 23.0M | ✓ |
| Kazakh | kk, kaz | 228,123 | 13.2M | ✓ |
| Uyghur | ug, uig | 4,898 | 10.0M | ✓ |
| Turkmen | tk, tuk | 5,876 | 6.70M | ✓ |
| Tatar | tt, tat | 237,332 | 5.20M | ✓ |
| Kyrgyz | ky, kir | 80,738 | 4.30M | ✓ |
| Bashkir | ba, bak | 55,477 | 1.40M | ✓ |
| Chuvash | cv, chv | 45,275 | 1.04M | ✓ |
| Karakalpak | kaa | 1,882 | 583K | ✗ |
| Crimean Tatar | crh | 8,633 | 540K | ✗ |
| Sakha (Yakut) | sah | 13,027 | 450K | ✓ |
| Kumyk | kum | — | 450K | ✗ |
| Karachay-Balkar | krc | 2,049 | 310K | ✗ |
| Tuvan | tyv | 3,164 | 280K | ✗ |
| Urum | uum | — | 190K | ✗ |
| Gagauz | gag | 2,737 | 148K | ✗ |
| Salar | slr | — | 70K | ✗ |
| Altai | alt | — | 56K | ✗ |
| Khakas | kjh | — | 43K | ✗ |
| Shor | cjs | — | 3K | ✗ |
+
+Table 1: Number of Wikipedia articles for Turkic languages compared to English and Russian along with number of L1 speakers and two- and three-letter language codes. The column MT? indicates if there are currently available online machine translation systems for the language. (K: thousand, M: million.)
+
+terms of their applicability across a wide range of world languages. On the other hand, many studies have pointed to the limited applicability of prominent methods in MT research including models and evaluation metrics (Birch et al., 2008; Stanojevic et al., 2015; Bugliarello et al., 2020) in translating languages with varying linguistic typology.
+
+In order to extend the evaluation of the state-of-the-art methods in MT (Joshi et al., 2019) and ultimately aid in designing methods with a wider range of applicability, in this paper, we present a large-scale case study of MT methods in a very challenging case: the Turkic language family. The Turkic language family consists of around 35 languages spoken across Eurasia by around 200 million people. Of these, around 20 are official languages of a state or sub-national entity, with the remainder being minority languages. The languages are distinguished by their highly complex morphology, which creates extremely sparse vocabularies and thus presents a challenging case for the evaluation of statistical models, in particular MT systems (Tantug et al., 2008) and n-gram language models (Bender, 2011; Tsarfaty et al., 2020). Table 1 presents the amount of resources and the number of speakers for the Turkic languages,$^{2}$ which aids our analysis of the feasibility of crowdsourcing, based on the approach of Moshagen et al. (2014).
+
+Our study includes the preparation of novel public resources covering many languages in the Turkic family, most of which are included in parallel corpora for the first time. We also present new benchmarks for MT that can be used to assess the different factors determining the limits of MT methods in various languages, such as data size, evaluation metrics, translation domain, linguistic typology, relatedness, and the writing system. We test the use of our resources in MT and present the first evaluation results for many Turkic languages. Our novel resources consist of $i)$ a large-scale multi-centric parallel corpus of $75\mathrm{M}+$ sentence pairs in 22 Turkic languages and their translations into English, Russian, as well as in-family languages, covering over 400 translation directions, $ii)$ 3 new test sets for each translation direction, curated from our corpus in 3 different translation domains, and $iii)$ bilingual baselines in 26 different language pairs. Our baselines are evaluated using automatic metrics as well as human assessments against commercial or open-source systems where applicable. We release our parallel corpora, test sets, and baseline systems publicly to encourage future research in Turkic languages.
+
+# 2 Turkic Languages & MT
+
+This section gives a brief overview of the Turkic languages from a linguistic perspective and presents previous work on MT of these languages. In our study, we include 22 Turkic languages: Altai, Azerbaijani, Bashkir, Crimean Tatar, Chuvash, Gagauz, Karachay-Balkar, Karakalpak, Khakas, Kazakh, Kumyk, Kyrgyz, Sakha, Salar, Shor, Turkmen, Turkish, Tatar, Tuvan, Uyghur, Urum, and Uzbek. Several other widely spoken languages, such as Nogai, Khorasani Turkic, Qashqai, and Khalaj, were left out of our study due to the lack of any available parallel corpora. Future work will focus on extending the corpus to these languages as well.
+
+# 2.1 Linguistic Typology
+
+The Turkic languages are spoken in a wide area that stretches from south-eastern Europe to north-eastern Asia. The languages are of the agglutinative morphological type and uniformly have Subject-Object-Verb main constituent order.
+
+Nominal morphology is highly similar between the languages, with all of them exhibiting inflection for number, possession, and case. The number of cases varies, but the six core cases of nominative, genitive, accusative, dative, locative, and ablative are extant in the vast majority of the languages. As part of the nominal inflectional system, the languages also have a derivational process whereby locatives and genitives can be pronominalized and constitute full noun phrases in their own right. Verbal inflection, on the other hand, is more heterogeneous between the languages, with each language having a variety of strategies for encoding tense, aspect, voice, modality, and evidentiality. One common feature, however, is that each of the languages has an extensive system of non-finite forms: verbal adjectives, verbal nouns, and verbal adverbs. These head full clauses that can be used as either modifiers (in the case of verbal adjectives and verbal adverbs) or heads (in the case of verbal nouns). Many of the languages also have constructions consisting of a non-finite verbal form and an auxiliary verb which together constitute a single predicate, with the auxiliary verb giving extra information about tense or mood (Johanson and Johanson, 2015).
+
+The modern Turkic languages are written in a variety of scripts, with Latin, Cyrillic, and Perso-Arabic being the most common. Many of the languages have been written in several writing systems over the past century, making collecting texts more problematic. For example, we can find instances where the same language has texts written in Perso-Arabic before the 1920s, in Latin until the 1930s, in Cyrillic until the 1990s, and then in Latin again (Róna-Tas, 2015). In addition, many languages have gone through several orthographic norms based on the same script, and some languages are currently written in different scripts depending on which country the speakers live in. This orthographic diversity makes collecting and collating text resources difficult, as many texts may be available only in a previously-used orthography, and conversion between orthographic systems is never deterministic owing to the large number of loan words in many texts.
+
+# 2.2 MT of Turkic Languages
+
+The need for more comprehensive and diverse multilingual parallel corpora has sped up the creation of such large-scale resources for many language families and linguistic regions (Koehn, 2005; Choudhary and Jha, 2011; Post et al., 2012; Nomoto et al., 2018; Esplà-Gomis et al., 2019; $\forall$ et al., 2020). Tiedemann (2020) released a large-scale corpus for over 500 languages covering thousands of translation directions. The corpus includes 14 Turkic languages and provides bilingual baselines for all translation directions present in the corpus. However, the varying and limited size of the test sets does not allow for extensive analysis and comparison between different model artifacts, linguistic features, and translation domains. Khusainov et al. (2020) collected a large-scale Russian-Turkic parallel corpus for 6 language pairs and report bilingual baselines using a number of NMT-based approaches, although the dataset, test sets, and models are not publicly released, which limits their use as a comparable benchmark. Alküm and Çebi (2019) introduce a rule-based MT framework for Turkic languages and demonstrate its performance on 4 language pairs. Washington et al. (2019) present several rule-based MT systems built for Turkic languages, which are available through the Apertium$^{3}$ website.
+
+For the individual languages in our corpus, there are several proposed MT systems and linguistic resources: Azerbaijani (Hamzaoglu, 1993; Fatullayev et al., 2008), Bashkir (Tyers et al., 2012), Crimean Tatar (Gokirmak et al., 2019; Altintas, 2001), Karakalpak (Kadirov, 2015), Kazakh (Assylbekov and Nurkas, 2014; Sundetova et al., 2015; Littell et al., 2019; Briakou and Carpuat, 2019; Tukeyev et al., 2019), Kyrgyz (Cetin and Ismailova), Sakha (Ivanova et al., 2019), Turkmen (Tantug et al., 2007), Turkish (Turhan, 1997; El-Kahlout and Oflazer, 2006; Bisazza and Federico, 2009; Tantug et al., 2011; Ataman et al., 2017), Tatar (Salimzyanov et al., 2013; Khusainov et al., 2018; Valeev et al., 2019; Gokirmak et al., 2019), Tuvan (Killackey, 2013), Uyghur (Mahsut et al., 2004; Nimaiti and Izumi, 2014; Song and Dai, 2015; Wang et al., 2020), and Uzbek (Axmedova et al., 2019). Yet, to our knowledge, there has not been a study that covers the Turkic languages to such an extent as ours, both in terms of multilingual parallel corpora and benchmarks, including multi-way comparable test sets in all languages.
+
+# 3 TIL Corpus
+
+Our parallel corpus is collected by unifying publicly available datasets with additional parallel data we prepare by crawling public-domain resources. For each language, Table 2 shows the total number of sentences across the corpus along with the number of sentences that are newly introduced (previously unavailable). This section describes the details of our data collection process.
+
+# 3.1 Public Datasets
+
+In our corpus we include the following public data sets:
+
+- The Tatoeba corpus (Tiedemann, 2020) provides training and test sets for over 500 languages and thousands of translation pairs. It uses the latest version of OPUS$^{4}$ (Tiedemann and Nygaard, 2004) as training sets and uses parallel sentences from the Tatoeba project for testing. Tatoeba covers 58 language pairs of interest. For the purposes of our corpus, we merge the training, development, and test sets into a single set for all available languages.
+- JW300 (Agić and Vulić, 2019) is a public dataset available for download through OPUS. Although most of the parallel data in JW300 was provided through the Tatoeba corpus, we have identified several pairs that were missing in Tatoeba but present in JW300. To avoid further data loss, we have obtained the JW300 dataset directly from OPUS and deduplicated it against the Tatoeba corpus. This dataset provided data for 59 language pairs of interest and resulted in 5.2 million parallel sentences.
+- GoURMET is another dataset available through OPUS that provides parallel sentences for 7 language pairs, including English-Turkish and English-Kyrgyz. These pairs are not available in Tatoeba because GoURMET was released more recently. English-Kyrgyz consists of 14.5 thousand sentence pairs, while English-Turkish contains 1.3 million.
+
+In addition to this, with the permission of the owners, we include privately owned corpora for English-Azerbaijani$^{6}$ containing data from news articles, English-Uzbek$^{7}$ containing data from the KhanAcademy website localization, and Bashkir-Russian$^{8}$ containing a mix of data from news articles and literary works.
+
| Language | Data | Script | Category | New Data |
| Turkish | 52.6M | Latin | The Underdogs (4) | 755.9K |
| Kazakh | 5.3M | Arabic, Cyrillic, Latin | The Rising Star (3) | 201.9K |
| Uzbek | 2.9M | Arabic, Cyrillic, Latin | The Rising Star (3) | 1.7M |
| Azerbaijani | 2.2M | Arabic, Cyrillic, Latin | The Scraping-Bys (1) | 284.8K |
| Tatar | 1.8M | Arabic, Cyrillic | The Scraping-Bys (1) | 192.0K |
| Kyrgyz | 1.8M | Arabic, Cyrillic | The Scraping-Bys (1) | 188.6K |
| Chuvash | 1.5M | Cyrillic | The Scraping-Bys (1) | 191.0K |
| Turkmen | 921.0K | Arabic, Cyrillic, Latin | The Scraping-Bys (1) | 191.7K |
| Bashkir | 893.1K | Cyrillic | The Scraping-Bys (1) | 713.9K |
| Uyghur | 343.0K | Arabic, Cyrillic, Latin | The Scraping-Bys (1) | 187.0K |
| Karakalpak | 253.8K | Cyrillic, Latin | The Scraping-Bys (1) | 274.3K |
| Khakas | 219.0K | Cyrillic | The Left-Behinds (0) | 242.8K |
| Altai | 192.6K | Cyrillic | The Left-Behinds (0) | 190.0K |
| Crimean Tatar | 185.3K | Cyrillic, Latin | The Scraping-Bys (1) | 197.6K |
| Kumyk | 165.6K | Cyrillic | The Left-Behinds (0) | 192.4K |
| Karachay-Balkar | 162.8K | Cyrillic, Latin | The Scraping-Bys (1) | 182.6K |
| Gagauz | 157.4K | Cyrillic, Latin | The Scraping-Bys (1) | 177.1K |
| Sakha | 157.1K | Cyrillic | The Scraping-Bys (1) | 174.8K |
| Tuvinian | 103.2K | Cyrillic | The Scraping-Bys (1) | 148.3K |
| Shor | 2.3K | Cyrillic | The Left-Behinds (0) | 6.9K |
| Salar | 766 | Latin | The Left-Behinds (0) | 1.5K |
| Urum | 491 | Greek, Cyrillic, Latin | The Left-Behinds (0) | 491 |
+
+Table 2: Corpus details for each Turkic language. Data shows the aggregated number of sentences across the corpus. Category refers to the language classes based on available data resources, following Joshi et al. (2020).
+
+# 3.2 Data Crawling
+
+We obtained additional parallel data from a few different public-domain websites that contain a large amount of text translated into many different languages. One of these is TED Talks, which contains talks across various domains that are translated by volunteers. Qi et al. (2018) compiled a dataset for 60 languages; however, only a few Turkic languages were available at their time of curation. We have compiled an updated version of this dataset and obtained sentence pairs for 8 Turkic languages. *Bible.is* is another website that contains an extensive list of languages into which religious texts and books are translated. 19 out of 22 Turkic languages were covered in this source, with an average of approximately 8,000 sentence pairs for each translation direction. Additionally, we have crawled other public websites, online dictionaries, and resources with parallel data that were identified by native speakers of these languages. The full list of online resources we used in our crawling is given in the Appendices.
+
+# 3.3 Data Alignment
+
+All crawled documents are aligned using Hunalign (Varga et al., 2005), with a threshold of either 0.2 or 0.4, depending on the availability of a native speaker for the language. When crawling pre-aligned sources such as TED Talks, we noticed serious alignment issues with certain Turkic languages, especially when the source and target differ greatly in size. In these cases, we split both sides into sentences using the NLTK sentence tokenizer$^{11}$ and realign using the Hunalign tool. Specifically for the Bible dataset, all the data was first aligned at the verse level and then split into sentence-level bitexts whenever possible. This results in parallel texts that are relatively longer while ensuring higher-quality alignments.
+
+# 3.4 Data Preprocessing
+
+Many of the languages in our dataset are written using multiple scripts, which creates consistency problems for building MT systems. Therefore, we transliterate three of the languages in our dataset that have a high mix of multiple scripts. Namely, we transliterate Uzbek into the Latin script, while all Karakalpak text is converted into Cyrillic. Although the performance of the transliteration tools (Uzbek$^{12}$ and Karakalpak$^{13}$) was not strictly evaluated, the tools we used were recommended and widely adopted by native speakers of the languages. Once we combine the entire corpus, we deduplicate the sentences in each language pair.
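The per-pair deduplication step can be sketched as follows, assuming each language pair is held as aligned source/target line lists; `dedup_pairs` is a hypothetical helper, not part of the released tooling, which may additionally normalize text before comparison.

```python
def dedup_pairs(src_lines, tgt_lines):
    """Remove duplicate parallel pairs, keeping the first occurrence.

    A pair is a duplicate only if both its source and target sides
    have been seen together before (whitespace-stripped comparison).
    """
    seen, src_out, tgt_out = set(), [], []
    for s, t in zip(src_lines, tgt_lines):
        key = (s.strip(), t.strip())
        if key in seen:
            continue  # exact pair already kept
        seen.add(key)
        src_out.append(s)
        tgt_out.append(t)
    return src_out, tgt_out
```

Deduplicating on the (source, target) pair rather than either side alone preserves legitimate one-to-many translations.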
+
+# 4 Bilingual Baselines
+
+We train bilingual baselines for 26 language pairs in three different resource categories: high (>5M), medium (100K-5M), and low (<100K). The choice of pairs to train was based on multiple factors, such as the availability of test sets, native speakers (for human evaluation), and other comparable MT systems.
+
+# 4.1 Model Details
+
+All models are Transformers (Vaswani et al., 2017) (transformer-base) whose exact configuration depends on the amount of data available for training. Models for low-resource pairs use 256-dimensional embeddings and hidden layers. Models for mid-resource pairs use 512-dimensional embeddings and hidden layers. The models for high-resource pairs use the same 512-dimensional embedding and hidden layer sizes for the encoder, but for the decoder both dimensions are increased to 1024. All models are trained with the Adam optimizer (Kingma and Ba, 2015) over cross-entropy loss, with a learning rate that warms up to a maximum of $3 \times 10^{-4}$ over the first 4800 training steps and then decays towards a minimum of $1 \times 10^{-8}$. We use a training batch size of 4096. We use perplexity as our early stopping metric with a patience of 5 epochs. We set a dropout (Srivastava et al., 2014) probability of 0.3 in both the encoder and the decoder. We apply byte pair encoding (BPE) (Sennrich et al., 2015; Dong et al., 2015) with a joint vocabulary size of 4K for low-resource and 32K for mid/high-resource scenarios.
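The warmup-then-decay learning-rate schedule can be sketched as follows. The shape of the decay is not specified in the text, so the inverse-square-root decay (the common choice for Transformer training) is an assumption, as is the function name `lr_at`.

```python
def lr_at(step, peak=3e-4, floor=1e-8, warmup=4800):
    """Learning rate at a given training step.

    Linear warmup to `peak` over the first `warmup` steps, then
    inverse-square-root decay (assumed shape), clipped at `floor`.
    """
    if step < warmup:
        return max(floor, peak * step / warmup)
    # after the peak: lr = peak * sqrt(warmup / step)
    return max(floor, peak * (warmup / step) ** 0.5)
```

For example, at step 19200 (4x the warmup length) the rate has halved to 1.5e-4 under this assumed decay.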
+
+All models use the Joey NMT (Kreutzer et al., 2019) implementation and apex$^{14}$ where possible to speed up training. Models were trained on preemptible GPUs freely available on Google Colab.$^{15}$
+
+# 4.2 Test Sets
+
+High-quality and diverse test sets are essential in evaluating the strength and weaknesses of MT systems. We curate 3 test sets covering 3 translation domains: religious (Bible), conversational (TED Talks), and news (X-WMT).
+
+The Bible dataset is the main source that exists across almost all of the 24 language pairs included in our corpus. From this dataset, around 400 to 800 of the most commonly present sentences for every language pair were set aside to create a test set. This yields a test set that is comparable across all language pairs, which we find essential for controlled evaluation and believe will be a useful resource in future studies involving multilingual models.
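The selection of the "most commonly present" sentences can be sketched as follows; the paper does not specify the exact procedure or tie-breaking, so this is an illustrative reconstruction, and `pick_shared_test_set` is a hypothetical name.

```python
from collections import Counter

def pick_shared_test_set(pair_to_sents, k=800):
    """Pick the k sentences appearing in the most language pairs.

    `pair_to_sents` maps a language pair (e.g. "en-tr") to the
    sentences available for it. Favoring widely shared sentences
    makes the resulting test set comparable across pairs.
    """
    counts = Counter()
    for sents in pair_to_sents.values():
        counts.update(set(sents))  # count each pair at most once
    return [s for s, _ in counts.most_common(k)]
```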
+
+TED Talks is another resource we use for collecting sentences across multiple languages to create a language-wise comparable test set in the conversational domain, which also makes our test sets comparable across domains. After deduplication, 3,000-5,000 sentences per language pair are picked as part of our TED Talks test set.
+
+X-WMT is our test set in the news domain, based on the professionally translated English-Russian test sets from the WMT 2020 Shared Task (Mathur et al., 2020). This set contains approximately 1,000 sentences curated from both English- and Russian-centric news sources. Through the engagement of native speakers and professional translators$^{16}$, we partially translate this test set into 8 Turkic languages (Bashkir, Uzbek, Turkish, Kazakh, Kyrgyz, Azerbaijani, Karakalpak, and Sakha).
+
| | en | ru | ba | tr | uz | ky | kk | az | sah | kaa |
| en | — | | | | | | | | | |
| ru | 1000 | — | | | | | | | | |
| ba | 1000 | 1000 | — | | | | | | | |
| tr | 800 | 800 | 800 | — | | | | | | |
| uz | 900 | 900 | 900 | 600 | — | | | | | |
| ky | 500 | 500 | 500 | 400 | 500 | — | | | | |
| kk | 700 | 700 | 700 | 500 | 700 | 500 | — | | | |
| az | 600 | 600 | 600 | 500 | 600 | 500 | 500 | — | | |
| sah | 300 | 300 | 300 | 300 | 300 | 300 | 300 | 300 | — | |
| kaa | 300 | 300 | 300 | 300 | 300 | 300 | 300 | 300 | 300 | — |
+
+Table 3: X-WMT test sets. Bolded entries indicate the original translation direction.
+
+Table 3 highlights the currently available test set directions. Bolded entries in the table indicate the original direction of the translation. While Bashkir and Sakha have been translated by professional translators, other languages have been translated and validated (by another person) by proficient bilingual speakers of both the source and target language. The curation of this test set is an ongoing and growing effort currently covering 88 language directions.
+
+# 5 Evaluation
+
+Automatic evaluation metrics are commonplace in MT research, and there has been a recent line of work exploring better metrics that capture translation quality beyond syntactic and lexical features (Zhang et al., 2019; Sellam et al., 2020; Rei et al., 2020). However, methods relying on contextual embeddings to capture the semantic similarity between the hypothesis and references fall short in terms of language coverage. This is largely because the pretraining of these evaluation models requires a significant amount of monolingual data, which most low-resource languages lack. In this study, we evaluate our systems using both automatic metrics and human evaluation of translations.
+
+# 5.1 Automatic Metrics for MT
+
+We employ two widely adopted metrics: BLEU (Papineni et al., 2002) and ChrF (Popović, 2015). BLEU utilizes modified $n$-gram precision, where the consecutive $n$-grams of the system translation are compared with the consecutive $n$-grams of the reference translation. We use the standard SacreBLEU implementation (Post, 2018). ChrF applies the same method at the level of character $n$-grams, and we use the original implementation from the paper as provided through the NLTK library.$^{17}$
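As a rough illustration of what ChrF measures, the following is a simplified character n-gram F-score in pure Python. It is a sketch only, not the NLTK or SacreBLEU implementation, which differ in smoothing, whitespace handling, and the default value of beta.

```python
from collections import Counter

def char_ngrams(text, n):
    """Multiset of character n-grams, ignoring whitespace."""
    s = text.replace(" ", "")
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def chrf(hypothesis, reference, max_n=6, beta=2.0):
    """Simplified ChrF: mean char n-gram F-score over n = 1..max_n.

    F_beta weights recall beta^2 times as much as precision.
    """
    scores = []
    for n in range(1, max_n + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        if not hyp or not ref:
            continue  # string too short for this order
        overlap = sum((hyp & ref).values())  # clipped n-gram matches
        prec = overlap / sum(hyp.values())
        rec = overlap / sum(ref.values())
        if prec + rec == 0:
            scores.append(0.0)
            continue
        scores.append((1 + beta**2) * prec * rec / (beta**2 * prec + rec))
    return sum(scores) / len(scores) if scores else 0.0
```

Because matching happens at the character level, a morphologically plausible but imperfect word form still earns partial credit, which is why ChrF is gentler than BLEU on agglutinative targets.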
+
+# 5.2 Human Evaluation
+
+To perform a more holistic analysis of MT systems, it is critical to involve native speakers in the evaluation process. We conducted a human evaluation campaign using a randomly sampled subset of 250 sentences from X-WMT or Bible (whenever X-WMT was not available) to evaluate the outputs of 14 bilingual baseline models. Our assessment is based on the Direct Assessment (DA) test (Nießen et al., 2000; Papineni et al., 2002; Doddington, 2002), where annotators were asked to rate a translation for adequacy and fluency on a 5-point Likert scale. All participants of the study were bilingual speakers of the source and target language. To better understand the importance of directionality (e.g. English-X vs. X-English) and reduce variance in scores, we ensure that both directions of the same pair are evaluated by the same annotator (whenever possible). While reporting, we average the scores for each pair but report adequacy and fluency separately. Adequacy is defined as how much information is preserved in the translation: a score of 1 means the translation is meaningless and has no correlation with the target sentence, while a score of 5 means the translation retains all of the information. Fluency is defined as how grammatically, syntactically, and stylistically correct the translation is: a score of 1 means the sentence makes no sense grammatically or syntactically, while a score of 5 means the sentence is perfectly correct.
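The per-pair aggregation described above can be sketched as follows. The input layout (direction mapped to a list of (adequacy, fluency) ratings) and the helper name `average_pair_scores` are hypothetical; only the averaging behavior follows the text.

```python
from collections import defaultdict

def average_pair_scores(ratings):
    """Average DA ratings over both directions of each pair.

    `ratings` maps a direction (e.g. "en-kk") to a list of
    (adequacy, fluency) scores on a 1-5 scale. Both directions of a
    pair are pooled; adequacy and fluency stay separate, as reported
    in the paper.
    """
    sums = defaultdict(lambda: [0.0, 0.0, 0])  # adequacy, fluency, n
    for direction, scores in ratings.items():
        pair = "-".join(sorted(direction.split("-")))  # en-kk == kk-en
        for adequacy, fluency in scores:
            sums[pair][0] += adequacy
            sums[pair][1] += fluency
            sums[pair][2] += 1
    return {p: (a / n, f / n) for p, (a, f, n) in sums.items()}
```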
+
+# 6 Results & Discussion
+
+The upper section of Table 4 highlights the bilingual baselines for high-resource pairs and their evaluation scores in the three domains. Despite the large training size, both models perform relatively modestly on the Bible and TED Talks test sets, with the en-tr model slightly outperforming ru-tr. Our hypothesis is that the domain of the Bible test set is far from the rest of the training set for both pairs, as most of the training data for Turkish comes from OpenSubtitles.$^{18}$ Another likely bottleneck is the suboptimal model size and hyperparameters, which were not tuned due to limited computational resources.
+
+Baseline results for the mid- and low-resource pairs are in the lower part of Table 4. While there is a lot of fluctuation in the results, it is important to note the large disparities in BLEU scores between models when translating in and out of non-Turkic languages. However, these differences are not as prominent when evaluated using ChrF, which is a character-level metric. This can partially be attributed to the complex morphology of Turkic languages, which penalizes lexical mispredictions at a much higher rate than in English, for example (Tantug et al., 2008), in turn leading to lower BLEU scores. To examine this phenomenon in more detail, we compare the results on X-WMT against human evaluations of the translations these models produced in Section 6.1.
+
| Pair | Train size | Test size (Bible) | Bible BLEU | Bible ChrF | Test size (TED) | TED BLEU | TED ChrF | Test size (X-WMT) | X-WMT BLEU | X-WMT ChrF |
| en-tr | 39.9M | 416 | 7.15 | 0.30 | 5.2K | 12.32 | 0.43 | 800 | 19.87 | 0.51 |
| ru-tr | 16.8M | 455 | 7.44 | 0.33 | 5.1K | 8.64 | 0.38 | 800 | 8.81 | 0.41 |
| ru-uz | 1.22M | 684 | 6.01 | 0.41 | 2.7K | 4.51 | 0.76 | 800 | 5.95 | 0.39 |
| uz-ru | 1.22M | 684 | 9.84 | 0.51 | 2.7K | 7.57 | 0.73 | 800 | 7.45 | 0.37 |
| en-az | 784K | 455 | 10.56 | 0.24 | 3.3K | 10.58 | 0.29 | 600 | 8.88 | 0.41 |
| az-en | 784K | 455 | 21.17 | 0.45 | 3.3K | 17.01 | 0.17 | 600 | 12.14 | 0.42 |
| en-ky | 733K | 451 | 6.47 | 0.32 | - | - | - | 500 | 3.18 | 0.19 |
| ky-en | 733K | 451 | 13.08 | 0.43 | - | - | - | 500 | 4.30 | 0.40 |
| tr-az | 634K | 606 | 13.78 | 0.65 | 3.6K | 20.50 | 0.40 | 500 | 9.68 | 0.33 |
| az-tr | 634K | 606 | 11.66 | 0.71 | 3.6K | 24.20 | 0.95 | 500 | 11.53 | 0.49 |
| en-kk | 601K | 453 | 3.62 | 0.61 | 3.6K | 6.31 | 0.29 | 700 | 6.99 | 0.38 |
| kk-en | 601K | 453 | 11.22 | 0.27 | 3.6K | 9.78 | 0.30 | 700 | 9.75 | 0.46 |
| en-uz | 555K | 465 | 5.23 | 0.40 | 3.2K | 5.89 | 0.20 | 800 | 6.60 | 0.42 |
| uz-en | 555K | 465 | 16.20 | 0.63 | 3.2K | 11.61 | 0.18 | 800 | 12.32 | 0.48 |
| tr-uz | 161K | 486 | 6.50 | 0.14 | 2.9K | 4.28 | 0.20 | 700 | 1.58 | 0.23 |
| uz-tr | 161K | 486 | 7.40 | 0.32 | 2.9K | 3.92 | 0.26 | 700 | 1.73 | 0.22 |
| kk-ky | 6.4K | 696 | 2.39 | 0.33 | - | - | - | 500 | 0.14 | 0.09 |
| ky-kk | 6.4K | 696 | 2.53 | 0.24 | - | - | - | 500 | 0.11 | 0.13 |
| en-krc | 6.5K | 374 | 5.57 | 0.25 | - | - | - | - | - | - |
| krc-en | 6.5K | 374 | 11.57 | 0.22 | - | - | - | - | - | - |
| kk-tt | 7.7K | 678 | 4.13 | 0.22 | - | - | - | - | - | - |
| tt-kk | 7.7K | 678 | 3.75 | 0.17 | - | - | - | - | - | - |
| ru-sah | 8K | 759 | 2.48 | 0.27 | - | - | - | 300 | 0.08 | 0.20 |
| sah-ru | 8K | 759 | 2.44 | 0.23 | - | - | - | 300 | 0.31 | 0.16 |
| uz-kaa | 8.9K | 772 | 9.90 | 0.71 | - | - | - | 300 | 5.39 | 0.41 |
| kaa-uz | 8.9K | 772 | 9.58 | 0.60 | - | - | - | 300 | 5.24 | 0.44 |
+
+Table 4: Bilingual baselines separated by high-res., mid-res., and low-res. pairs (K: thousand, M: million).
+
+Another notable aspect is the importance of scripts in the performance of the models. Language pairs with more than one script consistently underperform (both in automatic and human evaluations) relative to pairs where both the source and target language use the same script. In fact, the best 6 models on the X-WMT test sets all have Latin scripts on both the source and target side. Suboptimal performance in the face of a script disparity is a known phenomenon (Anastasopoulos and Neubig, 2019; Murikinati et al., 2020; Aji et al., 2020; Amrhein and Sennrich, 2020), where techniques such as transliteration have been shown to improve performance. This is mostly attributable to the model's inability to represent both languages effectively in a shared space when they do not share the same script, which can be damaging for downstream performance.
+
+# 6.1 Comparing Human Evaluations to BLEU
+
+Using the Direct Assessment (DA) surveys described in Section 5.2, we obtain average adequacy and fluency scores for almost all baseline models. Figure 1 shows the scores for BLEU/ChrF and adequacy/fluency, respectively. Comparing the scores from native speakers of these languages, it is evident that the disparities in BLEU scores between the two translation directions of a pair are exaggerated and even misleading (e.g. en-az vs. az-en). Results in the human evaluations for mid-resource pairs are clustered much more closely than in the BLEU/ChrF figure. These results further highlight the pitfalls of automatic metrics for MT evaluation and emphasize the role of native speakers in the MT process.
+
| Pair | Test size | Baseline BLEU | Baseline ChrF | Google BLEU | Google ChrF | Yandex BLEU | Yandex ChrF | Apertium BLEU | Apertium ChrF |
| en-tr | 800 | 19.87 | 0.51 | 69.24 | 0.83 | 40.03 | 0.69 | - | - |
| ru-tr | 800 | 8.81 | 0.41 | 24.79 | 0.54 | 16.64 | 0.44 | - | - |
| tr-uz | 700 | 1.58 | 0.23 | 27.25 | 0.60 | 6.58 | 0.42 | - | - |
| uz-tr | 700 | 1.73 | 0.22 | 28.03 | 0.58 | 5.58 | 0.38 | 4.31 | 0.33 |
| en-uz | 800 | 6.60 | 0.42 | 48.50 | 0.72 | 15.66 | 0.51 | - | - |
| uz-en | 800 | 12.32 | 0.48 | 32.35 | 0.39 | 6.93 | 0.41 | - | - |
| en-kk | 700 | 6.99 | 0.38 | 26.60 | 0.55 | 5.51 | 0.39 | - | - |
| kk-en | 700 | 9.75 | 0.46 | 22.50 | 0.47 | 23.2 | 0.50 | - | - |
| tr-az | 500 | 9.68 | 0.33 | 36.78 | 0.65 | 5.53 | 0.38 | - | - |
| az-tr | 500 | 11.53 | 0.49 | 32.67 | 0.62 | 11.75 | 0.44 | - | - |
| en-ky | 500 | 3.18 | 0.19 | 26.97 | 0.56 | 5.21 | 0.36 | - | - |
| ky-en | 500 | 4.30 | 0.40 | 21.66 | 0.50 | 3.89 | 0.20 | - | - |
| en-az | 600 | 8.88 | 0.41 | 78.54 | 0.89 | 6.59 | 0.40 | - | - |
| az-en | 600 | 12.14 | 0.42 | 39.42 | 0.65 | 12.54 | 0.46 | - | - |
| ru-uz | 800 | 5.95 | 0.39 | 22.26 | 0.56 | 13.19 | 0.50 | - | - |
| uz-ru | 800 | 7.45 | 0.37 | 19.00 | 0.48 | 10.87 | 0.43 | - | - |
| kk-tt* | 678 | 4.13 | 0.22 | 5.45 | 0.35 | 1.58 | 0.24 | 2.77 | 0.28 |
| tt-kk* | 678 | 3.75 | 0.17 | 5.44 | 0.35 | 1.41 | 0.22 | - | - |
| ru-sah | 300 | 0.08 | 0.20 | - | - | 8.27 | 0.40 | - | - |
| sah-ru | 300 | 0.31 | 0.16 | - | - | 24.93 | 0.54 | - | - |
| uz-kaa | 300 | 5.39 | 0.41 | - | - | - | - | 11.71 | 0.42 |
| kaa-uz | 300 | 5.24 | 0.44 | - | - | - | - | 5.22 | 0.30 |
| kk-ky | 500 | 0.14 | 0.09 | 20.56 | 0.51 | 4.78 | 0.35 | 9.12 | 0.35 |
| ky-kk | 500 | 0.11 | 0.13 | 20.57 | 0.52 | 3.52 | 0.34 | 6.55 | 0.34 |
+
+Table 5: Bilingual baselines compared to online MT systems on X-WMT (pairs marked with * use Bible data).
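The BLEU numbers in Table 5 are corpus-level n-gram precision scores with a brevity penalty (Papineni et al., 2002). As a hedged illustration only, here is a minimal sentence-level sketch with naive smoothing; it is not the scorer used to produce the table, and standard corpus-level toolkits differ in tokenization and smoothing details:

```python
import math
from collections import Counter

def bleu(hypothesis, reference, max_n=4):
    """Simplified sentence-level BLEU: geometric mean of modified
    n-gram precisions (n = 1..max_n) times a brevity penalty."""
    hyp, ref = hypothesis.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        hyp_ngrams = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        overlap = sum((hyp_ngrams & ref_ngrams).values())  # clipped matches
        total = max(sum(hyp_ngrams.values()), 1)
        # Naive smoothing: floor zero counts so log() stays defined
        log_precisions.append(math.log(max(overlap, 1e-9) / total))
    # Brevity penalty: penalize hypotheses shorter than the reference
    bp = min(1.0, math.exp(1 - len(ref) / max(len(hyp), 1)))
    return bp * math.exp(sum(log_precisions) / max_n)
```

Because the geometric mean collapses toward zero when any n-gram order has no matches, BLEU is especially harsh on morphologically rich targets, where surface forms rarely match exactly.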
+
+| Target side | Adequacy: BLEU | Adequacy: ChrF | Fluency: BLEU | Fluency: ChrF |
+| --- | --- | --- | --- | --- |
+| Turkic | 0.62 | 0.71 | 0.75 | 0.67 |
+| Non-Turkic | 0.75 | 0.68 | 0.83 | 0.86 |
+
+Table 6: Correlation between human evaluation scores and automatic metrics when translating into Turkic and non-Turkic languages. Correlation is measured using Pearson's $r$ .
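Pearson's $r$ in Table 6 is the covariance of the metric scores and the human scores, normalized by the product of their standard deviations. A minimal self-contained sketch (the score lists below are hypothetical illustrations, not the paper's data):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation: covariance divided by the product of
    the two samples' standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-pair scores: automatic metric vs. mean human adequacy
bleu_scores = [19.9, 8.8, 6.6, 12.3, 7.0]
adequacy = [4.1, 3.0, 2.6, 3.5, 2.9]
# r is close to 1 here, since these made-up lists move together
```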
+
+# 6.2 Turkic Languages on the Target Side
+
+Even though BLEU scores do not offer a holistic way to compare two MT systems, they are still useful for indicating which system performs better. As the results in Table 4 clearly show, the baseline system's BLEU scores when translating from English into a Turkic language are substantially worse than when translating from a Turkic language into English; translating into the Turkic language typically scores about half as high. The reliability of BLEU also decreases when translating into morphologically rich languages, where it has indeed been shown to correlate poorly with human judgments for Turkic languages (Ma et al., 2018, 2019). Table 6 shows the correlation between BLEU/ChrF and adequacy/fluency scores. BLEU correlates with adequacy and fluency considerably better when the target side is a non-Turkic language, which reinforces our earlier point about morphology. ChrF's correlation with adequacy scores is about the same regardless of the target language.
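ChrF's robustness on morphologically rich targets comes from matching character n-grams rather than whole tokens, so an inflected variant still earns partial credit. Below is a simplified sentence-level sketch of the metric (Popović, 2015); real implementations differ in details such as whitespace handling and corpus-level aggregation:

```python
from collections import Counter

def char_ngrams(text, n):
    # ChrF matches character n-grams; spaces are removed first
    s = text.replace(" ", "")
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def chrf(hypothesis, reference, max_n=6, beta=2.0):
    """Simplified sentence-level chrF: average F-beta (beta=2 weights
    recall higher) over character n-gram orders n = 1..max_n."""
    f_scores = []
    for n in range(1, max_n + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        if not hyp or not ref:
            continue  # sentence shorter than n characters
        overlap = sum((hyp & ref).values())
        prec = overlap / sum(hyp.values())
        rec = overlap / sum(ref.values())
        if prec + rec == 0:
            f_scores.append(0.0)
            continue
        f_scores.append((1 + beta ** 2) * prec * rec / (beta ** 2 * prec + rec))
    return sum(f_scores) / len(f_scores) if f_scores else 0.0
```

For example, `chrf("kitaplar", "kitapları")` scores well above 0.5 because the two inflected forms share most of their character n-grams, whereas an exact word-level unigram match would be zero.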
+
+# 6.3 Comparison to Existing Systems
+
+Table 5 compares our baselines to three commercial/open-source MT systems: Google Translate, $^{19}$ Yandex Translate, $^{20}$ and Apertium (Forcada et al., 2011). Google Translate's results are significantly higher than our baselines and the other MT systems. There are several reasons for this score disparity. First, commercial systems have access to more training data and possibly also include the public data we excluded from our test sets. Moreover, several test-set translators used Google Translate to produce draft translations and post-edited them afterwards (e.g. en-uz), which creates a bias favoring sentences generated by Google's service. A safer comparison for the baselines is Yandex Translate, which despite its lower performance also supports more Turkic languages (8 in Google and 9 in Yandex). However, it is important to note that its API yielded worse results than its web interface. Apertium is a rule-based MT framework that supports several Turkic-Turkic pairs, and we include its results whenever available. For those pairs, the results are comparable to our baselines and Yandex Translate.
+
+(a) BLEU and ChrF scores for select pairs. Note: ChrF scores were multiplied by 20 for better visibility.
+
+(b) Adequacy and Fluency scores (1-5) obtained from human evaluations.
+Figure 1: Comparison between BLEU/ChrF scores and Adequacy/Fluency scores. Best viewed in color.
+
+# 7 Conclusion & Future Work
+
+In this paper, we introduce a large parallel corpus covering 22 Turkic languages along with in-domain and out-of-domain evaluation sets. We also train the first baseline models for several language pairs and take the initial steps to address the challenges associated with machine translation in the Turkic languages. This study was carried out in a participatory research setting by a diverse community of researchers, engineers, language specialists, and native speakers of Turkic languages. Future work will focus on methods for effective cross-lingual transfer, extending the coverage of the corpus to more languages and domains, and increasing the size of the test sets to provide more comprehensive benchmarks.
+
+# Acknowledgements
+
+This project received support from the Google AI Academic Research Awards and the Swiss National Science Foundation (MUTAMUR; no. 176727). We thank all of the members and partners of the Turkic Interlingua (TIL) community for their contributions to the project. Namely, we would like to thank our dedicated translators and annotators: Nurlan Maharramli, Leyla Baghirli, Ipek Baris, Aigiz Kunafin, Aydos Muxammadiyarov, Ziyodabonu Qobiljon qizi, Alperen Cantez, Doniyorbek Rafikjonov, Mukhammadbektosh Khaydarov, Medina Zokirjonova, Erkinbek Vokhabov, Mohiyaxon Uzoqova, Petr Popov, Abilxayr Zholdybai, and Akylbek Khamitov. We also acknowledge and appreciate significant dataset contributions from Rasul Karimov, Iskandar Mamasoliev, Khan Academy O'zbek, and the Foundation for the Preservation and Development of the Bashkir Language. Furthermore, we would like to thank Dr. John Licato, Dr. Jonathan Washington, and Animesh Nighojkar for their valuable feedback throughout the project.
+
+# References
+
+Zeljko Agić and Ivan Vulić. 2019. JW300: A wide-coverage parallel corpus for low-resource languages. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3204–3210, Florence, Italy. Association for Computational Linguistics.
+Alham Fikri Aji, Nikolay Bogoychev, Kenneth Heafield, and Rico Sennrich. 2020. In neural machine translation, what does transfer learning transfer? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7701-7710.
+Emel Alküm and Yalçın Cebi. 2019. Machine translation infrastructure for turkic languages (MT-Turk). The International Arab Journal of Information Technology, 16(3):380-388.
+
+Kemal Altıntas. 2001. Turkish to Crimean Tatar machine translation system. Ph.D. thesis, Bilkent University.
+Chantal Amrhein and Rico Sennrich. 2020. On romanization for model transfer between scripts in neural machine translation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, pages 2461-2469.
+Antonios Anastasopoulos and Graham Neubig. 2019. Pushing the limits of low-resource morphological inflection. arXiv preprint arXiv:1908.05838.
+Ebrahim Ansari, Nguyen Bach, Ondrej Bojar, Roldano Cattoni, Fahim Dalvi, Nadir Durrani, Marcello Federico, Christian Federmann, Jiatao Gu, Fei Huang, et al. 2020. Findings of the iwslt 2020 evaluation campaign. In Proceedings of the 17th International Conference on Spoken Language Translation, pages 1-34.
+Zhenisbek Assylbekov and Assulan Nurkas. 2014. Initial explorations in kazakh to english statistical machine translation. In The First Italian Conference on Computational Linguistics CLiC-it 2014, page 12.
+Duygu Ataman, Matteo Negri, Marco Turchi, and Marcello Federico. 2017. Linguistically motivated vocabulary reduction for neural machine translation from turkish to english. The Prague Bulletin of Mathematical Linguistics, 108(1):331-342.
+Xolisa Axmedova, Guzal Abdujalilova, and Umida Abdurahmonova. 2019. Algorithm based on linguistic models in machine translation between russian and Uzbek. ACADEMICIA: An International Multidisciplinary Research Journal, 9(12):16-21.
+Emily M Bender. 2011. On achieving and evaluating language-independence in nlp. Linguistic Issues in Language Technology, 6(3):1-26.
+Alexandra Birch, Miles Osborne, and Philipp Koehn. 2008. Predicting success in machine translation. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 745-754, Honolulu, Hawaii. Association for Computational Linguistics.
+Arianna Bisazza and Marcello Federico. 2009. Morphological pre-processing for turkish to english statistical machine translation. In Proceedings of the International Workshop on Spoken Language Translation (IWSLT).
+Eleftheria Briakou and Marine Carpuat. 2019. The university of Maryland's Kazakh-English neural machine translation system at WMT19. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 134-140.
+Emanuele Bugliarello, Sabrina J. Mielke, Antonios Anastasopoulos, Ryan Cotterell, and Naoaki Okazaki. 2020. It's easier to translate out of English than into it: Measuring neural translation difficulty by cross-mutual information. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1640-1649, Online. Association for Computational Linguistics.
+Mustafa Alp Cetin and Rita Ismailova. Assisting tool for essay grading for Turkish language instructors. MANAS Journal of Engineering, 7(2):141-146.
+Narayan Choudhary and Girish Nath Jha. 2011. Creating multilingual parallel corpora in indian languages. In Language and Technology Conference, pages 527-537. Springer.
+George Doddington. 2002. Automatic evaluation of machine translation quality using n-gram cooccurrence statistics. In Proceedings of the second international conference on Human Language Technology Research, pages 138-145.
+Daxiang Dong, Hua Wu, Wei He, Dianhai Yu, and Haifeng Wang. 2015. Multi-task learning for multiple language translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1723-1732, Beijing, China. Association for Computational Linguistics.
+Ilknur Durgar El-Kahlout and Kemal Oflazer. 2006. Initial explorations in english to turkish statistical machine translation. In Proceedings on the Workshop on Statistical Machine Translation, pages 7-14.
+Miquel Esplà-Gomis, Mikel L Forcada, Gema Ramírez-Sánchez, and Hieu Hoang. 2019. ParaCrawl: Web-scale parallel corpora for the languages of the EU. In Proceedings of Machine Translation Summit XVII Volume 2: Translator, Project and User Tracks, pages 118-119.
+Rauf Fatullayev, Ali Abbasov, and Abulfat Fatullayev. 2008. Dilmanc is the 1st MT system for Azerbaijan. Proc. of SLTC-08, Stockholm, Sweden, pages 63-64.
+$\forall$ , Wilhelmina Nekoto, Vukosi Marivate, Tshinondiwa Matsila, Timi Fasubaa, Tajudeen Kolawole, Taiwo Fagbohungbe, Solomon Oluwole Akinola, Shamsuddee Hassan Muhammad, Salomon Kabongo, Salomey Osei, et al. 2020. Participatory research for low-resourced machine translation: A case study in african languages. Findings of EMNLP.
+Mikel L Forcada, Mireia Ginesti-Rosell, Jacob Nordfalk, Jim O'Regan, Sergio Ortiz-Rojas, Juan Antonio Pérez-Ortiz, Felipe Sánchez-Martínez, Gema Ramírez-Sánchez, and Francis M Tyers. 2011. Apertium: a free/open-source platform for rule-based machine translation. Machine translation, 25(2):127-144.
+Alexander Fraser. 2020. Findings of the WMT 2020 shared tasks in unsupervised MT and very low resource supervised MT. In Proceedings of the Fifth Conference on Machine Translation, pages 765-771, Online. Association for Computational Linguistics.
+
+Memduh Gokirmak, Francis Tyers, and Jonathan Washington. 2019. Machine Translation for Crimean Tatar to Turkish. In Proceedings of the 2nd Workshop on Technologies for MT of Low Resource Languages, pages 24-31.
+Francisco Guzmán, Peng-Jen Chen, Myle Ott, Juan Pino, Guillaume Lample, Philipp Koehn, Vishrav Chaudhary, and Marc'Aurelio Ranzato. 2019. Two new evaluation datasets for low-resource machine translation: Nepali-english and sinhala-english. CoRR, abs/1902.01382.
+Ilker Hamzaoglu. 1993. Machine translation from Turkish to other Turkic languages and an implementation for the Azeri language. Ph.D. thesis, MSc Thesis, Bogazici University, Istanbul.
+Sardana Ivanova, Anisia Katinskaia, and Roman Yan-garber. 2019. Tools for supporting language learning for sakha. In Proceedings of the 22nd Nordic Conference on Computational Linguistics, pages 155-163, Turku, Finland. Linköping University Electronic Press.
+Lars Johanson and Éva Ágnes Csató Johanson. 2015. The Turkic Languages. Routledge.
+Pratik Joshi, Christain Barnes, Sebastin Santy, Simran Khanuja, Sanket Shah, Anirudh Srinivasan, Satwik Bhattachamishra, Sunayana Sitaram, Monjit Choudhury, and Kalika Bali. 2019. Unsung challenges of building and deploying language technologies for low resource language communities. arXiv preprint arXiv:1912.03457.
+Pratik Joshi, Sebastin Santy, Amar Budhiraja, Kalika Bali, and Monojit Choudhury. 2020. The state and fate of linguistic diversity and inclusion in the nlp world. arXiv preprint arXiv:2004.09095.
+Azizbek Kadirov. 2015. The algorithm of machine translation from uzbek to karakalpak. TurkLang-2015, page 24.
+Aidar Khusainov, Dzhavdet Suleymanov, Rinat Gilmullin, and Ajrat Gatiatullin. 2018. Building the Tatar-Russian NMT system based on re-translation of multilingual data. In International Conference on Text, Speech, and Dialogue, pages 163-170. Springer.
+Aidar Khusainov, Dzhavdet Suleymanov, Rinat Gilmullin, Alina Minsafina, Lenara Kubedinova, and Nilufar Abdurakhmonova. 2020. First Results of the "TurkLang-7" Project: Creating Russian-Turkic Parallel Corpora and MT Systems.
+Rachel Killackey. 2013. Statistical Machine Translation from English to Tuvan.
+Diederik P. Kingma and Jimmy Lei Ba. 2015. Adam: A Method for Stochastic Optimization. In *ICLR 2015: International Conference on Learning Representations* 2015.
+
+Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In MT Summit, volume 5, pages 79-86. CiteSeer.
+Julia Kreutzer, Jasmijn Bastings, and Stefan Riezler. 2019. Joey NMT: A minimalist NMT toolkit for novices. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System Demonstrations, pages 109-114, Hong Kong, China. Association for Computational Linguistics.
+Patrick Littell, Chi-kiu Lo, Samuel Larkin, and Darlene Stewart. 2019. Multi-source transformer for Kazakh-Russian-English neural machine translation. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 267-274.
+Qingsong Ma, Ondrej Bojar, and Yvette Graham. 2018. Results of the wmt18 metrics shared task: Both characters and embeddings achieve good performance. In Proceedings of the third conference on machine translation: shared task papers, pages 671-688.
+Qingsong Ma, Johnny Wei, Ondrej Bojar, and Yvette Graham. 2019. Results of the wmt19 metrics shared task: Segment-level and strong mt systems pose big challenges. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 62-90.
+Muhtar Mahsut, Yasuhiro Ogawa, Kazue Sugino, Katsuhiko Toyama, and Yasuyoshi Inagaki. 2004. An experiment on Japanese-Uighur machine translation and its evaluation. In Conference of the Association for Machine Translation in the Americas, pages 208-216. Springer.
+Nitika Mathur, Johnny Wei, Markus Freitag, Qingsong Ma, and Ondrej Bojar. 2020. Results of the WMT20 metrics shared task. In Proceedings of the Fifth Conference on Machine Translation, pages 688-725, Online. Association for Computational Linguistics.
+Sjur Moshagen, Trond Trosterud, Jack Rueter, Francis M. Tyers, and Tommi A. Pirinen. 2014. Open-source infrastructures for collaborative work on under-resourced languages. In Proceedings of CCURL workshop 2014.
+Nikitha Murikinati, Antonios Anastasopoulos, and Graham Neubig. 2020. Transliteration for cross-lingual morphological inflection. In Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 189–197.
+Sonja Nießen, Franz Josef Och, Gregor Leusch, Hermann Ney, et al. 2000. An Evaluation Tool for Machine Translation: Fast Evaluation for MT Research. In LREC.
+
+Maimitili Nimititi and Yamamoto Izumi. 2014. A Rule Based Approach for Japanese-Uyghur Machine Translation System. International Journal of Software Science and Computational Intelligence (IJSSCI), 6(1):56-69.
+Hiroki Nomoto, Kenji Okano, David Moeljadi, and Hideo Sawada. 2018. Tufs asian language parallel corpus (talpco). In Proceedings of the Twenty-fourth Annual Meeting of the Association for Natural Language Processing, pages 436-439.
+Atul Kr. Ojha, Valentin Malykh, Alina Karakanta, and Chao-Hong Liu. 2020. Findings of the LoResMT 2020 shared task on zero-shot for low-resource languages. In Proceedings of the 3rd Workshop on Technologies for MT of Low Resource Languages, pages 33-37, Suzhou, China. Association for Computational Linguistics.
+Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a Method for Automatic Evaluation of Machine Translation. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318.
+Maja Popovic. 2015. chrF: character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 392–395, Lisbon, Portugal. Association for Computational Linguistics.
+Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186-191, Brussels, Belgium. Association for Computational Linguistics.
+Matt Post, Chris Callison-Burch, and Miles Osborne. 2012. Constructing parallel corpora for six indian languages via crowdsourcing. In Proceedings of the Seventh Workshop on Statistical Machine Translation, pages 401-409.
+Ye Qi, Devendra Singh Sachan, Matthieu Felix, Sarguna Janani Padmanabhan, and Graham Neubig. 2018. When and Why are Pre-trained Word Embeddings Useful for Neural Machine Translation?
+Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020. COMET: A neural framework for MT evaluation. arXiv preprint arXiv:2009.09025.
+András Róna-Tas. 2015. Turkic Writing Systems. In Lars Johanson and Éva Ágnes Csató Johanson, editors, The Turkic Languages, chapter 6, pages 126-137. Routledge.
+Ilnar Salimzyanov, J Washington, and F Tyers. 2013. A free/open-source Kazakh-Tatar machine translation system. Machine Translation Summit XIV, pages 175-182.
+Thibault Sellam, Dipanjan Das, and Ankur P Parikh. 2020. BLEURT: Learning robust metrics for text generation. arXiv preprint arXiv:2004.04696.
+
+Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909.
+JL Song and L Dai. 2015. Construction of Uighur-Chinese parallel corpus. In Multimedia, Communication and Computing Application: Proceedings of the 2014 International Conference on Multimedia, Communication and Computing Application (MCCA 2014), Xiamen, China, October 16-17, 2014, page 353. CRC Press.
+Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929-1958.
+Miloš Stanojevic, Amir Kamran, Philipp Koehn, and Ondrej Bojar. 2015. Results of the WMT15 metrics shared task. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 256-273, Lisbon, Portugal. Association for Computational Linguistics.
+Aida Sundetova, Mikel Forcada, and Francis Tyers. 2015. A free/open-source machine translation system from english to kazakh. In Proceedings OF THE INTERNATIONAL CONFERENCE TURKIC LANGUAGE PROCESSING" TurkLang-2015, pages 78-90.
+A. C. Tantug, E. Adali, and Kemal Oflazer. 2007. Machine translation between turkic languages. In ACL.
+Ahmet Cüneyd Tantuğ, Eşref Adalı, and Kemal Oflazer. 2011. Türkmenceden Türkiye Türkçesine bilgisayarlı metin çevirisi [Computer translation of text from Turkmen into Turkish]. İTÜ Dergisi/d, 7(4).
+A Cüneyd Tantug, Kemal Oflazer, and Ilknur Durgar El-Kahlout. 2008. BLEU+: a Tool for Fine-Grained BLEU Computation. In LREC.
+Jörg Tiedemann. 2020. The Tatoeba Translation Challenge-Realistic Data Sets for Low Resource and Multilingual MT. arXiv preprint arXiv:2010.06354.
+Jörg Tiedemann and Lars Nygaard. 2004. The OPUS Corpus-Parallel and Free: http://logos.uio.no/opus. In LREC. Citeseer.
+Reut Tsarfaty, Dan Bareket, Stav Klein, and Amit Seker. 2020. From SPMRL to NMRL: What did we learn (and unlearn) in a decade of parsing morphologically-rich languages (MRLs)? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7396-7408, Online. Association for Computational Linguistics.
+Ualsher Tukeyev, Aidana Karibayeva, and Balzhan Abduali. 2019. Neural machine translation system for the kazakh language based on synthetic corpora. In MATEC Web of Conferences, volume 252, page 03006. EDP Sciences.
+
+Cigdem Keyder Turhan. 1997. An English to Turkish machine translation system using structural mapping. In Fifth Conference on Applied Natural Language Processing, pages 320-323.
+Francis M Tyers, Jonathan North Washington, Ilnar Salimzyanov, and Rustam Batalov. 2012. A prototype machine translation system for Tatar and Bashkir based on free/open-source components. In First Workshop on Language Resources and Technologies for Turkic Languages, page 11.
+Aidar Valeev, Ilshat Gibadullin, Albina Khusainova, and Adil Khan. 2019. Application of Low-resource Machine Translation Techniques to Russian-Tatar Language Pair. arXiv preprint arXiv:1910.00368.
+D. Varga, L. Nemeth, P. Halácsy, A. Kornai, V. Trón, and V. Nagy. 2005. Parallel corpora for medium density languages. In Proceedings of the RANLP 2005, pages 590-596.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is All You Need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, volume 30, pages 5998-6008.
+Dongqi Wang, Zihan Liu, Qingnan Jiang, Zewei Sun, Shujian Huang, and Jiajun Chen. 2020. NJUNLP's Machine Translation System for CCMT-2020 Uighur - Chinese Translation Task. In China Conference on Machine Translation, pages 76-82. Springer.
+Jonathan North Washington, Ilnar Salimzianov, Francis M. Tyers, Memduh Gokirmak, Sardana Ivanova, and Oğuz Kuyrukcu. 2019. Free/open-source technologies for Turkic languages developed in the Apertium project. In Proceedings of the International Conference on Turkic Language Processing (TurkLang 2019).
+Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. BERTscore: Evaluating text generation with bert. arXiv preprint arXiv:1904.09675.
+# A Overall Corpus Statistics
+
+Table 7 lists the training size (in sentences) for each language direction. Note that the corpus is continuously growing and improving; the version reported here is the one used for the bilingual baselines and human evaluations in this paper.
+
+# B Dataset Sources
+
+Our parallel corpus is a combination of public resources and individual/group contributions. We list the sources for all the resources and websites used in curating our corpus in Table 8. More recent information on the licences and reuse of the corpus can be found in the GitHub repository $^{21}$ .
+
+ | alt | az | ba | cjs | crh | cv | en | gag | kaa | kjh | kk | krc | kum | ky | ru | sah | slr | tk | tr | tt | tyv | ug | uum | uz | |
| alt | | 9.8K | 11.8K | 48 | 6.8K | 9.9K | 11.3K | 6.6K | 6.8K | 6.8K | 6.8K | 6.8K | 6.7K | 11.2K | 6.8K | 6.8K | 6.6K | 61 | 9.7K | 11.3K | 11.3K | 4.5K | 6.8K | | 10.5K |
| az | 9.8K | | 33.0K | 48 | 6.9K | 82.3K | 787.6K | 6.7K | 6.9K | 6.9K | 8.0K | 6.9K | 6.7K | 220.1K | 389.0K | 6.9K | 52 | 123.4K | 636.5K | 215.1K | 13.6K | 6.9K | | 227.4K | |
| ba | 11.8K | 33.0K | | | 6.8K | 34.4K | 64.5K | 6.6K | 6.8K | 6.7K | 6.8K | 6.8K | 6.7K | 35.9K | 36.2K | 6.6K | | 30.2K | 65.4K | 67.2K | 16.5K | 6.7K | | 27.5K | |
| cjs | 48 | 48 | | | 49 | | 57 | 48 | 31 | 47 | 46 | | | 47 | 2.4K | 48 | 44 | 47 | 50 | 50 | | | 53 | | 48 |
| crh | 6.8K | 6.9K | 6.8K | 49 | | 6.7K | 16.4K | 6.8K | 6.9K | 7.0K | 6.9K | 6.9K | 6.7K | 6.7K | 12.3K | 6.7K | 52 | 6.7K | 13.9K | 8.3K | | | 7.0K | | 7.3K |
| cv | 9.9K | 82.3K | 34.4K | | 6.7K | | 156.4K | 6.5K | 6.7K | 6.7K | 6.8K | 6.8K | 6.6K | 83.1K | 82.9K | 6.7K | | 78.6K | 158.2K | 160.0K | 15.1K | 6.6K | | 59.9K | |
| en | 11.3K | 787.6K | 64.5K | 57 | 16.4K | 156.4K | | 6.8K | 7.3K | 7.0K | 614.5K | 7.0K | 6.8K | 625.6K | 47.1M | 7.1K | 837 | 250.4K | 39.9M | 572.5K | 29.0K | 111.2K | 507 | 559.7K | |
| gag | 6.6K | 6.7K | 6.6K | 48 | 6.8K | 6.5K | 6.8K | | 6.7K | 6.8K | 6.7K | 6.7K | 6.6K | 6.6K | 6.7K | 6.6K | 55 | 6.5K | 6.7K | 6.6K | | | 6.8K | | 7.1K |
| kaa | 6.8K | 6.9K | 6.8K | 31 | 6.9K | 6.7K | 7.3K | 6.7K | | 6.8K | 6.9K | 6.9K | 6.7K | 6.7K | 7.3K | 6.7K | 32 | 6.6K | 6.9K | 6.8K | | | 6.8K | | 9.8K |
| kjh | 6.8K | 6.9K | 6.7K | 47 | 7.0K | 6.7K | 7.0K | 6.8K | 6.8K | | 6.8K | 7.7K | 6.7K | 6.7K | 6.8K | 8.1K | 52 | 6.6K | 6.8K | 6.7K | | | 7.0K | | 7.3K |
| kk | 6.8K | 8.0K | 6.8K | 46 | 6.9K | 6.8K | 614.5K | 6.7K | 6.9K | 6.8K | | 6.9K | 6.7K | 7.0K | 4.5M | 7.0K | 48 | 6.6K | 65.9K | 8.4K | | | 6.8K | | 124.9K |
| krc | 6.8K | 6.9K | 6.8K | | 6.9K | 6.8K | 7.0K | 6.7K | 6.9K | 7.7K | 6.9K | | 6.8K | 6.7K | 6.9K | 7.6K | | 6.6K | 6.9K | 6.8K | | | 6.8K | | 7.3K |
| kum | 6.7K | 6.7K | 6.7K | | 6.7K | 6.6K | 6.8K | 6.6K | 6.7K | 6.7K | 6.7K | 6.8K | | 6.6K | 6.8K | 6.6K | | 6.5K | 6.7K | 6.6K | | | 6.7K | | 7.1K |
| ky | 11.2K | 220.1K | 35.9K | 47 | 6.7K | 83.1K | 625.6K | 6.6K | 6.7K | 7.0K | 7.0K | 6.7K | 6.6K | | 309.3K | 6.7K | 50 | 122.5K | 549.0K | 232.4K | 15.0K | 6.7K | | 127.0K | |
| ru | 6.8K | 389.0K | 36.2K | 2.4K | 12.3K | 82.9K | 47.1M | 6.7K | 7.3K | 6.8K | 4.5M | 6.9K | 6.8K | 309.3K | | 8.7K | | 122.5K | 16.8M | 296.7K | | | 54.6K | | 1.2M |
| sah | 6.6K | 6.9K | 6.6K | 48 | 6.7K | 6.7K | 7.1K | 6.6K | 6.7K | 8.1K | 7.0K | 7.6K | 6.6K | 6.7K | 8.7K | | 51 | 6.5K | 6.9K | 6.9K | | | 6.6K | | 7.3K |
| slr | 61 | 52 | | 44 | 52 | | 837 | 55 | 32 | 52 | 48 | | | 50 | | 51 | | 56 | 59 | 32 | | | 59 | | 48 |
| tk | 9.7K | 123.4K | 30.2K | 47 | 6.7K | 78.6K | 250.4K | 6.5K | 6.6K | 6.6K | 6.6K | 6.6K | 6.5K | 122.5K | 122.5K | 6.5K | 56 | | 244.1K | 235.4K | 14.0K | 6.6K | | 7.2K | |
| tr | 11.3K | 636.5K | 65.4K | 50 | 13.9K | 158.2K | 39.9M | 6.7K | 6.9K | 6.8K | 65.9K | 6.9K | 6.7K | 549.0K | 16.8M | 6.9K | 59 | 244.1K | | 531.2K | 29.2K | 64.4K | | 254.8K | |
| tt | 11.3K | 215.1K | 67.2K | 50 | 8.3K | 160.0K | 572.5K | 6.6K | 6.8K | 6.7K | 8.4K | 6.8K | 6.6K | 232.4K | 296.7K | 6.9K | 32 | 235.4K | 531.2K | | | 15.0K | 6.7K | | 136.3K |
| tyv | 4.5K | 13.6K | 16.5K | | | 15.1K | 29.0K | | | | | | | 15.0K | | | | 14.0K | 29.2K | 15.0K | | | | | 10.8K |
| ug | 6.8K | 6.9K | 6.7K | 53 | 7.0K | 6.6K | 111.2K | 6.8K | 6.8K | 7.0K | 6.8K | 6.8K | 6.7K | 6.7K | 54.6K | 6.6K | 59 | 6.6K | 64.4K | 6.7K | | | | 19.5K | |
| uum | | | | | | | 507 | | | | | | | | | | | | | | | | | | |
| uz | 10.5K | 227.4K | 27.5K | 48 | 7.3K | 59.9K | 559.7K | 7.1K | 9.8K | 7.3K | 124.9K | 7.3K | 7.1K | 127.0K | 1.2M | 7.3K | 48 | 7.2K | 254.8K | 136.3K | 10.8K | 19.5K | | | |
+
+Table 7: Parallel corpora size for each language pair.
+
+| Source | Link | Languages | Size |
| --- | --- | --- | --- |
| Tatoeba Challenge (OPUS+Tatoeba+Gourmet+JW300) | https://github.com/Helsinki-NLP/Tatoeba-Challenge | az, ba, crh, cv, gag, kjh, kk, krc, kum, ky, sah, tk, tr, tt, tyv, ug, uz, ru, en | ~40m |
| UDHR | https://www.ohchr.org/EN/UDHR/Pages/SearchByLang.aspx | alt, ba, az, cv, cjs, crh, gag, kaa, kjh, kk, ky, sah, slr, tk, tt, tr, ug, uz, ru, en | ~100 per direction |
| Bible | https://www.faithcomesbyhearing.com/audio-bible-resources/recordings-database | alt, ba, az, cjs, cv, crh, en, gag, kaa, kjh, kk, ky, sah, tk, tt, ug, uz, tr | ~9k per direction |
| Ted Talks | https://www.ted.com/participate/translate/our-languages | az, en, kk, ky, ru, tt, tr, uz, ug | ~600k |
| Mozilla | | az, ba, cv, en, kk, ky, sah, tk, tt, ug, uz, tr, ru | ~300 per direction |
| Azerbaijani News | https://github.com/ derintelligence/en-az-parallel-corpus | az, en | ~68k |
| Uzbek/English News | https://data.gov.uz https://president.uz https://uz.usembassy.gov https://www.gov.uz | uz, en | ~60k |
| Uzbekistan Legislative Dataset (Law) | https://lex.uz/ | uz, ru, en | ~1.5m |
| KhanAcademy Project Translations(Math/Science) | https://uz.khanacademy.org/ | uz, en | ~200k |
| Karakalpak News | https://kknews.uz https://www.gov.uz http://karakalpakstan.uz https://www.qrstat.uz/kk | kaa, uz, ru, en | ~60k |
| Bashkir-Russian Corpus | https://github.com/AigizK/bashkort-parallel-corpora | ba,ru | ~600k |
| Salar Language Materials | http://www.sino-platonic.org/complete/spp043_salar_language.pdf | slr,en | ~700 |
| Urum Language Materials | https://web.archive.org/web/20180919233848/http://projects.turkmas.uoa.gr/urum/ | uum, en | ~500 |
| Russian-Shor Online Dictionary | http://tili.tadarlar.ru/tadar/rus-shor.html | ru,cjs | ~300 |
+
+Table 8: Sources and links for resources and websites used.
\ No newline at end of file
diff --git a/alargescalestudyofmachinetranslationinturkiclanguages/images.zip b/alargescalestudyofmachinetranslationinturkiclanguages/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..9e6528c06a732a1c320553fdef55675a2589f10e
--- /dev/null
+++ b/alargescalestudyofmachinetranslationinturkiclanguages/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7276b8cfd7524a05bbcb66b3945c1674e75495bea480149246b252803f38708b
+size 961314
diff --git a/alargescalestudyofmachinetranslationinturkiclanguages/layout.json b/alargescalestudyofmachinetranslationinturkiclanguages/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..89de373cd072f86d5c1333b9ac531fbd8b3899f7
--- /dev/null
+++ b/alargescalestudyofmachinetranslationinturkiclanguages/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8652cf8c2a583d9bb7e84bf22d88c84ce4ebd5e413f7ebeb5137cc1a510fdd2b
+size 406839
diff --git a/alignartnonautoregressiveneuralmachinetranslationbyjointlylearningtoestimatealignmentandtranslate/84a60cc1-25ad-40ea-aec1-dc1afe71c120_content_list.json b/alignartnonautoregressiveneuralmachinetranslationbyjointlylearningtoestimatealignmentandtranslate/84a60cc1-25ad-40ea-aec1-dc1afe71c120_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..46ee6c8cfbe53693fe37ef3daf229186ac5c1d71
--- /dev/null
+++ b/alignartnonautoregressiveneuralmachinetranslationbyjointlylearningtoestimatealignmentandtranslate/84a60cc1-25ad-40ea-aec1-dc1afe71c120_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a19365b8f09f7838dfa4d165c4cf122a1da77d6454bafb2c7558ca25eb01cf9e
+size 100402
diff --git a/alignartnonautoregressiveneuralmachinetranslationbyjointlylearningtoestimatealignmentandtranslate/84a60cc1-25ad-40ea-aec1-dc1afe71c120_model.json b/alignartnonautoregressiveneuralmachinetranslationbyjointlylearningtoestimatealignmentandtranslate/84a60cc1-25ad-40ea-aec1-dc1afe71c120_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..aeb427c7831fa295f7c8cdedc9a6fd8fad6afcfc
--- /dev/null
+++ b/alignartnonautoregressiveneuralmachinetranslationbyjointlylearningtoestimatealignmentandtranslate/84a60cc1-25ad-40ea-aec1-dc1afe71c120_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:be38b364e6eaed46bc75c84c7af0eb7dfd310de496faaff60ad574fb23980007
+size 118902
diff --git a/alignartnonautoregressiveneuralmachinetranslationbyjointlylearningtoestimatealignmentandtranslate/84a60cc1-25ad-40ea-aec1-dc1afe71c120_origin.pdf b/alignartnonautoregressiveneuralmachinetranslationbyjointlylearningtoestimatealignmentandtranslate/84a60cc1-25ad-40ea-aec1-dc1afe71c120_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..07e2069d75b84afd4ae47308caf5ad8653b9770f
--- /dev/null
+++ b/alignartnonautoregressiveneuralmachinetranslationbyjointlylearningtoestimatealignmentandtranslate/84a60cc1-25ad-40ea-aec1-dc1afe71c120_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e7e83c729ab74e3497c336340f5a2b89c3c4720e32c9070ac0ef973031b01e22
+size 614744
diff --git a/alignartnonautoregressiveneuralmachinetranslationbyjointlylearningtoestimatealignmentandtranslate/full.md b/alignartnonautoregressiveneuralmachinetranslationbyjointlylearningtoestimatealignmentandtranslate/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..374775814116c186e2b44e364949cc4393773bde
--- /dev/null
+++ b/alignartnonautoregressiveneuralmachinetranslationbyjointlylearningtoestimatealignmentandtranslate/full.md
@@ -0,0 +1,458 @@
+# AligNART: Non-autoregressive Neural Machine Translation by Jointly Learning to Estimate Alignment and Translate
+
+Jongyoon Song $^{1,2*}$
+
+Sungwon Kim
+
+Sungroh Yoon $^{1,3\dagger}$
+
+$^{1}$ Data Science and AI Laboratory, Seoul National University, South Korea
+
+$^{2}$ Kakao Enterprise, South Korea
+
+$^{3}$ Interdisciplinary Program in Artificial Intelligence, Seoul National University, South Korea
+
+{coms1580, ksw0306, sryoon}@snu.ac.kr
+
+# Abstract
+
+Non-autoregressive neural machine translation (NART) models suffer from the multi-modality problem which causes translation inconsistency such as token repetition. Most recent approaches have attempted to solve this problem by implicitly modeling dependencies between outputs. In this paper, we introduce AligNART, which leverages full alignment information to explicitly reduce the modality of the target distribution. AligNART divides the machine translation task into $(i)$ alignment estimation and $(ii)$ translation with aligned decoder inputs, guiding the decoder to focus on simplified one-to-one translation. To alleviate the alignment estimation problem, we further propose a novel alignment decomposition method. Our experiments show that AligNART outperforms previous non-iterative NART models that focus on explicit modality reduction on WMT14 En $\leftrightarrow$ De and WMT16 Ro $\rightarrow$ En. Furthermore, AligNART achieves BLEU scores comparable to those of the state-of-the-art connectionist temporal classification based models on WMT14 En $\leftrightarrow$ De. We also observe that AligNART effectively addresses the token repetition problem even without sequence-level knowledge distillation.
+
+# 1 Introduction
+
+In the neural machine translation (NMT) domain, non-autoregressive NMT (NART) models (Gu et al., 2018) have been proposed to alleviate the low translation speeds of autoregressive NMT (ART) models. However, these models suffer from degraded translation quality (Gu et al., 2018; Sun et al., 2019). To improve the translation quality of NART, several studies iteratively refine decoded outputs with minimal iterations (Ghazvininejad et al., 2019; Kasai et al., 2020a; Lee et al., 2020; Guo et al., 2020; Saharia et al., 2020); other recent works aim to improve NART without iteration (Qian et al., 2021; Gu and Kong, 2021).
+
+One of the significant limitations of non-iterative NART models is the multi-modality problem. This problem originates from the fact that the models must maximize the probabilities of multiple targets without considering conditional dependencies between target tokens. For example, in English-to-German translation, the source sentence "Thank you very much." can be translated to "Danke schön." or "Vielen Dank." Under the conditional independence assumption, non-iterative NART models are likely to generate improper translations such as "Danke Dank." or "Vielen schön." (Gu et al., 2018). For the same reason, other inconsistency problems such as token repetition or omission occur frequently in non-iterative NART (Gu and Kong, 2021).
+
+There are two main methods for non-iterative NART to address the multi-modality problem. Some works focus on implicitly modeling the dependencies between the target tokens (Gu and Kong, 2021). For example, Ghazvininejad et al. (2020), Saharia et al. (2020), and Gu and Kong (2021) modify the objective function based on dynamic programming, whereas Qian et al. (2021) provide target tokens to the decoder during training.
+
+On the other hand, other works focus on an explicit reduction of the modality of the target distribution by utilizing external source or target sentence information rather than modifying the objective function. For example, Akoury et al. (2019) and Liu et al. (2021) use syntactic or semantic information; Gu et al. (2018), Zhou et al. (2020b), and Ran et al. (2021) use the alignment information between source and target tokens. However, previous explicit modality reduction methods show suboptimal performance.
+
+Zhou et al. (2020b) and Ran et al. (2021) extract fertility (Brown et al., 1993) and ordering information from word alignments, which enables the modeling of several types of mappings but not many-to-one and many-to-many cases. We hypothesize that leveraging the entire set of mappings significantly reduces the modality and is the key to performance improvement.
+
+In this work, we propose AligNART, a non-iterative NART model that mitigates the multi-modality problem by utilizing the complete information in word alignments. AligNART divides the machine translation task into $(i)$ alignment estimation and $(ii)$ non-autoregressive translation under the given alignments. Modeling all types of mappings guides $(ii)$ closer to one-to-one translation. In AligNART, a module called Aligner, which estimates alignments to generate aligned decoder inputs, is simply added on top of NAT (Gu et al., 2018).
+
+However, it is challenging to estimate the complex alignment information using only the source sentence during inference. Specifically, Aligner must simultaneously predict the number of target tokens corresponding to each source token and their mapping. To overcome this problem, we further propose alignment decomposition, which factorizes the alignment process into three sub-processes: duplication, permutation, and grouping. Each sub-process corresponds to a much more feasible sub-problem: one-to-many mapping, ordering, and many-to-one mapping, respectively.
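The three sub-processes can be pictured as binary matrices whose product recovers a full alignment matrix. A minimal sketch with plain lists follows; the toy shapes, the concrete matrices, and merging-by-summation in the grouping step are illustrative assumptions, not the paper's exact formulation:

```python
# Toy sketch of alignment decomposition A = G * P * D.
# Source length M=2, duplication counts c=[2, 1] -> duplicated length L=3,
# a permutation over the 3 slots, and a grouping that merges two slots
# into one target position -> target length N=2.

def matmul(a, b):
    """Plain-list matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

# Duplication (L x M): source token 1 is copied twice, token 2 once.
D = [[1, 0],
     [1, 0],
     [0, 1]]
# Permutation (L x L): reorder the duplicated slots to [x2, x1, x1].
P = [[0, 0, 1],
     [1, 0, 0],
     [0, 1, 0]]
# Grouping (N x L): merge the last two slots into one target position
# (a many-to-one mapping; summation stands in for the pooling used in practice).
G = [[1, 0, 0],
     [0, 1, 1]]

A = matmul(G, matmul(P, D))  # full N x M alignment
print(A)  # [[0, 1], [2, 0]]: target 1 <- source 2; target 2 <- source 1, twice
```

Composing the predicted matrices in this order is what lets Aligner solve three simple sub-problems instead of estimating an arbitrary alignment matrix in one shot.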
+
+Our experimental results show that AligNART outperforms previous non-iterative NART models of explicit modality reduction on WMT14 $\mathrm{En} \leftrightarrow \mathrm{De}$ and WMT16 $\mathrm{Ro} \rightarrow \mathrm{En}$ . AligNART achieves performance comparable to that of the recent state-of-the-art non-iterative NART model on WMT14 $\mathrm{En} \leftrightarrow \mathrm{De}$ . We observe that the modality reduction in AligNART addresses the token repetition issue even without sequence-level knowledge distillation (Kim and Rush, 2016). We also conduct quantitative and qualitative analyses on the effectiveness of alignment decomposition.
+
+# 2 Background
+
+Given a source sentence $x = \{x_{1},x_{2},\dots,x_{M}\}$ and its translation $y = \{y_{1},y_{2},\dots,y_{N}\}$ , ART models with encoder-decoder architecture are trained with chained target distributions and infer the target sentence autoregressively:
+
+$$
+p (y | x) = \prod_ {n = 1} ^ {N} p \left(y _ {n} \mid y _ {< n}, x\right). \tag {1}
+$$
+
+At each decoding position $n$ , the decoder of the model is conditioned on the previous target tokens $y_{<n}$ . The grouping label $g_l$ marks whether position $l$ belongs to a group of size greater than one:
+
+$$
+g _ {l} = \left\{ \begin{array} {l l} 1 & \text {if } \sum_ {n = 1} ^ {N} G _ {n, l} \sum_ {l ^ {\prime} = 1} ^ {L} G _ {n, l ^ {\prime}} > 1 \\ 0 & \text {else.} \end{array} \right. \tag {10}
+$$
+$$
+
+The grouping loss is defined as follows:
+
+$$
+\mathcal {L} _ {G} = - \frac {1}{L} \sum_ {l = 1} ^ {L} \log p _ {l} (g _ {l}), \tag {11}
+$$
+
+where $p_l$ is the predicted probability distribution of the grouping predictor at position $l$ .
+
+Our final loss function is defined as the sum of the negative log-likelihood based translation loss $\mathcal{L}_T$ and alignment loss $\mathcal{L}_A$ :
+
+$$
+\mathcal {L} = \mathcal {L} _ {T} + \mathcal {L} _ {A} = \mathcal {L} _ {T} + \alpha \mathcal {L} _ {D} + \beta \mathcal {L} _ {P} + \gamma \mathcal {L} _ {G}, \tag {12}
+$$
+
+where we set $\alpha = \beta = \gamma = 0.5$ for all the experiments.
+
+# 3.2.3 Inference
+
+During inference, Aligner sequentially predicts the duplication, permutation, and grouping matrices to compute the aligned decoder inputs $d$ as depicted in Figure 1b. The duplication predictor in Aligner infers $\hat{c}_m$ at each position $m$ ; then, we can directly construct a duplication matrix $\hat{D}$ using Equation 5. The permutation predictor predicts the distribution of the target position $P^{pred}$ . We obtain a permutation matrix $\hat{P}$ that minimizes the KL-divergence as follows:
+
+$$
+\hat {P} = \underset {P} {\arg \min } \left(- \sum_ {i} \sum_ {j} P _ {i, j} \log P _ {i, j} ^ {p r e d}\right). \tag {13}
+$$
+
+We utilize the linear sum assignment problem solver provided by Jones et al. (2001) to find $\hat{P}$ . The grouping predictor infers the binary predictions $\hat{g}_l$ from the permuted encoder outputs. We construct a grouping matrix $\hat{G}$ using $\hat{g}_l$ and Equations 6 and 10. With a predicted alignment matrix $\hat{A} = \hat{G} \cdot \hat{P} \cdot \hat{D}$ , Aligner constructs the decoder inputs using Equation 4, and the decoder performs translation from the aligned inputs.
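Equation 13 is a linear assignment problem over permutation matrices. A brute-force sketch of the same objective is below (the paper uses SciPy's Hungarian-method `linear_sum_assignment` solver; the brute force here is only for tiny illustrative inputs, and the example probabilities are assumptions):

```python
import itertools
import math

def estimate_permutation(p_pred):
    """Return the permutation minimizing the cross-entropy of Equation 13:
    -sum_{i,j} P[i][j] * log(p_pred[i][j]).
    Brute force over all permutations; in practice the same objective is
    solved with scipy.optimize.linear_sum_assignment on the cost matrix
    -log(p_pred)."""
    n = len(p_pred)
    cost = [[-math.log(p) for p in row] for row in p_pred]
    best = min(itertools.permutations(range(n)),
               key=lambda perm: sum(cost[i][j] for i, j in enumerate(perm)))
    return best  # best[i] = j means P[i][j] = 1

# Toy predicted distribution over target positions (rows sum to 1).
p_pred = [[0.1, 0.8, 0.1],
          [0.7, 0.2, 0.1],
          [0.2, 0.1, 0.7]]
print(estimate_permutation(p_pred))  # (1, 0, 2)
```

Casting the search as an assignment problem guarantees a hard, valid permutation even when the row-wise argmaxes of $P^{pred}$ collide.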
+
+# 3.2.4 Decoding Strategies
+
+For the re-scoring based decoding method, we select candidates of alignments using the predicted distributions in the duplication and grouping predictors.
+
+We identify $m'$ positions in the outputs of the duplication predictor where the probability of the predicted class is low. We then construct a $2^{m'}$ -candidate pool in which the predictions at a subset of the $m'$ positions are replaced with the second most probable class, and identify the top- $a$ candidates with the highest joint probabilities. Similarly, for each of the $a$ candidates, we construct a $2^{l'}$ -candidate pool in the grouping predictor and identify the top $b$ candidates. Finally, we rank the $a \cdot b$ translations for the alignment candidates using a teacher ART model and select the best translation among them.
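For binary predictors such as grouping, the candidate-pool construction above can be sketched as follows. The function name, the treatment of predictions as independent Bernoulli probabilities, and the example values are illustrative assumptions:

```python
import itertools
import math

def candidate_pool(probs, m):
    """Sketch of re-scoring candidate construction for a binary predictor:
    take the m positions where the predicted class is least confident,
    enumerate the 2^m flips of those positions, and rank candidates by
    joint log-probability.  `probs[i]` is P(class 1 at position i)."""
    base = [int(p >= 0.5) for p in probs]            # most probable class
    flip_pos = sorted(range(len(probs)),             # least confident first
                      key=lambda i: abs(probs[i] - 0.5))[:m]
    pool = []
    for flips in itertools.product([0, 1], repeat=m):
        cand = list(base)
        for pos, f in zip(flip_pos, flips):
            if f:
                cand[pos] = 1 - cand[pos]            # second most probable class
        logp = sum(math.log(probs[i] if c else 1 - probs[i])
                   for i, c in enumerate(cand))
        pool.append((logp, cand))
    pool.sort(key=lambda t: -t[0])                   # highest joint prob first
    return pool

pool = candidate_pool([0.9, 0.55, 0.2, 0.48], m=2)
print(pool[0][1], pool[1][1])  # [1, 1, 0, 0] [1, 1, 0, 1]
```

The top candidates from the duplication and grouping pools are then combined and re-scored by the teacher ART model, as described above.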
+
+# 3.3 Architecture of AligNART
+
+We use the deep-shallow (12-1 for short) Transformer (Vaswani et al., 2017) architecture (i.e., a 12-layer encoder and a 1-layer decoder) proposed by Kasai et al. (2020b) for two reasons. First, a deeper encoder helps Aligner increase the estimation accuracy of the alignment matrix during inference. Second, the deep-shallow architecture improves the inference speed, since an encoder layer has no cross-attention module, unlike a decoder layer. The architectures of the duplication, permutation, and grouping predictors are shown in the Appendix.
+
+# 3.4 Alignment Score Filtering
+
+Some alignment tools, such as GIZA++ (Och and Ney, 2003), provide an alignment score for each sentence pair by default. Samples with low alignment scores are more likely to contain noise caused by the sentence pairs or the alignment tools. For GIZA++, we filter out a fixed portion of samples with low alignment scores to ease alignment estimation. Since pairs of long sentences tend to be aligned with low scores, we apply the same filtering portion within each target sentence length.
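The length-stratified filtering can be sketched as below. The tuple format of `samples` and the helper name are assumptions for illustration:

```python
from collections import defaultdict

def filter_by_alignment_score(samples, portion=0.05):
    """Sketch of alignment score filtering: drop the lowest-scoring
    `portion` of samples *within each target length bucket*, so that long
    (naturally low-scoring) pairs are not disproportionately removed.
    `samples` is a list of (target_length, alignment_score, pair) tuples
    (an assumed format)."""
    buckets = defaultdict(list)
    for s in samples:
        buckets[s[0]].append(s)
    kept = []
    for length, group in buckets.items():
        group.sort(key=lambda s: s[1])        # lowest score first
        drop = int(len(group) * portion)      # fixed portion per length
        kept.extend(group[drop:])
    return kept

data = [(5, 0.9, "a"), (5, 0.1, "b"), (20, 0.3, "c"), (20, 0.2, "d")]
# With portion=0.5, the lowest-scoring half of each length bucket is dropped.
print(sorted(p for _, _, p in filter_by_alignment_score(data, 0.5)))  # ['a', 'c']
```

Without the per-length buckets, a global threshold would discard mostly long sentence pairs, biasing the training distribution.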
+
+# 4 Experimental Setups
+
+# 4.1 Datasets and Preprocessing
+
+We evaluate our method on two translation datasets: WMT14 English-German (En-De) and WMT16 English-Romanian (En-Ro). WMT14 En-De/WMT16 En-Ro datasets contain 4.5M/610K training pairs, respectively.
+
+For the WMT14 En-De dataset, we use the preprocessing pipelines provided by $fairseq^{1}$ (Ott et al., 2019). For the WMT16 En-Ro dataset, we use the preprocessed corpus provided by Lee et al. (2018). The preprocessed datasets share a vocabulary dictionary between the source and target languages. We use fast align (FA) (Dyer et al., 2013) and GIZA++ (GZ), which is known to be more accurate than fast align, as word alignment tools. All the corpora are passed to the alignment tools at the subword level. We filter out samples where the maximum number of duplications exceeds 16. We explain the details of the alignment processing in the Appendix.
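Deriving per-source-token duplication counts from the subword-level alignment links, and applying the maximum-duplication filter above, can be sketched as follows (the link format `(source_index, target_index)` matches fast align's output convention; the helper itself is illustrative):

```python
def duplication_counts(align_links, src_len):
    """Count how many target tokens each source token is aligned to,
    i.e. the duplication counts c_m, from alignment links (m, n)."""
    c = [0] * src_len
    for m, _ in align_links:
        c[m] += 1
    return c

# Toy alignment: source token 0 -> two targets, token 1 -> one, token 2 -> three.
links = [(0, 0), (0, 1), (1, 2), (2, 3), (2, 4), (2, 5)]
c = duplication_counts(links, src_len=3)
print(c)            # [2, 1, 3]
print(max(c) > 16)  # False -> this sample is kept
```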
+
+We use sequence-level knowledge distillation (KD) (Kim and Rush, 2016) to construct the distillation sets; a Transformer ART model is trained to generate the distillation set for each translation direction.
+
+# 4.2 Models and Baselines
+
+We compare our model with several non-iterative NART baselines and, as aforementioned, divide the non-iterative NART models into two types: implicit dependency modeling and explicit modality reduction (see Table 1). We also train ART models and deep-shallow NAT for the analysis. Our models are implemented based on fairseq.
+
+| Models | En→De | De→En | Time | Speedup | En→Ro | Ro→En | Time | Speedup |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| **Autoregressive Models** | | | | | | | | |
+| Transformer (Vaswani et al., 2017) | 27.3 | - | - | - | - | - | - | - |
+| Transformer (ours) | 27.4 | 31.4 | 314 | ×1.0 | 34.1 | 33.9 | 307 | ×1.0 |
+| **Non-iterative Non-autoregressive Models (implicit dependency modeling)** | | | | | | | | |
+| FlowSeq (Ma et al., 2019) | 21.5 | 26.2 | - | - | 29.3 | 30.4 | - | - |
+| AXE (Ghazvininejad et al., 2020) | 23.5 | 27.9 | - | - | 30.8 | 31.5 | - | - |
+| NAT-EM (Sun and Yang, 2020) | 24.5 | 27.9 | 24 | ×16.4 | - | - | - | - |
+| NARLVM (Lee et al., 2020) | 25.7 | - | 19 | ×15.0 | - | 28.4 | 18 | ×34.0 |
+| GLAT (Qian et al., 2021) | 25.2 | 29.8 | - | ×15.3 | 31.2 | 32.0 | - | ×15.3 |
+| Imputer (Saharia et al., 2020) | 25.8 | 28.4 | - | ×18.6 | 32.3 | 31.7 | - | - |
+| CTC (Gu and Kong, 2021) | 26.5 | 30.5 | - | ×16.8 | 33.4 | 34.1 | - | ×16.8 |
+| **Non-iterative Non-autoregressive Models (explicit modality reduction)** | | | | | | | | |
+| NAT-FT (Gu et al., 2018) | 17.7 | 21.5 | 39 | ×15.6 | 27.3 | 29.1 | 39 | ×15.6 |
+| Distortion (Zhou et al., 2020b) | 22.7 | - | - | - | 29.1 | - | - | - |
+| ReorderNAT (Ran et al., 2021) | 22.8 | 27.3 | - | ×16.1 | 29.3 | 29.5 | - | ×16.1 |
+| SNAT (Liu et al., 2021) | 24.6 | 28.4 | 27 | ×22.6 | 32.9 | 32.2 | 27 | ×22.6 |
+| AligNART (FA, ours) | 25.7 | 29.1 | 23 | ×13.6 | 31.7 | 32.2 | 22 | ×13.9 |
+| AligNART (GZ, ours) | 26.4 | 30.4 | 24 | ×13.4 | 32.5 | 33.1 | 24 | ×13.0 |
+
+AligNART is implemented based on the deep-shallow Transformer architecture. We set $d_{model} / d_{hidden}$ to 512/2048 and the dropout rate to 0.3. The number of heads in the multi-head attention modules is 8, except for the last attention module of the permutation predictor, which has 1. We set the batch size to approximately 64K tokens for all the models we implement. All these models are trained for 300K/50K steps on the En-De/En-Ro datasets, respectively. For AligNART, we average the 5 checkpoints with the highest validation BLEU scores among the 20 latest checkpoints.
+
+For optimization, we use the Adam optimizer (Kingma and Ba, 2015) with $\beta = (0.9, 0.98)$ and $\epsilon = 10^{-8}$ . The learning rate schedule follows that of Vaswani et al. (2017), starting from $10^{-7}$ and warming up to $5 \times 10^{-4}$ over 10K steps. We use label smoothing with $\epsilon_{ls} = 0.1$ for the target token distribution and for each row of the permutation matrix. Translation latency is measured on an NVIDIA Tesla V100 GPU.
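The schedule above can be sketched as follows, assuming the standard inverse-square-root form used by fairseq (linear warmup from the initial rate to the peak rate, then $1/\sqrt{\text{step}}$ decay); the function name is illustrative:

```python
def inverse_sqrt_lr(step, warmup=10_000, init_lr=1e-7, peak_lr=5e-4):
    """Inverse square-root learning rate schedule (Vaswani et al., 2017):
    linear warmup from init_lr to peak_lr over `warmup` steps, then decay
    proportional to 1/sqrt(step)."""
    if step <= warmup:
        return init_lr + (peak_lr - init_lr) * step / warmup
    return peak_lr * (warmup / step) ** 0.5

print(inverse_sqrt_lr(0))        # 1e-07  (start of warmup)
print(inverse_sqrt_lr(10_000))   # 0.0005 (peak at end of warmup)
print(inverse_sqrt_lr(40_000))   # 0.00025 (decayed by sqrt(10000/40000))
```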
+
+# 5 Results
+
+# 5.1 Main Results
+
+Table 1 shows the BLEU scores, translation latency and speedup on WMT14 En-De and WMT16 En
+
+Table 1: BLEU scores and inference speed of baselines and our model on four translation tasks. Time is the average per-sentence latency in milliseconds. Speedup is the relative speedup over the Transformer-based ART model with beam width 5.
+
+| Models | En→De | De→En | En→Ro | Ro→En |
+| --- | --- | --- | --- | --- |
+| FlowSeq (n=15) | 23.1 | 28.1 | 31.4 | 32.1 |
+| NAT-EM (n=9) | 25.8 | 29.3 | - | - |
+| GLAT (n=7) | 26.6 | 31.0 | 32.9 | 33.5 |
+| ReorderNAT (n=7) | 24.7 | 29.1 | 31.2 | 31.4 |
+| SNAT (n=9) | 26.9 | 30.1 | 34.9 | 33.1 |
+| AligNART (FA, n=8) | 26.5 | 30.3 | 32.7 | 33.1 |
+| AligNART (GZ, n=8) | 27.0 | 31.0 | 33.0 | 33.7 |
+
+Table 2: BLEU scores of non-iterative NART models with the re-scoring decoding scheme using $n$ candidates.
+
+Ro. Among explicit modality reduction methods, AligNART (FA) achieves higher BLEU scores than Distortion and ReorderNAT, which use the same alignment tool, since we leverage the entire alignment information rather than partial information such as fertility or ordering. Moreover, AligNART (GZ) significantly outperforms previous explicit modality reduction models, except for SNAT on $\mathrm{En}\rightarrow \mathrm{Ro}$ . Among implicit dependency modeling methods, AligNART (GZ) outperforms Imputer and shows performance comparable to that of the state-of-the-art CTC-based model on $\mathrm{En}\leftrightarrow \mathrm{De}$ by simply augmenting deep-shallow NAT with the Aligner module. In this study, we focus on introducing the complete information in word alignments; we do not modify the objective function,
+
+| Models | En→De D | En→De P | En→De G | De→En D | De→En P | De→En G |
+| --- | --- | --- | --- | --- | --- | --- |
+| AligNART (FA, w/o KD) | 0.76/0.85 | 0.55/0.74 | 0.96/0.98 | 0.77/0.84 | 0.59/0.74 | 0.96/0.98 |
+| AligNART (FA, w/ KD) | 0.75/0.89 | 0.53/0.83 | 0.95/1.00 | 0.76/0.88 | 0.57/0.84 | 0.96/1.00 |
+| AligNART (GZ, w/o KD) | 0.69/0.78 | 0.76/0.91 | 1.00/1.00 | 0.71/0.82 | 0.81/0.92 | 1.00/1.00 |
+| AligNART (GZ, w/ KD) | 0.66/0.88 | 0.71/0.94 | 1.00/1.00 | 0.68/0.88 | 0.76/0.95 | 1.00/1.00 |
+
+Table 3: Duplication (D), permutation (P), and grouping (G) accuracy of Aligner on the WMT14 En-De validation set. Accuracies on the raw and distilled datasets are written on the left and right of the slash, respectively.
+
+| Step | Sentence |
+| --- | --- |
+| Source | Denken Sie, dass die Medien zu viel vom PS_G erwarten? |
+| Reference | Do you think the media expect too much of PS_G? |
+| NAT (12-1) | Do you think that the expect expect much from the PS_G? |
+| Ours (Duplication) | Denken Denken Sie, dass die Medien zu viel vom vom PSG PSG erwarten? |
+| Ours (Permutation) | Denken Sie Denken, dass die Medien erwarten zu viel vom vom PSG PSG? |
+| Ours (Grouping) | Denken Sie Denken, dass die Medien erwarten zu viel vom vom PSG PSG? |
+| Ours (Output) | Do you think that the media expect too much from the PS_G? |
+
+Table 4: Visualized translation example of deep-shallow NAT and AligNART (FA) on the WMT14 De→En validation set. "_" stands for subword tokenization. Highlighted tokens in the duplication, permutation, and grouping processes are modified by each module of Aligner. Highlighted tokens in the output correspond to the tokens highlighted with the same colors in the previous processes. Note that Aligner first applies mean pooling to convert subword-level encoder outputs to the word level, as explained in the Appendix.
+
+| Models | fast align En→De | fast align De→En | GIZA++ En→De | GIZA++ De→En |
+| --- | --- | --- | --- | --- |
+| AligNART | 25.7 | 29.1 | 26.4 | 30.4 |
+| - Infer with D=I | 15.5 | 18.1 | 11.5 | 15.2 |
+| - Infer with P=I | 19.4 | 22.2 | 21.5 | 24.7 |
+| - Infer with G=I | 21.9 | 27.1 | 26.4 | 30.4 |
+
+Table 5: BLEU scores of Aligner ablation study on WMT14 En-De test set.
+
+which can be explored in future work.
+
+Table 2 shows the BLEU scores of the non-iterative NART models with the re-scoring decoding strategy. We set $m' = l' = 4$ , $a = 4$ , and $b = 2$ for 8 candidates. AligNART outperforms the baselines on $\mathrm{En} \rightarrow \mathrm{De}$ and $\mathrm{Ro} \rightarrow \mathrm{En}$ , and performs similarly to GLAT on $\mathrm{De} \rightarrow \mathrm{En}$ . Among non-iterative NART models with explicit modality reduction, AligNART shows the best performance on $\mathrm{En} \leftrightarrow \mathrm{De}$ and $\mathrm{Ro} \rightarrow \mathrm{En}$ .
+
+# 5.2 Analysis of Aligner Components
+
+In this section, we investigate the accuracy, examples, and ablation results of the Aligner components, as shown in Tables 3, 4, and 5, respectively. Note that we partially provide the ground truth D or P matrices during the accuracy measurement.
+
+Knowledge Distillation In Table 3, a comparison of accuracy between the raw and distilled datasets shows that KD significantly decreases the multi-modality of each component. After KD, AligNART shows marginally reduced accuracy on the raw dataset but high prediction accuracy for each component on the distillation set, resulting in increased BLEU scores.
+
+Alignment Tool Before KD, AligNART models using fast align and GIZA++ have accuracy bottlenecks in the permutation and duplication predictors, respectively, as shown in Table 3. The results imply that the alignment tools exhibit different degrees of multi-modality in the D, P, and G matrices, which can be explored in future work.
+
+Qualitative Study Table 4 shows an example of addressing the multi-modality problem. Deep-shallow NAT monotonically copies the encoder outputs and suffers from repetition and omission problems. AligNART (FA) avoids these inconsistency problems thanks to the well-aligned decoder inputs, which significantly reduce the modality of the target distribution. We also conduct a case study on predicted alignments and their translations during re-scoring, as shown in the Appendix.
+
+Ablation Study We conduct an analysis of alignment estimation by ablating one of the predictors
+
+| Models | En→De | De→En |
+| --- | --- | --- |
+| FlowSeq (w/o KD) | 18.6 | 23.4 |
+| AXE (w/o KD) | 20.4 | 24.9 |
+| Imputer (CTC, w/o KD) | 15.6 | - |
+| CTC (w/o KD) | 18.2 | - |
+| NAT (12-1, w/o KD) | 8.5 | 13.3 |
+| NAT (12-1, w/ KD) | 18.9 | 23.4 |
+| AligNART (FA, w/o KD) | 20.7 | 24.0 |
+| AligNART (GZ, w/o KD) | 18.3 | 23.2 |
+
+during inference. We ablate each module in Aligner by replacing its predicted matrix with the identity matrix $I$ . The results in Table 5 indicate that each module in Aligner properly estimates the decomposed information in word alignments. An exception arises with GIZA++, whose alignments contain no many-to-one mappings, resulting in performance equal to that without the grouping predictor. We also observe that AligNART achieves BLEU scores comparable to those of CTC-based models on $\mathrm{En}\leftrightarrow \mathrm{De}$ even with ground truth word alignments of partial information.
+
+# 5.3 Analysis of Modality Reduction Effects
+
+To evaluate the modality reduction effects of AligNART, we conduct experiments on two aspects: BLEU score and token repetition ratio. Table 6 shows the BLEU scores on WMT14 En-De. For $\mathrm{En}\rightarrow \mathrm{De}$ , AligNART using fast align without KD achieves higher BLEU scores than previous models without KD and than deep-shallow NAT with KD. The results indicate that our method is effective even without KD, which is known to decrease data complexity (Zhou et al., 2020a). On the other hand, alignments from GIZA++ without KD are more complex for AligNART to learn, resulting in lower BLEU scores than deep-shallow NAT with KD.
+
+Ghazvininejad et al. (2020) measured the token repetition ratio as a proxy for multi-modality; the token repetition ratio represents the degree of the inconsistency problem. In Table 7, the token repetition ratio of AligNART is lower than that of CMLM-base (Ghazvininejad et al., 2019) with 5 iterations, AXE, and GLAT. We also observe that the decline in the token repetition ratio from Aligner is significantly larger than that from KD. Combined with the results from Table 6, alignment
+
+Table 6: BLEU scores of non-iterative NART models on WMT14 En-De test set, with or without KD.
+
+| Models | En→De | De→En |
+| --- | --- | --- |
+| Gold test set | 0.04% | 0.03% |
+| CMLM-base (5 iterations) | 0.72% | - |
+| AXE | 1.41% | 1.03% |
+| Imputer (CTC) | 0.17% | 0.23% |
+| GLAT | 1.19% | 1.05% |
+| NAT (12-1, w/o KD) | 33.94% | 27.78% |
+| NAT (12-1, w/ KD) | 11.83% | 9.09% |
+| AligNART (GZ, w/o KD) | 0.76% | 1.33% |
+| AligNART (GZ, w/ KD) | 0.33% | 0.33% |
+
+Table 7: Token repetition ratio of NART models on WMT14 En-De test set.
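A repetition ratio of this kind can be computed with a short sketch. We assume here the common convention of counting tokens identical to their immediate predecessor; the exact counting rule behind Table 7 may differ:

```python
def repetition_ratio(sentences):
    """Fraction of tokens that repeat the immediately preceding token,
    a common proxy for the multi-modality problem."""
    repeats, total = 0, 0
    for sent in sentences:
        toks = sent.split()
        total += len(toks)
        repeats += sum(1 for a, b in zip(toks, toks[1:]) if a == b)
    return repeats / max(total, 1)

# One adjacent repetition ("expect expect") out of 12 tokens.
hyp = ["Do you think that the expect expect much from the PSG ?"]
print(round(repetition_ratio(hyp), 3))  # 0.083
```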
+
+| Models | En→De | De→En |
+| --- | --- | --- |
+| NAT (12-1) | 18.9 | 23.4 |
+| - Cross attention | 17.2 | 21.9 |
+| AligNART (GZ) | 26.4 | 30.4 |
+| - Score filtering | 26.2 | 30.0 |
+| - Cross attention | 26.1 | 29.9 |
+| - 12-1 architecture | 24.9 | 29.1 |
+
+Table 8: Ablation results of deep-shallow NAT and AligNART (GZ) on WMT14 En-De test set.
+
+information adequately alleviates the token repetition issue even in the case where the BLEU score is lower than that of deep-shallow NAT with KD.
+
+# 5.4 Ablation Study
+
+We conduct several extensive experiments to further analyze our method, as shown in Tables 8 and 9. Each of our methods consistently improves the performance of AligNART.
+
+Cross Attention As shown in Table 8, we ablate the cross attention module in the decoder to observe the relationship between aligned decoder inputs and the alignment learning of the cross attention module. We train AligNART and deep-shallow NAT without a cross attention module for comparison. Removing the cross attention module has a smaller impact on the BLEU score of AligNART than on that of deep-shallow NAT. The cross attention module is known to learn alignments between source and target tokens (Bahdanau et al., 2015), and the result implies that aligned decoder inputs significantly offload the role of the cross attention module.
+
+Deep-shallow Architecture The deep-shallow architecture heavily affects the BLEU scores of AligNART, as shown in Table 8. The results indicate that the deep encoder assists alignment estimation, whereas the shallow decoder with aligned inputs causes little performance degradation.
+
+| Filtering ratio | 0% | 1% | 5% | 10% | 20% |
+| --- | --- | --- | --- | --- | --- |
+| En→De | 26.2 | 26.1 | 26.4 | 26.2 | 26.2 |
+| De→En | 30.0 | 30.2 | 30.4 | 30.4 | 30.1 |
+
+Table 9: Alignment score filtering ratio and BLEU scores on WMT14 En-De test set.
+
+Alignment Score Filtering We investigate the trade-off between the alignment score filtering ratio and the BLEU score using AligNART (GZ), as presented in Table 9. Samples with low alignment scores are more likely to contain noise caused by distilled targets or the alignment tool. We observe that filtering out 5% of the samples improves the BLEU score in both directions. Surprisingly, increasing the filtering ratio up to 20% preserves the performance, thanks to the noise filtering capability.
+
+# 6 Related Work
+
+# 6.1 Non-iterative NART
+
+After Gu et al. (2018) proposed NAT, non-iterative NART has been investigated in various directions to maximize translation speed while maintaining translation quality. Shao et al. (2019), Shao et al. (2020), and Ghazvininejad et al. (2020) address the limitations of conventional cross-entropy based objectives that overly penalize consistent predictions. Lee et al. (2018), Ma et al. (2019), Shu et al. (2020), and Lee et al. (2020) introduce latent variables to model the complex dependencies between target tokens. Saharia et al. (2020) and Gu and Kong (2021) apply the CTC loss to the NMT domain. Qian et al. (2021) provide target tokens to the decoder during training using the glancing sampling technique.
+
+# 6.2 Alignment in Parallel Generative Models
+
+In other domains such as text-to-speech (Ren et al., 2019; Kim et al., 2020; Donahue et al., 2020), a common assumption is monotonicity of the alignments between text and speech. Given this assumption, only a duration predictor is required to alleviate the length-mismatch problem between text and speech. On the other hand, modeling the alignment in the NMT domain is challenging since the alignment contains additional ordering and grouping information. Our method estimates an arbitrary alignment matrix using alignment decomposition.
+
+# 6.3 Improving NMT with Enhanced Information
+
+To alleviate the multi-modality problem of NART models, Gu et al. (2018), Akoury et al. (2019), Zhou et al. (2020b), Ran et al. (2021), and Liu et al. (2021) provide additional sentence information to the decoder.
+
+Alignment is considered a major factor in machine translation (Li et al., 2007; Zhang et al., 2017). Alkhouli et al. (2018) decompose the ART model into alignment and lexical models. Song et al. (2020) use the predicted alignment in ART models to constrain vocabulary candidates during decoding. However, alignment estimation in NART is much more challenging since the information from decoding outputs is limited. In NART, Gu et al. (2018), Zhou et al. (2020b), and Ran et al. (2021) exploit partial information from the ground truth alignments. In contrast, we propose the alignment decomposition method for effective alignment estimation in NART, where we leverage the complete alignment information.
+
+# 7 Conclusion and Future Work
+
+In this study, we leverage full alignment information to directly reduce the degree of multi-modality in non-iterative NART and propose an alignment decomposition method for alignment estimation. AligNART with GIZA++ shows performance comparable to that of the recent CTC-based implicit dependency modeling approach on WMT14 En-De, along with a strong modality reduction capability. However, we observe that AligNART depends on the quality of the ground truth word alignments, which can be studied in future work. Furthermore, the combination of AligNART and implicit dependency modeling methods can be explored.
+
+# Acknowledgement
+
+This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (Ministry of Science and ICT) [2018R1A2B3001628], the BK21 FOUR program of the Education and Research Program for Future ICT Pioneers, Seoul National University in 2021, AIRS Company in Hyundai & Kia Motor Company through HKMC-SNU AI Consortium Fund, and Kakao Enterprise.
+
+# References
+
+Nader Akoury, Kalpesh Krishna, and Mohit Iyyer. 2019. Syntactically supervised transformers for faster neural machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1269-1281, Florence, Italy. Association for Computational Linguistics.
+Tamer Alkhouli, Gabriel Bretschner, and Hermann Ney. 2018. On the alignment problem in multi-head attention-based neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 177-185, Brussels, Belgium. Association for Computational Linguistics.
+Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
+Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263-311.
+Jeff Donahue, Sander Dieleman, Mikolaj Binkowski, Erich Elsen, and Karen Simonyan. 2020. End-to-end adversarial text-to-speech. In International Conference on Learning Representations.
+Chris Dyer, Victor Chahuneau, and Noah A. Smith. 2013. A simple, fast, and effective reparameterization of IBM model 2. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 644-648, Atlanta, Georgia. Association for Computational Linguistics.
+Marjan Ghazvininejad, Vladimir Karpukhin, Luke Zettlemoyer, and Omer Levy. 2020. Aligned cross entropy for non-autoregressive machine translation. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pages 3515-3523. PMLR.
+Marjan Ghazvininejad, Omer Levy, Yinhan Liu, and Luke Zettlemoyer. 2019. Mask-predict: Parallel decoding of conditional masked language models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6112-6121, Hong Kong, China. Association for Computational Linguistics.
+Alex Graves, Santiago Fernández, Faustino J. Gomez, and Jürgen Schmidhuber. 2006. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In Machine
+
+Learning, Proceedings of the Twenty-Third International Conference (ICML 2006), Pittsburgh, Pennsylvania, USA, June 25-29, 2006, volume 148 of ACM International Conference Proceeding Series, pages 369-376. ACM.
+Jiatao Gu, James Bradbury, Caiming Xiong, Victor O. K. Li, and Richard Socher. 2018. Nonautoregressive neural machine translation. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. Open-Review.net.
+Jiatao Gu and Xiang Kong. 2021. Fully non-autoregressive neural machine translation: Tricks of the trade. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 120-133, Online. Association for Computational Linguistics.
+Junliang Guo, Linli Xu, and Enhong Chen. 2020. Jointly masked sequence-to-sequence model for non-autoregressive neural machine translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 376-385, Online. Association for Computational Linguistics.
+Eric Jones, Travis Oliphant, Pearu Peterson, et al. 2001. SciPy: Open source scientific tools for Python.
+Jungo Kasai, James Cross, Marjan Ghazvininejad, and Jiatao Gu. 2020a. Non-autoregressive machine translation with disentangled context transformer. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pages 5144-5155. PMLR.
+Jungo Kasai, Nikolaos Pappas, Hao Peng, James Cross, and Noah Smith. 2020b. Deep encoder, shallow decoder: Reevaluating non-autoregressive machine translation. In International Conference on Learning Representations.
+Jaehyeon Kim, Sungwon Kim, Jungil Kong, and Sungroh Yoon. 2020. Glow-TTS: A generative flow for text-to-speech via monotonic alignment search. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
+Yoon Kim and Alexander M. Rush. 2016. Sequence-level knowledge distillation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1317-1327, Austin, Texas. Association for Computational Linguistics.
+Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
+
+Jason Lee, Elman Mansimov, and Kyunghyun Cho. 2018. Deterministic non-autoregressive neural sequence modeling by iterative refinement. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1173-1182, Brussels, Belgium. Association for Computational Linguistics.
+Jason Lee, Raphael Shu, and Kyunghyun Cho. 2020. Iterative refinement in the continuous space for non-autoregressive neural machine translation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1006-1015, Online. Association for Computational Linguistics.
+Chi-Ho Li, Minghui Li, Dongdong Zhang, Mu Li, Ming Zhou, and Yi Guan. 2007. A probabilistic approach to syntax-based reordering for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 720-727, Prague, Czech Republic. Association for Computational Linguistics.
+Ye Liu, Yao Wan, Jianguo Zhang, Wenting Zhao, and Philip Yu. 2021. Enriching non-autoregressive transformer with syntactic and semantic structures for neural machine translation. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1235-1244, Online. Association for Computational Linguistics.
+Xuezhe Ma, Chunting Zhou, Xian Li, Graham Neubig, and Eduard Hovy. 2019. FlowSeq: Non-autoregressive conditional sequence generation with generative flow. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4282-4292, Hong Kong, China. Association for Computational Linguistics.
+Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19-51.
+Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 48-53, Minneapolis, Minnesota. Association for Computational Linguistics.
+Lihua Qian, Hao Zhou, Yu Bao, Mingxuan Wang, Lin Qiu, Weinan Zhang, Yong Yu, and Lei Li. 2021. Glancing transformer for non-autoregressive neural machine translation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1993-2003, Online. Association for Computational Linguistics.
+
+Qiu Ran, Yankai Lin, Peng Li, and Jie Zhou. 2021. Guiding non-autoregressive neural machine translation decoding with reordering information. Proceedings of the AAAI Conference on Artificial Intelligence, 35(15):13727-13735.
+Yi Ren, Yangjun Ruan, Xu Tan, Tao Qin, Sheng Zhao, Zhou Zhao, and Tie-Yan Liu. 2019. FastSpeech: Fast, robust and controllable text to speech. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 3165-3174.
+Chitwan Saharia, William Chan, Saurabh Saxena, and Mohammad Norouzi. 2020. Non-autoregressive machine translation with latent alignments. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1098-1108, Online. Association for Computational Linguistics.
+Chenze Shao, Yang Feng, Jinchao Zhang, Fandong Meng, Xilin Chen, and Jie Zhou. 2019. Retrieving sequential information for non-autoregressive neural machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3013-3024, Florence, Italy. Association for Computational Linguistics.
+Chenze Shao, Jinchao Zhang, Yang Feng, Fandong Meng, and Jie Zhou. 2020. Minimizing the bag-of-ngrams difference for non-autoregressive neural machine translation. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 198-205. AAAI Press.
+Raphael Shu, Jason Lee, Hideki Nakayama, and Kyunghyun Cho. 2020. Latent-variable non-autoregressive neural machine translation with deterministic inference using a delta posterior. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 8846-8853.
+Kai Song, Kun Wang, Heng Yu, Yue Zhang, Zhongqiang Huang, Weihua Luo, Xiangyu Duan, and Min Zhang. 2020. Alignment-enhanced transformer for constraining NMT with pre-specified translations. In AAAI, pages 8886-8893.
+Zhiqing Sun, Zhuohan Li, Haoqing Wang, Di He, Zi Lin, and Zhi-Hong Deng. 2019. Fast structured decoding for sequence models. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 3011-3020.
+Zhiqing Sun and Yiming Yang. 2020. An EM approach to non-autoregressive conditional sequence generation. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pages 9249-9258. PMLR.
+
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998-6008.
+
+Jinchao Zhang, Mingxuan Wang, Qun Liu, and Jie Zhou. 2017. Incorporating word reordering knowledge into attention-based neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1524-1534, Vancouver, Canada. Association for Computational Linguistics.
+
+Chunting Zhou, Jiatao Gu, and Graham Neubig. 2020a. Understanding knowledge distillation in non-autoregressive machine translation. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
+
+Long Zhou, Jiajun Zhang, Yang Zhao, and Chengqing Zong. 2020b. Non-autoregressive neural machine translation with distortion model. In CCF International Conference on Natural Language Processing and Chinese Computing, pages 403-415. Springer.
+
+# Appendix
+
+# A Mappings in Alignment
+
+In general, there are one-to-one, one-to-many, many-to-one, and many-to-many mappings, excluding the zero-fertility and spurious word cases (see Figure 2). Distortion and ReorderNAT cannot represent the many-to-one, many-to-many, and spurious word cases. The grouping predictor in AligNART models many-to-one and many-to-many mappings. Adding a spurious token, as applied in AligNART (FA), enables us to address the spurious word case, as explained in Section C.2. During the experiments, we observe that introducing a spurious token degrades performance for GIZA++. We conjecture that the degradation occurs because the alignment matrix from GIZA++ contains more than twice as many empty rows as that of fast align on WMT14 En-De.
+
+# B Architecture of Aligner
+
+The duplication predictor and grouping predictor modules consist of a convolutional layer, ReLU activation, layer normalization, dropout, and a projection layer, same as the phoneme duration predictor in FastSpeech (Ren et al., 2019), which is a parallel text-to-speech model.
+
+Figure 2: Types of mapping in word alignments (one-to-one, one-to-many, many-to-one, many-to-many, spurious word, and zero-fertility). Row and column correspond to the target and source tokens, respectively.
+
+The permutation predictor in Aligner consists of three encoder layers: a pre-network, a query/key network, and a single-head attention module for the outputs. Note that the outputs of the pre-network are passed to the query and key networks. To prevent the predicted permutation matrix from being an identity matrix, we apply a gate function to the last attention module in the permutation predictor to modulate the probabilities of the un-permuted and permuted cases. We formulate the output of the gated attention as follows:
+
+$$
+g = \sigma (Q \cdot u) \tag {14}
+$$
+
+$$
+\bar {P} ^ {\text {p r e d}} = \operatorname {s o f t m a x} \left(M + Q K ^ {T}\right) \tag {15}
+$$
+
+$$
+P ^ {p r e d} = D _ {g} + (I - D _ {g}) \cdot \bar {P} ^ {p r e d}, \tag {16}
+$$
+
+where $\sigma$ is the sigmoid function and $Q$/$K$ is the output of the query/key network, respectively. $g$ is the probability of an un-permuted case. $M$ is a diagonal mask matrix whose diagonal elements are $-\infty$. $I$ is the identity matrix and $D_g$ is a diagonal matrix with $g$ as its main diagonal.
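A minimal NumPy sketch of Eqs. (14)-(16); the random matrices stand in for the outputs of the query/key networks and the learned vector $u$, which in the model are produced by trained layers:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gated_permutation(Q, K, u):
    """Gated single-head attention of Eqs. (14)-(16): mix the identity
    (un-permuted) case with a predicted permutation whose diagonal is
    masked out."""
    n = Q.shape[0]
    g = 1.0 / (1.0 + np.exp(-(Q @ u)))                  # Eq. (14): sigmoid gate
    M = np.where(np.eye(n, dtype=bool), -np.inf, 0.0)   # diagonal mask
    P_bar = softmax(M + Q @ K.T, axis=-1)               # Eq. (15)
    D_g = np.diag(g)
    I = np.eye(n)
    P = D_g + (I - D_g) @ P_bar                         # Eq. (16)
    return P

rng = np.random.default_rng(0)
n, d = 4, 8
P = gated_permutation(rng.normal(size=(n, d)), rng.normal(size=(n, d)),
                      rng.normal(size=d))
# Each row of P remains a valid distribution over target positions.
assert np.allclose(P.sum(axis=-1), 1.0)
```

Because the masked softmax rows sum to 1 and the gate mixes them with the identity row, every row of $P^{pred}$ stays a probability distribution.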
+
+# C Alignment Processing
+
+# C.1 Word-to-subword Alignment
+
+Figure 3: Example of the word-to-subword matrix decomposition technique. Row and column correspond to input and output tokens, respectively. $y_{i}$ denotes the $i$-th subword of the target sentence. $x^{i}$ denotes the $i$-th word of the source sentence and $x_{j}^{i}$ denotes the $j$-th subword of the $i$-th word of the source sentence.
+
+To reduce the complexity of alignment, we further assume that the alignment process is conducted at the word level. We decompose the alignment matrix into the source subword to source word matrix $S$ and the source word to target subword matrix $A^{ws}$, as depicted in Figure 3. Since $S$ is always given, $A^{ws}$ is the only target to be learned. First, we derive the source subword to target subword matrix $A$ using the alignment tool. $A^{ws}$ is then obtained by clipping the maximum value of $A \cdot S^{\top}$ to 1. $A^{ws}$ reduces the search space because of the assumption that source tokens duplicate, permute, and group at the word level. However, there is a trade-off between simplicity and resolution of information. The recovered source subword to target subword matrix $A^{ws} \cdot S$ loses the subword-level information, as shown in the rightmost matrix in Figure 3.
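The decomposition can be illustrated on a toy sentence pair (the matrices below are hypothetical; in the paper $A$ comes from the alignment tool and $S$ from the subword segmentation):

```python
import numpy as np

# Toy setup: a source sentence of two words split into three subwords
# (word 0 -> subwords 0,1; word 1 -> subword 2) and a three-subword target.
# S: source-subword-to-source-word matrix (always given by segmentation).
S = np.array([[1, 1, 0],
              [0, 0, 1]])
# A: source-subword-to-target-subword alignment from the alignment tool
# (rows = target subwords, columns = source subwords).
A = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 1]])

# A^{ws}: clip the maximum value of A . S^T to 1.
A_ws = np.minimum(A @ S.T, 1)

# Recovered subword-level matrix A^{ws} . S: target subwords 0 and 1 now
# both point at *both* subwords of source word 0, i.e. the subword-level
# resolution is lost, as described above.
A_rec = A_ws @ S
```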
+
+# C.2 Filling Null Rows in Alignment Matrix
+
+The output of the alignment tool usually contains empty rows, which means that no source token is aligned to certain target tokens. We select two strategies to fill the null rows: $(i)$ copy the alignment from the previous target token, or $(ii)$ introduce a special spurious token. For the second strategy, we concatenate a special spurious token at the end of the source sentence. If the current and previous target tokens belong to the same word, we follow $(i)$. The remaining target tokens of the null alignment are aligned to the spurious token.
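A sketch of the two filling strategies combined as described above (the `fill_null_rows` helper and its data layout are illustrative assumptions, not the authors' code):

```python
def fill_null_rows(alignment, word_of_target, spurious_idx):
    """Fill empty alignment rows.  `alignment` holds one set of source
    positions per target token (an empty set is a null row);
    `word_of_target[i]` is the word id of target subword i; `spurious_idx`
    is the position of the spurious token appended to the source."""
    filled = [set(a) for a in alignment]
    for i, a in enumerate(filled):
        if a:
            continue
        # (i) same word as the previous target token: copy its alignment
        if i > 0 and word_of_target[i] == word_of_target[i - 1] and filled[i - 1]:
            filled[i] = set(filled[i - 1])
        else:
            # (ii) otherwise align the token to the spurious token
            filled[i] = {spurious_idx}
    return filled

# Target subwords 0 and 1 belong to the same word; subword 2 starts a new
# word, so its null row is aligned to the spurious token.
filled = fill_null_rows([{0}, set(), set()], [0, 0, 1], spurious_idx=5)
# filled == [{0}, {0}, {5}]
```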
+
+# C.3 Details of Alignment Tool Configuration
+
+For fast align, we follow the default setting for forward/backward directions and obtain symmetrized alignment with the grow-diag-final-and option. We apply the word-to-subword alignment technique and spurious token strategy for null alignments. For GIZA++, we apply the word-to-subword alignment technique and copy the alignment from the previous target token for null alignment. We set the alignment score filtering ratio to $5\%$ .
+
+# D Case Study
+
+To analyze various alignments and their translations during re-scoring decoding, we conduct a case study on the WMT14 De$\rightarrow$En validation set, as shown in Figure 4. The two translations have different orderings: "the telescope's tasks" and "the tasks of the telescope". In this sample, we observe that AligNART (i) captures non-diagonal alignments, (ii) models multiple alignments, and (iii) produces translations corresponding to the given alignments.
+
+
+Figure 4: Translation and alignment estimation example on the WMT14 De→En validation set. Tokens matched to the alignment matrix have the same colors (blue and orange). The special token "_" marks subword tokenization boundaries.
+
+
+
+| | |
+| --- | --- |
+| Source | Eine der Aufgaben des Tel_eskOps : Es soll nach Licht von den ersten Ster_nen und Galax_ien nach dem Ur_kn_all suchen . |
+| Reference | One of the tel_esc_ope 's tasks is to search for light from the first stars and galax_ies that emerged after the Big B_ang . |
+| Alignments #1 | One of the tel_esc_ope 's tasks : it should search for light from the first stars and galax_ies after the Big B_ang . |
+| Alignments #2 | One of the tasks of the tel_esc_ope : it should search for light from the first stars and galax_ies after the Big B_ang . |
\ No newline at end of file
diff --git a/alignartnonautoregressiveneuralmachinetranslationbyjointlylearningtoestimatealignmentandtranslate/images.zip b/alignartnonautoregressiveneuralmachinetranslationbyjointlylearningtoestimatealignmentandtranslate/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..910ca41711a03ca657cb0ff20c62daf04f689a77
--- /dev/null
+++ b/alignartnonautoregressiveneuralmachinetranslationbyjointlylearningtoestimatealignmentandtranslate/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4171be8fdf5e776bbdbb87eba27f621a910d97dfa451358e82fd9f9b21914083
+size 671372
diff --git a/alignartnonautoregressiveneuralmachinetranslationbyjointlylearningtoestimatealignmentandtranslate/layout.json b/alignartnonautoregressiveneuralmachinetranslationbyjointlylearningtoestimatealignmentandtranslate/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..d4f72e2bd0fee85160632e6c9768d6c2a6d7e623
--- /dev/null
+++ b/alignartnonautoregressiveneuralmachinetranslationbyjointlylearningtoestimatealignmentandtranslate/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b7cfa5263a49ac7093d70dae5ed5fec17038aca0fd98c9bde46edbecb8f4b746
+size 554713
diff --git a/aligningactionsacrossrecipegraphs/3819983e-ad35-470c-b977-add527a48e35_content_list.json b/aligningactionsacrossrecipegraphs/3819983e-ad35-470c-b977-add527a48e35_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..dfbd2b1f16fbc53d415923256c60cc61f39302ec
--- /dev/null
+++ b/aligningactionsacrossrecipegraphs/3819983e-ad35-470c-b977-add527a48e35_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:88880f6aca20aa6d158721ddcb54dc2fb5acd47259b5c9fd85a9d959d546a4b5
+size 95033
diff --git a/aligningactionsacrossrecipegraphs/3819983e-ad35-470c-b977-add527a48e35_model.json b/aligningactionsacrossrecipegraphs/3819983e-ad35-470c-b977-add527a48e35_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..294a09c3fac1a00ff3aaf5d0e943c74a4d30d6a4
--- /dev/null
+++ b/aligningactionsacrossrecipegraphs/3819983e-ad35-470c-b977-add527a48e35_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:93a74ec0469035f72720e4d9b5a396315cc479232b835510fef9645c9d4e6165
+size 114918
diff --git a/aligningactionsacrossrecipegraphs/3819983e-ad35-470c-b977-add527a48e35_origin.pdf b/aligningactionsacrossrecipegraphs/3819983e-ad35-470c-b977-add527a48e35_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..cea278064ef61cc1d7a9c531ce055cb7b4cc2841
--- /dev/null
+++ b/aligningactionsacrossrecipegraphs/3819983e-ad35-470c-b977-add527a48e35_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d6db4f017f5f2174235b88b917eccd010d172ad722f2fd5cbeb182622d8083fe
+size 1288526
diff --git a/aligningactionsacrossrecipegraphs/full.md b/aligningactionsacrossrecipegraphs/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..72908479648512419df68f6c329b8a1861c9482d
--- /dev/null
+++ b/aligningactionsacrossrecipegraphs/full.md
@@ -0,0 +1,410 @@
+# Aligning Actions Across Recipe Graphs
+
+# Lucia Donatelli, Theresa Schmidt, Debanjali Biswas, Arne Köhn, Fangzhou Zhai, Alexander Koller
+
+Department of Language Science and Technology
+
+Saarland Informatics Campus
+
+Saarland University
+
+{donatelli,theresas,DBiswas,koehn,fzhai,koller}@coli.uni-saarland.de
+
+# Abstract
+
+Recipe texts are an idiosyncratic form of instructional language that pose unique challenges for automatic understanding. One challenge is that a cooking step in one recipe can be explained in another recipe in different words, at a different level of abstraction, or not at all. Previous work has annotated correspondences between recipe instructions at the sentence level, often glossing over important correspondences between cooking steps across recipes. We present a novel and fully-parsed English recipe corpus, ARA (Aligned Recipe Actions), which annotates correspondences between individual actions across similar recipes with the goal of capturing information implicit for accurate recipe understanding. We represent this information in the form of recipe graphs, and we train a neural model for predicting correspondences on ARA. We find that substantial gains in accuracy can be obtained by taking fine-grained structural information about the recipes into account.
+
+# 1 Introduction
+
+Cooking recipes are a type of instructional text that many people interact with in their everyday lives. A recipe explains step by step how to cook a certain dish, describing the actions a chef needs to perform as well as the ingredients and intermediate products of the cooking process. However, recipes for the same dish often differ in which cooking actions they describe explicitly, how they describe them, and in which order. For instance, in the three recipes in Fig. 1, the overall process of assembling a batter and making waffles is explained with different levels of detail: recipe (a) explains the process with three distinct cooking actions (in bold); recipe (b) with eight distinct actions; and recipe (c) with nineteen actions.
+
+(a) Beat eggs. Mix in remaining ingredients. Cook on hot waffle iron.
+(b) Preheat your waffle iron. In a large bowl, mix together the flour, salt, baking powder, and sugar. In another bowl, beat the eggs. Add the milk, butter, and vanilla to the eggs. Pour the liquid into the flour mixture and beat until blended. Ladle the batter into the waffle iron and cook until crisp and golden.
+(c) Sift together in a large mixing bowl flour, baking powder, salt, and sugar. In a jug, measure out milk. Separate eggs, placing egg whites in the bowl of standing mixer. Add yolks and vanilla essence to milk and whisk together. Pour over the flour mixture and very gently stir until about combined. Stir in the melted butter and continue mixing very gently until combined. Beat egg whites until stiff and slowly fold into batter. Spoon the batter into pre-heated waffle iron in batches and cook according to its directions. Remove immediately and serve with maple syrup and fruits.
+
+Figure 1: Three recipe texts for making waffles with cooking actions in bold. The recipes are ordered from least amount of detail (a) to most (c).
+
+The set of recipes in Fig. 1 provides more general information about how to make waffles than each individual recipe can. To our knowledge, only one previous work focuses on alignment of instructions across recipes to facilitate recipe interpretation on a dish level: Lin et al. (2020) (henceforth referred to as L'20) present the Microsoft Research Multimodal Aligned Recipe Corpus to align English recipe text instructions to each other and to video sequences. L'20 focus on alignment of instructions as sentences. Yet, as sentences can be quite long and contain multiple actions, defining instructions at the sentence level often glosses over the relationship between individual actions and excludes the
+
+complex event structure that makes recipe interpretation challenging and compelling. For example, in L'20, “Ladle the batter into the waffle iron and cook [...]” in Recipe (b) is aligned to “Spoon the batter into preheated waffle iron in batches and cook [...]” in Recipe (c). If aligned at the sentence level only, the action correspondence of (b)'s recipe-initial “Preheat” and (c)'s “preheated” much later in the recipe is not (and cannot be) accounted for. The relationship between the individual actions ((b) “Ladle,” (c) “Spoon”) as well as between both instances of “cook” is additionally obscured by coarse-grained sentence alignment.
+
+Aligning recipes at the action level (i.e. the bold items in Fig. 1) instead of the sentence level is practically useful to aggregate detailed information about how to cook a dish in different ways. This alignment would additionally offer greater insight into how recipe actions implicitly contain information about ingredients, tools, cooking processes, and other semantic information necessary for accurate recipe interpretation.
+
+In this paper, we make two contributions towards the goal of complete and accurate alignment of actions across similar recipes. First, we collect a novel corpus that aligns cooking actions across recipes, Aligned Recipe Actions (ARA), (Section 4), by crowdsourcing alignment annotations on top of the L'20 corpus for a select number of dishes. In order to determine the annotation possibilities, we develop a neural recipe parser to identify descriptions of actions and substances and arrange them in a recipe graph (Section 3). Second, we present an alignment model which identifies correspondences between actions across recipe graphs based on our parsed corpus (Section 5). Error analysis of both human and machine performance illustrates that, though complex, the task of aligning recipe actions is achievable with our methodology and can inform future work on aligning sets of instructions. Our corpus and code are publicly available.
+
+# 2 Background And Related Work
+
+Recipe text. The recipes in Fig. 1 illustrate some of the idiosyncrasies of recipe text: all texts are written entirely in imperative mood ((a) "cook"; (b) "preheat"; (c) "sift"); definite noun phrases frequently drop their determiners ((a) "eggs"; (c) "milk," "egg whites"); many arguments are elided or left implicit ((b) "beat $\varnothing$ "; (c) "pour $\varnothing$ over");
+
+bare adjectives are used to describe desired end states ((b) “until crisp and golden”); and many anaphoric expressions refer to entities which were not explicitly introduced before ((b) “the liquid”; (c) “the flour mixture”). Accurate interpretation of single recipe instructions requires familiarity with situated food ingredients, knowledge of verb semantics to identify how each cooking step relates to the others, and general commonsense about the cooking environment and instructional syntax.
+
+Recipe graphs and corpora. Representing recipes as graphs is the dominant choice (Mori et al., 2014; Jermsurawong and Habash, 2015; Kiddon et al., 2015; Yamakata et al., 2016; Chen, 2017; Chang et al., 2018; Ozgen, 2019). Of relevance to this paper is the recipe corpus of Yamakata et al. (2020) (Y'20), which consists of 300 English recipes annotated with graphs as in Fig. 2. We train a recipe parser on this corpus (Section 3) and use the trained parser to identify actions in the L'20 corpus. As noted earlier, Lin et al. (2020) (L'20) created the Microsoft Research Multimodal Aligned Recipe Corpus of roughly 50k text recipes and 77k recipe videos across 4000 dishes in English. The recipes and video transcriptions were segmented into sentences (but not parsed), and 200 recipe pairs were manually annotated for correspondences between recipe sentences. L'20 cluster dishes based on exact match of recipe title; they perform sentence alignments on pairs of recipes using the unsupervised algorithm of Naim et al. (2014). We use L'20's dataset as a basis for our alignment task in Sections 4 and 5.
+
+Recipe parsing. Parsing recipes into graphs is usually comprised of two steps: (i) tagging mentions of substances and cooking steps, and (ii) linking these mentions with input and output edges. Recent work on English recipes has achieved F-Scores above 90 for identifying mentions (Chen, 2017) and F-Scores above 80 for adding the edges (Jermsurawong and Habash, 2015; Chen, 2017; Ozgen, 2019). Most of this work uses supervised learning based on hand-annotated recipe datasets. Using unsupervised methods, Kiddon et al. (2015) train a generative neural model on a large corpus of unannotated recipe texts and achieve an F-Score of 80 on predicting edges given gold information about the nodes; the output graphs are less detailed than ours. Ozgen (2019) achieves an F-Score of 75 on the same task and presents a subtask of creating action graphs similar to ours in Section 3.4.
+
+Event alignment. Our work shares an interest in modelling procedural knowledge with the detection and alignment of script events. Chambers and Jurafsky (2008, 2009) identified event types from text according to their predicate-argument structures and behavior in event chains via count-based statistics. We capture similar information in a crowdsourcing task reminiscent of Wanzare et al. (2016, 2017) to automatically align actions without all surface text (Regneri et al., 2010).
+
+# 3 Parsing Recipes into Graphs
+
+The main contribution of this paper is a corpus of action alignments between action graphs of cooking recipes. Basing our corpus on unannotated recipe texts from L'20, we are dependent on an accurate tagger and parser for pre-processing. The tagger identifies the alignable actions in a recipe, and the parser structures recipes into graph representations. For both tasks, we train neural models on the data of Yamakata et al. (2020) (Y'20); we set a new state of the art on this dataset. Finally, we distill the Y'20-style recipe graphs into more focused action graphs (Section 3.4) which the alignment model (Section 5) takes as input.
+
+# 3.1 Recipe Graphs
+
+The recipe graphs in the Y'20 dataset are directed acyclic graphs with node and edge labels (Fig. 2). Nodes represent entities of ten different types, such as ingredients, cooking tools, and various types of actions. Edges represent actions' predicate-argument structure, as well as part-whole relations and temporal information. The full graph shows the states of ingredients and tools throughout the cooking process; it can be read from top to bottom as inputs transformed into consecutive outputs.
+
+# 3.2 Tagging Recipes
+
+We split the parsing task into two steps. In the first step, we tag the tokens of a recipe with their respective node types. We implement the sequence tagger as a neural network (NN) with a two-layered BiLSTM encoder generating predictions in a CRF output layer:
+
+$$
+\vec {y} = C R F (B i L S T M ^ {(2)} (B i L S T M ^ {(1)} (\vec {x}))),
+$$
+
+where $\vec{y}$ are the predicted tags over the input sequence with the embedding $\vec{x}$. For comparison, Y'20 employ BERT-NER for their 'recipe named entity (r-NE)' tagging task.
+
+Figure 2: Full graph for recipe (b) (Fig. 1) in the style of Y'20. Actions are displayed as diamonds, foods as circles, tools as rectangles, and all else as ellipses.
+
+Contrary to common expectation, we find that this tagger performs better with ELMo embeddings (Peters et al., 2018) than with BERT embeddings (Devlin et al., 2019) (Table 1). Trained and evaluated on Y'20's 300-r corpus, our tagger performs two points better than Y'20's tagger and reaches Y'20's inter-annotator agreement.
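The CRF output layer selects the best tag sequence at inference time with Viterbi decoding. A minimal NumPy sketch of that decoding step (the emission and transition scores here are placeholders; in the tagger the emissions come from the BiLSTM and the transitions are learned):

```python
import numpy as np

def viterbi_decode(emissions, transitions):
    """Viterbi decoding for a linear-chain CRF output layer.
    emissions: (T, L) per-token tag scores; transitions: (L, L) with
    transitions[i, j] = score of moving from tag i to tag j."""
    T, L = emissions.shape
    score = emissions[0].copy()          # best score ending in each tag
    back = np.zeros((T, L), dtype=int)   # backpointers
    for t in range(1, T):
        total = score[:, None] + transitions + emissions[t]  # (L, L)
        back[t] = total.argmax(axis=0)
        score = total.max(axis=0)
    best = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        best.append(int(back[t, best[-1]]))
    return best[::-1]

# Toy run: two tokens, two tags, no transition preferences.
path = viterbi_decode(np.array([[5., 0.], [0., 1.]]), np.zeros((2, 2)))
# path == [0, 1]
```

Unlike per-token argmax, the transition scores let the decoder penalize implausible tag sequences, which is why the CRF layer sits on top of the BiLSTM outputs.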
+
+# 3.3 Parsing Recipes
+
+In the second step, we predict the edges of the graph. We use the biaffine dependency parser by Dozat and Manning (2017), implemented by Gardner et al. (2018). The model consists of a biaffine classifier upon a three-layered BiLSTM with multilingual BERT embeddings. The model takes as input the tagged recipe text generated by the tagger and generates a dependency tree over the recipe. We use the parser to generate connected recipe graphs of full recipes, so we parse the entire recipe as a single "sentence".
+
+| Model | Corpus | Embedder | Precision | Recall | F-Score |
+| --- | --- | --- | --- | --- | --- |
+| IAA | 100-r by Y'20 | | 89.9 | 92.2 | 90.5 |
+| Y'20 | 300-r by Y'20 | | 86.5 | 88.8 | 87.6 |
+| Our tagger | 300-r by Y'20 | English ELMo | 89.9 ± 0.5 | 89.2 ± 0.4 | 89.6 ± 0.3 |
+| Our tagger | 300-r by Y'20 | multilingual BERT | 88.7 ± 0.4 | 88.4 ± 0.1 | 88.5 ± 0.2 |
+
+Table 1: Recipe tagging performance compared to Y'20's performance and inter-annotator agreement (IAA).
+
+As this is a dependency tree parser, we remove edges from the recipe graphs in the training data to make them into trees: we preserve the edge that is listed first in the raw annotation data and ignore all other edges. This results in dependency edges that point from the inputs to the actions and from the actions to the outputs, such that the final dish only has incoming edges. We still evaluate against the complete recipe graphs in the test set.
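The tree-ification step can be sketched as keeping only the first edge that reaches each node (a hypothetical `to_tree_edges` helper; the list order stands in for the order in the raw annotation data):

```python
def to_tree_edges(graph_edges):
    """Keep, for every node, only the first edge arriving at it, so each
    node ends up with a single head and the graph becomes a tree.
    `graph_edges` is a list of (head, child, label) triples in raw
    annotation order."""
    seen_children = set()
    tree = []
    for head, child, label in graph_edges:
        if child in seen_children:
            continue  # later edges into an already-headed node are dropped
        seen_children.add(child)
        tree.append((head, child, label))
    return tree

# Toy example: node B has two incoming edges; only the first survives.
edges = [("A", "B", "in"), ("C", "B", "in"), ("B", "D", "out")]
# to_tree_edges(edges) == [("A", "B", "in"), ("B", "D", "out")]
```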
+
+Our parsing results are presented in Table 2. We train the parser on the English 300-r corpus by Y'20. Our parser sets a new state of the art on this corpus, with an F-Score of 78.2 on gold-tagged evaluation data. Moreover, the combined tagger and parser achieve an F-Score of 72.3 on unannotated recipe text, compared to 43.3 of Y'20's own parser on automatically tagged recipe text. See Supplementary Material A for model hyperparameters and label-specific performance on actions.
+
+# 3.4 Action Graphs for Recipe Alignment
+
+To automatically align actions across recipe graphs, we abstract the output of our parser to action graphs, which only retain the action nodes from the full graphs: the full graph in Fig. 2 is transformed into the action graph in Fig. 3. We accomplish this by removing all non-action nodes. The paths between action nodes in the full graph become edges in the action graph. Similar to full recipe graphs, action graphs can be read from top to bottom as a temporal and sometimes causal sequence of actions. Each interior node has parent action nodes it is dependent on, as well as child nodes conditional upon it. We utilize this information in our automatic alignment model (Section 5).
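Under the simplifying assumption that the full graph is available as typed nodes and directed edges, the contraction to an action graph can be sketched as follows (hypothetical helper, not the authors' code):

```python
from collections import defaultdict

def to_action_graph(edges, node_type):
    """Contract a full recipe graph to an action graph: drop non-action
    nodes and turn every path between two actions that passes only
    through non-action nodes into a direct edge.  `node_type` maps each
    node to a type string such as "action" or "food"."""
    succ = defaultdict(list)
    for u, v in edges:
        succ[u].append(v)

    def reachable_actions(v, seen):
        # Actions reachable from non-action node v via non-action nodes.
        out = set()
        for w in succ[v]:
            if w in seen:
                continue
            if node_type[w] == "action":
                out.add(w)
            else:
                out |= reachable_actions(w, seen | {w})
        return out

    action_edges = set()
    for u in list(succ):
        if node_type[u] != "action":
            continue
        for v in succ[u]:
            if node_type[v] == "action":
                action_edges.add((u, v))
            else:
                action_edges |= {(u, a) for a in reachable_actions(v, {v})}
    return action_edges
```

For instance, an action whose output food feeds a later action yields a direct edge between the two actions in the contracted graph.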
+
+In creating our action graphs, we unify all Y'20 action types into a single type action: actions by chef, both continuous (e.g. "chop," "add") and discontinuous ("sift $_{a_i}$ ingredients together $_{a_{i+1}}$"); actions by food ("until the butter melts $_{a_j}$"); and actions by tool ("until skewer comes out $_{a_k}$ clean").$^{7}$
+
+
+Figure 3: An example of how actions align across action graphs for the recipes in Fig. 1 (expert annotation).
+
+While most actions have surface realizations of exactly one verb token, actions can span up to four consecutive tokens (see Table 3).
+
+# 4 A Corpus of Recipe Action Alignments
+
+We collect a corpus of crowdsourced annotations of action alignments between recipes for a subset of L'20's corpus. Our Aligned Recipe Action (ARA) corpus makes two important contributions: (i) manual alignments, as opposed to the majority of L'20's automatically aligned corpus; (ii) alignments are at the action level, as opposed to the more coarse-grained sentence level.
+
+# 4.1 Data Preparation
+
+We draw the recipes for the crowdsourcing from the training data of L'20. We manually chose 10
+
+| Model | Corpus | Tag source | Precision | Recall | F-Score |
| IAA | 100-r by Y'20 | gold tags | 84.4 | 80.4 | 82.3 |
| Y'20 | 300-r by Y'20 | gold tags | 73.7 | 68.6 | 71.1 |
| Our parser | 300-r by Y'20 | gold tags | 80.4 ± 0.0 | 76.1 ± 0.0 | 78.2 ± 0.0 |
| Y'20 | 300-r by Y'20 | Y'20 tagger | 51.1 | 37.7 | 43.3 |
| Our parser | 300-r by Y'20 | our ELMo tagger | 74.4 ± 0.5 | 70.4 ± 1.0 | 72.3 ± 0.8 |
+
+Table 2: Recipe parsing performance compared to Y'20 and inter-annotator agreement (IAA).
+
+| | count | mean |
| dishes | 10 | |
| recipes | 110 | |
| recipes per dish | 11 | |
| total action pairs annotated | 1592 | |
| annotations per source action | at least 3 | |
| actions per recipe | 3-37 | 15.1 |
| actions per source recipe | 4-37 | 15.9 |
| actions per sentence | 0-9 | 1.7 |
| tokens per action | 1-4 | 1.2 |
+
+Table 3: Makeup of our ARA annotated corpus.
+
+dishes from the subset of dishes with exactly 11 pairwise-matched recipes. $^{8}$ We chose the 10 dishes and corresponding recipes for their variation and to ensure our work generalizes to new cuisines and recipes: they span different cuisines (Italian, Chinese, Indian, American, German) and dish types (appetizer, side, main, dessert). A full list of dishes we annotate is in Supplementary Material B. In all recipes, we detect actions with our tagger (Section 3.3) using ELMo embeddings and trained on the English data of Y'20.
+
+For each dish, we define ten pairs of recipes such that one recipe is the source recipe of an alignment to the next shorter target recipe of the same dish. We select these pairings by measuring the length of recipes in number of actions. Using this methodology, all recipes except for the longest and shortest recipe of each dish are annotated once as source recipe and once as target recipe. The pairing procedure is motivated by two rationales: 1. Long-to-Short. To limit annotator disagreement from 1:n alignments in our data, we always present the longer recipe as the source recipe, as we expect 1:n alignments to be less common when the source recipe is longer than the target recipe. 2. Transitive closure. An alignment from a source recipe to a secondary target recipe can be approximated by composing (i) the alignment between the source recipe and the target recipe and (ii) the alignment between the target recipe and the secondary target recipe.
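The Long-to-Short pairing can be sketched as follows; the recipe IDs and action counts are made up for illustration. Sorting the recipes of one dish by number of actions and pairing each recipe with the next shorter one yields the source/target pairs.

```python
# Sketch of the Long-to-Short pairing: sort a dish's recipes by action
# count and pair each recipe with the next shorter one, so every recipe
# except the longest and shortest serves once as source and once as target.

def long_to_short_pairs(recipes):
    """recipes: dict of recipe_id -> number of actions (hypothetical IDs)."""
    by_length = sorted(recipes, key=recipes.get, reverse=True)
    return list(zip(by_length, by_length[1:]))  # (source, target) pairs

counts = {"r1": 20, "r2": 12, "r3": 16}
print(long_to_short_pairs(counts))  # [('r1', 'r3'), ('r3', 'r2')]
```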
+
+# 4.2 Data Collection
+
+We obtain the alignments in the corpus in a multi-step process: For every source action $a$ , we initially ask three crowd-workers to vote for the correct target action. If we find a unique plurality vote for a target action $b$ , $a$ is aligned to $b$ . If there is no agreement between the crowd-workers, we iteratively ask more crowd-workers to select a target action, until we have a unique plurality vote for the target. This approach optimizes resource spending as it takes into account that some alignments are very easy (needing few votes) whereas others are harder, resulting in more noise in the votes and therefore needing more votes to obtain a reliable annotation. In extreme cases, where we did not obtain a plurality even after five or six votes, we deferred to an expert annotator: 190 out of 1592 source actions did not receive a unique plurality target; these were adjudicated by two of the authors.
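The iterative vote-collection loop can be sketched as follows; the vote streams, threshold values, and the helper name `plurality_target` are illustrative assumptions, with the expert-adjudication cutoff approximated as a fixed maximum.

```python
# Sketch of the iterative plurality-vote procedure: keep requesting votes
# until one target action has strictly more votes than every other,
# deferring to expert adjudication (None) past a cutoff.

from collections import Counter

def plurality_target(vote_stream, min_votes=3, max_votes=6):
    votes = []
    for vote in vote_stream:
        votes.append(vote)
        if len(votes) < min_votes:
            continue
        (top, n), *rest = Counter(votes).most_common()
        if not rest or n > rest[0][1]:   # unique plurality found
            return top
        if len(votes) >= max_votes:      # defer to expert adjudication
            return None
    return None

print(plurality_target(iter(["b", "c", "b"])))  # 'b'
print(plurality_target(iter(["a", "b"])))       # too few votes -> None
```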
+
+Crowd-sourcing setup. We implemented our crowdsourcing experiment with Lingoturk (Pusse et al., 2016). Participants were hired through Prolific. In total we hired 250 participants, each of whom answered on average 32 questions. Each participant was paid 1.47 GBP; the average hourly payment was 8.82 GBP.
+
+A participant's task was to align the set of actions in a source recipe to its target recipe. Importantly, this task is not a simple verb matching task. Participants were instructed to read both recipes in their entirety and were then presented with two source actions at a time in the order in which they appear in the recipe.[10] For each source action, the participant was asked to choose one action from the target recipe that it "best corresponds to", or "None of these"; the entire set of actions in the target recipe was available for selection. Both recipes were displayed at all times with all actions bolded and in unique colors for ease of identification. Each participant did this for two recipe pairs of differing length and for different dishes, such that the total length of each experiment was roughly comparable. See Supplementary Material B for more details on the annotation setup.
+
+Annotator agreement. To quantitatively assess agreement between our annotators, we compute how often the votes by the participants agreed with the alignment we obtained (i.e., the probability that a participant's vote aligns with the plurality). A distribution of the support behind the plurality vote is shown in Fig. 4. While some questions are easy and receive $100\%$ agreement, many of them only receive a plurality from $50\%$ of the answers. For some questions, disagreement between annotators was so high that the most-chosen answer was only selected by $20\%$ of the annotators. In such cases, we collected a high number of votes from different participants to obtain a plurality.
+
+Overall, $69.3\%$ of the target selections by our annotators agreed with the annotation in the dataset (selected by plurality vote). This measure is not an upper bound for system performance because incorrect annotations by an annotator are only reflected in the dataset if by chance several annotators chose the same incorrect target. Otherwise, the incorrect decision by an annotator will be remedied by the requirement of having a plurality vote from at least three different annotators. It is also not an upper bound for human performance because some annotators were more reliable than others and we report the average over all annotators.
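The agreement figure above can be computed as follows; the toy vote lists are invented, and ties in `most_common` are broken by insertion order in this sketch.

```python
# Sketch of the agreement measure: the fraction of all individual votes
# that match the plurality winner of their question.

from collections import Counter

def vote_agreement(questions):
    """questions: list of per-question vote lists (hypothetical data)."""
    agree = total = 0
    for votes in questions:
        winner, _ = Counter(votes).most_common(1)[0]
        agree += sum(v == winner for v in votes)
        total += len(votes)
    return agree / total

# 5 of the 7 votes match their question's plurality winner.
print(vote_agreement([["b", "b", "c"], ["a", "b", "a", "a"]]))  # 5/7
```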
+
+IAA measures. Due to the data collection design (a large group of people answering questions with variable answer sets and a skewed but unknown prior distribution over the answers), common inter-annotator metrics such as Krippendorff's $\alpha$ or Cohen's $\kappa$ are not applicable: these measures require either a fixed set of possible answers for all questions, knowledge of the prior distribution of the answers, or a fixed number of annotators (or a combination of these requirements). Thus, computing such a reliability metric would require making (incorrect) assumptions about the data and render the resulting number uninterpretable.
+
+Human performance and baseline. To obtain a human accuracy score, a subset of the authors manually annotated ten recipe pairs and obtained
+
+
+Figure 4: For each question we computed how many answers agreed with the plurality vote. The plot shows the distribution of questions over this agreement score.
+
+a $79\%$ accuracy with respect to the gold standard. The majority baseline for the annotations is $31.3\%$ (always choosing "None").
+
+Corpus statistics. Details about ARA, our full annotated corpus, are in Table 3. On average, 16 action pairs are collected for each recipe pair. Notably, this average number of actions per recipe (15.1) is almost double the average number of 8 sentences per dish in L'20, further motivating our task of annotating fine-grained actions to collect more detailed recipe information. Of particular interest for our task, $70\%$ of the recipes have re-occurring action words (e.g. "add") that do not signal re-occurring actions based on the surrounding recipe context. There are 366 unique actions distributed over 1659 action instances. The majority of recipes have two repeated verbs; the highest repetition is 5 identical verbs in one recipe, which occurs in two recipes.
+
+# 4.3 Disagreement Analysis
+
+Qualitative analysis reveals that inter-annotator disagreement has several sources (actions bolded). We expect $10\%$ of actions to be mistagged (Table 1); typos in the original L'20 corpus are a special case of this ("from" misspelled as "form"). Several recipes consist of more than 20 actions and contain multiple identical actions in their surface form (see paragraph Corpus statistics). Though these actions appear in different colors on the interface, it is easy to forget which color corresponds to which action and subsequently misalign it. This can also cause annotators to simply choose the most superficially similar action without considering context: for example, "mix" aligned to "mash" regardless of surrounding ingredients or stage of recipe.
+
+Even given the full recipe context, linguistic peculiarities of recipe text make it difficult to decide whether actions in two recipes correspond. We discuss several cases of this and quantify their frequency based on 100 of the questions that initially received no majority (percentages in parentheses). Notably, some of these categories overlap.[11]
+
+One-to-many alignments (6%). Our Long-to-Short principle (Section 4.1) pays off, although we still find cases where one source action can align to two distinct target actions. We see this result with infinitive actions: "set aside to $\text{cool}_{a_i}$," as one action, can be aligned to either action in "$\text{transfer}_{a_j}$ to $\text{cool}_{a_{j+1}}$." We also see this with prepositional result phrases: "$\text{stir}_{a_k}$ until $\text{combined}_{a_{k+1}}$". Additional manual analysis shows that missed cases are rare enough not to impact downstream performance.
+
+Many-to-one alignments (26%). The reverse case of one-to-many alignments causes disagreement in whether individual actions that comprise a more complex action should be aligned. This happens frequently in baking recipes, where recipes often differ in when and how they combine wet and dry ingredients. For example in Fig. 3, "Add" adds the milk, butter, and vanilla to eggs in (b), while the same action in (c) adds egg yolks and vanilla essence to milk. Though these high-level actions may sequentially align across recipes, the different ingredients that act as arguments can impact whether they are judged to correspond. Conflated actions contribute to this phenomenon: "Add dry ingredients alternately with wet ingredients"; "Let cool in pan or on wire rack."
+
+Implicit actions (24%). Implicit actions come in many forms in recipe texts and may cause confusion as to whether an action should be aligned to the implicit step or not aligned at all. We see this with nouns that imply actions: "return to continue cooking" versus "return to cooker." We also see actions that must be inferred from their surrounding actions: "Spoon onto baking tray. Take out after 8 minutes." to connote "Bake." Finally, ingredients themselves can imply actions for the chef ("crushed garlic"), or not ("canned tomatoes").
+
+Let and light verbs $(22\%)$ . Light causative verbs such as let and allow are frequent sources of disagreement. Part of this disagreement stems from our action tagger: while sequences such as "let $\text{rest}_{a_i}$" and "allow to $\text{stand}_{a_j}$" are tagged as one action, "$\text{allow}_{a_k}$ to $\text{return}_{a_{k+1}}$" is tagged as two. If a noun intervenes, such as "$\text{let}_{a_l}$ dough $\text{rest}_{a_{l+1}}$", one action can become two.[12] Disagreement can then arise as to which of the two actions carries the semantic weight and should be aligned. By-phrases also introduce confusing subevent structure: "create layers by putting pasta..." versus "layer pasta."
+
+Negative and conditional events (2%). Some recipes include negative events: "Avoid over mixing" or "without kneading." Conditional events may also cause disagreement based on whether an annotator judges them as real or not: "If refrigerated..."; "When you are ready to bake..."
+
+No best alignment (17%). Though some actions have no best alignment, the crowdsourcing task may bias participants to choose an answer.
+
+The sources of disagreement we find in ARA illustrate the need for analyzing recipe text at the fine-grained level of actions. In refining the sentence-level alignments of L'20 to the cooking action level, we find that judging when and how specific actions "correspond" is a complex task that requires intricate knowledge of the cooking domain.
+
+# 5 Automatic Alignment of Actions
+
+We develop two automatic alignment models that mimic our crowdsourcing task; these models align action nodes between our parsed action graphs (Section 3.4). We evaluate both models on the crowdsourced alignment dataset (Section 4): one using features solely from the actions to be aligned, and one incorporating parent and child action nodes into the alignment decision.
+
+# 5.1 Alignment Model Structure
+
+We treat alignment as an assignment problem: for each action in the source recipe $R_{1}$ , we independently decide which action in the target recipe $R_{2}$ it should align to. Alternatively, a source action may have no adequate alignment and be unaligned. We use a two-block architecture for this classification task.
+
+Encoder The Encoder generates an encoding vector $enc(i)$ for each action $a(i)$ of a recipe. We re-tokenize each recipe with the BERT tokenizer and obtain token embeddings $emb(j)$ from BERT. For each action $a(i)$ we track the list of tokens $t(a(i))$ that correspond to this action. We then obtain $enc(i)$ for the two versions of our alignment model in the following way.
+
+In the base model, we run an LSTM over the embeddings to generate the representation for each action, as follows:
+
+$$
+\begin{aligned}
+enc\_b(i) &= \mathrm{LSTM}^{seq}([\, emb(j) \mid j \in t(a(i)) \,]) \\
+enc(i) &= enc\_b(i)
+\end{aligned}
+$$
+
+The extended model incorporates structural information about the recipe graph. We extract the child and parent action nodes for each source action. We then combine the base encoding of each action with the base encodings of its children and parents. As each action can have multiple parents and children, we run an $LSTM^p$ over the parent base encodings for all parents $p(a(i))$ for an action $a(i)$ and an $LSTM^c$ over the child base encodings for all child nodes $c(a(i))$ . We obtain $\text{enc\_ext}(i)$ by concatenating $\text{enc\_b}(i)$ with the outputs of these two LSTMs:
+
+$$
+\begin{aligned}
+enc\_p(i) &= \mathrm{LSTM}^{p}([\, enc\_b(p) \mid p \in p(a(i)) \,]) \\
+enc\_c(i) &= \mathrm{LSTM}^{c}([\, enc\_b(c) \mid c \in c(a(i)) \,]) \\
+enc\_ext(i) &= [\, enc\_b(i);\ enc\_c(i);\ enc\_p(i) \,] \\
+enc(i) &= enc\_ext(i)
+\end{aligned}
+$$
+
+Whenever the parent or child list is empty, we replace the LSTM output with a trained embedding representing the empty child / parent list.
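At the level of shapes, the extended encoding can be sketched as follows; mean pooling stands in for the three LSTMs and a zero vector stands in for the trained empty-list embedding (both substitutions are ours, for brevity).

```python
# Shape-level sketch of the extended encoder: enc_ext(i) concatenates the
# base encoding with pooled summaries of the child and parent encodings.

import numpy as np

EMPTY = np.zeros(4)  # stand-in for the trained empty-list embedding

def pool(encodings):
    """Summarize a list of base encodings (mean pooling replaces the LSTM)."""
    return np.mean(encodings, axis=0) if encodings else EMPTY

def enc_ext(enc_b, parents, children):
    """Concatenate [enc_b(i); enc_c(i); enc_p(i)]."""
    return np.concatenate([enc_b, pool(children), pool(parents)])

# An action with one child and no parents: the parent slot falls back to
# the empty-list embedding, and the result has three times the base size.
print(enc_ext(np.ones(4), [], [np.full(4, 0.5)]).shape)  # (12,)
```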
+
+Scorer The Scorer predicts the alignment target for an action using one-versus-all classification. Given a source action $a_1(i) \in R_1$ , we compute scores $s(enc(a_1(i)), enc(a_2(j)))$ for the alignment to every target action (including the "None" target) $a_2(j) \in R_2 \cup \{\text{none}\}$ . For both the base and the extended model, the encoding of the "None" alignment target is a trained vector of the same size as the action encoding. We compute $s(enc(a_1(i)), enc(a_2(j)))$ using a multi-layer perceptron with two hidden layers and the element-wise product $enc(a_1(i)) \odot enc(a_2(j))$ as input.
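The one-versus-all decision has the following structure; here a plain dot product stands in for the two-hidden-layer MLP scorer, and the vectors are invented stand-ins for learned encodings.

```python
# Structural sketch of the alignment decision: score the source encoding
# against every target encoding plus a trained "None" vector, and pick
# the argmax (dot product replaces the MLP scorer for brevity).

import numpy as np

def align(source_enc, target_encs, none_enc):
    """Return the index of the best target, or None for the 'None' target."""
    candidates = target_encs + [none_enc]          # R2 plus the None vector
    scores = [float(np.dot(source_enc, c)) for c in candidates]
    best = int(np.argmax(scores))
    return None if best == len(target_encs) else best

src = np.array([1.0, 0.0])
targets = [np.array([0.9, 0.1]), np.array([0.1, 0.9])]
none_vec = np.array([0.2, 0.2])
print(align(src, targets, none_vec))  # 0 (the first target wins)
```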
+
+| Model Name | Accuracy |
| Human Upper Bound | 79.0 |
| Sequential Order | 16.5 |
| Cosine Similarity | 41.5 |
| Common Action Pairs | 52.1 |
| Our Alignment Model (base) | 66.3 |
| Our Alignment Model (extended) | 72.4 |
+
+Table 4: Performance comparison of the three baseline models, alignment model (base), and alignment model (extended) on the Action Alignment Corpus.
+
+Training and evaluation We train both models using cross-entropy loss, with updates only on incorrect predictions to avoid overfitting. We use 10-fold cross validation on the ten different dishes, with one dish serving as test dish, such that the aligner is always evaluated on an unknown domain.
+
+# 5.2 Experiment Results
+
+We implement three baselines for comparison: (i) sequential ordering of alignments, such that $a_1(i)$ from recipe $R_1$ will be aligned to $a_2(i)$ from recipe $R_2$ ; (ii) cosine similarity scores between the BERT embeddings of action pairs as the scorer; and (iii) common action pair frequencies with 10-fold cross-validation (Table 4). We further compare our model to the human upper bound for alignment accuracy as discussed in Section 4.2. Our base alignment model outperforms all the baselines, but we observe a substantial gain in accuracy in the extended alignment model, illustrating the importance of structural information about the recipe in aligning actions. The extended model still does not reach human performance, illustrating the difficulty of the task.
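The first two baselines can be sketched as follows; the embeddings in the usage example are random stand-ins for BERT vectors.

```python
# Sketches of the sequential-order and cosine-similarity baselines.

import numpy as np

def sequential_baseline(n_source, n_target):
    """Align a1(i) to a2(i); source positions past the target length get None."""
    return [i if i < n_target else None for i in range(n_source)]

def cosine_baseline(source_embs, target_embs):
    """Align each source action to the most cosine-similar target action."""
    def unit(v):
        return v / np.linalg.norm(v)
    T = np.stack([unit(t) for t in target_embs])
    return [int(np.argmax(T @ unit(s))) for s in source_embs]

print(sequential_baseline(4, 3))  # [0, 1, 2, None]
rng = np.random.default_rng(0)
embs = [rng.normal(size=8) for _ in range(3)]
print(cosine_baseline(embs, embs))  # each action matches itself: [0, 1, 2]
```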
+
+The extended alignment model achieves an accuracy of $69.8\%$ on aligning actions of the longer recipe to actions of the shorter recipe. This suggests that our Long-to-Short approach to data collection yields valid data in both directions.
+
+As a point of comparison, L'20 achieve an F-Score of 54.6 for text-to-text alignments and 70.3 for text-to-video alignment at the sentence level, suggesting that alignments of actions may be harder to annotate than alignments of entire sentences, but easier to align automatically.
+
+# 6 Conclusion & Future Work
+
+In this paper, we have made two contributions: (i) ARA, a novel corpus of human-annotated alignments between corresponding cooking actions, and (ii) a first neural model to align actions across recipe graphs. We find that incorporating structural information about recipes improves the accuracy of the neural model, highlighting the usefulness of recipe graphs and of recipe parsers.
+
+Compared to previous work, our corpus and model represent alignments at the level of individual actions and not of entire sentences. In refining the sentence-level alignments of L'20 to the cooking action level, we find that judging when and how specific actions "correspond" is a complex task that requires intricate knowledge of the cooking domain. Alternatively, the complexity of recipe interpretation can be framed as a matter of recognizing nuances in how meaning is construed in recipe text given the genre and its preferred syntactic constructions (Langacker, 1993; Trott et al., 2020).
+
+Looking ahead, our work lays a foundation for research which automatically aggregates multiple recipe graphs for the same dish, identifying common and distinct parts of the different recipes. This opens up a variety of applications in the cooking domain, including dialogue systems which can explain a recipe at different levels of abstraction.
+
+# Acknowledgments
+
+Partially funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project-ID 232722074 - SFB 1102.
+
+# References
+
+Nathanael Chambers and Dan Jurafsky. 2008. Unsupervised learning of narrative event chains. In Proceedings of ACL-08: HLT, pages 789-797, Columbus, Ohio. Association for Computational Linguistics.
+Nathanael Chambers and Dan Jurafsky. 2009. Unsupervised learning of narrative schemas and their participants. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 602-610, Suntec, Singapore. Association for Computational Linguistics.
+Minsuk Chang, Leonore V. Guillain, Hyeungshik Jung, Vivian M. Hare, Juho Kim, and Maneesh Agrawala. 2018. Recipescape: An interactive tool for analyzing cooking instructions at scale. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI 2018, Montreal, QC, Canada, April 21-26, 2018, page 451. ACM.
+Yuzhe Chen. 2017. A statistical machine learning approach to generating graph structures from food recipes. Master's thesis, Brandeis University.
+
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
+Timothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural dependency parsing. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. Open-Review.net.
+Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke Zettlemoyer. 2018. AllenNLP: A deep semantic natural language processing platform. In Proceedings of Workshop for NLP Open Source Software (NLP-OSS), pages 1-6, Melbourne, Australia. Association for Computational Linguistics.
+Jermsak Jermsurawong and Nizar Habash. 2015. Predicting the structure of cooking recipes. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 781-786, Lisbon, Portugal. Association for Computational Linguistics.
+Chloe Kiddon, Ganesa Thandavam Ponnuraj, Luke Zettlemoyer, and Yejin Choi. 2015. Mise en place: Unsupervised interpretation of instructional recipes. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP).
+Ronald W. Langacker. 1993. Universals of construal. In Annual Meeting of the Berkeley Linguistics Society, volume 19, pages 447-463.
+Angela Lin, Sudha Rao, Asli Celikyilmaz, Elnaz Nouri, Chris Brockett, Debadeepta Dey, and Bill Dolan. 2020. A recipe for creating multimodal aligned datasets for sequential tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4871-4884, Online. Association for Computational Linguistics.
+Shinsuke Mori, Hirokuni Maeta, Tetsuro Sasada, Koichiro Yoshino, Atsushi Hashimoto, Takuya Funatomi, and Yoko Yamakata. 2014. Flowgraph2text: Automatic sentence skeleton compilation for procedural text generation. In Proceedings of the 8th International Conference on Natural Language Generation (INLG).
+Iftekhar Naim, Young Chol Song, Qiguang Liu, Henry A. Kautz, Jiebo Luo, and Daniel Gildea. 2014. Unsupervised alignment of natural language instructions with video segments. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, July 27 -31, 2014, Quebec City, Quebec, Canada, pages 1558-1564. AAAI Press.
+
+Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.
+
+Florian Pusse, Asad Sayeed, and Vera Demberg. 2016. LingoTurk: managing crowdsourced tasks for psycholinguistics. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations, pages 57-61, San Diego, California. Association for Computational Linguistics.
+
+Michaela Regneri, Alexander Koller, and Manfred Pinkal. 2010. Learning script knowledge with web experiments. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 979-988, Uppsala, Sweden. Association for Computational Linguistics.
+
+Sean Trott, Tiago Timponi Torrent, Nancy Chang, and Nathan Schneider. 2020. (re) construing meaning in NLP. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5170-5184, Online. Association for Computational Linguistics.
+
+Lilian Wanzare, Alessandra Zarcone, Stefan Thater, and Manfred Pinkal. 2017. Inducing script structure from crowdsourced event descriptions via semisupervised clustering. In Proceedings of the 2nd Workshop on Linking Models of Lexical, Sentential and Discourse-level Semantics, pages 1-11, Valencia, Spain. Association for Computational Linguistics.
+
+Lilian D. A. Wanzare, Alessandra Zarcone, Stefan Thater, and Manfred Pinkal. 2016. A crowdsourced database of event sequence descriptions for the acquisition of high-quality script knowledge. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 3494-3501, Portorož, Slovenia. European Language Resources Association (ELRA).
+
+Yoko Yamakata, Shinji Imahori, Hirokuni Maeta, and Shinsuke Mori. 2016. A method for extracting major workflow composed of ingredients, tools, and actions from cooking procedural text. In 2016 IEEE International Conference on Multimedia & Expo Workshops (ICMEW).
+
+Yoko Yamakata, Shinsuke Mori, and John Carroll. 2020. English recipe flow graph corpus. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 5187-5194, Marseille, France. European Language Resources Association.
+
+Mehmet Özgen. 2019. Tagging and action graph generation for recipes. MSc thesis, Hacettepe University.
+
+# A Tagger and Parser Evaluation
+
+# A.1 Methodology
+
+All our measures are computed as averages of the mean values of four individually trained instances of our model and are reported together with the respective standard deviations. For the evaluation of parsing automatically tagged recipe text (Table 2 in the paper), the recipes are tagged by four individually trained instances of our tagger (with ELMo embeddings) before they are parsed by four individually trained instances of the parser.
+
+While Y'20 cross-validate over the whole 300-r corpus, we train on a subset of 240 recipes and evaluate on a subset of 30 recipes. Each subset is randomly chosen from the full corpus such that the proportions of 100-r to 200-r are preserved as 1:2.
+
+# A.2 Tagger
+
+For hyper-parameters, see Table 5. The hyperparameters were fine-tuned separately for the pretrained embeddings (ELMo and BERT, respectively).
+
+| Hyper-parameter | ELMo | BERT |
| Hidden size BiLSTM | 50 | 200 |
| Layers BiLSTM | 2 | 2 |
| Dropout BiLSTM | 0.5 | 0.5 |
| Dropout CRF | 0.5 | 0.5 |
| Regularization method | L2 (α = 0.5) | L2 (α = 0.1) |
| Optimization method | Adam | Adam |
| Learning rate | 0.0075 | 0.001 |
| Gradient norm | 10.0 | 10.0 |
| Training epochs | 50 | 50 |
+
+Table 5: Hyper-parameters for the tagger after fine-tuning on the German data set. The number of epochs is an observation.
+
+For reference, we display an overview of the labels defined by Y'20 in Table 6. The label-specific performance for action sequences is itemized in detail in Table 7.
+
+# A.3 Parser
+
+We did not perform any fine-tuning on the parser. The hyper-parameters are reported in Table 8.
+
+| Hyper-parameter | Value |
| Tag embedding dim | 100 |
| Hidden size BiLSTM | 400 |
| Layers BiLSTM | 3 |
| Input dropout | 0.3 |
| BiLSTM dropout | 0.3 |
| Classifier dropout | 0.3 |
| Optimization method | Dense Sparse Adam (betas (0.9, 0.9)) |
| Gradient norm | 5 |
| Training epochs | 20 ± 10 |
+
+Table 8: Hyper-parameters for the parser - no finetuning performed. The number of training epochs is an observation.
+
+| Label | Meaning | Explanation |
| F | Food | Eatable; also intermediate products |
| T | Tool | Knife, container, etc. |
| D | Duration | Duration of cooking |
| Q | Quantity | Quantity of food |
| Ac | Action by chef | Verb representing a chef's action |
| Ac2 | Discontinuous Ac (English only) | Second, non-contiguous part of a single action by chef |
| Af | Action by food | Verb representing action of a food |
| At | Action by tool (English only) | Verb representing a tool's action |
| Sf | Food state | Food's initial or intermediate state |
| St | Tool state | Tool's initial or intermediate state |
+
+Table 6: Y'20 labels for the tagging task.
+
+| Model | Data | Label | #Instances | Precision | Recall | F-Score |
| Y'20 | Y'20 | {Ac, Ac2, Af, At} | | 88.7 | 89.3 | 89.0 |
| Our model ('21) | Y'20 | {Ac, Ac2, Af, At} | | 92.0 ± 1.6 | 88.4 ± 1.8 | 90.1 ± 1.7 |
| Y'20 | Y'20 | Ac | 4977 | 92.3 | 93.1 | 92.7 |
| Our model ('21) | Y'20 | Ac | 483 | 94.6 ± 1.1 | 91.7 ± 1.6 | 93.1 ± 1.2 |
| Y'20 | Y'20 | Ac2 | 178 | 43.8 | 46.8 | 45.3 |
| Our model ('21) | Y'20 | Ac2 | 16 | 69.2 ± 9.0 | 68.8 ± 10.2 | 68.1 ± 3.2 |
| Y'20 | Y'20 | Af | 255 | 51.4 | 50.4 | 50.9 |
| Our model ('21) | Y'20 | Af | 32 | 64.8 ±9.9 | 47.7 ± 7.8 | 54.9 ± 8.7 |
| Y'20 | Y'20 | At | 15 | 60.0 | 10.0 | 17.1 |
| Our model ('21) | Y'20 | At | 2 | 100 ± 0.0 | 100 ± 0.0 | 100 ± 0.0 |
+
+Table 7: Label-specific performance for action sequences; comparison of our model and that of Y'20. The values in lines 1 and 2 are micro-averaged over the four individual action labels.
+
+The set of edge labels determined by Y'20 is displayed in Table 9.
+
+# B Crowdsourcing
+
+# B.1 Dishes Annotated
+
+(1) Baked Ziti; (2) Blueberry Banana Bread; (3) Cauliflower Mash; (4) Chewy Chocolate Chip Cookies; (5) Garam Masala; (6) Homemade Pizza Dough; (7) Orange Chicken; (8) Pumpkin Chocolate Chip Bread; (9) Slow Cooker Chicken Tortilla Soup; (10) Waffles.
+
+# B.2 Experiment Platform
+
+Figure 5 gives a screenshot of the instruction page, and Figure 6 gives a screenshot of the question page.
+
+# C Alignment Model Training Details
+
+To re-tokenize and generate token embeddings for the recipes, we use the bert-base-uncased BERT model with an embedding dimension of 768. We train both alignment models (base and extended) with 10-fold cross-validation, with 1 dish in the test set and 9 dishes for training. In each fold, the model runs for 40 epochs with a train/dev split of 8/1 dishes, respectively. We use Adam as the optimizer with a learning rate of 0.0001 and cross-entropy loss during training. For the aligner's hyper-parameters, see Table 10.
+
+| Hyper-parameter | Value |
| BERT embedding dim | 768 |
| LSTM(seq) hidden dim | 768 |
| LSTM(p) hidden dim | 768 |
| LSTM(c) hidden dim | 768 |
| MLP layer 1 output dim | 128 |
| MLP layer 2 output dim | 32 |
| MLP layer 3 output dim | 1 |
+
+Table 10: Hyper-parameters for the Alignment model.
+
+| Label | Meaning | Explanation |
| Agent | Subject | Relationship with actions (Ac or Af) |
| Targ | Direct object | Relationship with actions (Ac or Af) |
| Dest | Indirect object (container) | Relationship with actions (Ac or Af) |
| t-comp | tool complement | Tool used in an action |
| F-comp | Food complement | Food used as a tool |
| F-eq | Food equality | Identical food |
| F-part-of | Food part-of | Refer to a part of a food |
| F-set | Food set | Refer to a set of foods |
| T-eq | Tool equality | Identical tool |
| T-part-of | Tool part-of | Refer to a part of a tool |
| A-eq | Action equality | Identical action (Ac, Af) |
| V-tm | Head verb for timing, etc. | |
| other-mod | Other relationships | |
+
+Table 9: Y'20 edge labels for the parsing task.
+
+# Instructions
+
+Welcome to our experiment! By participating in this experiment you will contribute to research that automatically analyzes recipes. We are counting on you to tell us whether actions in different recipes essentially refer to the same step in cooking.
+
+You will need to align two pairs of recipes. For each pair, we will show you two recipes for the same dish; the actions that you need to pay attention to will be displayed in colors. We will ask you to decide whether an action in the first recipe corresponds to one in the second. Here is an example.
+
+# Please read the following recipes carefully and answer the questions below.
+
+Dish: Nostraline Olives
+
+Recipe 1
+
+Blend all ingredients together in a mixer until desired consistency . Adjust to taste !
+
+Recipe 2
+
+Cooks Note : Nostrine olives are a type of black olive from the Tuscany region of Italy . Any high - quality brined black olive such as nicoise or kalamata can be used in their place . Mash the anchovies with the garlic in a mortar and pestle . Mix in the olives olive oil parsley lemon juice and capers . If desired serve the tapenade on toasted bread rubbed with raw garlic a drizzle of olive oil and a slice of beautiful heirloom tomato .
+
+Question: Which action below from the second recipe does Blend from the first recipe correspond to best?
+
+$\bigcirc$ Mix
+
+$\bigcirc$ raw
+
+$\bigcirc$ None of these
+
+$\bigcirc$ drizzle
+
+In this example, we are interested in the action Blend from recipe 1. Having read both recipes, we see that Blend includes the activities of both Mash and Mix from recipe 2. No option in the list is a perfect fit, but we should choose Mix as it still has a considerable overlap with Blend.
+
+# Legal Information
+
+This experiment is being conducted as part of ongoing research at University. If you have any questions or comments about the study, please contact us. You must be at least 18 years old to participate. Your participation in this research is voluntary. There are no risks or benefits to participating in this study. You may decline to answer any or all of the following questions. You may decline further participation, at any time, without adverse consequences. All data will be anonymized prior to analysis. If you agree to participate, please click on 'Next'.
+
+Next
+
+Figure 5: A screen shot of the instruction page.
+
+Please read the following recipes carefully and answer the questions below.
+
+Dish:
+
+# Recipe1
+
+1 ) Preheat the oven to 200C / Gas 2 ) Bring a large pot of water to a boil salt generously and boil the pasta until al dente tender but still slightly firm . Drain . 3 ) In a large bowl toss the cooked pasta with the marinara sauce cubed mozzarella pieces 30 g of the Parmesan black pepper and the pepper flakes . Transfer the pasta to an oiled 23 cm by 33 cm baking dish . Cover the top of the pasta with the sliced mozzarella and sprinkle with the remaining Parmesan . Bake until lightly browned and hot about 30 minutes . Serve immediately . For the quick marinara sauce : 2 tbsp extra - virgin olive oil /4 medium onion diced 3 cloves garlic chopped 800 g whole peeled canned tomatoes in puree roughly chopped Sprig of fresh thyme Sprig of fresh basil 12 g salt Freshly ground black pepper 1 ) Heat the oil in a medium saucepan over medium - high heat . Saute the onion and garlic stirring until lightly browned about 3 minutes . Add the tomatoes and the herb sprigs and bring to a boil . Lower the heat and simmer covered for 10 minutes . 2 ) Remove and discard the herb sprigs . Stir in the salt and season with pepper to taste . Use now or store covered in the refrigerator for up to 3 days or freeze for up to 2 months .
+
+# Recipe2
+
+Preheat the oven to $200^{\circ}\mathrm{C}$ . Bring a large pot of water to a boil salt generously and boil the pasta until al dente tender but still slightly firm. Drain. Toss the cooked pasta with the marinara sauce cubed mozzarella half the Parmesan cheese black pepper and pepper flakes. Transfer the pasta to an oiled 9 by 13-inch baking dish. Cover the top of the pasta with the sliced mozzarella and sprinkle with the remaining Parmesan. Bake until lightly browned and hot about 30 minutes. Heat the oil in a medium saucepan over medium - high heat . Cook the sausage until beginning to brown about 3 minutes . Add the onion and garlic stirring until lightly browned about 3 minutes more . Add the tomatoes and the herb sprigs and bring to a boil . Lower the heat and simmer covered for 10 minutes . Remove and discard the herb sprigs . Stir in the salt and season with pepper to taste . Use now or store covered in the refrigerator for up to 3 days or freeze for up to 2 months .
+
+Which action below from the second recipe does freeze from the first recipe correspond to best?
+
+$\bigcirc$ Preheat
+$\bigcirc$ Bring
+$\bigcirc$ boil
+$\bigcirc$ boil
+$\bigcirc$ Drain
+$\bigcirc$ Toss
+$\bigcirc$ cooked
+$\bigcirc$ cubed
+$\bigcirc$ Transfer
+$\bigcirc$ oiled
+$\bigcirc$ Cover
+$\bigcirc$ sliced
+$\bigcirc$ sprinkle
+$\bigcirc$ Bake
+$\bigcirc$ Heat
+$\bigcirc$ Cook
+$\bigcirc$ brown
+$\bigcirc$ Add
+$\bigcirc$ stirring
+$\bigcirc$ browned
+$\bigcirc$ Add
+$\bigcirc$ bring to a boil
+$\bigcirc$ Lower
+$\bigcirc$ simmer
+$\bigcirc$ covered
+$\bigcirc$ Remove
+$\bigcirc$ discard
+$\bigcirc$ Stir
+$\bigcirc$ season
+$\bigcirc$ to taste
+$\bigcirc$ Use
+$\bigcirc$ store covered
+$\bigcirc$ freeze
+$\bigcirc$ None of these
+
+Next
+
+Figure 6: A screen shot of the experiment interface.
\ No newline at end of file
diff --git a/aligningactionsacrossrecipegraphs/images.zip b/aligningactionsacrossrecipegraphs/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..678d9c07db1e72a175c9369d639a7e1ad6fd0a45
--- /dev/null
+++ b/aligningactionsacrossrecipegraphs/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:616c46ec5ea8f31843dc4f4eb89ed5f36677dad8ad9ab4d7d86811679beb9405
+size 527054
diff --git a/aligningactionsacrossrecipegraphs/layout.json b/aligningactionsacrossrecipegraphs/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..9e097eded7a4ddc3122a32a845d3644cca612971
--- /dev/null
+++ b/aligningactionsacrossrecipegraphs/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9a7fff124bfc8aae0eddbdfa588832f915f2c985733c657326352467f637186f
+size 435356
diff --git a/aligningcrosslingualsentencerepresentationswithdualmomentumcontrast/a630dbd0-4360-4381-9fc6-879276aa9a91_content_list.json b/aligningcrosslingualsentencerepresentationswithdualmomentumcontrast/a630dbd0-4360-4381-9fc6-879276aa9a91_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..a9dd8e331d9b326393dc48f04e1a13b3e8c32b3e
--- /dev/null
+++ b/aligningcrosslingualsentencerepresentationswithdualmomentumcontrast/a630dbd0-4360-4381-9fc6-879276aa9a91_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:52351275cc2ba686380c3b8b4865483d8518d05ec1dadacac4b1c48a0cffff20
+size 59405
diff --git a/aligningcrosslingualsentencerepresentationswithdualmomentumcontrast/a630dbd0-4360-4381-9fc6-879276aa9a91_model.json b/aligningcrosslingualsentencerepresentationswithdualmomentumcontrast/a630dbd0-4360-4381-9fc6-879276aa9a91_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..1ba27a7d43c7083192dfc1c731d136faac27ca99
--- /dev/null
+++ b/aligningcrosslingualsentencerepresentationswithdualmomentumcontrast/a630dbd0-4360-4381-9fc6-879276aa9a91_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6a359c3fe6ac443623b2f09beb8ba3403e84185b0c177ca67f02c6a2fa6d1ea9
+size 74237
diff --git a/aligningcrosslingualsentencerepresentationswithdualmomentumcontrast/a630dbd0-4360-4381-9fc6-879276aa9a91_origin.pdf b/aligningcrosslingualsentencerepresentationswithdualmomentumcontrast/a630dbd0-4360-4381-9fc6-879276aa9a91_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..9c5dfe997dce80a6fe9d4a393d8c7fdc833a26ba
--- /dev/null
+++ b/aligningcrosslingualsentencerepresentationswithdualmomentumcontrast/a630dbd0-4360-4381-9fc6-879276aa9a91_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:91a3719f3763b6956d3d0510211fd6718d16f13479290473d02f75f626c1f5e3
+size 484586
diff --git a/aligningcrosslingualsentencerepresentationswithdualmomentumcontrast/full.md b/aligningcrosslingualsentencerepresentationswithdualmomentumcontrast/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..c6573e465b286b2a5c8344c59b7e51878479e3dd
--- /dev/null
+++ b/aligningcrosslingualsentencerepresentationswithdualmomentumcontrast/full.md
@@ -0,0 +1,256 @@
+# Aligning Cross-lingual Sentence Representations with Dual Momentum Contrast
+
+Liang Wang and Wei Zhao and Jingming Liu
+
+Yuanfudao AI Lab, Beijing, China
+
+{wangliang01,zhaowei01,liujm}@yuanfudao.com
+
+# Abstract
+
+In this paper, we propose to align sentence representations from different languages into a unified embedding space, where semantic similarities (both cross-lingual and monolingual) can be computed with a simple dot product. Pre-trained language models are finetuned with the translation ranking task. Existing work (Feng et al., 2020) uses sentences within the same batch as negatives, which can suffer from the issue of easy negatives. We adapt MoCo (He et al., 2020) to further improve the quality of alignment. As the experimental results show, the sentence representations produced by our model achieve the new state-of-the-art on several tasks, including Tatoeba en-zh similarity search (Artetxe and Schwenk, 2019b), BUCC en-zh bitext mining, and semantic textual similarity on 7 datasets.
+
+# 1 Introduction
+
+Pre-trained language models like BERT (Devlin et al., 2019) and GPT (Radford and Narasimhan, 2018) have achieved phenomenal successes on a wide range of NLP tasks. However, sentence representations for different languages are not very well aligned, even for pre-trained multilingual models such as mBERT (Pires et al., 2019; Wang et al., 2020). This issue is more prominent for language pairs from different families (e.g., English versus Chinese). Also, previous work (Li et al., 2020) has shown that out-of-the-box BERT embeddings perform poorly on monolingual semantic textual similarity (STS) tasks.
+
+There are two general goals for sentence representation learning. First, cross-lingual representations should be aligned, which is a crucial step for tasks like bitext mining (Artetxe and Schwenk, 2019a), unsupervised machine translation (Lample et al., 2018b), and zero-shot cross-lingual transfer (Hu et al., 2020). Second, the representations should induce a metric space, where semantic similarities can be computed with simple functions (e.g., dot product on $L_{2}$-normalized representations).
+
+Translation ranking (Feng et al., 2020; Yang et al., 2020) can serve as a surrogate task to align sentence representations. Intuitively speaking, parallel sentences should have similar representations and are therefore ranked higher, while non-parallel sentences should have dissimilar representations. Models are typically trained with in-batch negatives, which need a large batch size to alleviate the easy negatives issue (Chen et al., 2020a). Feng et al. (2020) use cross-accelerator negative sampling to enlarge the batch size to 2048 with 32 TPU cores. Such a solution is hardware-intensive and still struggles to scale.
+
+Momentum Contrast (MoCo) (He et al., 2020) decouples the batch size and the number of negatives by maintaining a large memory queue and a momentum encoder. MoCo requires that queries and keys lie in a shared input space. In self-supervised vision representation learning, both queries and keys are transformed image patches. However, for translation ranking task, the queries and keys come from different input spaces. In this paper, we present dual momentum contrast to solve this issue. Dual momentum contrast maintains two memory queues and two momentum encoders for each language. It combines two contrastive losses by performing bidirectional matching.
+
+We conduct experiments on the English-Chinese language pair. Language models that are separately pre-trained for English and Chinese are fine-tuned on the translation ranking task with dual momentum contrast. To demonstrate the improved quality of the aligned sentence representations, we report state-of-the-art results on both cross-lingual and monolingual evaluation datasets: the Tatoeba similarity search dataset (accuracy $95.9\% \rightarrow 97.4\%$), the BUCC 2018 bitext mining dataset (F1 score $92.27\% \rightarrow 93.66\%$), and 7 English STS datasets (average Spearman's correlation $77.07\% \rightarrow 78.95\%$). We also carry out several ablation studies to help understand the learning dynamics of our proposed model.
+
+# 2 Method
+
+
+Figure 1: Illustration of dual momentum contrast. $sg$ denotes "stop gradient". $x$ and $y$ are sentences from two different languages.
+
+Dual Momentum Contrast is a variant of the MoCo proposed by He et al. (2020). Our method fits into the bigger picture of contrastive learning for self-supervised representation learning (LeKhac et al., 2020). Given a collection of parallel sentences $\{x_{i},y_{i}\}_{i = 1}^{n}$ , as illustrated in Figure 1, we first encode each sentence using language-specific BERT models (base encoder), then apply mean pooling on the last-layer outputs and $L_{2}$ normalization to get the representation vector $\mathbf{h}_{x_i},\mathbf{h}_{y_i}\in R^{768}$ .
+
+Each BERT encoder has a momentum encoder, whose parameters $\theta$ are updated by exponential moving average of the base encoder as follows:
+
+$$
+\boldsymbol{\theta}_{t} \leftarrow m \boldsymbol{\theta}_{t-1} + (1 - m) \boldsymbol{\theta}_{\mathrm{base}} \tag{1}
+$$
+
+where $t$ is the iteration step. Two memory queues are maintained, one per language, each storing the $K$ vectors encoded by the corresponding momentum encoder from the most recent batches. The oldest vectors are replaced with the vectors from the current batch at each optimization step. The momentum coefficient $m \in [0,1]$ is usually very close to 1 (e.g., 0.999) to keep the vectors in the memory queue consistent across batches. $K$ can be very large ( $>10^{5}$ ) to provide enough negative samples for learning robust representations.
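The momentum update of Equation 1 and the memory queue can be sketched as follows, using NumPy arrays as stand-ins for encoder parameters and key vectors. The batched ring-buffer layout is an assumption borrowed from common MoCo implementations, not something this paper specifies.

```python
import numpy as np

def momentum_update(theta_momentum, theta_base, m=0.999):
    # Equation 1: the momentum encoder slowly tracks the base encoder,
    # keeping the keys already stored in the queue consistent across batches.
    return m * theta_momentum + (1.0 - m) * theta_base

class MemoryQueue:
    """Ring buffer holding the K most recent momentum-encoded vectors."""
    def __init__(self, K, dim):
        self.buf = np.zeros((K, dim), dtype=np.float32)
        self.ptr = 0

    def enqueue(self, keys):  # keys: (B, dim); assumes K is a multiple of B
        B = keys.shape[0]
        self.buf[self.ptr:self.ptr + B] = keys  # overwrite the oldest slots
        self.ptr = (self.ptr + B) % self.buf.shape[0]
```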
+
+To train the encoders, we use the InfoNCE loss
+
+(Oord et al., 2018):
+
+$$
+\mathrm{L}(x, y) = -\log \frac{\exp\left(\mathbf{h}_{x} \cdot \mathbf{h}_{y} / \tau\right)}{\sum_{i=0}^{K} \exp\left(\mathbf{h}_{x} \cdot \mathbf{h}_{y_{i}} / \tau\right)} \tag{2}
+$$
+
+$\tau$ is a temperature hyperparameter. Intuitively, Equation 2 is a $(\mathbf{K} + 1)$ -way softmax classification, where the translation sentence $y = y_{0}$ is the positive, and the negatives are those in the memory queue $\{y_i\}_{i=1}^K$ . Note that the gradients do not backpropagate through momentum encoders nor the memory queues.
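For concreteness, Equation 2 can be sketched for a single query as below. This is an illustrative NumPy reimplementation assuming L2-normalized inputs, not the paper's actual training code.

```python
import numpy as np

def info_nce(h_x, h_y, queue, tau=0.04):
    """InfoNCE loss (Equation 2) for one L2-normalized query h_x.
    h_y is the parallel (positive) sentence representation; queue is the
    (K, dim) matrix of negatives from the memory queue. Effectively a
    (K+1)-way softmax classification with the positive at index 0."""
    logits = np.concatenate(([h_x @ h_y], queue @ h_x)) / tau
    logits -= logits.max()  # subtract max for numerical stability
    return -(logits[0] - np.log(np.exp(logits).sum()))
```

The full objective (Equation 3) adds the symmetric term for the other direction, using the other language's memory queue.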
+
+Symmetrically, we can get $\mathrm{L}(y,x)$ . The final loss function is the sum:
+
+$$
+\min \mathrm {L} (x, y) + \mathrm {L} (y, x) \tag {3}
+$$
+
+After the training is done, we throw away the momentum encoders and the memory queues, and only keep the base encoders to compute the sentence representations. In the following, our model is referred to as MoCo-BERT.
+
+Application Given a sentence pair $(x_{i},y_{j})$ from different languages, we can compute cross-lingual semantic similarity by taking dot product of $L_{2}$ -normalized representations $\mathbf{h}_{x_i}\cdot \mathbf{h}_{y_j}$ . It is equivalent to cosine similarity, and closely related to the Euclidean distance.
+
+Our model can also be used to compute monolingual semantic similarity. Given a sentence pair $(x_{i}, x_{j})$ from the same language, let $y_{j}$ be the translation of $x_{j}$ . If the model is well trained, the representations of $x_{j}$ and $y_{j}$ should be close to each other: $\mathbf{h}_{x_{j}} \approx \mathbf{h}_{y_{j}}$ . Therefore, $\mathbf{h}_{x_{i}} \cdot \mathbf{h}_{x_{j}} \approx \mathbf{h}_{x_{i}} \cdot \mathbf{h}_{y_{j}}$ , and the latter is exactly the cross-lingual similarity that our model explicitly optimizes.
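A toy check that the dot product of L2-normalized vectors equals cosine similarity, so one score function serves both cross-lingual and monolingual pairs. The 3-dimensional vectors are arbitrary stand-ins for the 768-dimensional sentence representations.

```python
import numpy as np

def l2_normalize(v):
    return v / np.linalg.norm(v)

a = np.array([1.0, 2.0, 2.0])  # stand-in for h_{x_i}
b = np.array([2.0, 1.0, 2.0])  # stand-in for h_{y_j}

# Dot product of normalized vectors ...
sim = l2_normalize(a) @ l2_normalize(b)
# ... equals the cosine similarity of the raw vectors.
cosine = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
```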
+
+# 3 Experiments
+
+# 3.1 Setup
+
+Data Our training data consists of English-Chinese corpora from UNCorpus $^{1}$ , Tatoeba, News Commentary $^{2}$ , and corpora provided by CWMT 2018 $^{3}$ . All parallel sentences that appear in the evaluation datasets are excluded. We sample 5M sentences to make the training cost manageable.
+
+
+3http://www.cipsc.org.cn/cwmt/2018/
+
+Hyperparameters The encoders are initialized with bert-base-uncased (English, for fair comparison) and RoBERTa-wwm-ext $^4$ (Chinese); using better pre-trained language models is orthogonal to our contribution. Following Reimers and Gurevych (2019), the sentence representation is computed by mean pooling over the final layer's outputs. The memory queue size is 409600, the temperature $\tau$ is 0.04, and the momentum coefficient is 0.999. We use the AdamW optimizer with a maximum learning rate of $4\times 10^{-5}$ and cosine decay. Models are trained with batch size 1024 for 15 epochs on 4 V100 GPUs. Please check out Appendix A for more details about the data and hyperparameters.
+
+# 3.2 Cross-lingual Evaluation
+
+| Model | Accuracy |
+| --- | --- |
+| mBERTbase (Hu et al., 2020) | 71.6% |
+| LASER (Artetxe and Schwenk, 2019b) | 95.9% |
+| VECO (Luo et al., 2020) | 82.7% |
+| SBERTbase-p† | 95.0% |
+| MoCo-BERTbase (zh→en) | 97.4% |
+| MoCo-BERTbase (en→zh) | 96.6% |
+
+Table 1: Accuracy on the test set of Tatoeba en-zh language pair. †: Reimers and Gurevych (2020).
+
+| Model | F1 |
+| --- | --- |
+| mBERTbase (Hu et al., 2020) | 50.0% |
+| LASER (Artetxe and Schwenk, 2019b) | 92.27% |
+| VECO (Luo et al., 2020) | 78.5% |
+| SBERTbase-p† | 87.8% |
+| LaBSE (Feng et al., 2020) | 89.0% |
+| MoCo-BERTbase | 93.66% |
+
+Table 2: F1 score on the en-zh test set of BUCC 2018 dataset. †: Reimers and Gurevych (2020).
+
+Tatoeba cross-lingual similarity search Introduced by Artetxe and Schwenk (2019b), Tatoeba corpus consists of 1000 English-aligned sentence pairs. We find the nearest neighbor for each sentence in the other language using cosine similarity. Results for both forward and backward directions are listed in Table 1. MoCo-BERT achieves an accuracy of $97.4\%$ .
+
+BUCC 2018 bitext mining aims to identify parallel sentences from a collection of sentences in two languages (Zweigenbaum et al., 2018). Following Artetxe and Schwenk (2019a), we adopt margin-based scoring, which considers the average cosine similarity of the $k$ nearest neighbors ( $k = 3$ in our experiments):
+
+$$
+\mathrm{sim}(x, y) = \mathrm{margin}\Bigg(\cos(x, y),\ \sum_{z \in \mathrm{NN}_{k}(x)} \frac{\cos(x, z)}{2k} + \sum_{z \in \mathrm{NN}_{k}(y)} \frac{\cos(y, z)}{2k}\Bigg) \tag{4}
+$$
+
+We use the distance margin function: $\mathrm{margin}(a,b) = a - b$ , which performs slightly better than the ratio margin function (Artetxe and Schwenk, 2019a). All sentence pairs with scores larger than threshold $\lambda$ are identified as parallel. $\lambda$ is searched based on the validation set. The F1 score of our system is $93.66\%$ , as shown in Table 2.
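Equation 4 with the distance margin can be vectorized over all candidate pairs roughly as follows. This is a sketch assuming L2-normalized sentence matrices (so dot products are cosines) and that neighbors are taken over the full candidate set; the threshold search over $\lambda$ is omitted.

```python
import numpy as np

def margin_scores(H_x, H_y, k=3):
    """Distance-margin scores (Equation 4) for every pair (x_i, y_j).
    margin(a, b) = a - b, where b averages the cosine similarity of each
    side's k nearest neighbors in the other language."""
    cos = H_x @ H_y.T                                 # (n_x, n_y) cosines
    nn_x = np.sort(cos, axis=1)[:, -k:].mean(axis=1)  # mean cos over NN_k(x)
    nn_y = np.sort(cos, axis=0)[-k:, :].mean(axis=0)  # mean cos over NN_k(y)
    # sum/(2k) over k neighbors on each side = (mean_x + mean_y) / 2
    return cos - (nn_x[:, None] + nn_y[None, :]) / 2.0
```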
+
+# 3.3 Monolingual STS Evaluation
+
+We evaluate the performance of MoCo-BERT for STS without training on any labeled STS data, following the procedure of Reimers and Gurevych (2019). All results are based on $\mathrm{BERT}_{\mathrm{base}}$ . Given a pair of English sentences, the semantic similarity is computed with a simple dot product. We also report results using labeled natural language inference (NLI) data: a two-layer MLP with 256 hidden units and a 3-way classification head is added on top of the sentence representations, and the training sets of SNLI (Bowman et al., 2015) and MNLI (Williams et al., 2018) are used for multi-task training. See Appendix B for the detailed setup.
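A hedged sketch of the NLI probe head described above. The concatenation featurization, the ReLU nonlinearity, and the random placeholder weights are assumptions for illustration; only the 256 hidden units and the 3-way head come from the text, and the paper's exact setup is in its Appendix B.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, HIDDEN, N_CLASSES = 768, 256, 3

# Random placeholder weights; in practice these are trained on SNLI/MNLI.
W1 = rng.normal(scale=(2 * DIM) ** -0.5, size=(2 * DIM, HIDDEN))
W2 = rng.normal(scale=HIDDEN ** -0.5, size=(HIDDEN, HIDDEN))
W3 = rng.normal(scale=HIDDEN ** -0.5, size=(HIDDEN, N_CLASSES))

def nli_logits(h_premise, h_hypothesis):
    """Two-layer MLP (256 hidden units) plus a 3-way classification head
    on top of the pair of sentence representations."""
    x = np.concatenate([h_premise, h_hypothesis])  # assumed featurization
    h = np.maximum(x @ W1, 0.0)
    h = np.maximum(h @ W2, 0.0)
    return h @ W3  # argmax -> {entailment, neutral, contradiction}

logits = nli_logits(rng.normal(size=DIM), rng.normal(size=DIM))
```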
+
+As pointed out by Gao et al. (2021), existing works follow inconsistent evaluation protocols, which may lead to unfair comparisons. We report results for both the "weighted mean" (wmean) and "all" settings (Gao et al., 2021) in Tables 3 and 8, respectively.
+
+When training on translation ranking task only, MoCo-BERT improves the average correlation from 67.67 to 76.50 (+8.83). With labeled NLI supervision, MoCo-BERT+NLI advances state-of-the-art from 77.07 to 78.95 (+1.88).
+
+# 3.4 Model Analysis
+
+We conduct a series of experiments to better understand the behavior of MoCo-BERT. Unless explicitly mentioned otherwise, we use a memory queue size of 204800 for efficiency.
+
+Memory queue size One primary motivation of MoCo is to introduce more negatives to improve
+
+| Model | STS-12 | STS-13 | STS-14 | STS-15 | STS-16 | STS-B | SICK-R | Avg |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| *w/o labeled NLI supervision* | | | | | | | | |
+| Avg GloVe† | 55.14 | 70.66 | 59.73 | 68.25 | 63.66 | 58.02 | 53.76 | 61.32 |
+| BERTbase [CLS]† | 20.16 | 30.01 | 20.09 | 36.88 | 38.08 | 16.05 | 42.63 | 29.19 |
+| BERTbase-flow | 59.54 | 64.69 | 64.66 | 72.92 | 71.84 | 58.56 | 65.44 | 65.38 |
+| IS-BERTbase | 56.77 | 69.24 | 61.21 | 75.23 | 70.16 | 69.21 | 64.25 | 66.58 |
+| BERTbase-whitening♣ | 61.46 | 66.71 | 66.17 | 74.82 | 72.10 | 67.51 | 64.90 | 67.67 |
+| MoCo-BERTbase | 68.85 | 77.52 | 75.85 | 83.14 | 80.15 | 77.50 | 72.48 | 76.50 |
+| *w/ labeled NLI supervision* | | | | | | | | |
+| InferSent | 52.86 | 66.75 | 62.15 | 72.77 | 66.87 | 68.03 | 65.65 | 65.01 |
+| SBERTbase-NLI† | 68.70 | 74.37 | 74.73 | 79.65 | 75.21 | 77.63 | 74.84 | 75.02 |
+| BERTbase-flow | 67.75 | 76.73 | 75.53 | 80.63 | 77.58 | 79.10 | 78.03 | 76.48 |
+| BERTbase-whitening♣ | 69.87 | 77.11 | 76.13 | 82.73 | 78.08 | 79.16 | 76.44 | 77.07 |
+| MoCo-BERTbase+NLI | 71.66 | 79.42 | 76.37 | 84.08 | 80.81 | 82.15 | 78.19 | 78.95 |
+
+
+Figure 2: Average Spearman's correlation across 7 STS datasets for different memory queue sizes. The performance does not appear to saturate even with a queue size as large as $409k$ . We do not run experiments beyond $409k$ as it reaches the GPU memory limit.
+
+the quality of the learned representations. In Figure 2, as expected, the performance consistently increases as the memory queue becomes larger. For visual representation learning, the performance usually saturates with queue size $\sim$ 65536 (He et al., 2020), but the ceiling is much higher in our case. Also notice that the model can still reach 72.03 with a small batch size 256, which might be because the encoders have already been pre-trained with MLM.
+
+Temperature A lower temperature $\tau$ in InfoNCE
+
+Table 3: Spearman's correlation for 7 STS datasets downloaded from SentEval (Conneau and Kiela, 2018). We report "weighted mean" (wmean) from SentEval toolkit. Baseline systems include $\mathrm{BERT}_{\mathrm{base}}$ -flow (Li et al., 2020), IS-BERTbase (Zhang et al., 2020), $\mathrm{BERT}_{\mathrm{base}}$ -whitening♣ (Su et al., 2021), and InferSent (Conneau et al., 2017). †: from Reimers and Gurevych (2019).
+
+| Temperature | 0.01 | 0.04 | 0.07 | 0.1 |
+| --- | --- | --- | --- | --- |
+| STS Avg | 74.80 | 76.20 | 74.23 | 69.81 |
+| BUCC F1 | 90.76 | 93.14 | 90.42 | 77.04 |
+
+loss makes the model focus more on hard negative examples, but it also risks over-fitting to label noise. Table 4 shows that $\tau$ can dramatically affect downstream performance, with $\tau = 0.04$ giving the best results on both the STS and BUCC bitext mining tasks. The optimal $\tau$ is likely task-specific.
+
+Table 4: Performance of our proposed MoCo-BERT under different temperatures.
+
+| Model | STS Avg | BUCC F1 |
+| --- | --- | --- |
+| MoCo-BERT | 76.20 | 93.14 |
+| w/o momentum | -0.01 | 0.00 |
+
+Table 5: Ablation results for momentum update mechanism. w/o momentum shares the parameters between the momentum encoder and the base encoder.
+
+Momentum Update We also empirically verify whether the momentum update mechanism is really necessary. Momentum update provides a more consistent matching target but also complicates the training procedure. As Table 5 shows, without momentum update the model simply fails to converge, with the training loss oscillating back and forth. The resulting Spearman's correlation is virtually the same as random predictions.
+
+| Pooling | STS Avg | BUCC F1 |
+| --- | --- | --- |
+| mean pooling | 76.20 | 93.14 |
+| max pooling | 75.90 | 92.78 |
+| [CLS] | 75.97 | 92.47 |
+
+Table 6: Performance comparison between different pooling mechanisms for MoCo-BERT.
+
+Pooling mechanism Though the standard practice for fine-tuning BERT (Devlin et al., 2019) is to directly use the hidden state of the [CLS] token, Reimers and Gurevych (2019) and Li et al. (2020) have shown that pooling mechanisms matter for downstream STS tasks. We experiment with mean pooling, max pooling, and the [CLS] embedding, with results listed in Table 6. Consistent with Reimers and Gurevych (2019), mean pooling has a slight but largely negligible advantage over the other methods.
+
+In Appendix C, we also showcase some visualization and sentence retrieval results.
+
+# 4 Related Work
+
+Multilingual representation learning aims to jointly model multiple languages. Such representations are crucial for multilingual neural machine translation (Aharoni et al., 2019), zero-shot cross-lingual transfer (Artetxe and Schwenk, 2019b), and cross-lingual semantic retrieval (Yang et al., 2020) etc. Multilingual BERT (Pires et al., 2019) simply pre-trains on the concatenation of monolingual corpora and shows good generalization for tasks like cross-lingual text classification (Hu et al., 2020). Another line of work explicitly aligns representations from language-specific models, either unsupervised (Lample et al., 2018a) or supervised (Reimers and Gurevych, 2020; Feng et al., 2020).
+
+Contrastive learning works by pulling positive instances closer and pushing negatives far apart. It has achieved great successes in self-supervised vision representation learning, including SimCLR (Chen et al., 2020a), MoCo (He et al., 2020; Chen et al., 2020b), BYOL (Grill et al., 2020), CLIP (Radford et al., 2021) etc. Recent efforts introduced contrastive learning into various NLP tasks (Xiong et al., 2020; Giorgi et al., 2020; Chi et al., 2021; Gunel et al., 2020). Concurrent to our work, SimCSE (Gao et al., 2021) uses dropout and hard negatives from NLI datasets for contrastive
+
+sentence similarity learning, Sentence-T5 (Ni et al., 2021) outperforms SimCSE by scaling to larger models, and xMoCo (Yang et al., 2021) adopts a similar variant of MoCo for open-domain question answering.
+
+Semantic textual similarity is a long-standing NLP task. Early approaches (Seco et al., 2004; Budanitsky and Hirst, 2001) use lexical resources such as WordNet to measure the similarity of texts. A series of SemEval shared tasks (Agirre et al., 2012, 2014) provide a suite of benchmark datasets that are now widely used for evaluation. Since obtaining large amounts of high-quality STS training data is non-trivial, most STS models rely on weak supervision, including conversations (Yang et al., 2018), NLI (Conneau et al., 2017; Reimers and Gurevych, 2019), and QA pairs (Ni et al., 2021).
+
+# 5 Conclusion
+
+This paper proposes a novel method that aims to solve the easy negatives issue to better align cross-lingual sentence representations. Extensive experiments on multiple cross-lingual and monolingual evaluation datasets show the superiority of the resulting representations. For future work, we would like to explore other contrastive learning methods (Grill et al., 2020; Xiong et al., 2020) and experiment with more downstream tasks, including paraphrase mining, text clustering, and bilingual lexicon induction.
+
+# Acknowledgements
+
+We would like to thank three anonymous reviewers for their valuable comments, and EMNLP 2021 organizers for their efforts. We also want to thank Yueya He for useful suggestions on an early draft of this paper.
+
+# References
+
+Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2014. Semeval-2014 task 10: Multilingual semantic textual similarity. In Proceedings of the 8th international workshop on semantic evaluation (SemEval 2014), pages 81-91.
+Eneko Agirre, Daniel Cer, Mona Diab, and Aitor Gonzalez-Agirre. 2012. Semeval-2012 task 6: A pilot on semantic textual similarity. In *SEM 2012:
+
+The First Joint Conference on Lexical and Computational Semantics-Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012), pages 385-393.
+Roee Aharoni, Melvin Johnson, and Orhan Firat. 2019. Massively multilingual neural machine translation. In NAACL-HLT.
+Mikel Artetxe and Holger Schwenk. 2019a. Margin-based parallel corpus mining with multilingual sentence embeddings. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3197-3203.
+Mikel Artetxe and Holger Schwenk. 2019b. Massively multilingual sentence embeddings for zero-shot cross-lingual transfer and beyond. Transactions of the Association for Computational Linguistics, 7:597-610.
+Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. In EMNLP.
+Alexander Budanitsky and Graeme Hirst. 2001. Semantic distance in wordnet: An experimental, application-oriented evaluation of five measures. In Workshop on WordNet and other lexical resources, volume 2, pages 2-2.
+Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020a. A simple framework for contrastive learning of visual representations. In International conference on machine learning, pages 1597-1607. PMLR.
+Xinlei Chen, Haoqi Fan, Ross Girshick, and Kaiming He. 2020b. Improved baselines with momentum contrastive learning. arXiv preprint arXiv:2003.04297.
+Zewen Chi, Li Dong, Furu Wei, Nan Yang, Saksham Singhal, Wenhui Wang, Xia Song, Xian-Ling Mao, He-Yan Huang, and Ming Zhou. 2021. Infoxlm: An information-theoretic framework for cross-lingual language model pre-training. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3576-3588.
+Alexis Conneau and Douwe Kiela. 2018. Senteval: An evaluation toolkit for universal sentence representations. ArXiv, abs/1803.05449.
+Alexis Conneau, Douwe Kiela, Holger Schwenk, Loic Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In EMNLP.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of
+
+deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
+Fangxiaoyu Feng, Yinfei Yang, Daniel Cer, Naveen Arivazhagan, and Wei Wang. 2020. Language-agnostic bert sentence embedding. arXiv preprint arXiv:2007.01852.
+Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. Simcse: Simple contrastive learning of sentence embeddings. arXiv preprint arXiv:2104.08821.
+John M Giorgi, Osvald Nitski, Gary D Bader, and Bo Wang. 2020. Declutr: Deep contrastive learning for unsupervised textual representations. arXiv preprint arXiv:2006.03659.
+Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre H Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Daniel Guo, Mohammad Gheshlaghi Azar, et al. 2020. Bootstrap your own latent: A new approach to self-supervised learning. arXiv preprint arXiv:2006.07733.
+Belize Gunel, Jingfei Du, Alexis Conneau, and Veselin Stoyanov. 2020. Supervised contrastive learning for pre-trained language model fine-tuning. In International Conference on Learning Representations.
+Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. 2020. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9729-9738.
+J. Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and M. Johnson. 2020. Xtreme: A massively multilingual multi-task benchmark for evaluating cross-lingual generalization. ArXiv, abs/2003.11080.
+Guillaume Lample, Alexis Conneau, Marc'Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2018a. Word translation without parallel data. In International Conference on Learning Representations.
+Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, and Marc'Aurelio Ranzato. 2018b. Phrase-based & neural unsupervised machine translation. ArXiv, abs/1804.07755.
+Phuc H Le-Khac, Graham Healy, and Alan F Smeaton. 2020. Contrastive representation learning: A framework and review. IEEE Access.
+Bohan Li, Hao Zhou, Junxian He, Mingxuan Wang, Yiming Yang, and Lei Li. 2020. On the sentence embeddings from pre-trained language models. arXiv preprint arXiv:2011.05864.
+
+Fuli Luo, Wei Wang, Jiahao Liu, Yijia Liu, Bin Bi, Songfang Huang, Fei Huang, and Luo Si. 2020. Veco: Variable encoder-decoder pre-training for cross-lingual understanding and generation. arXiv preprint arXiv:2010.16046.
+L. V. D. Maaten and Geoffrey E. Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research, 9:2579-2605.
+Jianmo Ni, Noah Constant, Ji Ma, Keith B Hall, Daniel Cer, Yinfei Yang, et al. 2021. Sentence-t5: Scalable sentence encoders from pre-trained text-to-text models. arXiv preprint arXiv:2108.08877.
+Aaron van den Oord, Yazhe Li, and Oriol Vinyals. 2018. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748.
+T. Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual BERT? ArXiv, abs/1906.01502.
+A. Radford and Karthik Narasimhan. 2018. Improving language understanding by generative pre-training.
+Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. arXiv preprint arXiv:2103.00020.
+Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In EMNLP/IJCNLP.
+Nils Reimers and Iryna Gurevych. 2020. Making monolingual sentence embeddings multilingual using knowledge distillation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4512-4525.
+Nuno Seco, Tony Veale, and Jer Hayes. 2004. An intrinsic information content metric for semantic similarity in WordNet. In ECAI, volume 16, page 1089.
+Jianlin Su, Jiarun Cao, Weijie Liu, and Yangyiwen Ou. 2021. Whitening sentence representations for better semantics and faster retrieval. ArXiv, abs/2103.15316.
+Zirui Wang, Jiateng Xie, Ruochen Xu, Yiming Yang, Graham Neubig, and J. Carbonell. 2020. Crosslingual alignment vs joint training: A comparative study and a simple unified framework. ArXiv, abs/1910.04708.
+Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122.
+
+Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul Bennett, Junaid Ahmed, and Arnold Overwijk. 2020. Approximate nearest neighbor negative contrastive learning for dense text retrieval. arXiv preprint arXiv:2007.00808.
+Nan Yang, Furu Wei, Binxing Jiao, Daxin Jiang, and Linjun Yang. 2021. xmoco: Cross momentum contrastive learning for open-domain question answering.
+Yinfei Yang, Daniel Matthew Cer, Amin Ahmad, Mandy Guo, Jax Law, Noah Constant, G. Ábrego, Steve Yuan, C. Tar, Yun-Hsuan Sung, B. Strope, and R. Kurzweil. 2020. Multilingual universal sentence encoder for semantic retrieval. In ACL.
+Yinfei Yang, Steve Yuan, Daniel Cer, Sheng-yi Kong, Noah Constant, Petr Pilar, Heming Ge, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2018. Learning semantic textual similarity from conversations. In Proceedings of The Third Workshop on Representation Learning for NLP, pages 164-174.
+Yan Zhang, Ruidan He, Zuozhu Liu, Kwan Hui Lim, and Lidong Bing. 2020. An unsupervised sentence embedding method by mutual information maximization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1601-1610.
+Pierre Zweigenbaum, Serge Sharoff, and Reinhard Rapp. 2018. Overview of the third BUCC shared task: Spotting parallel sentences in comparable corpora. In Proceedings of the 11th Workshop on Building and Using Comparable Corpora, pages 39-42.
+
+# A Details on Training Data and Hyperparameters
+
+| Dataset | # of sents | # of sampled |
| --- | --- | --- |
| Tatoeba | 46k | 46k |
| News Commentary | 320k | 320k |
| UNCorpus | 16M | 1M |
| CWMT-neu2017 | 2M | 2M |
| CWMT-casia2015 | 1M | 1M |
| CWMT-casict2015 | 2M | 1M |
+
+Table 7: List of parallel corpora used. "# of sampled" is the size of the subset randomly drawn from the corresponding dataset to keep the training cost manageable. Duplicates are removed during preprocessing.
+
+We list all the parallel corpora used by this paper in Table 7. Hyperparameters are available in Table 9. We start with the default hyperparameters from MoCo (He et al., 2020) and use grid search to find the optimal values for several hyperparameters. The specific search ranges are $\{10^{-5}, 2 \times 10^{-5},$
+
+| Model | STS-12 | STS-13 | STS-14 | STS-15 | STS-16 | STS-B | SICK-R | Avg |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| *w/o labeled NLI supervision* | | | | | | | | |
| BERTbase-flow | 58.40 | 67.10 | 60.85 | 75.16 | 71.22 | 68.66 | 64.47 | 66.55 |
| BERTbase-whitening | 57.83 | 66.90 | 60.90 | 75.08 | 71.31 | 68.24 | 63.73 | 66.28 |
| MoCo-BERTbase | 70.99 | 76.51 | 73.17 | 82.09 | 78.32 | 77.50 | 72.48 | 75.87 |
| *w/ labeled NLI supervision* | | | | | | | | |
| SBERTbase-NLI | 70.97 | 76.53 | 73.19 | 79.09 | 74.30 | 77.03 | 72.91 | 74.89 |
| BERTbase-flow | 69.78 | 77.27 | 74.35 | 82.01 | 77.46 | 79.12 | 76.21 | 76.60 |
| BERTbase-whitening | 69.65 | 77.57 | 74.66 | 82.27 | 78.39 | 79.52 | 76.91 | 77.00 |
| MoCo-BERTbase+NLI | 76.07 | 78.33 | 74.51 | 84.19 | 78.74 | 82.15 | 78.19 | 78.88 |
+
+Table 8: Spearman's correlation for 7 STS datasets under the "all" evaluation setting (Gao et al., 2021). We use the official script from SimCSE.
+
+| Hyperparameter | Value |
| --- | --- |
| # of epochs | 15 |
| # of GPUs | 4 |
| queue size | 409k |
| temperature $\tau$ | 0.04 |
| momentum coefficient | 0.999 |
| learning rate | $4 \times 10^{-5}$ |
| gradient clip | 10 |
| warmup steps | 400 |
| batch size | 1024 |
| dropout | 0.1 |
| weight decay | $10^{-4}$ |
| pooling | mean |
+
+Table 9: Hyperparameters for our proposed model.
+
+$4 \times 10^{-5}\}$ for learning rate, $\{102k, 204k, 409k\}$ for queue size, $\{0.01, 0.04, 0.07, 0.1\}$ for temperature, and $\{0.9999, 0.999, 0.99\}$ for momentum coefficient. The entire training process takes approximately 15 hours on 4 V100 GPUs with automatic mixed precision support from PyTorch.
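
The sweep over these ranges is a plain Cartesian-product grid search. A minimal sketch, where `evaluate` is a hypothetical stand-in for a full training-plus-validation run:

```python
from itertools import product

# Candidate values reported in the text.
learning_rates = [1e-5, 2e-5, 4e-5]
queue_sizes = [102_000, 204_000, 409_000]
temperatures = [0.01, 0.04, 0.07, 0.1]
momenta = [0.9999, 0.999, 0.99]

def grid_search(evaluate):
    """Return the configuration maximizing `evaluate` (a stand-in for a
    full training + validation run) over the Cartesian product of ranges."""
    best_cfg, best_score = None, float("-inf")
    for lr, q, tau, m in product(learning_rates, queue_sizes,
                                 temperatures, momenta):
        score = evaluate(lr=lr, queue_size=q, temperature=tau, momentum=m)
        if score > best_score:
            best_cfg, best_score = (lr, q, tau, m), score
    return best_cfg, best_score
```

Ties are broken in favor of the first configuration encountered.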
+
+# B Multi-task with NLI
+
+Given a premise $x_{p}$ and a hypothesis $x_{h}$, the sentence representations are computed as stated in the paper. Then, a two-layer MLP with 256 hidden units, ReLU activation, and a 3-way classification head is added on top of the sentence representations. Dropout 0.1 is applied to the hidden units. The loss function $\mathrm{L}_{\mathrm{nli}}(x_p,x_h)$ is simply the cross-entropy between the gold label and the softmax outputs. The model is jointly optimized with the following objective:
+
+$$
+\min \mathrm {L} (x, y) + \mathrm {L} (y, x) + \alpha \mathrm {L} _ {\mathrm {n l i}} \left(x _ {p}, x _ {h}\right) \tag {5}
+$$
+
+where $\alpha$ balances the different training objectives; we set $\alpha = 0.1$ empirically. The batch size
+
+for the NLI loss is 128. The training set is the union of the SNLI (Bowman et al., 2015) and MNLI (Williams et al., 2018) datasets (~1M sentence pairs).
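
The classification head and joint objective can be sketched in a few lines of NumPy. The paper does not state how the premise and hypothesis representations enter the MLP, so the sketch assumes they are concatenated; all parameter shapes and names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
D, H = 768, 256  # sentence-embedding dim, MLP hidden units

# Hypothetical MLP parameters: two layers plus a 3-way classification head.
W1, b1 = rng.normal(0, 0.02, (2 * D, H)), np.zeros(H)
W2, b2 = rng.normal(0, 0.02, (H, 3)), np.zeros(3)

def nli_loss(u, v, gold):
    """Cross-entropy of the 3-way classifier over the concatenated
    premise/hypothesis representations (dropout omitted for brevity)."""
    h = np.maximum(np.concatenate([u, v]) @ W1 + b1, 0.0)  # ReLU layer
    logits = h @ W2 + b2
    logits -= logits.max()                                 # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum())
    return -log_probs[gold]

def joint_loss(l_xy, l_yx, u, v, gold, alpha=0.1):
    """Eq. (5): contrastive losses in both directions plus weighted NLI loss."""
    return l_xy + l_yx + alpha * nli_loss(u, v, gold)
```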
+
+# C Visualization of Sentence Representations
+
+To visualize the learned sentence representations, we use t-SNE (Maaten and Hinton, 2008) for dimensionality reduction. In Figure 3, we can see the representations of parallel sentences are very close, indicating that our proposed model is successful at aligning cross-lingual representations.
+
+In Table 10, we illustrate the results of monolingual sentence retrieval. Most top-ranked sentences indeed share similar semantics with the given query; this paves the way for potential applications such as paraphrase mining.
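
The retrieval procedure behind Table 10 is plain cosine-similarity nearest-neighbor search. A minimal sketch, with toy vectors standing in for learned sentence representations:

```python
import numpy as np

def retrieve(query_vec, corpus_vecs, k=3):
    """Return (index, cosine similarity) of the k nearest corpus sentences,
    excluding exact matches (cosine == 1 within tolerance)."""
    q = query_vec / np.linalg.norm(query_vec)
    C = corpus_vecs / np.linalg.norm(corpus_vecs, axis=1, keepdims=True)
    sims = C @ q
    order = [i for i in np.argsort(-sims) if sims[i] < 1.0 - 1e-9]
    return [(int(i), float(sims[i])) for i in order[:k]]
```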
+
+
+Figure 3: t-SNE visualization of the representations of 15 random parallel sentences from the Tatoeba test set. For visualization purposes, points that are too close together are moved slightly apart. Zoom in for a better view.
+
+| Score | Retrieved sentence |
| --- | --- |
| | *query: I am willing to devote my life to education career.* |
| 0.853 | He dedicated his life to the cause of education. |
| 0.776 | He devoted his whole life to education. |
| 0.764 | She has dedicated herself to the cause of education. |
| | *query: The Committee resumed consideration of the item.* |
| 0.928 | The Committee continued consideration of the item. |
| 0.843 | The Committee resumed its consideration of this agenda item. |
| 0.686 | The Committee began its consideration of the item. |
| | *query: There are a great many books on the bookshelf.* |
| 0.837 | There are many books on the bookcase. |
| 0.690 | There is a heap of books on the table. |
| 0.655 | The bookshelf is crowded with books on different subjects. |
| | *query: Everyone has the privilege to be tried by a jury.* |
| 0.718 | They have the right to have their case heard by a jury. |
| 0.647 | Every defendant charged with a felony has a right to be charged by the Grand Jury. |
| 0.580 | Everyone has the right to be educated. |
+
+Table 10: Examples of sentence retrieval using learned representations. Given a query, we use cosine similarity to retrieve the 3 nearest neighbors (excluding exact match). The first column is the cosine similarity score between the query and retrieved sentences. The corpus is 1M random English sentences from the training data.
\ No newline at end of file
diff --git a/aligningcrosslingualsentencerepresentationswithdualmomentumcontrast/images.zip b/aligningcrosslingualsentencerepresentationswithdualmomentumcontrast/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..f968a9449f520a8cf471467a58db80a3b41e7a42
--- /dev/null
+++ b/aligningcrosslingualsentencerepresentationswithdualmomentumcontrast/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4cdf8ec478623e4ba072713676b4408fbd4d297621eef91642b8ca6f4a785da0
+size 583328
diff --git a/aligningcrosslingualsentencerepresentationswithdualmomentumcontrast/layout.json b/aligningcrosslingualsentencerepresentationswithdualmomentumcontrast/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..027ce2de7710a9818bf0078dddf8d2cef9d95cd2
--- /dev/null
+++ b/aligningcrosslingualsentencerepresentationswithdualmomentumcontrast/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fd385b8ca02b3012e72d9c533caa6a82c2cee0e387ae6c31a59dd2a88060f7b5
+size 298801
diff --git a/aligningmultidimensionalworldviewsanddiscoveringideologicaldifferences/c8d03de3-d76c-46a1-b4b4-344903867d11_content_list.json b/aligningmultidimensionalworldviewsanddiscoveringideologicaldifferences/c8d03de3-d76c-46a1-b4b4-344903867d11_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..73bc66f2579fdfe327cb5bf0331f7f66cdce7661
--- /dev/null
+++ b/aligningmultidimensionalworldviewsanddiscoveringideologicaldifferences/c8d03de3-d76c-46a1-b4b4-344903867d11_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:359a00cdb398418db47cad960122f526f718ef5ec253e60cefc51405e4d3d461
+size 91793
diff --git a/aligningmultidimensionalworldviewsanddiscoveringideologicaldifferences/c8d03de3-d76c-46a1-b4b4-344903867d11_model.json b/aligningmultidimensionalworldviewsanddiscoveringideologicaldifferences/c8d03de3-d76c-46a1-b4b4-344903867d11_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..a021b686bf5f26639ed2228645bb7b03b0a327d9
--- /dev/null
+++ b/aligningmultidimensionalworldviewsanddiscoveringideologicaldifferences/c8d03de3-d76c-46a1-b4b4-344903867d11_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6e6dd18a0cf189b1f84a1043f72e5a8d11c75344cafa6dc3d491e51765aed7f9
+size 107663
diff --git a/aligningmultidimensionalworldviewsanddiscoveringideologicaldifferences/c8d03de3-d76c-46a1-b4b4-344903867d11_origin.pdf b/aligningmultidimensionalworldviewsanddiscoveringideologicaldifferences/c8d03de3-d76c-46a1-b4b4-344903867d11_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..4d0a9cb87ad544f9dca4d421d86eca27fe9ddd4d
--- /dev/null
+++ b/aligningmultidimensionalworldviewsanddiscoveringideologicaldifferences/c8d03de3-d76c-46a1-b4b4-344903867d11_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:684fbffc184b3caa08c879df7535c08c39f5cda759ac8472ad843f1c9f210a8c
+size 1563748
diff --git a/aligningmultidimensionalworldviewsanddiscoveringideologicaldifferences/full.md b/aligningmultidimensionalworldviewsanddiscoveringideologicaldifferences/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..f89c5ea17478cc243bc936cdc38a983b51b4b7da
--- /dev/null
+++ b/aligningmultidimensionalworldviewsanddiscoveringideologicaldifferences/full.md
@@ -0,0 +1,413 @@
+# Aligning Multidimensional Worldviews and Discovering Ideological Differences
+
+Jeremiah Milbauer
+
+Language Technologies Institute, Carnegie Mellon University
+
+jmilbaue@cs.cmu.edu
+
+Adarsh Mathew
+
+Knowledge Lab, University of Chicago
+
+adarshm@uchicago.edu
+
+James Evans
+
+Knowledge Lab / Sociology, University of Chicago
+
+jevans@uchicago.edu
+
+# Abstract
+
+The Internet is home to thousands of communities, each with their own unique worldview and associated ideological differences. With new communities constantly emerging and serving as ideological birthplaces, battlegrounds, and bunkers, it is critical to develop a framework for understanding worldviews and ideological distinction. Most existing work, however, takes a predetermined view based on political polarization: the "right vs. left" dichotomy of U.S. politics. In reality, both political polarization – and worldviews more broadly – transcend one-dimensional difference, and deserve a more complete analysis. Extending the ability of word embedding models to capture the semantic and cultural characteristics of their training corpora, we propose a novel method for discovering the multifaceted ideological and worldview characteristics of communities. Using over 1B comments collected from the largest communities on Reddit.com representing $40\%$ of Reddit activity, we demonstrate the efficacy of this approach to uncover complex ideological differences across multiple axes of polarization.
+
+# 1 Introduction and Motivation
+
+"The limits of my language mean the limits of my world"
+
+Tractatus Logico-Philosophicus, 1921, Ludwig
+
+Wittgenstein
+
+Media choice, social networking platforms, and collaborative filtering on the internet have enabled individuals to enter "echo chambers" that reflect shared worldviews (Sunstein, 2018; Mutz, 2006; Bishop, 2009). The internet also publicly reveals these communities and their communication for analysts of language, culture and interaction at unprecedented scale. Despite the abundance of such data, however, analysis of worldviews and ideological difference has been dominated by considerations of "polarization" (Boxell et al., 2017; Bail
+
+
+Figure 1: By training the model to align "candidate", "politics", and "corrupt," a hypothesis alignment $f(\text{"trump"}|C_1) \approx f(\text{"clinton"}|C_2)$ emerges.
+
+et al., 2018), which impoverishes the comparison of ideologies by reducing them to pairs separated along a singular dimension.
+
+Here, we draw inspiration from the approach of interpretive anthropology and the focus of cognitive anthropology to represent, investigate and compare worldviews from community discourse. In the Interpretation of Cultures, Geertz rendered culture as "a system of inherited conceptions expressed in symbolic forms by means of which men communicate, perpetuate, and develop their knowledge about and attitudes toward life" (Geertz et al., 1973). Combined with cognitive anthropology's concern with how implicit knowledge changes the way people perceive and relate to the world (d'Andrade, 1995), this motivates assessment of worldviews through modern pre-trained natural language models that render words (Mikolov et al., 2013b; Pennington et al., 2014) and phrases (Devlin et al., 2019; Radford et al., 2019) in relation to one another as a function of their proximity in discourse. When pre-trained on the discourse of distinctive communities, these models have begun
+
+to enable a highly resolved evaluation of expressed worldviews – symbol systems that reveal shared patterns of attention and association (Kang and Evans, 2020).
+
+Based on the premise that the language a community uses carries markers of the culture of that community (Webson et al., 2020), recent work has demonstrated the ability of trained embedding models to uncover cultural values (Garg et al., 2018; Xie et al., 2019; Kozlowski et al., 2019). However, these approaches are limited in that they require significant researcher input to query the model for insights. Recent work has also demonstrated the potential to embed communities themselves (Waller and Anderson, 2021), but has not extended to a word-level understanding of community worldview.
+
+We instead model community language as a specific instance of an ideological dialect (or, an "ideolect")1. Using a similar approach to KhudaBukhsh et al. (2021), which identified single-axis polarized political "languages" on YouTube, we introduce a new method for unsupervised cultural analysis based on multilingual embedding alignment. Our method provides high-accuracy alignment, is the first to analyze multiple facets of ideological polarization, and readily enables analysis in a large multi-community setting – which we demonstrate by identifying multiple axes of ideological differences on Reddit.
+
+As an additional contribution, we publish a Github repository with all the code necessary to replicate this work and apply our methods in new settings. This repository also includes tables of results that were too long to reasonably include in this paper.
+
+# 2 Unsupervised Cultural Analysis
+
+In this section, we summarize previous approaches to the analysis of cultural values through word embedding models.
+
+# 2.1 The Queried Approach
+
+Early work on cultural analysis through word embeddings observed that the cultural values of a community or society are embedded within the text that community or society produces, and are discoverable via word embeddings (Garg et al., 2018). These values can then be queried by measuring the distance between word pairs.
+
+Measuring stereotypes By pre-selecting a set of "entity" words, and "value" words, researchers can measure how attitudes towards the selected entities differ over time, or across communities. This approach works by training an embedding model on a text corpus, and then computing the similarity between each query word and each value word. Garg et al. (2018) use this approach, with occupations comprising the entities and gender or ethnic categories as the values. Each pair thus represents the strength of a particular cultural value or stereotype. However, this approach is limited in that it is only able to discover the specific stereotypes queried by the researchers; it is unable to discover cultural values on its own.
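
Concretely, this queried approach reduces to a cosine similarity between an entity vector and a value vector in the trained embedding. A minimal sketch, with hypothetical toy vectors in place of a trained model:

```python
import numpy as np

def association_score(emb, entity, value):
    """Cosine similarity between an entity word and a value word in a
    trained embedding `emb` (a dict mapping word -> vector)."""
    e, v = emb[entity], emb[value]
    return float(e @ v / (np.linalg.norm(e) * np.linalg.norm(v)))
```

Comparing `association_score` across value words (e.g. gender terms for an occupation, as in Garg et al., 2018) yields the strength of a queried stereotype.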
+
+Axes of polarization Another approach to unsupervised cultural analysis is introduced by Kozlowski et al. (2019). In this method, two words representing the opposite poles of a particular cultural value (such as "rich" and "poor") are selected. Entity words, such as the names of different sports, are then projected to an axis drawn between the polar words.
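
This projection can be sketched directly: normalize the difference between the two pole vectors and take its dot product with the entity vector. The toy embedding below is illustrative, not taken from any trained model:

```python
import numpy as np

def axis_projection(emb, entity, pole_pos, pole_neg):
    """Scalar position of `entity` along the cultural axis running from
    `pole_neg` (e.g. "poor") to `pole_pos` (e.g. "rich")."""
    axis = emb[pole_pos] - emb[pole_neg]
    axis /= np.linalg.norm(axis)
    return float(emb[entity] @ axis)
```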
+
+Although such models have the capacity to render worldviews as high dimensional spaces, research typically compares representations only selectively in terms of a modest set of keywords queried and compared between models. In these cases, the keywords are typically manually selected according to a predetermined notion of which words may exhibit polarization, and compared with words that are pre-selected to encode cultural values, essentially producing a cultural relatedness score for a given (Entity, Value) pair in some corpus:
+
+$$
+\text {Entity}, \text {Value} \rightarrow \text {Score} _ {C}
+$$
+
+This method can then be used to identify differences between corpora:
+
+$$
+\text {Entity}, \text {Value} \rightarrow \text {Score} _ {C _ {1}} - \text {Score} _ {C _ {2}}
+$$
+
+# 2.2 Toward less supervision
+
+Xie et al. (2019) make progress on this issue by introducing the use of the Moral Foundations Dictionary (Graham et al., 2009) to approximate moral categories. Using a trained embedding model, they assign each word to its nearest cluster of Moral Foundations Words, and measure differences in terms of a word's movement between clusters across distinct corpora. This approach has two key advantages: it does not presuppose the relevant cultural values (the Moral Foundations Dictionary is designed to be comprehensive), and it allows the moral categories to be specific to each corpus's embedding.
+
+With this approach, we are now able to evaluate the relevance of each word to the moral differences between communities $C_1$ , $C_2$ , as:
+
+$$
+C _ {1}, C _ {2} \rightarrow \{\text {MoralDifference} (w) \mid \forall w \in V \}
+$$
+
+This set of scored words thus represents the cultural differences between two communities. However, it too is limited in expressivity by the reliance on the Moral Foundations Dictionary's list of moral categories.
+
+# 2.3 Aligning Ideological Dialects
+
+Rather than rely on the previous query-value paradigm, we achieve fully unsupervised cultural analysis through the use of multilingual embedding alignment. We explicitly model corpus-specific ideological dialects using techniques designed for multilingual word embedding alignment, to learn a translation function $\mathcal{F}$ from each embedding to the joint space. Then, for any two corpora, each word has an alignment score:
+
+$$
+\operatorname {AlignmentScore} (w) = d \left(\mathcal {F} \left(w \mid C _ {1}\right), \mathcal {F} \left(w \mid C _ {2}\right)\right)
+$$
+
+This ultimately yields a similar set of scores to the Moral-Foundations approach:
+
+$$
+C _ {1}, C _ {2} \rightarrow \{\text {AlignmentScore} (w) \mid \forall w \in V \}
+$$
+
+with two important benefits: our model requires no moral supervision, and can discover more than just moral differences. By contrasting semantic models per se, we automatically discover ideological differences in a multi-community corpus.
+
+Additionally, for a given word $w_{1}$ in $C_1$, we can compute the nearest image $w_{2}$ in $C_2$, such that
+
+$$
+w _ {2} = \arg \min _ {w} d \left(\mathcal {F} \left(w _ {1} \mid C _ {1}\right), \mathcal {F} \left(w \mid C _ {2}\right)\right)
+$$
+
+This represents the hypothesis of a conceptual substitution between communities, yielding a high-resolution comparison of the worldviews, ideologies, and cultural differences between two communities, without any supervision. Figure 1 illustrates this idea in the context of a conservative political community (bottom panel, in red) and a liberal one (top panel, in blue). Worldviews are seen anchored by the words "corrupt", "politics", and "candidate", and an alignment between the semantics of "clinton" and "trump" emerges.
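
Assuming both embedding matrices have already been mapped into the joint space by $\mathcal{F}$ and share a row-aligned vocabulary, the alignment score and the nearest-image hypothesis reduce to a few lines of NumPy. Euclidean distance is used for $d$ here, which is an assumption; any metric would do:

```python
import numpy as np

def alignment_scores(A, B):
    """Distance between each shared word's image in the two aligned spaces;
    row i of A and row i of B correspond to the same word."""
    return np.linalg.norm(A - B, axis=1)

def nearest_image(A, B, i):
    """Index of the word in community b whose embedding is closest to word
    i's embedding from community a: the hypothesized conceptual substitution."""
    return int(np.argmin(np.linalg.norm(B - A[i], axis=1)))
```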
+
+# 3 Data
+
+Reddit serves as the primary source of data for this project. The platform is structured as a collection of peer-driven communities called "subreddits," ostensibly self-regulated by norms decided upon by members of the subreddit and enforced by moderators. All users are anonymous, can be a part of multiple subreddits, and are free to create their own. As such, user comments serve as a rich source of conversation and discourse across varied interests and topics, organized into communities of self-selected individuals.
+
+The structure of Reddit lends itself to a community-focused analysis of language, with the site's use of self-enforced boundaries allowing us to observe discourse across groups without having to define the notion of a group ourselves. Instead, we rely on every user's own choice about where they wish to engage, and where to post their comments. This multi-community setting has been exploited in the past by researchers, with Tan and Lee (2015) exploring the contours of multi-community engagement and the widening of interests via a user's exploration of different subreddits over time. Rajadesingan et al. (2020) explore the norms of interaction dictated and enforced by multiple "toxic" subreddits, showcasing how self-selection and pre-entry learning play a key role in sustaining these norms. Kumar et al. (2018) explicitly study negative mobilizations between different subreddits as conflict, finding that they tend to occur between communities that are highly similar in content.
+
+We use data from Reddit for the period 2016-2019, and select 32 subreddits from the largest communities to study, representing between 30 and $40\%$ of the site's monthly activity. We rely on the Reddit dumps ingested by Pushshift as described in Baumgartner et al. (2020), which we accessed in January of 2020. These dumps contain comment
+
+| Year | Comments | Tokens | GB |
| --- | --- | --- | --- |
| 2016 | 242.65 M | 7032.87 M | 34.14 |
| 2017 | 256.76 M | 7429.59 M | 36.08 |
| 2018 | 266.12 M | 7707.22 M | 37.37 |
| 2019 | 288.36 M | 7977.69 M | 38.84 |
+
+Table 1: Number of comments, tokens, and gigabytes in the dataset
+
+activity across all of Reddit for each month. Given the delay in ingesting activity across all subreddits, some comments and users may be deleted before ingestion occurs. Additionally, users are given the opportunity to have their data excluded by submitting an opt-out request. Although the Pushshift dataset includes users' usernames, we scrub all information other than the text of each post before any pre-processing occurs. When discussing a specific community, we refer to it as "r/[community name]," as is customary on Reddit.
+
+Table 1 contains information about the size of our dataset after preprocessing.
+
+# 4 Modeling and Aligning Ideological Dialects
+
+We conceive of the alignment procedure as a matching of "conceptual anchors," designed to align the worldview of two communities. If two communities, $\mathcal{C}_a$ and $\mathcal{C}_b$ , have identical worldviews, we expect that structural relations between words will be preserved across the community boundary. However, if they have different worldviews, we would expect that the words central to that conflict would not align well, even when anchoring words are well-aligned.
+
+On notation For a community called $a$ , we typically use $\mathcal{C}_a$ to indicate the "language" of the community, $V_{a}$ for its vocabulary, $A$ to represent an embedding matrix trained on $\mathcal{C}_a$ , and $A_{i}$ to represent the word embedding of a word $w_{i}$ in $\mathcal{C}_a$ .
+
+# 4.1 Foundations
+
+In order to align and compare community-specific models, we turn to the literature on multilingual word embeddings. Broadly speaking, these works aim to learn a single embedding space in which synonymous words in different languages have the same embedding. Approaches to this problem vary, but typically either rely on training with parallel corpora in multiple languages, or aligning embeddings with the help of a multilingual lexicon. In our
+
+case, all data collected from different Reddit communities is in English – so we automatically have a complete parallel lexicon. As such, we choose to use the lexicon approach to align our different “ideolects.” Furthermore, this approach allows our work to be immediately useful to computational social scientists currently using out-of-the-box word embedding algorithms for cultural analysis.
+
+Most common is the bilingual case; given two languages, $\mathcal{L}_a$ and $\mathcal{L}_b$, we use a bilingual lexicon to learn two transformation functions, $f_{a\rightarrow c}$ and $f_{b\rightarrow c}$, such that for every translation pair with $i\in \mathcal{L}_a$ and $j\in \mathcal{L}_b$, $f_{a\rightarrow c}(\operatorname {Emb}(i)) = f_{b\rightarrow c}(\operatorname {Emb}(j))$. In this bilingual case, it is possible to set $c$ to $b$, and essentially learn a single transformation from one space to the other. In the multilingual case, one can learn a latent space into which all languages are projected, choose one language as the target for all the other languages' transformations, or learn direct pairwise bilingual transformations.
+
+In this work, we adapt approaches (Ammar et al., 2016; Mikolov et al., 2013a) developed for multilingual alignment to the cultural analysis use-case.
+
+After aligning worldviews with this approach, we then evaluate multiple dimensions of ideological difference by computing misalignment scores across different topics.
+
+# 4.2 Pre-processing
+
+We treat the posted comments of each community, in each year, as its own corpus. For each community-corpus in the dataset, we tokenize each of the comments posted in the community (without stemming or lemmatization), remove formatting tokens, reduce hyperlinks to just their surface forms, and make all characters lower case. We then run a basic phrase detection algorithm (Mikolov et al., 2013b), implemented in Gensim (Rehurek and Sojka, 2011), to detect common bigrams in each community.
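
The bigram pass can be sketched with the count-based scorer of Mikolov et al. (2013b). Gensim's default scorer additionally multiplies by the vocabulary size, which this simplified sketch omits; the `delta` and `threshold` values are illustrative:

```python
from collections import Counter

def detect_bigrams(sentences, delta=1, threshold=0.1):
    """Count-based phrase scoring in the style of Mikolov et al. (2013b):
    score(a, b) = (count(a b) - delta) / (count(a) * count(b)).
    Returns the set of bigrams whose score exceeds `threshold`."""
    unigrams, bigrams = Counter(), Counter()
    for sent in sentences:
        toks = [t.lower() for t in sent.split()]
        unigrams.update(toks)
        bigrams.update(zip(toks, toks[1:]))
    return {bg for bg, c in bigrams.items()
            if (c - delta) / (unigrams[bg[0]] * unigrams[bg[1]]) > threshold}
```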
+
+# 4.3 Training Word Embeddings
+
+We begin by training a word embedding model for each independent community. Here, we use Gensim's implementation of the Skip-gram model (Mikolov et al., 2013b). We train embeddings in both 100 and 300 dimensions; the following experiments were conducted with 100-dimensional embeddings. Given that the data is from a long-tailed forum community on the Internet, we use a maximum vocabulary of 30,000 words. In order to promote the stability of the embedding for
+
+each community-corpus, we over-sample sentences from smaller communities.
+
+# 4.4 Anchors
+
+In order to train an alignment between two embedding spaces, we must first construct a "bilingual" lexicon to anchor the alignment. All text in our corpora is in English, so we can easily construct a lexicon of size $N = |V|$, using the entire shared vocabulary of two trained embeddings as the anchoring words. However, it should be noted that the goal of our embedding alignment should not be maximum accuracy. We intend to use the trained alignment as a tool for cultural analysis by exploring the misaligned words, so we should not attempt to achieve a perfect map.
+
+We experiment with three distinct approaches to construct the bilingual lexicon. The first uses the entire shared vocabulary to anchor the alignment. The second uses a large set of stopwords – the 5000 most frequent words across the combined corpora. The third uses a smaller set of 1000 such stopwords.
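
All three lexicon-construction strategies can be sketched with frequency counts over the two corpora; the function name and `strategy` argument below are illustrative:

```python
from collections import Counter

def build_anchor_lexicon(corpus_a, corpus_b, strategy="all", n=1000):
    """Select anchor words shared by both vocabularies.
    strategy="all"  -> the full shared vocabulary;
    strategy="topn" -> the n most frequent words in the combined corpora
                       that appear in both vocabularies (n = 5000 or 1000
                       for the second and third variants in the text)."""
    counts_a = Counter(w for sent in corpus_a for w in sent)
    counts_b = Counter(w for sent in corpus_b for w in sent)
    shared = set(counts_a) & set(counts_b)
    if strategy == "all":
        return shared
    combined = counts_a + counts_b
    ranked = [w for w, _ in combined.most_common() if w in shared]
    return set(ranked[:n])
```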
+
+# 4.5 Topic Modeling
+
+In order to identify topic areas within which to measure misalignment, we implement a topic assignment procedure, inspired by the success of a simple embedding-based approach for Twitter data in Demszky et al. (2019). We learn word clusters using an embedding model trained on the union of the communities, with the scikit-learn (Pedregosa et al., 2011) implementation of KMeans++ (Arthur and Vassilvitskii, 2006). We then treat each word cluster as a topic.
+
+To validate these topics, we compute the core topics for each community by assigning each comment a topic label, and calculating the association of each topic with each community. For each topic $t$ and community $\mathcal{C}$ :
+
+$$
+\operatorname{Score}(t, \mathcal{C}) = \frac{P(t, \mathcal{C})}{P(t)\,P(\mathcal{C})} = \frac{P(t \mid \mathcal{C})}{P(t)}
+$$
+
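This score is a pointwise-mutual-information-style association; a worked sketch with toy comment-level topic assignments (all counts illustrative):

```python
from collections import Counter

# Toy (community, topic) label pairs, one per comment.
assignments = [
    ("r/gaming", "games"), ("r/gaming", "games"), ("r/gaming", "politics"),
    ("r/politics", "politics"), ("r/politics", "politics"), ("r/politics", "games"),
]
n = len(assignments)
joint = Counter(assignments)                       # counts of (C, t) pairs
topic_counts = Counter(t for _, t in assignments)  # counts of t
comm_counts = Counter(c for c, _ in assignments)   # counts of C

def score(topic, community):
    # Score(t, C) = P(t, C) / (P(t) * P(C)) = P(t | C) / P(t)
    return (joint[(community, topic)] / n) / (
        (topic_counts[topic] / n) * (comm_counts[community] / n)
    )

print(score("games", "r/gaming"))    # 4/3: over-represented in r/gaming
print(score("games", "r/politics"))  # 2/3: under-represented in r/politics
```
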
+Using these scores, we rank the topics of each community. Table 2 includes examples of some top topics for r/gaming, r/politics, and r/askmen – popular groups that discuss gaming, politics, and men's issues, respectively.
+
+Figure 4 in the Appendix includes a full comparison of topics similarities across communities. An interesting observation from this validation is that communities for which we hypothesize a strong
+
+| Community | Rank | Sample words |
+| --- | --- | --- |
+| r/gaming | 1 | doom, halo, zelda, ... |
+| r/gaming | 2 | os, hulu, apple, ... |
+| r/gaming | 3 | graphics, 480, nvidia, ... |
+| r/politics | 3 | states, president, ... |
+| r/politics | 4 | fdr, communist, ... |
+| r/politics | 5 | sent, emails, scandals, ... |
+| r/askmen | 1 | puberty, sex, tinder, ... |
+| r/askmen | 3 | younger, minded, ... |
+| r/askmen | 4 | old, lover, spouse, ... |
+
+Table 2: Randomly sampled words from top topics for a small selection of subreddits. Topics consisting primarily of administrative and moderation messages are omitted.
+
+ideological disagreement (such as r/politics and r/the_donald) nevertheless show a strong similarity in topic distribution.
+
+# 4.6 Alignment
+
+Once anchoring words have been selected (either by using all words, stop words, or non-salient topic words), we can train an alignment between embedding spaces. We choose to treat alignment as a linear transformation, $\mathcal{T}_{a\to b}\in \mathbb{R}^{d\times d}$ from one $d$ -dimensional vector space $A$ to another, $B$ , so $A\cdot \mathcal{T}_{a\rightarrow b} = B$ . This allows the learned transformation to be both compositional and invertible:
+
+$$
+A \cdot \mathcal {T} _ {a \rightarrow b} \cdot \mathcal {T} _ {b \rightarrow c} = C
+$$
+
+$$
+B \cdot \mathcal {T} _ {a \rightarrow b} ^ {- 1} = A
+$$
+
+These properties are important when creating multilingual embeddings, especially for low-resource languages. When there is no bilingual lexicon for a pair $\{\mathcal{L}_a, \mathcal{L}_b\}$, we can still learn transformations between them by passing through a high-resource language $\mathcal{L}_c$ like English:
+
+$$
+A \cdot \mathcal {T} _ {a \rightarrow c} \cdot \mathcal {T} _ {c \rightarrow b} = B
+$$
+
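These compositionality and invertibility properties can be verified numerically with random invertible maps (a sanity-check sketch, not the paper's code; random square matrices are invertible with probability 1):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
A = rng.normal(size=(10, d))    # word vectors in space A

T_ab = rng.normal(size=(d, d))  # transformation A -> B
T_bc = rng.normal(size=(d, d))  # transformation B -> C
B = A @ T_ab
C = B @ T_bc

# Compositional: A . T_ab . T_bc = C
assert np.allclose(A @ T_ab @ T_bc, C)

# Invertible: B . T_ab^{-1} = A
assert np.allclose(B @ np.linalg.inv(T_ab), A)

# Pivoting through a "high-resource" space: if T_cb = T_ac^{-1} . T_ab,
# then A . T_ac . T_cb = B, with no direct lexicon between a and b needed.
T_ac = rng.normal(size=(d, d))
T_cb = np.linalg.inv(T_ac) @ T_ab
assert np.allclose(A @ T_ac @ T_cb, B)
print("composition and inversion checks pass")
```
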
+In our case, because the linear transformation is an isomorphism, we also think of our work as an extension of the idea of analogies in Mikolov et al. (2013b), but at the community level.
+
+This compositionality also allows us to reduce the number of alignments to train, which is useful when performing experiments at scale. For a set of $N$ communities, describing the entire set requires $N^2$ alignments. By relying on compositionality, we need only train $N$ transformations: one between each community and the high-resource community. In our dataset, r/AskReddit is the highest-resource community, and thus the most appropriate analog to English in the multilingual setting.
+
+We consider three techniques for alignment: MultiCCA, developed by Ammar et al. (2016); a linear equation solver; and an SVD-based approach described in Smith et al. (2017). We experiment with each of these approaches across different anchoring-set sizes. In each case, we begin with two word embedding models trained on different community corpora, $\mathcal{C}_a$ and $\mathcal{C}_b$, each with its own vocabulary, $V_a$ and $V_b$. We then construct the set of potential anchoring words, $D^{a,b} \subset V^{a,b}$, where $V^{a,b} = V_a \cap V_b$. Our first anchoring strategy uses all words in $D^{a,b}$, our second uses only the 1000 most frequent words $(D_{1000}^{a,b})$, and our third uses the 5000 most frequent words $(D_{5000}^{a,b})$. We then construct two training matrices, $A'$ and $B'$, where $A_i' = \mathrm{Emb}_a(D_i)$ and $B_i' = \mathrm{Emb}_b(D_i)$, train the alignment using $A'$ and $B'$, and evaluate.
+
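Construction of the matched training matrices $A'$ and $B'$ can be sketched as follows (toy embedding dictionaries; names illustrative):

```python
import numpy as np

# Toy per-community embedding lookups.
emb_a = {"the": [1.0, 0.0], "doom": [0.0, 1.0], "of": [1.0, 1.0]}
emb_b = {"the": [2.0, 0.0], "doom": [0.0, 2.0], "senate": [3.0, 3.0]}

# Anchor set: here the full shared vocabulary, D = V_a intersect V_b.
D = sorted(set(emb_a) & set(emb_b))

# Row i of A' and row i of B' are the two communities' embeddings
# of the same anchor word D_i.
A_prime = np.array([emb_a[w] for w in D])
B_prime = np.array([emb_b[w] for w in D])
print(D, A_prime.shape, B_prime.shape)  # ['doom', 'the'] (2, 2) (2, 2)
```
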
+| Mode | N | Acc@1 | Acc@5 | Acc@10 |
+| --- | --- | --- | --- | --- |
+| ALL | ~30k | 0.6937 | 0.8335 | 0.8709 |
+| SW | 1000 | 0.6583 | 0.8037 | 0.8450 |
+| SW | 5000 | 0.6934 | 0.8306 | 0.8679 |
+
+Table 3: Performance of Least-squares alignments for the year 2016, measuring alignment to the shared space.
+
+| Mode | N | Acc@1 | Acc@5 | Acc@10 |
+| --- | --- | --- | --- | --- |
+| ALL | ~30k | 0.7284 | 0.8591 | 0.8924 |
+| SW | 1000 | 0.6234 | 0.7735 | 0.8174 |
+| SW | 5000 | 0.6984 | 0.8352 | 0.8718 |
+
+Table 4: Performance of MultiCCA alignments for the year 2016, measuring alignment to the shared space.
+
+| Mode | N | Acc@1 | Acc@5 | Acc@10 |
+| --- | --- | --- | --- | --- |
+| ALL | ~30k | 0.4980 | 0.6462 | 0.7091 |
+| SW | 1000 | 0.4798 | 0.6434 | 0.6991 |
+| SW | 5000 | 0.5203 | 0.6841 | 0.7374 |
+
+Table 5: Performance of SVD alignments for the year 2016, measuring pairwise alignment.
+
+MultiCCA For communities $\mathcal{C}_a$ and $\mathcal{C}_b$ , MultiCCA seeks to learn two projections to latent space $\mathcal{C}$ : $\mathcal{T}_{a\to c}$ and $\mathcal{T}_{b\to c}$ , in order to maximize the correlation of $A\cdot \mathcal{T}_{a\to c}$ and $B\cdot \mathcal{T}_{b\to c}$ . From these projections, we then recover the projection of interest $\mathcal{T}_{a\to b}$ :
+
+$$
+\mathcal {T} _ {a \rightarrow b} = \mathcal {T} _ {a \rightarrow c} \cdot \mathcal {T} _ {b \rightarrow c} ^ {- 1}
+$$
+
+We implement this approach using scikit-learn's cross_decomposition.CCA module (Pedregosa et al., 2011).
+
+Linear Equation Solver For $A$ and $B$ , the linear equation solver aims to learn $\mathcal{T}_{a\rightarrow b}$ by solving the equation $A\cdot \mathcal{T}_{a\rightarrow b} = B$ . We use NumPy's least-squares linear equation solver, `linalg.lstsq` (Harris et al., 2020).
+
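A minimal sketch of the least-squares solve, which recovers a known map exactly in the noiseless case (toy data):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_anchors = 5, 100

# Toy anchor matrices related by a known ground-truth map T.
T_true = rng.normal(size=(d, d))
A_prime = rng.normal(size=(n_anchors, d))
B_prime = A_prime @ T_true

# Solve A' . T = B' in the least-squares sense.
T_hat, residuals, rank, sv = np.linalg.lstsq(A_prime, B_prime, rcond=None)

print(np.allclose(T_hat, T_true))  # True: exact recovery without noise
```
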
+Singular Value Decomposition This method is employed by KhudaBukhsh et al. (2021) (albeit with many fewer anchoring words), and is described in Smith et al. (2017). Alignment is trained directly between community pairs, rather than between each community and a shared space. For this method, the projection is learned by solving $U\Sigma V^T = A^T B$ and setting $\mathcal{T}_{a\rightarrow b} = UV^T$ . We use NumPy's `linalg.svd` (Harris et al., 2020).
+
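This is the orthogonal Procrustes solution; a sketch that recovers a known orthogonal map from toy anchor matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_anchors = 5, 100

# Ground truth: an orthogonal map (Q factor of a random matrix).
Q_true, _ = np.linalg.qr(rng.normal(size=(d, d)))
A_prime = rng.normal(size=(n_anchors, d))
B_prime = A_prime @ Q_true

# Solve U Sigma V^T = A'^T B', then set T_ab = U V^T.
U, sigma, Vt = np.linalg.svd(A_prime.T @ B_prime)
T_ab = U @ Vt

print(np.allclose(T_ab, Q_true))  # True: the orthogonal map is recovered
```
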
+Evaluation For each pair of communities $\mathcal{C}_a$ and $\mathcal{C}_b$ , and each word $w_i \in V_a \cap V_b$ , we translate $w_i$ from $\mathcal{C}_a$ to $\mathcal{C}_b$ . Each $w_i$ has an embedding $A_i$ learned from $\mathcal{C}_a$ , an embedding $B_i$ learned from $\mathcal{C}_b$ , and an image $B_i'$ under alignment, where $B_i' = A_i \mathcal{T}_{a \to b}$ . We then find the $N$ nearest neighbors of $B_i'$ in $V_b$ , using cosine similarity. Acc@N is the proportion of $N$ -nearest-neighbor sets that contain $w_i$ . Tables 3, 4, and 5 contain the results of this evaluation for the year 2016, macro-averaged over each projection learned. Other years are included in the appendix.
+
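The Acc@N evaluation can be sketched as a cosine nearest-neighbor check (toy data; with a perfect alignment, every word's nearest neighbor is itself):

```python
import numpy as np

def acc_at_n(A, B, T, n=1):
    """Fraction of words w_i whose image A_i . T has B_i among its
    n nearest neighbors in B, by cosine similarity."""
    imgs = A @ T
    imgs = imgs / np.linalg.norm(imgs, axis=1, keepdims=True)
    B_unit = B / np.linalg.norm(B, axis=1, keepdims=True)
    sims = imgs @ B_unit.T                    # pairwise cosine similarities
    top_n = np.argsort(-sims, axis=1)[:, :n]  # n nearest neighbors per word
    return float(np.mean([i in top_n[i] for i in range(len(A))]))

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 10))
T = rng.normal(size=(10, 10))
B = A @ T  # a perfect alignment, so Acc@1 should be 1.0

print(acc_at_n(A, B, T, n=1))  # 1.0
```
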
+Discussion As might have been anticipated, the anchoring method that uses all available words is the most accurate. We also notice a trend of decreasing accuracy from 2016 to 2019, despite the increase in dataset size and therefore embedding stability. This suggests growing semantic differences between Reddit communities over time. For future experiments and evaluation, we use the 5000-anchor MultiCCA approach, which we found empirically to provide strong alignment accuracy without exposing the model to all of the data.
+
+# 4.7 Comparison with Previous Methods
+
+Unsupervised cultural analysis of this kind is an extremely recent development in the literature. However, previous methods can be adapted to provide a baseline for comparison. For the following comparisons, we select for analysis two communities with both a high degree of moral polarization and a known axis of polarization: r/politics and r/the_donald. We perform the comparison with data from the year 2017.
+
+We first compare against an approach described by Xie et al. (2019), which identifies changes in moral semantics across corpora. We use the technique to generate a set of misaligned words by identifying words that move from a positive to a negative moral category (and vice versa) between communities. We then rank the words by degree of movement. This method retrieves political words (defined as words falling into political topic clusters) with a MAP of 0.2247.
+
+For both our method and the method described by KhudaBukhsh et al. (2021), we follow the procedure for anchoring and training an alignment. For KhudaBukhsh et al. (2021), this means using SVD with NLTK stopwords (Bird et al., 2009). We then sort the misaligned word pairs by degree of alignment, and classify a word pair as political if either of the misaligned words is in one of the political clusters. KhudaBukhsh et al. (2021) achieves 0.3076 MAP; our method achieves 0.3318 MAP.
+
+# 5 Exploring Worldview and Ideology
+
+In this section, we use our method to perform a number of sociolinguistic explorations.
+
+# 5.1 Worldview Misalignment
+
+We begin by using the learned projection/alignment to identify “misaligned” words in a political context.
+
+We say that a word $w_{i}$ from $\mathcal{C}_a$ is "aligned" when the nearest neighbor of its image under alignment is the word itself:
+
+$$
+i = \arg \min _ {j} d (A _ {i} \cdot \mathcal {T} _ {a \rightarrow b}, B _ {j})
+$$
+
+And "misaligned" when it is not:
+
+$$
+i \neq \arg \min _ {j} d (A _ {i} \cdot \mathcal {T} _ {a \to b}, B _ {j})
+$$
+
+We anticipate that the words that ultimately misalign will be either words with low-quality embeddings (owing to low frequency in the corpus) or words with very polarized meanings across communities.
+
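This aligned/misaligned test is a nearest-neighbor check on the image of each word; a sketch in which two toy words swap positions between the spaces (all vectors illustrative):

```python
import numpy as np

def misaligned_words(words, A, B, T):
    """Return words whose nearest neighbor under the alignment
    (argmin of cosine distance) is not the word itself."""
    imgs = A @ T
    imgs = imgs / np.linalg.norm(imgs, axis=1, keepdims=True)
    B_unit = B / np.linalg.norm(B, axis=1, keepdims=True)
    nearest = np.argmax(imgs @ B_unit.T, axis=1)
    return [w for i, w in enumerate(words) if nearest[i] != i]

# "democrat" and "republican" swap positions across the two spaces,
# while "senate" keeps its position; the alignment itself is the identity.
words = ["democrat", "republican", "senate"]
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
T = np.eye(2)

print(misaligned_words(words, A, B, T))  # ['democrat', 'republican']
```
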
+Our first experiment, analyzing two politically misaligned corpora, is a typical area of inquiry (KhudaBukhsh et al., 2021; Xie et al., 2019; Webson et al., 2020). We select r/politics $(\mathcal{C}_a)$ , a general-purpose political discussion board with a strong liberal tendency, and r/the_donald $(\mathcal{C}_b)$ , a Trump-supporting and aggressively conservative community well known as a breeding ground for conspiracy theories, including PizzaGate (Kang, 2016). We begin by finding the vocabulary of shared words between r/politics and r/the_donald, and use our alignment algorithm to "translate" each word from r/politics to r/the_donald. Using MultiCCA, and r/askreddit $(\mathcal{C}_c)$ as the "high-resource" language, the translation is formulated as:
+
+$$
+\mathcal {T} _ {a \rightarrow x} \cdot \mathcal {T} _ {\text {shared} \rightarrow x} ^ {- 1} \cdot \mathcal {T} _ {\text {shared} \rightarrow y} \cdot \mathcal {T} _ {b \rightarrow y} ^ {- 1}
+$$
+
+Using this matrix transformation, we project all shared words from $\mathcal{C}_a$ to $\mathcal{C}_b$ . We also repeat this process in reverse.
+
+Querying this model for political words, we find a number of interesting misalignments, including the words which directly define the known axis of polarization: "democrat" and "republican." Table 6 contains a sample of misalignments from r/politics to r/the_donald. This demonstrates the ability of our method to identify the nature of polarization between two communities without any presuppositions about the communities.
+
+# 5.2 Conceptual Reflections
+
+While the approach described in section 5.1 is able to identify misaligned words and "translate" across the cultural boundary, we also consider another procedure: using the trained embedding alignments to identify the antonyms that describe an axis of semantic reflection between two communities. We use a predetermined set of antonym pairs from Miller (1995), and identify all instances where a word $w$ in $\mathcal{C}_a$ maps to its antonym in $\mathcal{C}_b$ .
+
+We apply this approach to the community pair of r/askwomen and r/askmen, forums that discuss women's and men's issues, respectively. Table 7 contains the top identified antonym pairs.
+
+Although the list is not exhaustive, we see that the antonym approach quickly identifies the gender axis between the two communities. A weakness of this approach is that many words, such as names and other proper nouns, may not be included in a predetermined set of antonyms.
+
+# 5.3 Conceptual Homomorphism
+
+There may exist two distinct communities of speakers that have similar worldviews and conceptual structures, but do not talk about the same things. A good example of this is the pair of communities
+
+| r/politics | r/the_donald | Alignment |
+| --- | --- | --- |
+| democrat | republican | 0.8562 |
+| republican | democrat | 0.8501 |
+| leftwing | rightwing | 0.8307 |
+| socialized_medicine | universal_healthcare | 0.8041 |
+| magas | libtards | 0.6578 |
+
+| r/the_donald | r/politics | Alignment |
+| --- | --- | --- |
+| republican | democrat | 0.8570 |
+| democrat | republican | 0.8527 |
+| prolife | prochoice | 0.8435 |
+| foxnews | cnn | 0.7960 |
+| pocahontas | elizabeth_warren | 0.6694 |
+
+Table 6: Selected words from r/politics and their nearest image under alignment in r/the_donald (left); selected words from r/the_donald and their nearest image under alignment in r/politics (right). Degree of alignment measured in cosine similarity.
+
+| r/askwomen | r/askmen | Alignment |
+| --- | --- | --- |
+| son | daughter | 0.7675 |
+| daughter | son | 0.7621 |
+| husband | wife | 0.7503 |
+| father | mother | 0.7445 |
+| brother | sister | 0.7145 |
+| girlfriend | boyfriend | 0.7032 |
+| wife | husband | 0.6941 |
+| boyfriend | girlfriend | 0.6708 |
+| uncle | aunt | 0.6314 |
+
+r/dota2 and r/leagueoflegends. Both of these communities are discussion boards centered around a "MOBA" (Multiplayer Online Battle Arena) video game, and both video games share a great deal of similarity. However, r/dota2 players and r/leagueoflegends players often see each other as rivals or enemies. By using our alignment technique, we demonstrate a use-case for bridging the conceptual gap between two similar communities and finding conceptual homomorphisms.
+
+By aligning the embeddings of two communities $\mathcal{C}_a$ and $\mathcal{C}_b$ , we can project words that are in $\mathcal{V}_a$ , but not $\mathcal{V}_b$ , from $A$ to $B$ , learning a semantic representation for an out-of-vocabulary word unknown to $\mathcal{C}_b$ . This projection yields $\mathcal{C}_b$ 's equivalent of $\mathcal{C}_a$ 's unique word. This is similar to unsupervised translation.
+
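A sketch of this out-of-vocabulary projection (toy embeddings and an identity stand-in for the learned alignment; all names illustrative):

```python
import numpy as np

def translate_oov(word, emb_a, emb_b, T):
    """Project a word unique to community a into community b's space and
    return community b's nearest word by cosine similarity."""
    v = np.asarray(emb_a[word]) @ T
    words_b = list(emb_b)
    M = np.array([emb_b[w] for w in words_b])
    sims = (M @ v) / (np.linalg.norm(M, axis=1) * np.linalg.norm(v))
    return words_b[int(np.argmax(sims))]

T = np.eye(2)  # stand-in for a learned alignment
emb_a = {"aatrox": [1.0, 0.1]}                            # only in C_a
emb_b = {"bloodseeker": [1.0, 0.0], "volvo": [0.0, 1.0]}  # C_b's vocabulary

print(translate_oov("aatrox", emb_a, emb_b, T))  # bloodseeker
```
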
+We then use the projection learned between r/leagueoflegends and r/dota2 to estimate the nearest word within the r/dota2 space for a small set of query words unique to r/leagueoflegends. Table 8 contains some examples of the projections, and Figure 2 provides an additional illustration of the success of the technique in identifying cross-community semantic analogs.
+
+Table 7: Words in r/askwomen that align to their antonym when projected to r/askmen. Degree of alignment measured in cosine similarity.
+
+| r/LeagueOfLegends | r/Dota2 | Alignment |
+| --- | --- | --- |
+| /r/summonerschool | /r/learndota2 | 0.8420 |
+| op.gg | dotabuff | 0.8396 |
+| rito | volvo | 0.8378 |
+| riot | valve | 0.8003 |
+| aatrox | bloodseeker | 0.6473 |
+
+Table 8: Selected words from r/leagueoflegends and their nearest image in r/dota2. Alignment is measured in cosine similarity.
+
+
+Figure 2: These are not the same! The character on the left, "Aatrox" from League of Legends, projects to the character on the right, "Bloodseeker" from Dota 2.
+
+# 5.4 Large-scale Analysis
+
+Finally, we perform a large-scale analysis across all top Reddit communities. Using the topic clusters described in section 4.5, we compute the number of misalignments for each topic cluster.
+
+We are then able to produce pairwise misalignment scores for each pair of communities with respect to each topic cluster, uncovering the multidimensional ideological misalignment across Reddit. These comparisons are numerous; we include two here. Figure 3 demonstrates the degree of misalignment with respect to two political subcategories, corresponding to "Economics" and "Authority".
+
+Despite the low KL-divergence in topic distributions for political communities, shown in Figure 4, these communities demonstrate strong misalignment on the "Economics" topic. The difference demonstrates our method's ability to resolve specific types of polarization across specific ideological categories, as opposed to previous work that treats political polarization as a single-dimensional problem. Additional topic misalignments are included in Figure 5.
+
+
+Figure 3: Misalignment frequency within the "Economics" cluster (top), and the "Authority" cluster (bottom). Color corresponds to the relative intensity of misalignment, and the white squares outline political communities.
+
+While significant, an analysis of Reddit communities is only a fraction of what this approach is capable of. Unlike previous methods that rely on calculating all pairwise alignments, the compositional nature of the MultiCCA approach we propose only requires learning the alignment between each community's ideological dialect and a central high-resource community. As such, the training time scales linearly with the number of communities analyzed, which makes the study of the potentially large number of ideological communities much more tractable.
+
+# 6 Conclusion
+
+In this paper we have demonstrated a novel technique for unsupervised cultural analysis by building upon existing work treating word embeddings as tools to explore worldview, as well as work on multilingual embedding alignment. We have shown that our formulation is flexible, and able to operate effectively in a complex multi-community setting.
+
+We have also demonstrated a number of useful applications of the worldview discovery procedure, from the automatic identification of axes of polarization, to the identification of out-of-vocabulary words with similar semantics, to the large-scale analysis of an online social community with multiple dimensions of ideological polarization.
+
+# 6.1 Future Directions
+
+A key application of this method is in unsupervised cultural analysis, which would allow researchers to explore culture at scale, without using a manual value-querying process that imputes their own beliefs and values into the process. Such advancements may also enable more sophisticated explorations of Internet conflict. With a high-dimensional estimate of ideology for a user and their body of comments, research on Internet conflict can extend beyond high-temperature "confrontation" alone. This would enable analysts to identify and respect "legitimate" conflict—conflict that emerges not from trolling or a clash of moods and personalities (Cheng et al., 2017), but a clash of underlying worldviews.
+
+We believe our method also extends well to the study of academia itself, i.e. the science of science. An unsupervised method to identify terms that translate well into adjacent scientific fields/approaches would make cross- and interdisciplinary studies easier, providing a ready lexicon of ideas which best relate to what you already know. It could also allow us to examine how ideas fare when they are imported into fields adjacent or distant to their point of origin. Even more broadly, our approach could be used to generalize search that takes into account different perspectives on—and different phrasings for—similar underlying concepts and issues.
+
+# 7 Broader Impacts and Ethical Considerations
+
+We recognize the significant impact that modern natural language processing technology can have on society, and the potential for its abuse. This paper lays the groundwork for a large-scale unsupervised approach to the analysis of culture, which could ultimately lead to technologies capable of effectively forecasting conflict and radicalization in online speech. In the wrong hands, that might inspire information operations that could have a chilling effect on online speech.
+
+But we are optimistic about the future of this approach to cultural (mis)alignment. As demonstrated, it can be used to identify not only disagreement, but also where there is undiscovered potential for agreement. We began this paper with a quote: "The limits of my language are the limits of my world." We hope that by building on this technique to reveal both similarities and differences in community worldviews, we can someday expand the limits of everyone's worldview by facilitating mutual understanding, finding ways to resolve ideological tension, and making new knowledge easier to transmit and receive.
+
+# Acknowledgements
+
+We would like to thank members of Knowledge Lab, University of Chicago for helpful comments and suggestions. This work is funded in part by DARPA via grants RA-19-01 and HR00111820006 and AFOSR via grant FA9550-19-1-0354.
+
+# References
+
+Waleed Ammar, George Mulcaire, Yulia Tsvetkov, Guillaume Lample, Chris Dyer, and Noah A. Smith. 2016. Massively multilingual word embeddings.
+David Arthur and Sergei Vassilvitskii. 2006. k-means++: The advantages of careful seeding. Technical report, Stanford.
+Christopher A Bail, Lisa P Argyle, Taylor W Brown, John P Bumpus, Haohan Chen, MB Fallin Hunzaker, Jaemin Lee, Marcus Mann, Friedolin Merhout, and Alexander Volfovsky. 2018. Exposure to opposing views on social media can increase political polarization. Proceedings of the National Academy of Sciences, 115(37):9216-9221.
+Jason Baumgartner, Savvas Zannettou, Brian Keegan, Megan Squire, and Jeremy Blackburn. 2020. The pushshift reddit dataset. In Proceedings of the international AAAI conference on web and social media, volume 14, pages 830-839.
+
+Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural language processing with Python: analyzing text with the natural language toolkit. "O'Reilly Media, Inc."
+Bill Bishop. 2009. The Big Sort: Why the Clustering of Like-minded America is Tearing Us Apart. Houghton Mifflin Harcourt.
+Levi Boxell, Matthew Gentzkow, and Jesse M Shapiro. 2017. Greater internet use is not associated with faster growth in political polarization among us demographic groups. Proceedings of the National Academy of Sciences, 114(40):10612-10617.
+Justin Cheng, Michael Bernstein, Cristian Danescu-Niculescu-Mizil, and Jure Leskovec. 2017. Anyone Can Become a Troll: Causes of Trolling Behavior in Online Discussions. In Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing, CSCW '17, pages 1217-1230. Association for Computing Machinery.
+Roy G d'Andrade. 1995. The development of cognitive anthropology. Cambridge University Press.
+Dorottya Demszky, Nikhil Garg, Rob Voigt, James Zou, Jesse Shapiro, Matthew Gentzkow, and Dan Jurafsky. 2019. Analyzing polarization in social media: Method and application to tweets on 21 mass shootings. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2970-3005, Minneapolis, Minnesota. Association for Computational Linguistics.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In *NAACL-HLT* (1).
+Nikhil Garg, Londa Schiebinger, Dan Jurafsky, and James Zou. 2018. Word embeddings quantify 100 years of gender and ethnic stereotypes. Proceedings of the National Academy of Sciences, 115(16):E3635-E3644.
+Clifford Geertz. 1973. The Interpretation of Cultures. Basic Books.
+Jesse Graham, Jonathan Haidt, and Brian A Nosek. 2009. Liberals and conservatives rely on different sets of moral foundations. Journal of personality and social psychology, 96(5):1029.
+Charles R. Harris, K. Jarrod Millman, Stéfan J. van der Walt, Ralf Gommers, Pauli Virtanen, David Cournapeau, Eric Wieser, Julian Taylor, Sebastian Berg, Nathaniel J. Smith, Robert Kern, Matti Picus, Stephan Hoyer, Marten H. van Kerkwijk, Matthew Brett, Allan Haldane, Jaime Fernández del Río, Mark Wiebe, Pearu Peterson, Pierre Gérard-Marchant, Kevin Sheppard, Tyler Reddy, Warren Weckesser, Hameer Abbasi, Christoph Gohlke, and Travis E. Oliphant. 2020. Array programming with NumPy. Nature, 585(7825):357-362.
+
+Cecilia Kang. 2016. Fake news onslaught targets pizzeria as nest of child-trafficking. The New York Times.
+Donghyun Kang and James Evans. 2020. Against method: Exploding the boundary between qualitative and quantitative studies of science. Quantitative Science Studies, 1(3):930-944.
+Ashiqur R. KhudaBukhsh, Rupak Sarkar, Mark S. Kamlet, and Tom Mitchell. 2021. We Don't Speak the Same Language: Interpreting Polarization through Machine Translation. Proceedings of the AAAI Conference on Artificial Intelligence, 35(17):14893-14901.
+Austin C Kozlowski, Matt Taddy, and James A Evans. 2019. The geometry of culture: Analyzing the meanings of class through word embeddings. American Sociological Review, 84(5):905-949.
+Srijan Kumar, William L. Hamilton, Jure Leskovec, and Dan Jurafsky. 2018. Community Interaction and Conflict on the Web. In Proceedings of the 2018 World Wide Web Conference, WWW '18, pages 933-943. International World Wide Web Conferences Steering Committee.
+Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. 2013a. Exploiting similarities among languages for machine translation.
+Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 2, NIPS'13, pages 3111-3119. Curran Associates Inc.
+George A. Miller. 1995. Wordnet: A lexical database for english. Commun. ACM, 38(11):39-41.
+Diana C Mutz. 2006. Hearing the Other Side: Deliberative Versus Participatory Democracy. Cambridge University Press.
+F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830.
+Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global Vectors for Word Representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543. Association for Computational Linguistics.
+Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.
+
+Ashwin Rajadesingan, Paul Resnick, and Ceren Budak. 2020. Quick, community-specific learning: How distinctive toxicity norms are maintained in political subreddits. In Proceedings of the International AAAI Conference on Web and Social Media, volume 14, pages 557-568.
+Radim Rehurek and Petr Sojka. 2011. Gensim-python framework for vector space modelling. NLP Centre, Faculty of Informatics, Masaryk University, Brno, Czech Republic, 3(2).
+Samuel L. Smith, David H. P. Turban, Steven Hamblin, and Nils Y. Hammerla. 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax.
+Cass R Sunstein. 2018. # Republic: Divided democracy in the age of social media. Princeton University Press.
+Chenhao Tan and Lillian Lee. 2015. All Who Wander: On the Prevalence and Characteristics of Multi-community Engagement. In Proceedings of the 24th International Conference on World Wide Web, WWW '15, pages 1056-1066, Republic and Canton of Geneva, CHE. International World Wide Web Conferences Steering Committee.
+Isaac Waller and Ashton Anderson. 2021. Quantifying social organization and political polarization in online platforms.
+Albert Webson, Zhizhong Chen, Carsten Eickhoff, and Ellie Pavlick. 2020. Are "Undocumented Workers" the Same as "Illegal Aliens"? Disentangling Denotation and Connotation in Vector Spaces. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4090-4105. Association for Computational Linguistics.
+Jing Yi Xie, Renato Ferreira Pinto Junior, Graeme Hirst, and Yang Xu. 2019. Text-based inference of moral sentiment change. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4654-4663. Association for Computational Linguistics.
+
+# A Appendix
+
+| Year | Mode | N | LstSq Acc@1 | LstSq Acc@5 | LstSq Acc@10 | MultiCCA Acc@1 | MultiCCA Acc@5 | MultiCCA Acc@10 |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| 2016 | ALL | ~30k | 0.6937 | 0.8335 | 0.8709 | 0.7284 | 0.8591 | 0.8924 |
+| 2016 | SW | 1000 | 0.6583 | 0.8037 | 0.8450 | 0.6234 | 0.7735 | 0.8174 |
+| 2016 | SW | 5000 | 0.6934 | 0.8306 | 0.8679 | 0.6984 | 0.8352 | 0.8718 |
+| 2017 | ALL | ~30k | 0.6720 | 0.8118 | 0.8510 | 0.7092 | 0.8412 | 0.8761 |
+| 2017 | SW | 1000 | 0.6393 | 0.7829 | 0.8251 | 0.6055 | 0.7532 | 0.7973 |
+| 2017 | SW | 5000 | 0.6737 | 0.8099 | 0.8484 | 0.6793 | 0.8163 | 0.8542 |
+| 2018 | ALL | ~30k | 0.6645 | 0.8013 | 0.8402 | 0.6850 | 0.8145 | 0.8509 |
+| 2018 | SW | 1000 | 0.6336 | 0.7730 | 0.8143 | 0.5992 | 0.7425 | 0.7857 |
+| 2018 | SW | 5000 | 0.6670 | 0.8003 | 0.8385 | 0.6719 | 0.8057 | 0.8436 |
+| 2019 | ALL | ~30k | 0.6370 | 0.7809 | 0.8230 | 0.6818 | 0.8165 | 0.8543 |
+| 2019 | SW | 1000 | 0.6030 | 0.7481 | 0.7923 | 0.5701 | 0.7185 | 0.7467 |
+| 2019 | SW | 5000 | 0.6401 | 0.7787 | 0.8202 | 0.6505 | 0.7884 | 0.8291 |
+
+Table 9: Performance comparison of yearly alignments by Linear Equation solver & MultiCCA, evaluated based on projection to the shared space.
+
+| Year | Mode | N | Acc@1 | Acc@5 | Acc@10 |
+| --- | --- | --- | --- | --- |
+| 2016 | ALL | ~30k | 0.4980 | 0.6462 | 0.7091 |
+| 2016 | SW | 1000 | 0.4789 | 0.6434 | 0.6991 |
+| 2016 | SW | 5000 | 0.5203 | 0.6841 | 0.7374 |
+| 2017 | ALL | ~30k | 0.5064 | 0.6683 | 0.7221 |
+| 2017 | SW | 1000 | 0.4562 | 0.6130 | 0.6680 |
+| 2017 | SW | 5000 | 0.4948 | 0.6531 | 0.7063 |
+| 2018 | ALL | ~30k | 0.4980 | 0.6562 | 0.7091 |
+| 2018 | SW | 1000 | 0.4478 | 0.5993 | 0.6526 |
+| 2018 | SW | 5000 | 0.4849 | 0.6388 | 0.6909 |
+| 2019 | ALL | ~30k | 0.4804 | 0.6382 | 0.6922 |
+| 2019 | SW | 1000 | 0.4291 | 0.5792 | 0.6332 |
+| 2019 | SW | 5000 | 0.4675 | 0.6200 | 0.6729 |
+
+Table 10: Performance of yearly alignments by SVD, evaluated for community pairs.
+
+
+Figure 4: $\mathcal{D}_{KL}(P(t|\mathcal{C}_a)||P(t|\mathcal{C}_b))$ , where $\mathcal{C}_a$ is labeled on the y-axis, and $\mathcal{C}_b$ is on the x-axis.
+
+
+
+
+
+
+Figure 5: Community misalignment with respect to Government cluster (top left), Conflict cluster (top right), sex cluster (bottom left), religion cluster (bottom right).
+
+
\ No newline at end of file
diff --git a/aligningmultidimensionalworldviewsanddiscoveringideologicaldifferences/images.zip b/aligningmultidimensionalworldviewsanddiscoveringideologicaldifferences/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..7ee69209c9da166502e35e3b235519bb25a98809
--- /dev/null
+++ b/aligningmultidimensionalworldviewsanddiscoveringideologicaldifferences/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7312ac73c9354217bb053c8fc6fb0ff8f04cac005cdc3291c3c971ac9a3a1d42
+size 881685
diff --git a/aligningmultidimensionalworldviewsanddiscoveringideologicaldifferences/layout.json b/aligningmultidimensionalworldviewsanddiscoveringideologicaldifferences/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..262409549e7b80da8f01c89856d375308a1635ce
--- /dev/null
+++ b/aligningmultidimensionalworldviewsanddiscoveringideologicaldifferences/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3be3b0fd4a6baa5ec41636aba5c444ec8eaf7bedcac13317e6db8f2b99fbbdc0
+size 443623
diff --git a/allbarkandnobiteroguedimensionsintransformerlanguagemodelsobscurerepresentationalquality/51427ffb-f88f-4962-9627-21f276b95cb4_content_list.json b/allbarkandnobiteroguedimensionsintransformerlanguagemodelsobscurerepresentationalquality/51427ffb-f88f-4962-9627-21f276b95cb4_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..9505fa46573c35b37c4df3e74f63ef456ac6a83e
--- /dev/null
+++ b/allbarkandnobiteroguedimensionsintransformerlanguagemodelsobscurerepresentationalquality/51427ffb-f88f-4962-9627-21f276b95cb4_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:20c71b45e06d97b9836000c06c4f17fde6717eb6ec29d002dcae3553d7905b5a
+size 120668
diff --git a/allbarkandnobiteroguedimensionsintransformerlanguagemodelsobscurerepresentationalquality/51427ffb-f88f-4962-9627-21f276b95cb4_model.json b/allbarkandnobiteroguedimensionsintransformerlanguagemodelsobscurerepresentationalquality/51427ffb-f88f-4962-9627-21f276b95cb4_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..8a04fc4401df27b16c244e81e9d1cb6853425046
--- /dev/null
+++ b/allbarkandnobiteroguedimensionsintransformerlanguagemodelsobscurerepresentationalquality/51427ffb-f88f-4962-9627-21f276b95cb4_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0e87e9adc170dbb3489a374cc879008038129c022718fbfdcad23f7227561946
+size 139359
diff --git a/allbarkandnobiteroguedimensionsintransformerlanguagemodelsobscurerepresentationalquality/51427ffb-f88f-4962-9627-21f276b95cb4_origin.pdf b/allbarkandnobiteroguedimensionsintransformerlanguagemodelsobscurerepresentationalquality/51427ffb-f88f-4962-9627-21f276b95cb4_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..4a707b0409f8620bf0746a0467c2dfd759373fc9
--- /dev/null
+++ b/allbarkandnobiteroguedimensionsintransformerlanguagemodelsobscurerepresentationalquality/51427ffb-f88f-4962-9627-21f276b95cb4_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ba78aa35097fa13dda27c2ab0ac609cdd2b1c73280d43d52361d0e651173941d
+size 3186380
diff --git a/allbarkandnobiteroguedimensionsintransformerlanguagemodelsobscurerepresentationalquality/full.md b/allbarkandnobiteroguedimensionsintransformerlanguagemodelsobscurerepresentationalquality/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..bf2935f07c9d20c436ccda9c0b94af84949f21c4
--- /dev/null
+++ b/allbarkandnobiteroguedimensionsintransformerlanguagemodelsobscurerepresentationalquality/full.md
@@ -0,0 +1,470 @@
+# All Bark and No Bite: Rogue Dimensions in Transformer Language Models Obscure Representational Quality
+
+William Timkey and Marten van Schijndel
+
+Department of Linguistics
+
+Cornell University
+
+{wpt25|mv443}@cornell.edu
+
+# Abstract
+
+Similarity measures are a vital tool for understanding how language models represent and process language. Standard representational similarity measures such as cosine similarity and Euclidean distance have been successfully used in static word embedding models to understand how words cluster in semantic space. Recently, these measures have been applied to embeddings from contextualized models such as BERT and GPT-2. In this work, we call into question the informativity of such measures for contextualized language models. We find that a small number of rogue dimensions, often just 1-3, dominate these measures. Moreover, we find a striking mismatch between the dimensions that dominate similarity measures and those which are important to the behavior of the model. We show that simple postprocessing techniques such as standardization are able to correct for rogue dimensions and reveal underlying representational quality. We argue that accounting for rogue dimensions is essential for any similarity-based analysis of contextual language models.
+
+# 1 Introduction
+
+By mapping words into continuous vector spaces, we can reason about human language in geometric terms. For example, the cosine similarity of pairs of word embeddings in Word2Vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014) shows a robust correlation with human similarity judgments, and embeddings cluster into natural semantic classes in Euclidean space (Baroni et al., 2014; Wang et al., 2019). In recent years, static embeddings have given way to their contextual counterparts, with language models based on the transformer architecture (Vaswani et al., 2017) such as BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2020), XLNet (Yang et al., 2019) and GPT-2 (Radford et al., 2019) achieving state-of-the-art results on many language understanding tasks. Despite their success, relatively little is known about how these models represent and process language. Recent work has applied measures such as cosine similarity and Euclidean distance to contextual representations, with unclear and counterintuitive results. For example, similarity/distance measures in BERT are extremely sensitive to word position, leading to inconsistent results on evaluation benchmarks (Mickus et al., 2020; May et al., 2019). Additionally, representational quality appears to degrade severely in later layers of each network, with the final layers of BERT, RoBERTa, GPT-2 and XLNet showing little to no correlation with the semantic similarity/relatedness judgments of humans (Bommasani et al., 2020).
+
+Recent work which probes the representational geometry of contextualized embedding spaces using cosine similarity has found that contextual embeddings have several counterintuitive properties (Ethayarajh, 2019). For example: 1) Word representations are highly anisotropic: randomly sampled words tend to be highly similar to one another when measured by cosine similarity. In the final layer of GPT-2, for example, any two words are almost perfectly similar. 2) Embeddings have extremely low self-similarity: in later layers of transformer-based language models, random words are almost as similar to one another as instances of the same word in different contexts.
+
+In this work, we critically examine the informativity of standard similarity/distance measures (particularly cosine similarity and Euclidean distance) in contextual embedding spaces. We find that these measures are often dominated by 1-5 dimensions across all the contextual language models we tested, regardless of the specific pretraining objective. It is this small subset of dimensions which drives anisotropy, low self-similarity, and the apparent drop in representational quality in later layers. These dimensions, which we refer to as rogue dimensions, are centered far from the origin and have disproportionately high variance. The presence of rogue dimensions can cause cosine similarity and Euclidean distance to rely on less than $1\%$ of the embedding space. Moreover, we find that the rogue dimensions which dominate cosine similarity do not likewise dominate model behavior, and show a strong correlation with absolute position and punctuation.
+
+Finally, we show that these dimensions can be accounted for using a trivially simple transformation of the embedding space: standardization. Once applied, cosine similarity more closely reflects human word similarity judgments, and we see that representational quality is preserved across all layers rather than degrading/becoming task-specific. Taken together, we argue that accounting for rogue dimensions is essential when evaluating representational similarity in transformer language models. $^{1}$
+
+# 2 Background
+
+Standard measures such as cosine similarity or Euclidean distance in contextual embedding spaces have been used in a wide range of applications: to understand how the representational similarity of word embedding spaces corresponds to human semantic similarity/relatedness judgments (Bommasani et al., 2020; Vulić et al., 2020; Chronis and Erk, 2020; A. Rodriguez and Merlo, 2020), human brain activation patterns/cross-model similarity (Abnar et al., 2019), syntax structure (Chrupała and Alishahi, 2019), semantic shift (Martinc et al., 2020), compositionality/idiomaticity of word vectors (Garcia et al., 2021), polysemy (Soler and Apidianaki, 2021), context-sensitivity (Reif et al., 2019), social bias (May et al., 2019; Bommasani et al., 2020), changes to the embedding space during fine-tuning (Merchant et al., 2020), and as an evaluation metric for text generation (Zhang et al., 2020).
+
+However, a number of works have questioned the appropriateness of cosine similarity. Schnabel et al. (2015) found that static embedding models encode a substantial degree of word frequency information, which leads to a frequency bias in cosine similarity. May et al. (2019) questioned the adequacy of cosine similarity in sentence encoders after finding contextual discrepancies in bias measures. Perhaps most relevant to the present work is Zhelezniak et al. (2019), which treats individual word embeddings as statistical samples, shows the equivalence of cosine similarity and Pearson correlation, and notes that Pearson correlation (and therefore cosine similarity) is highly sensitive to outlier dimensions. They further suggest the use of non-parametric rank correlation measures such as Spearman's $\rho$ , which is robust to outliers. Our work investigates the sensitivity of cosine similarity to outlier dimensions in contextual models, and further characterizes the behavioral correlates of these outliers.
+
+Our goal in this work was not to causally explain degenerate embedding spaces or to improve task performance through postprocessing, but rather to empirically motivate trivially simple transformations that enable effective interpretability research with existing metrics. However, we refer interested readers to Gao et al. (2019), who studied degeneration toward anisotropy in machine translation. Similarly, Li et al. (2020) suggested a learned transformation of transformer embedding spaces which resulted in increased performance on semantic textual similarity tasks.
+
+# 3 Rogue Dimensions and Representational Geometry
+
+# 3.1 Anisotropy
+
+In this section, we investigate how each dimension of the embedding space contributes to anisotropy, defined by Ethayarajh (2019) as the expected cosine similarity of randomly sampled token pairs. They showed that contextual embedding spaces are highly anisotropic, meaning that the contextual representations of any two tokens are expected to be highly similar to one another. We investigate this counterintuitive property by decomposing the cosine similarity computation by dimension, and show that the cosine similarity of any two tokens is dominated by a small subset of rogue dimensions. We conclude that anisotropy is not a global property of the entire embedding space, but is instead driven by a small number of idiosyncratic dimensions.
+
+# 3.1.1 Setup
+
+Ethayarajh (2019) defines the anisotropy in layer $\ell$ of model $f$ as the expected cosine similarity of any pair of words in a corpus. This can be approximated as $\hat{A}(f_{\ell})$ from a sample $S = \{\{x_1, y_1\}, \ldots, \{x_n, y_n\}\} \sim \mathcal{O}$ of $n$ random token pairs from a corpus $\mathcal{O}$ :
+
+$$
+\hat {A} \left(f _ {\ell}\right) = \frac {1}{n} \cdot \sum_ {\left\{x _ {\alpha}, y _ {\alpha} \right\} \in S} \cos \left(f _ {\ell} \left(x _ {\alpha}\right), f _ {\ell} \left(y _ {\alpha}\right)\right) \tag {1}
+$$
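Equation 1 can be estimated directly from a matrix of token embeddings. Below is a minimal NumPy sketch (synthetic Gaussian vectors stand in for real model representations, so the numbers are illustrative only) showing how a single high-mean dimension inflates the estimate:

```python
import numpy as np

def estimate_anisotropy(embeddings: np.ndarray, n_pairs: int = 10000, seed: int = 0) -> float:
    """Monte Carlo estimate of anisotropy (Eq. 1): the expected cosine
    similarity of randomly sampled token pairs."""
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(embeddings), n_pairs)
    j = rng.integers(0, len(embeddings), n_pairs)
    keep = i != j  # discard self-pairs
    u, v = embeddings[i[keep]], embeddings[j[keep]]
    cos = np.sum(u * v, axis=1) / (np.linalg.norm(u, axis=1) * np.linalg.norm(v, axis=1))
    return float(cos.mean())

# Isotropic Gaussian "embeddings": expected cosine similarity is near 0.
iso = np.random.default_rng(1).normal(size=(5000, 768))
print(estimate_anisotropy(iso))    # near 0

# Shift a single dimension far from the origin: anisotropy jumps toward 1.
rogue = iso.copy()
rogue[:, 0] += 100.0
print(estimate_anisotropy(rogue))  # close to 1 (≈ 0.93 here)
```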
+
+The cosine similarity between two vectors $u$ and $v$ of dimensionality $d$ is defined as
+
+$$
+\cos (u, v) = \frac {u \cdot v}{\| u \| \| v \|} = \sum_ {i = 1} ^ {d} \frac {u _ {i} v _ {i}}{\| u \| \| v \|} \tag {2}
+$$
+
+Expressing cosine similarity as a summation over $d$ dimensions, we can define a function $CC_{i}(u,v)$ which gives the contribution of dimension $i$ to the total cosine similarity of $u$ and $v$ as:
+
+$$
+C C _ {i} (u, v) = \frac {u _ {i} v _ {i}}{\| u \| \| v \|} \tag {3}
+$$
+
+From this, we define $CC(f_{\ell}^{i})$ , the contribution of dimension $i$ to $\hat{A}(f_{\ell})$ as:
+
+$$
+C C \left(f _ {\ell} ^ {i}\right) = \frac {1}{n} \cdot \sum_ {\left\{x _ {\alpha}, y _ {\alpha} \right\} \in S} C C _ {i} \left(f _ {\ell} \left(x _ {\alpha}\right), f _ {\ell} \left(y _ {\alpha}\right)\right) \tag {4}
+$$
+
+Note that $\sum_{i=1}^{d} CC(f_{\ell}^{i}) = \hat{A}(f_{\ell})$ . From the mean cosine contribution by dimension, we can determine how much each dimension contributes to the total anisotropy. If $CC(f_{\ell}^{1}) \approx CC(f_{\ell}^{2}) \approx \ldots \approx CC(f_{\ell}^{d})$ , then we conclude that anisotropy is a global property of the embedding space; no one dimension drives the expected cosine similarity of any two embeddings. By contrast, if $CC(f_{\ell}^{i}) \gg \sum_{j \neq i}^{d} CC(f_{\ell}^{j})$ , then we conclude that dimension $i$ dominates the cosine similarity computation.
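The decomposition in Eqs. 3-4 is straightforward to compute. A hedged sketch with synthetic data (a rogue dimension is planted at index 7 for illustration; the index and scales are arbitrary):

```python
import numpy as np

def cosine_contributions(U: np.ndarray, V: np.ndarray) -> np.ndarray:
    """Eqs. 3-4: mean contribution of each dimension to the cosine
    similarity of the paired rows of U and V."""
    norms = (np.linalg.norm(U, axis=1) * np.linalg.norm(V, axis=1))[:, None]
    return ((U * V) / norms).mean(axis=0)

rng = np.random.default_rng(0)
U = rng.normal(size=(2000, 300))
V = rng.normal(size=(2000, 300))
U[:, 7] += 50.0
V[:, 7] += 50.0  # plant a single high-mean rogue dimension

cc = cosine_contributions(U, V)
anisotropy = np.sum(U * V, axis=1)
anisotropy = (anisotropy / (np.linalg.norm(U, axis=1) * np.linalg.norm(V, axis=1))).mean()

print(np.allclose(cc.sum(), anisotropy))  # True: contributions sum to Â(f_ℓ)
print(cc[7] / cc.sum())                   # close to 1: the planted dimension dominates
```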
+
+# 3.1.2 Experiment
+
+We compute the average cosine similarity contribution, $CC(f_{\ell}^{i})$ , for each dimension in all layers of BERT, RoBERTa, GPT-2, and XLNet. We then normalize by the total expected cosine similarity $\hat{A}(f_{\ell})$ to get the proportion of the total expected cosine similarity contributed by each dimension. All models are of dimensionality $d = 768$ and have 12 layers, plus one static embedding layer. We also include two 300-dimensional non-contextual models, Word2Vec and GloVe, for comparison. Our corpus $\mathcal{O}$ is an 85k-token sample of random articles from English Wikipedia. All input sequences consisted of 128 tokens. From the resulting representations we take a random sample $S$ of 500k token pairs. For each model, we report the three dimensions with the largest cosine contributions in the two most anisotropic layers, as well as the overall anisotropy $\hat{A}(f_{\ell})$ .
+
+| Model | Layer | 1 | 2 | 3 | $\hat{A}(f_{\ell})$ |
+| --- | --- | --- | --- | --- | --- |
+| GPT-2 | 11 | 0.275 | 0.269 | 0.265 | 0.640 |
+| | 12 | 0.763 | 0.131 | 0.078 | 0.885 |
+| BERT | 10 | 0.817 | 0.004 | 0.003 | 0.396 |
+| | 11 | 0.884 | 0.003 | 0.002 | 0.506 |
+| RoBERTa | 7 | 0.726 | 0.193 | 0.032 | 0.705 |
+| | 12 | 0.663 | 0.262 | 0.020 | 0.745 |
+| XLNet | 10 | 0.990 | 0.000 | 0.000 | 0.887 |
+| | 11 | 0.996 | 0.001 | 0.000 | 0.981 |
+| Word2Vec | | 0.031 | 0.023 | 0.023 | 0.130 |
+| GloVe | | 0.105 | 0.096 | 0.095 | 0.104 |
+
+Table 1: Proportion of total expected cosine similarity, $CC(f_{\ell}^{i}) / \hat{A}(f_{\ell})$ , contributed by each of the top 3 dimensions in the two most anisotropic layers of each model, along with the anisotropy estimate $\hat{A}(f_{\ell})$ for the given layer. Results for all layers can be found in Table 4 of the appendix.
+
+# 3.1.3 Results and Discussion
+
+Results are summarized in Table 1. The static models Word2Vec and GloVe are relatively isotropic and are not dominated by any single dimension. Across all transformer models tested, a small subset of rogue dimensions dominate the cosine similarity computation, especially in the more anisotropic final layers. Perhaps the most striking case is layers 10 and 11 of XLNet, where a single dimension contributes more than $99\%$ of the expected cosine similarity between randomly sampled tokens.
+
+The dimensions which drive anisotropy are centered far from the origin relative to other dimensions. For example, the top contributing dimension in the final layer of XLNet $(i = 667)$ has a mean activation of $\mathbb{E}[x_{12}^{667}] = 180.0$ , while the expected activation of all other dimensions is $\mathbb{E}[x_{12}^{i\neq 667}] = -0.084$ with standard deviation $\sigma [x_{12}^{i\neq 667}] = 0.77$ .
+
+One implication of anisotropy is that the embeddings occupy a narrow cone in the embedding space, as the angle between any two word embeddings is very small. However, if anisotropy is driven by a single dimension (or a small subset of dimensions), we can conclude that the cone lies along a single axis or within a low-dimensional subspace, rather than being a global property across all dimensions. $^{5}$ We conclude from this analysis that the anisotropy of the embedding space is an artifact of cosine similarity's high sensitivity to a small set of outlier dimensions and is not a global property of the space. $^{6}$
+
+# 3.2 Informativity of Similarity Measures
+
+In the previous section, we found that anisotropy is driven by a small subset of dimensions. In this section, we investigate whether standard similarity measures are still informed by the entire embedding space, or if variability in the measure is also driven by a small subset of dimensions.
+
+For example, it could be the case that some dimension $i$ has a large but roughly constant activation across all tokens, meaning $\mathbb{E}[CC(f_{\ell}^{i})]$ will be large, but $Var[CC(f_{\ell}^{i})]$ will be near zero. In this case, we would be adding a large constant to cosine similarity, making $\hat{A}(f_{\ell})$ large but not changing $Var[\cos (f_{\ell}(x),f_{\ell}(y))]$ . The average cosine similarity would be driven toward 1.0 by dimension $i$ , but any changes in cosine similarity would be driven by the rest of the embedding space, not dimension $i$ , meaning cosine similarity would provide information about the entire representation space, rather than a single dimension. Conversely, dimension $i$ may have mean activation near zero, but extremely large variance across tokens. In this case, dimension $i$ would not appear to make the space anisotropic, but would still drive variability in cosine similarity. Ultimately, we are not interested in where the representation space is centered, but in whether changes in a similarity measure reflect changes in the entire embedding space.
+
+In this section we uncover which dimensions drive the variability of cosine similarity. $^{7}$ Paralleling our findings in Section 3.1, we find that the token pairs which are similar/dissimilar to one another completely change when we remove just 1-5 dominant dimensions from the embedding space.
+
+# 3.2.1 Setup
+
+| Model | Layer | k=1 | k=3 | k=5 |
+| --- | --- | --- | --- | --- |
+| GPT-2 | 0 | 0.999 | 0.996 | 0.996 |
+| | 11 | 0.967 | 0.352 | 0.352 |
+| | 12 | 0.819 | 0.232 | 0.232 |
+| BERT | 0 | 0.999 | 0.997 | 0.997 |
+| | 11 | 0.046 | 0.048 | 0.048 |
+| | 12 | 0.213 | 0.214 | 0.214 |
+| RoBERTa | 0 | 0.810 | 0.770 | 0.770 |
+| | 11 | 0.591 | 0.319 | 0.319 |
+| | 12 | 0.566 | 0.301 | 0.301 |
+| XLNet | 0 | 0.999 | 0.996 | 0.996 |
+| | 11 | 0.124 | 0.150 | 0.150 |
+| | 12 | 0.028 | 0.024 | 0.024 |
+| Word2Vec | | 0.998 | 0.993 | 0.988 |
+| GloVe | | 0.987 | 0.954 | 0.930 |
+
+Table 2: Proportion of variance ($r^2$) in cosine similarity explained by cosine similarity when the top $k$ dimensions, ranked by $CC(f_{\ell}^{i})$ , are removed. Layer 0 is the static embedding layer. Results for all layers can be found in Table 5 of the Appendix.
+
+Let $f_{\ell}(x):X\to \mathbb{R}^{d}$ be the function which maps a token $x$ to its representation in layer $\ell$ of model $f$. Let $f_{\ell}^{\prime}(x):X\to \mathbb{R}^{d - k}$ be the function which maps token $x$ to its representation with the top $k$ dimensions (measured by contribution to cosine similarity) removed. Let $C(S) = \cos_{x,y\in S}(f_{\ell}(x),f_{\ell}(y))$ and $C^\prime (S) = \cos_{x,y\in S}(f_\ell '(x),f_\ell '(y))$ . In this analysis, we compute:
+
+$$
+r = \operatorname {C o r r} [ C (S), C ^ {\prime} (S) ] \tag {5}
+$$
+
+This is the Pearson correlation between the cosine similarities in the entire embedding space and those similarities when $k$ dimensions are removed. In our analysis we report $r^2$ , which corresponds to the proportion of variance in $C(S)$ explained by $C'(S)$ . For example, if we were to set $k = 1$ and the observed $r^2$ is large, then cosine similarity in the full embedding space is still well explained by the remaining $d - 1$ dimensions. By contrast, if $r^2$ is small, then the variance of cosine similarity in the embedding space cannot be well explained by the bottom $d - 1$ dimensions, and thus a single dimension drives variability in cosine similarity.
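This diagnostic is easy to reproduce on synthetic data. A hedged NumPy sketch (the planted rogue dimension and its scale are illustrative, not taken from any real model):

```python
import numpy as np

def cos_pairs(A, B):
    return np.sum(A * B, axis=1) / (np.linalg.norm(A, axis=1) * np.linalg.norm(B, axis=1))

def r2_top_removed(U, V, k):
    """Eq. 5 squared: variance in full-space cosine similarity explained
    by cosine similarity with the top-k contributing dimensions removed."""
    full = cos_pairs(U, V)
    norms = (np.linalg.norm(U, axis=1) * np.linalg.norm(V, axis=1))[:, None]
    cc = ((U * V) / norms).mean(axis=0)   # Eq. 4
    keep = np.argsort(cc)[:-k]            # drop the k largest contributors
    reduced = cos_pairs(U[:, keep], V[:, keep])
    return float(np.corrcoef(full, reduced)[0, 1] ** 2)

rng = np.random.default_rng(0)
U = rng.normal(size=(5000, 300))
V = rng.normal(size=(5000, 300))
print(r2_top_removed(U, V, k=1))  # isotropic space: near 1

U[:, 7] = rng.normal(100.0, 50.0, 5000)  # one high-mean, high-variance
V[:, 7] = rng.normal(100.0, 50.0, 5000)  # rogue dimension
print(r2_top_removed(U, V, k=1))  # collapses: the rogue dimension drove the measure
```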
+
+# 3.2.2 Experiment
+
+For this experiment, we compute $r^2 = \text{Corr}[C(S), C'(S)]^2$ for all layers of all models, using the same set of token representations as in Section 3.1. We remove the top $k = \{1,3,5\}$ dimensions, where dimensions are ranked by $CC(f_{\ell}^{i})$ , the cosine similarity contribution of dimension $i$ in layer $\ell$ . We report results for the first layer and the final two layers. Results for all layers can be found in Table 5 of the Appendix.
+
+# 3.2.3 Results
+
+Results are summarized in Table 2. We find that in the static embedding models and the earlier layers of each contextual model, no single dimension or subset of dimensions drives the variability in cosine similarity. By contrast, in later layers, the variability of cosine similarity is driven by just 1-5 dimensions. In the extreme cases of XLNet-12 and BERT-11, when we remove just a single dimension from the embedding space, almost none of the variance in cosine similarity can be explained by cosine similarity in the $d - 1$ dimensional subspace ($r^2 = 0.028$ and $0.046$ , respectively). This means that the token pairs which are similar to one another in the full embedding space are drastically different from the pairs which are similar when just a handful of dimensions are removed.
+
+While similarity measures should reflect properties of the entire embedding space, we have shown that this is not the case with cosine similarity in contextualized embedding spaces. Not only does a small subset of dimensions in later layers drive the cosine similarity of randomly sampled words toward 1.0; this subset also drives the variability of the measure. This result effectively renders cosine similarity a measure over 1-5 rogue dimensions rather than the entire embedding space.
+
+# 4 Rogue Dimensions and Model Behavior
+
+In this section, we address the question of whether the dimensions which dominate cosine similarity likewise dominate model behavior. Specifically, if similarity measures are dominated by only a few dimensions, as shown in the previous sections, then those dimensions should be the only ones the model actually uses; otherwise, the measures reflect only a small subset of what the model is doing. We find that dimensions which dominate cosine similarity do not likewise dominate model behavior.
+
+# 4.1 Behavioral Influence of Individual Dimensions
+
+We measure the influence of individual dimensions on model behavior through an ablation study in the style of Morcos et al. (2018). The idea of neuron ablation studies is to examine how the performance of a network changes when a neuron is clamped to a fixed value, typically zero. In our study, we measure how much the language modeling distribution changes when dimension $i$ of layer $\ell$ is fixed to zero.
+
+# 4.2 Setup
+
+Let $P_{f}(s)$ be the original language modeling distribution of model $f$ for some input $s$ sampled from corpus $\mathcal{O}$ . We measure how the distribution changes after ablation using the KL divergence between the ablated model distribution and the unaltered reference distribution. We use KL divergence, rather than typical measures of importance in feature ablation such as accuracy or perplexity, because we are interested in how much the prediction distributions change rather than performance on some task. Our measure of the importance of dimension $i$ in layer $\ell$ of model $f$ is the mean KL divergence between the two distributions across our corpus, where $S$ is a set of $n$ inputs to the model:
+
+$$
+I (i, \ell , f) = \frac {1}{n} \sum_ {s \in S} ^ {n} D _ {K L} \left[ P _ {f} (s) \| P _ {f} (s | f _ {\ell} ^ {i} (s) = 0) \right] \tag {6}
+$$
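Equation 6 can be illustrated on a toy "language model" consisting of hidden states fed through a linear output head (this is a hedged sketch, not the paper's actual setup, which ablates dimensions inside real transformers). The toy makes the paper's point concrete: a dimension with a huge mean can be entirely ignored by the output head, so ablating it changes nothing:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def dimension_importance(H, W, i):
    """Eq. 6 on a toy setup: mean KL divergence between the output
    distribution from hidden states H (n x d) through output matrix W
    (d x |V|) and the same distribution with dimension i clamped to zero."""
    p = softmax(H @ W)
    H_abl = H.copy()
    H_abl[:, i] = 0.0
    q = softmax(H_abl @ W)
    return float(np.sum(p * (np.log(p) - np.log(q)), axis=-1).mean())

rng = np.random.default_rng(0)
d, vocab = 64, 100
H = rng.normal(size=(500, d))
W = rng.normal(size=(d, vocab)) * 0.1
H[:, 3] += 40.0  # rogue dimension: huge mean, would dominate cosine similarity...
W[3, :] = 0.0    # ...but the output head ignores it entirely

imps = np.array([dimension_importance(H, W, i) for i in range(d)])
print(imps[3])      # 0.0: ablating the rogue dimension changes nothing
print(imps.mean())  # typical dimensions do move the distribution
```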
+
+# 4.3 Experiment
+
+To measure the importance of each dimension to model behavior, we compute $I(i,\ell ,f)$ for the last 4 layers of each model over 10k distributions. Since the autoregressive models (GPT-2, XLNet) give a language modeling distribution over all tokens in the input, we use a corpus of 10k tokens from English Wikipedia. In the auto-encoder models (BERT, RoBERTa), we mask $15\%$ of tokens and use a corpus of 150k tokens, for a total of 10k language modeling distributions. We plot the relative behavioral influence of each dimension against its contribution to cosine similarity, measured by $CC(f_{\ell}^{i})$ , (each is normalized to sum to 1).
+
+# 4.4 Results
+
+Figure 1 displays the results for the final layer of each model. $^{10}$ In all models, we see that the dimensions which dominate cosine similarity do not likewise dominate model behavior. The mismatch is less drastic in BERT's final layer, but is quite severe in the final layers of XLNet and GPT-2, where removing the dimensions which dominate cosine similarity does not lead to substantial changes in the language modeling distribution.
+
+Figure 1: Relative contribution of each dimension to cosine similarity, $CC(f_{\ell}^{i})$ (top), paired with its relative influence on model behavior, $I(i,\ell,f)$ (bottom), for layer 12 of gpt2, bert-base-cased, roberta-base, and xlnet-base-cased. The top and bottom portions of the plots each have 768 bars, one for each dimension in layer 12. The width of the bars corresponds to their relative contribution to each metric. For example, three dimensions (yellow, red, light yellow) dominate cosine similarity in GPT-2, but when we trace those dimensions to the bottom half of the plot, they appear to vanish, meaning their relative influence on model behavior is negligible. While this mismatch is less pronounced for BERT, it is particularly extreme in XLNet, where a single dimension dominates cosine similarity, but is effectively meaningless to the pretraining objective.
+
+While ablating rogue dimensions often alters the language modeling distribution more than ablating non-rogue dimensions, we emphasize that there is not a one-to-one correspondence between a dimension's influence on cosine similarity and its influence on language modeling behavior. In the case of XLNet and GPT-2, removing dimensions which dominate cosine similarity leads to only vanishingly small changes to the behavior of the model.
+
+# 4.5 Behavioral Correlates of Rogue Dimensions
+
+We now turn to the related question of whether rogue dimensions actually capture linguistically meaningful information. Because rogue dimensions dominate representational similarity measures, these measures will be heavily biased toward whatever information these dimensions capture. To explore their behavioral correlates, we plotted the distribution of the values for rogue dimensions.
+
+We show in Figure 2 that rogue dimensions often have highly type/position-specific activation patterns. Rogue dimensions in all models are particularly sensitive to instances of the ".." token and/or position 0 of the input. For example, in layers 2-11 of GPT-2 and RoBERTa, the mean cosine similarity of any two tokens in position 0 is greater than .99, while the mean similarity of tokens not in position 0 is .623 and .564, respectively.
+
+While the transformer language models we have tested have all been shown to capture a rich range of linguistic phenomena, this linguistic knowledge may be obscured by rogue dimensions. The following section empirically evaluates this hypothesis.
+
+# 5 Postprocessing and Representational Quality
+
+While we have shown that the representational geometry of contextualized embeddings makes cosine similarity uninformative, there are several simple postprocessing methods which can correct for this. In this section we outline three such methods: standardization, all-but-the-top (Mu and Viswanath, 2018), and ranking (via Spearman correlation). We evaluate the representational quality of the postprocessed embeddings on several word similarity/relatedness datasets and show that the underlying representational quality is obscured by rogue dimensions. When we correct for rogue dimensions, correlation with human similarity judgments improves across the board. We also find that representational quality is preserved across all layers, rather than giving way to degraded/task-specific representations as argued in previous work.
+
+
+Figure 2: Distribution of values in the dimension with the highest variance in layer 11 of each model across a sample of 10k tokens from English Wikipedia. Each color corresponds to a specific type/position. The orange distribution is tokens which occur in position zero, the blue distribution is instances of the ". ." token, and green is instances of all other tokens. Results for all layers can be found in Figures 8 and 9 of the appendix.
+
+
+# 5.1 Postprocessing
+
+Standardization: We have observed that a small subset of dimensions with means far from zero and high variance completely dominate cosine similarity. A straightforward way to adjust for this is to subtract the mean vector and divide each dimension by its standard deviation, such that each dimension has $\mu_{i} = 0$ and $\sigma_{i} = 1$ . Concretely, given some corpus of length $|\mathcal{O}|$ containing word representations $x \in \mathbb{R}^d$ , we compute the mean vector $\mu \in \mathbb{R}^d$
+
+$$
+\mu = \frac {1}{| \mathcal {O} |} \cdot \sum_ {x \in \mathcal {O}} x \tag {7}
+$$
+
+as well as the standard deviation in each dimension $\sigma \in \mathbb{R}^d$
+
+$$
+\sigma = \sqrt {\frac {1}{| \mathcal {O} |} \cdot \sum_ {x \in \mathcal {O}} (x - \mu) ^ {2}} \tag {8}
+$$
+
+Our new standardized representation for each word vector $(z)$ becomes the z-score in each dimension.
+
+$$
+z = \frac {x - \mu}{\sigma} \tag {9}
+$$
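Eqs. 7-9 amount to a few lines of NumPy. A minimal sketch on synthetic embeddings (a rogue dimension is planted at index 0 for illustration; the $\varepsilon$ guard against zero-variance dimensions is our addition):

```python
import numpy as np

def standardize(X, eps=1e-8):
    """Eqs. 7-9: z-score each dimension over the corpus, so every
    dimension has mean 0 and standard deviation 1."""
    mu = X.mean(axis=0)              # Eq. 7: mean vector over the corpus
    sigma = X.std(axis=0)            # Eq. 8: per-dimension standard deviation
    return (X - mu) / (sigma + eps)  # Eq. 9: z-scores

rng = np.random.default_rng(0)
X = rng.normal(size=(10000, 768))
X[:, 0] = rng.normal(180.0, 25.0, 10000)  # plant a rogue dimension

Z = standardize(X)
# Share of the expected squared norm held by the rogue dimension,
# before and after standardization (1/768 ≈ 0.0013 is the uniform share):
print((X[:, 0] ** 2).mean() / ((X ** 2).mean() * 768))  # dominant before
print((Z[:, 0] ** 2).mean() / ((Z ** 2).mean() * 768))  # ≈ 1/768 after
```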
+
+All-but-the-top: Following from similar observations (a nonzero common mean vector and a small number of dominant directions) in static embedding models, Mu and Viswanath (2018) proposed subtracting the common mean vector and eliminating the top few principal components (they suggested the top $\frac{d}{100}$ ), which should capture the variance of the rogue dimensions in the model $^{11}$ and make the space more isotropic.
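A sketch of the all-but-the-top procedure using an SVD to find the principal directions (synthetic data with one planted rogue dimension; real applications would fit the transform on a corpus of model embeddings):

```python
import numpy as np

def all_but_the_top(X, D):
    """Mu & Viswanath (2018): subtract the common mean vector, then
    project out the top D principal components."""
    Xc = X - X.mean(axis=0)
    # Right singular vectors of the centered matrix = principal directions.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    top = Vt[:D]                    # (D, d) top principal axes
    return Xc - (Xc @ top.T) @ top  # remove their projections

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 300))
X[:, 5] = rng.normal(100.0, 40.0, 2000)  # rogue dimension

Y = all_but_the_top(X, D=3)  # D = d/100 = 3 for d = 300
# Variance along the former rogue axis is essentially gone:
print(X[:, 5].std(), Y[:, 5].std())
```

The top principal component aligns with the planted rogue axis, so projecting it out removes almost all of that axis's variance.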
+
+Spearman's $\rho$ : Zhelezniak et al. (2019) treat word embeddings as $d$ observations from an $|\mathcal{O}|$ -variate distribution, and use Pearson correlation as a measure of similarity. They propose the use of non-parametric rank correlation coefficients, such as Spearman's $\rho$ , when embeddings depart from normality. Spearman correlation is simply Pearson correlation computed between the ranks of embeddings, rather than their values. Thus Spearman correlation can also be thought of as a postprocessing technique, where instead of standardizing the space or removing the top components, we simply transform embeddings as $x' = \text{rank}(x)$ . Spearman's $\rho$ is robust to outliers and thus will not be dominated by the rogue dimensions of contextual language models. Unlike standardization and all-but-the-top, Spearman correlation requires no computations over the entire corpus. While rank-based similarity measures will not be dominated by rogue dimensions, rogue dimensions will tend to occupy the top or bottom ranks.
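The rank-transform view of Spearman's $\rho$ can be sketched directly (a minimal implementation that ignores ties, which suffices for continuous embeddings; the planted outlier dimension is illustrative):

```python
import numpy as np

def ranks(x):
    """Map each value to its rank 0..d-1 (ties ignored for simplicity)."""
    r = np.empty(len(x))
    r[np.argsort(x)] = np.arange(len(x))
    return r

def pearson(u, v):
    return float(np.corrcoef(u, v)[0, 1])

def spearman(u, v):
    # Spearman's rho = Pearson correlation of the rank-transformed vectors.
    return pearson(ranks(u), ranks(v))

rng = np.random.default_rng(0)
u, v = rng.normal(size=768), rng.normal(size=768)
u_out, v_out = u.copy(), v.copy()
u_out[0] += 500.0
v_out[0] += 500.0  # one shared outlier ("rogue") dimension

print(pearson(u_out, v_out))   # outlier forces Pearson toward 1
print(spearman(u_out, v_out))  # rank correlation barely moves
```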
+
+# 5.2 Representational Quality
+
+While we have shown that cosine similarity is dominated by a small subset of dimensions, a remaining question is whether adjusting for these dimensions makes similarity measures more informative. In particular, we evaluate whether the cosine similarities between word pairs align more closely with human similarity judgments after postprocessing. We evaluate this using 4 word similarity/relatedness judgment datasets: RG65 (Rubenstein and Goodenough, 1965), WS353 (Agirre et al., 2009), SIMLEX999 (Hill et al., 2015) and SIMVERB3500 (Gerz et al., 2016). Examples in these datasets consist of a pair of words and a corresponding similarity rating averaged over several human annotators. Because the similarity judgments were designed to evaluate static embeddings, we use the context-aggregation strategy of Bommasani et al. (2020) to produce static representations. $^{12}$
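The core of context aggregation is averaging a word's contextual vectors over many occurrences. A simplified, hedged sketch (subword pooling and corpus details of Bommasani et al. (2020) are omitted; the occurrence counts and noise scale below are hypothetical):

```python
import numpy as np

def aggregate(contextual_vectors):
    """Average a word's contextual representations across occurrences
    to obtain a single static vector (simplified context aggregation)."""
    return np.mean(contextual_vectors, axis=0)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical: 50 contextual occurrences of each of two words, d = 768.
rng = np.random.default_rng(0)
base_a, base_b = rng.normal(size=768), rng.normal(size=768)
occ_a = base_a + 0.5 * rng.normal(size=(50, 768))  # context variation
occ_b = base_b + 0.5 * rng.normal(size=(50, 768))

# Averaging suppresses context noise, recovering a stable static vector.
print(cosine(aggregate(occ_a), base_a))
print(cosine(aggregate(occ_a), aggregate(occ_b)))
```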
+
+For each model, we report the Spearman correlation between the model similarities and human similarity judgments, averaged across all 4 datasets. $^{13}$ We report the correlation for cosine similarities of the original embeddings, as well as for postprocessed embeddings using four strategies: standardization, all-but-the-top (removing the top 7 components), only subtracting the mean (the step common to both strategies), and Spearman correlation.
+
+# 5.3 Results
+
+Results are summarized in Figure 3. Our key findings are:
+
+Postprocessing aligns the embedding space more closely to human similarity judgments across almost all layers of all models. We found that standardization was the most successful post-processing method, showing consistent improvement over the original embeddings in all but the early layers of BERT.
+
+All-but-the-top was generally effective, though the resulting final layer of RoBERTa and GPT-2 exhibited poor correlation with human judgments, similar to the original embeddings. In pilot analyses, we found that all-but-the-top is highly dependent on the number of components removed, a hyperparameter $D$, which Mu and Viswanath (2018) suggest should be $\frac{d}{100}$. Removing only the first principal component in RoBERTa yielded a stronger correlation, but all-but-the-top did not significantly improve correlation with human judgments in the final layer of GPT-2 for any choice of $D$.
+
+Simply subtracting the mean vector also yielded substantial gains in most models, with the exception of the final layers of GPT-2 and XLNet. The rogue dimensions in the last layer of these two models have exceptionally high variance. While subtracting the mean made the space more isotropic as measured by cosine similarity, it did not reduce the variance of each dimension. We found, particularly in the final layers of GPT-2 and XLNet, that 1-3 dimensions drive the variability of cosine similarity, and that this was still the case after the mean vector was subtracted.
+
+Converting embeddings into ranks (Spearman correlation) also resulted in significantly stronger correlations with human judgments in all layers of all models, though these correlations were often weaker than those obtained with standardization or all-but-the-top.
+
+Representational quality is preserved across all layers. Previous work has suggested that the final layers of transformer language models are highly task-specific. Liu et al. (2019) showed that the middle layers of BERT outperform the final layers on language understanding tasks. Using a cosine-similarity-based text-generation evaluation metric, Zhang et al. (2020) showed a sharp drop in correlation with human judgments of machine translation quality in the final layers of various transformer language models. Similarly, Davis and van Schijndel (2020) used Representational Similarity Analysis (RSA) with Pearson correlation$^{14}$ and found that intermediate layers of GPT-2 and TransformerXL encode human-like implicit causality biases which are subsequently obscured in the final layers.
+
+Our findings suggest that linguistic representational quality (in this case, lexical semantics) is actually preserved in the final layers but is obscured by a small handful of rogue dimensions. After simple postprocessing, the later layers of each model correlate with human similarity judgments just as well as, if not better than, the intermediate layers. This finding reaffirms the need to carefully consider the representational geometry of a model before drawing conclusions about layerwise representational quality and the general linguistic knowledge these models encode.
+
+# 6 Discussion and Future Work
+
+Perhaps the most important direction for future work is designing and implementing language models which do not develop rogue dimensions in the first place. Gao et al. (2019) introduce a cosine regularization term during pretraining that improves the performance of transformer models on machine translation. Perhaps BERT or GPT models could similarly benefit from such regularization.
+
+A prerequisite for designing models without rogue dimensions is understanding how these dimensions arise over time. Contemporaneous work from Biś et al. (2021) provides a useful characterization of how degenerate representations may be learned, which largely focuses on token frequency, while Kovaleva et al. (2021) provide a characterization of how outliers impact model performance, attributing much of the problem to scaling factors in layer normalization, and Luo et al. (2021) make observations about the contribution of positional embeddings. In the present work, we observe strong correlations with specific tokens and positions. Unifying these accounts is an important task for future work. With the recent release of the MultiBERT checkpoints (Sellam et al., 2021), future work can uncover whether rogue dimensions are a coincidental property of some models, or whether they are a requisite for good performance. The MultiBERTs may also elucidate how these dimensions emerge during pretraining. While we empirically motivate a trivially simple transformation which corrects for rogue dimensions, we believe the most fruitful direction for future work is to build models whose representations require no post-hoc transformations. This would result in more interpretable embedding spaces and may additionally lead to models with better performance.
+
+Figure 3: Average correlation (Spearman's $\rho$) with human judgments on the four word similarity datasets, with and without postprocessing.
+
+# 7 Conclusion
+
+In this work, we showed that similarity measures in contextual language models largely reflect a small number of rogue dimensions, not the entire embedding space. Consequently, a few dimensions can drastically change the conclusions we draw about the linguistic phenomena a model actually captures. We showed that the previously observed anisotropy in contextual models is essentially an artifact of rogue dimensions and is not a global property of the entire embedding space. We also showed that variability in similarity is driven by just 1-5 dimensions of the embedding space. In many cases, removing just a single dimension completely changed which token pairs were similar to one another. However, we found that model behavior was not driven by these rogue dimensions, and that these dimensions seem to handle a small subset of a model's linguistic abilities, such as punctuation and positional information. In summary, standard similarity measures such as cosine similarity and Euclidean distance are not informative measures of how contextual language models represent and process language. We argue that measures of similarity in contextual language models must account for rogue dimensions using techniques such as standardization. These techniques should not just be viewed as avenues to improve downstream performance, but as prerequisites for any analysis involving representational similarity.
+
+# Acknowledgements
+
+We would like to thank Maria Antoniak, Valts Blukis, Forrest Davis, Liye Fu, Ge Gao, Tianze Shi, Ana Smith, Karen Zhou, members of the Cornell NLP Group and the Computational Psycholinguistics Discussions research group (C.Psyd) for their valuable feedback on earlier drafts of this work. We additionally thank Rishi Bommasani for productive, stimulating discussion. Finally, we thank the reviewers and area chairs for their detailed and insightful feedback.
+
+# References
+
+Maria A. Rodriguez and Paola Merlo. 2020. Word associations and the distance properties of context-aware word embeddings. In Proceedings of the 24th Conference on Computational Natural Language Learning, pages 376-385, Online. Association for Computational Linguistics.
+
+Samira Abnar, Lisa Beinborn, Rochelle Choenni, and Willem Zuidema. 2019. Blackbox meets blackbox: Representational similarity & stability analysis of neural language models and brains. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 191-203, Florence, Italy. Association for Computational Linguistics.
+Eneko Agirre, Enrique Alfonseca, Keith Hall, Jana Kravalova, Marius Paşca, and Aitor Soroa. 2009. A study on similarity and relatedness using distributional and WordNet-based approaches. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 19-27, Boulder, Colorado. Association for Computational Linguistics.
+Sebastian Bach, Alexander Binder, Grégoire Montavon, Frederick Klauschen, Klaus-Robert Müller, and Wojciech Samek. 2015. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLOS ONE, 10(7):1-46.
+Marco Baroni, Georgiana Dinu, and Germán Kruszewski. 2014. Don't count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 238-247, Baltimore, Maryland. Association for Computational Linguistics.
+Daniel Biś, Maksim Podkorytov, and Xiuwen Liu. 2021. Too much in common: Shifting of embeddings in transformer language models and its implications. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5117-5130, Online. Association for Computational Linguistics.
+Rishi Bommasani, Kelly Davis, and Claire Cardie. 2020. Interpreting Pretrained Contextualized Representations via Reductions to Static Embeddings. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4758-4781, Online. Association for Computational Linguistics.
+Xingyu Cai, Jiaji Huang, Yuchen Bian, and Kenneth Church. 2021. Isotropy in the contextual embedding space: Clusters and manifolds. In International Conference on Learning Representations.
+Gabriella Chronis and Katrin Erk. 2020. When is a bishop not like a rook? when it's like a rabbi! multiprototype BERT embeddings for estimating semantic relationships. In Proceedings of the 24th Conference on Computational Natural Language Learning, pages 227-244, Online. Association for Computational Linguistics.
+
+Grzegorz Chrupała and Afra Alishahi. 2019. Correlating neural and symbolic representations of language. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2952-2962, Florence, Italy. Association for Computational Linguistics.
+Forrest Davis and Marten van Schijndel. 2020. Discourse structure interacts with reference but not syntax in neural language models. In Proceedings of the 24th Conference on Computational Natural Language Learning, pages 396-407, Online. Association for Computational Linguistics.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
+Kawin Ethayarajh. 2019. How contextual are contextualized word representations? Comparing the geometry of BERT, ELMo, and GPT-2 embeddings. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 55–65, Hong Kong, China. Association for Computational Linguistics.
+Jun Gao, Di He, Xu Tan, Tao Qin, Liwei Wang, and Tieyan Liu. 2019. Representation degeneration problem in training natural language generation models. In International Conference on Learning Representations.
+Marcos Garcia, Tiago Kramer Vieira, Carolina Scarton, Marco Idiart, and Aline Villavicencio. 2021. Probing for idiomaticity in vector space models. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 3551-3564, Online. Association for Computational Linguistics.
+Daniela Gerz, Ivan Vulić, Felix Hill, Roi Reichart, and Anna Korhonen. 2016. SimVerb-3500: A large-scale evaluation set of verb similarity. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2173-2182, Austin, Texas. Association for Computational Linguistics.
+Felix Hill, Roi Reichart, and Anna Korhonen. 2015. SimLex-999: Evaluating semantic models with (genuine) similarity estimation. Computational Linguistics, 41(4):665-695.
+Olga Kovaleva, Saurabh Kulshreshtha, Anna Rogers, and Anna Rumshisky. 2021. BERT busters: Outlier dimensions that disrupt transformers. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 3392-3405, Online. Association for Computational Linguistics.
+Bohan Li, Hao Zhou, Junxian He, Mingxuan Wang, Yiming Yang, and Lei Li. 2020. On the sentence embeddings from pre-trained language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9119-9130, Online. Association for Computational Linguistics.
+Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, and Noah A. Smith. 2019. Linguistic knowledge and transferability of contextual representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1073-1094, Minneapolis, Minnesota. Association for Computational Linguistics.
+Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2020. RoBERTa: A robustly optimized BERT pretraining approach.
+Ziyang Luo, Artur Kulmizev, and Xiaoxi Mao. 2021. Positional artefacts propagate through masked language model embeddings. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5312-5327, Online. Association for Computational Linguistics.
+Matej Martinc, Petra Kralj Novak, and Senja Pollak. 2020. Leveraging contextual embeddings for detecting diachronic semantic shift. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4811-4819, Marseille, France. European Language Resources Association.
+Chandler May, Alex Wang, Shikha Bordia, Samuel R. Bowman, and Rachel Rudinger. 2019. On measuring social biases in sentence encoders. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 622-628, Minneapolis, Minnesota. Association for Computational Linguistics.
+Amil Merchant, Elahe Rahimtoroghi, Ellie Pavlick, and Ian Tenney. 2020. What happens to BERT embeddings during fine-tuning? In Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 33-44, Online. Association for Computational Linguistics.
+Timothee Mickus, Denis Paperno, Mathieu Constant, and Kees van Deemter. 2020. What do you mean, BERT? In Proceedings of the Society for Computation in Linguistics 2020, pages 279-290, New York, New York. Association for Computational Linguistics.
+
+Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, volume 26. Curran Associates, Inc.
+Ari S. Morcos, David G.T. Barrett, Neil C. Rabinowitz, and Matthew Botvinick. 2018. On the importance of single directions for generalization. In International Conference on Learning Representations.
+Jiaqi Mu and Pramod Viswanath. 2018. All-but-thetop: Simple and effective postprocessing for word representations. In International Conference on Learning Representations.
+Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.
+Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
+Emily Reif, Ann Yuan, Martin Wattenberg, Fernanda B Viegas, Andy Coenen, Adam Pearce, and Been Kim. 2019. Visualizing and measuring the geometry of BERT. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc.
+Herbert Rubenstein and John B. Goodenough. 1965. Contextual correlates of synonymy. Commun. ACM, 8(10):627-633.
+Tobias Schnabel, Igor Labutov, David Mimno, and Thorsten Joachims. 2015. Evaluation methods for unsupervised word embeddings. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 298-307, Lisbon, Portugal. Association for Computational Linguistics.
+Thibault Sellam, Steve Yadlowsky, Jason Wei, Naomi Saphra, Alexander D'Amour, Tal Linzen, Jasmijn Bastings, Iulia Turc, Jacob Eisenstein, Dipanjan Das, Ian Tenney, and Ellie Pavlick. 2021. The MultiBERTs: BERT reproductions for robustness analysis.
+Aina Garí Soler and Marianna Apidianaki. 2021. Let's play mono-poly: BERT can reveal words' polysemy level and partitionability into senses.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. CoRR, abs/1706.03762.
+Elena Voita, Rico Sennrich, and Ivan Titov. 2020. Analyzing the source and target contributions to predictions in neural machine translation.
+
+Ivan Vulić, Edoardo Maria Ponti, Robert Litschko, Goran Glavaš, and Anna Korhonen. 2020. Probing pretrained language models for lexical semantics. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7222-7240, Online. Association for Computational Linguistics.
+
+Bin Wang, Angela Wang, Fenxiao Chen, Yuncheng Wang, and C.-C. Jay Kuo. 2019. Evaluating word embedding models: methods and experimental results. APSIPA Transactions on Signal and Information Processing, 8.
+
+Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc.
+
+Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. BERTScore: Evaluating text generation with BERT. In International Conference on Learning Representations.
+
+Vitalii Zhelezniak, Aleksandar Savkov, April Shen, and Nils Hammerla. 2019. Correlation coefficients and semantic textual similarity. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 951-962, Minneapolis, Minnesota. Association for Computational Linguistics.
+
+# A Removing Dominant Dimensions and Representational Geometry
+
+To facilitate a direct comparison with the anisotropy estimates of Ethayarajh (2019), we replicate the experiments of Section 4 before and after removing the top $k$ dimensions with the largest $\mathbb{E}[CC_i]$. For these experiments we remove $k = 5$ dimensions. Results for anisotropy estimates are shown in Figure 4. Three key takeaways from this analysis are:
+
+All models tested had highly anisotropic representations, including XLNet and RoBERTa which had not been evaluated in previous work. XLNet is even more anisotropic than GPT-2 in its final two layers. RoBERTa's word representations are likewise highly anisotropic, though starting in earlier layers than in XLNet and BERT.
+
+After removing just 5 dimensions, embeddings become relatively isotropic, with $\hat{A}(f_{\ell})$ never larger than 0.25 in any layer of any model.
+
+Anisotropy becomes consistent across models and across layers, suggesting that the deviant dimensions that drive anisotropy are idiosyncratic and model/layer-specific; we show this to indeed be the case in Section 4. By contrast, the geometry of the embedding space without rogue dimensions shows similar properties across models/layers, suggesting that the shared qualities of the representational geometries of these models are obscured by their rogue dimensions.
+
+This can additionally be seen in our replication of the intra-sentence similarity and self-similarity experiments from Ethayarajh (2019). While they find extreme cases in which words of the same type are no more similar to one another than randomly sampled words, we find a consistently high degree of self-similarity across all layers of all models after removing 5 dimensions. This suggests that information about word identity is preserved across all layers, rather than giving way to extremely contextualized representations in the final layer; this concurs with our findings in Section 5. Together, these results show that our conclusions about the geometry of contextual embedding spaces are heavily skewed by the sensitivity of cosine similarity to the rogue dimensions present in each of these models.
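The procedure underlying these replications (estimate each dimension's expected contribution $\mathbb{E}[CC_i]$ over random token pairs, remove the top $k$ dimensions, and re-estimate anisotropy) can be sketched in a few lines. The function names and toy data below are ours, and $\hat{A}(f_{\ell})$ is approximated over randomly sampled pairs rather than computed exactly:

```python
import numpy as np

def contributions(X, n_pairs=10_000, seed=0):
    """Estimate E[CC_i]: each dimension's expected contribution to the
    cosine similarity of randomly sampled embedding pairs."""
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(X), n_pairs)
    j = rng.integers(0, len(X), n_pairs)
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    return (Xn[i] * Xn[j]).mean(axis=0)          # shape (d,)

def anisotropy(X, remove_top_k=0):
    """Anisotropy estimate: the sum over dimensions of E[CC_i],
    optionally after removing the k dimensions with largest E[CC_i]."""
    if remove_top_k:
        keep = np.argsort(contributions(X))[:-remove_top_k]
        X = X[:, keep]
    return contributions(X).sum()

# Toy embeddings with one rogue dimension shared across all tokens.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 50))
X[:, 0] += 20.0
```

With this toy space, `anisotropy(X)` is high, and removing a single dimension brings it near zero, qualitatively mirroring the left/right panels of Figure 4.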
+
+# B Informativity of Euclidean Distance
+
+In this section, we conduct an analysis similar to that of Section 3.2 to see whether the variability in Euclidean distances between pairs of embeddings can be explained by Euclidean distance when the top $k$ dimensions are removed. Our methods for this analysis are identical to those of Section 3.2, except that our criterion for choosing which dimensions to remove is the variance of each dimension. Results are shown in Table 3. In the extreme case of XLNet, almost none of the variability in Euclidean distances can be explained by Euclidean distance when a single dimension is removed. This means that Euclidean distance in this layer is effectively a measure of a single dimension.
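A sketch of this analysis (the function names and toy data are illustrative, and pairs are sampled rather than enumerated): remove the $k$ highest-variance dimensions and compute $r^2$ between the full-space and reduced-space pairwise Euclidean distances.

```python
import numpy as np

def euclidean_r2(X, k, n_pairs=5_000, seed=0):
    """r^2 between pairwise Euclidean distances in the full space and
    in the space with the k highest-variance dimensions removed."""
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(X), n_pairs)
    j = rng.integers(0, len(X), n_pairs)
    mask = i != j                                # drop self-pairs
    i, j = i[mask], j[mask]
    d_full = np.linalg.norm(X[i] - X[j], axis=1)
    keep = np.argsort(X.var(axis=0))[:-k]        # drop k largest variances
    d_reduced = np.linalg.norm(X[i][:, keep] - X[j][:, keep], axis=1)
    return np.corrcoef(d_full, d_reduced)[0, 1] ** 2

# Isotropic toy space: distances survive removal of one dimension.
rng = np.random.default_rng(3)
X_iso = rng.normal(size=(500, 50))

# One huge-variance dimension: the full-space distance now measures
# little besides that single dimension.
X_rogue = X_iso.copy()
X_rogue[:, 0] *= 100.0
```

In this toy setup, `euclidean_r2(X_iso, k=1)` stays close to 1, while for `X_rogue` it collapses toward 0, the pattern Table 3 shows for the final layers of XLNet.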
+
+
+Average Cosine Similarity between Randomly Sampled Words
+Figure 4: Anisotropy by layer of the full embedding space (left) and with the top 5 dimensions removed, as measured by $\mathbb{E}[CC_i]$ (right). In all models, anisotropy drastically decreases, and becomes more consistent across models and layers.
+
+
+
+
+Average Intra-Sentence Similarity (anisotropy-adjusted)
+Figure 5: Intra-sentence similarity by layer of the full embedding space (left) and with the top 5 dimensions removed, as measured by $\mathbb{E}[CC_i]$ (right). Intra-sentence similarity is much more consistent and monotonically increasing when the top 5 dimensions are removed.
+
+
+
+
+Average Self-Similarity (anisotropy-adjusted)
+Figure 6: Average self-similarity (similarity of the same word type across contexts) by layer of the full embedding space (left) and with the top 5 dimensions removed, as measured by $\mathbb{E}[CC_i]$ (right). In the full embedding space, words of the same type in GPT-2 and XLNet appear no more similar to one another than randomly-sampled tokens. When we remove just 5 dimensions, words of the same type are indeed more similar to one another than the random baseline.
+
+
+
+
+Figure 7: Relative contribution of each dimension to cosine similarity (top) paired with its relative influence on model behavior (bottom) for layers 9-11 of each model (xlnet-base-cased, roberta-base, gpt2, and bert-base-cased).
+
+
+Figure 8: Distribution of activations in the dimension with highest variance in layers 0-6 of each model across a sample of 10k tokens. Each color corresponds to a specific type/position, where the orange distribution is tokens occurring in position zero, the blue distribution is instances of the ".." token, and green is all other tokens. In many cases, there are two clear modes in each distribution, where one corresponds to a specific word type or position. Additionally, this behavior tends to persist within the same dimension number across layers, which is facilitated by the residual connections present in each model.
+
+
+
+
+
+
+
+
+
+
+
+
+Figure 9: Distribution of activations in the dimension with highest variance in layers 7-12 of each model across a sample of 10k tokens. Each color corresponds to a specific type/position, where the orange distribution is tokens occurring in position zero, the blue distribution is instances of the ".." token, and green is all other tokens. In many cases, there are two clear modes in each distribution, where one corresponds to a specific word type or position. Additionally, this behavior tends to persist within the same dimension number across layers, which is facilitated by the residual connections present in each model.
+
+
+Figure 10: Average correlation (Spearman's $\rho$ ) with human judgements on each word similarity dataset, with and without postprocessing for GPT-2
+
+
+Figure 11: Average correlation (Spearman's $\rho$ ) with human judgements on each word similarity dataset, with and without postprocessing for BERT
+
+
+Figure 12: Average correlation (Spearman's $\rho$ ) with human judgements on each word similarity dataset, with and without postprocessing for RoBERTa
+
+
+Figure 13: Average correlation (Spearman's $\rho$ ) with human judgements on each word similarity dataset, with and without postprocessing for XLNet
+
| Model | Layer | k=1 | k=3 | k=5 |
| --- | --- | --- | --- | --- |
| GPT-2 | 0 | 0.999 | 0.996 | 0.996 |
| | 1 | 0.983 | 0.975 | 0.975 |
| | 2 | 0.999 | 0.783 | 0.783 |
| | 3 | 0.992 | 0.257 | 0.257 |
| | 4 | 0.993 | 0.200 | 0.200 |
| | 5 | 0.993 | 0.159 | 0.159 |
| | 6 | 0.993 | 0.090 | 0.090 |
| | 7 | 0.992 | 0.037 | 0.037 |
| | 8 | 0.990 | 0.007 | 0.007 |
| | 9 | 0.990 | 0.002 | 0.002 |
| | 10 | 0.986 | 0.022 | 0.022 |
| | 11 | 0.971 | 0.974 | 0.974 |
| | 12 | 0.909 | 0.333 | 0.333 |
| BERT | 0 | 0.997 | 0.997 | 0.997 |
| | 1 | 0.994 | 0.993 | 0.993 |
| | 2 | 0.993 | 0.992 | 0.992 |
| | 3 | 0.994 | 0.993 | 0.993 |
| | 4 | 0.988 | 0.987 | 0.987 |
| | 5 | 0.992 | 0.991 | 0.991 |
| | 6 | 0.988 | 0.987 | 0.987 |
| | 7 | 0.982 | 0.981 | 0.981 |
| | 8 | 0.969 | 0.968 | 0.968 |
| | 9 | 0.925 | 0.924 | 0.924 |
| | 10 | 0.762 | 0.761 | 0.761 |
| | 11 | 0.434 | 0.433 | 0.433 |
| | 12 | 0.990 | 0.989 | 0.989 |
| RoBERTa | 0 | 0.810 | 0.770 | 0.770 |
| | 1 | 0.509 | 0.264 | 0.264 |
| | 2 | 0.584 | 0.141 | 0.141 |
| | 3 | 0.607 | 0.152 | 0.152 |
| | 4 | 0.657 | 0.200 | 0.200 |
| | 5 | 0.623 | 0.225 | 0.225 |
| | 6 | 0.641 | 0.242 | 0.242 |
| | 7 | 0.614 | 0.241 | 0.241 |
| | 8 | 0.578 | 0.235 | 0.235 |
| | 9 | 0.591 | 0.270 | 0.270 |
| | 10 | 0.575 | 0.281 | 0.281 |
| | 11 | 0.591 | 0.319 | 0.319 |
| | 12 | 0.566 | 0.301 | 0.301 |
| XLNet | 0 | 0.999 | 0.996 | 0.996 |
| | 1 | 1.000 | 1.000 | 1.000 |
| | 2 | 1.000 | 0.987 | 0.987 |
| | 3 | 0.993 | 0.992 | 0.992 |
| | 4 | 0.983 | 0.978 | 0.978 |
| | 5 | 0.903 | 0.896 | 0.896 |
| | 6 | 0.481 | 0.470 | 0.470 |
| | 7 | 0.432 | 0.426 | 0.426 |
| | 8 | 0.235 | 0.236 | 0.236 |
| | 9 | 0.321 | 0.323 | 0.323 |
| | 10 | 0.308 | 0.307 | 0.307 |
| | 11 | 0.124 | 0.150 | 0.150 |
| | 12 | 0.028 | 0.024 | 0.024 |
+
+Table 3: Proportion of variance in Euclidean distance ($r^2$) explained by Euclidean distance when the top $k$ dimensions (ranked by the variance of each dimension) are removed.
+
| Model | Layer | 1 | 2 | 3 | $\hat{A}(f_{\ell})$ |
| --- | --- | --- | --- | --- | --- |
| GPT-2 | 0 | 0.054 | 0.051 | 0.051 | 0.484 |
| | 1 | 0.324 | 0.163 | 0.150 | 0.626 |
| | 2 | 0.319 | 0.205 | 0.149 | 0.612 |
| | 3 | 0.294 | 0.264 | 0.145 | 0.589 |
| | 4 | 0.297 | 0.275 | 0.151 | 0.549 |
| | 5 | 0.324 | 0.258 | 0.150 | 0.517 |
| | 6 | 0.351 | 0.237 | 0.148 | 0.485 |
| | 7 | 0.374 | 0.205 | 0.144 | 0.466 |
| | 8 | 0.376 | 0.156 | 0.141 | 0.461 |
| | 9 | 0.364 | 0.190 | 0.157 | 0.466 |
| | 10 | 0.326 | 0.257 | 0.207 | 0.498 |
| | 11 | 0.275 | 0.269 | 0.265 | 0.640 |
| | 12 | 0.763 | 0.131 | 0.078 | 0.885 |
| BERT | 0 | 0.159 | 0.076 | 0.035 | 0.066 |
| | 1 | 0.541 | 0.049 | 0.024 | 0.154 |
| | 2 | 0.790 | 0.006 | 0.005 | 0.224 |
| | 3 | 0.792 | 0.006 | 0.004 | 0.234 |
| | 4 | 0.781 | 0.007 | 0.005 | 0.283 |
| | 5 | 0.809 | 0.007 | 0.005 | 0.360 |
| | 6 | 0.792 | 0.005 | 0.004 | 0.382 |
| | 7 | 0.716 | 0.006 | 0.005 | 0.342 |
| | 8 | 0.668 | 0.006 | 0.006 | 0.326 |
| | 9 | 0.743 | 0.004 | 0.004 | 0.380 |
| | 10 | 0.817 | 0.004 | 0.003 | 0.396 |
| | 11 | 0.884 | 0.003 | 0.002 | 0.506 |
| | 12 | 0.686 | 0.005 | 0.005 | 0.370 |
| RoBERTa | 0 | 0.726 | 0.040 | 0.021 | 0.143 |
| | 1 | 0.850 | 0.081 | 0.009 | 0.442 |
| | 2 | 0.862 | 0.093 | 0.013 | 0.627 |
| | 3 | 0.841 | 0.113 | 0.017 | 0.659 |
| | 4 | 0.796 | 0.146 | 0.023 | 0.666 |
| | 5 | 0.775 | 0.160 | 0.025 | 0.672 |
| | 6 | 0.745 | 0.180 | 0.030 | 0.679 |
| | 7 | 0.726 | 0.193 | 0.032 | 0.705 |
| | 8 | 0.674 | 0.229 | 0.038 | 0.690 |
| | 9 | 0.648 | 0.254 | 0.040 | 0.675 |
| | 10 | 0.698 | 0.223 | 0.032 | 0.689 |
| | 11 | 0.666 | 0.252 | 0.031 | 0.696 |
| | 12 | 0.663 | 0.262 | 0.020 | 0.745 |
| XLNet | 0 | 0.300 | 0.043 | 0.028 | 0.037 |
| | 1 | 0.085 | 0.059 | 0.036 | 0.022 |
| | 2 | 0.042 | 0.031 | 0.016 | 0.050 |
| | 3 | 0.157 | 0.013 | 0.011 | 0.051 |
| | 4 | 0.413 | 0.017 | 0.009 | 0.169 |
| | 5 | 0.700 | 0.005 | 0.004 | 0.177 |
| | 6 | 0.908 | 0.003 | 0.002 | 0.514 |
| | 7 | 0.942 | 0.001 | 0.001 | 0.563 |
| | 8 | 0.982 | 0.000 | 0.000 | 0.826 |
| | 9 | 0.984 | 0.000 | 0.000 | 0.833 |
| | 10 | 0.990 | 0.000 | 0.000 | 0.887 |
| | 11 | 0.996 | 0.001 | 0.000 | 0.981 |
| | 12 | 0.973 | 0.003 | 0.002 | 0.884 |
+
+Table 4: Proportion of total expected cosine similarity, $CC(f_{\ell}^{i}) / \hat{A}(f_{\ell})$ , contributed by each of the top 3 dimensions for all layers of each model, along with the anisotropy estimate $\hat{A}(f_{\ell})$ for the given layer.
+
| Model | Layer | k=1 | k=3 | k=5 |
| --- | --- | --- | --- | --- |
| GPT-2 | 0 | 0.999 | 0.996 | 0.996 |
| | 1 | 0.985 | 0.888 | 0.888 |
| | 2 | 0.990 | 0.899 | 0.899 |
| | 3 | 0.991 | 0.849 | 0.849 |
| | 4 | 0.910 | 0.775 | 0.775 |
| | 5 | 0.872 | 0.719 | 0.719 |
| | 6 | 0.853 | 0.684 | 0.684 |
| | 7 | 0.862 | 0.713 | 0.713 |
| | 8 | 0.894 | 0.797 | 0.797 |
| | 9 | 0.921 | 0.490 | 0.490 |
| | 10 | 0.947 | 0.428 | 0.428 |
| | 11 | 0.967 | 0.352 | 0.352 |
| | 12 | 0.819 | 0.232 | 0.232 |
| BERT | 0 | 0.999 | 0.997 | 0.997 |
| | 1 | 0.894 | 0.848 | 0.848 |
| | 2 | 0.580 | 0.568 | 0.568 |
| | 3 | 0.514 | 0.504 | 0.504 |
| | 4 | 0.459 | 0.449 | 0.449 |
| | 5 | 0.383 | 0.374 | 0.374 |
| | 6 | 0.343 | 0.338 | 0.338 |
| | 7 | 0.391 | 0.394 | 0.394 |
| | 8 | 0.400 | 0.398 | 0.398 |
| | 9 | 0.219 | 0.220 | 0.220 |
| | 10 | 0.119 | 0.123 | 0.123 |
| | 11 | 0.046 | 0.048 | 0.048 |
| | 12 | 0.213 | 0.214 | 0.214 |
| RoBERTa | 0 | 0.810 | 0.770 | 0.770 |
| | 1 | 0.509 | 0.264 | 0.264 |
| | 2 | 0.584 | 0.141 | 0.141 |
| | 3 | 0.607 | 0.152 | 0.152 |
| | 4 | 0.657 | 0.200 | 0.200 |
| | 5 | 0.623 | 0.225 | 0.225 |
| | 6 | 0.641 | 0.242 | 0.242 |
| | 7 | 0.614 | 0.241 | 0.241 |
| | 8 | 0.578 | 0.235 | 0.235 |
| | 9 | 0.591 | 0.270 | 0.270 |
| | 10 | 0.575 | 0.281 | 0.281 |
| | 11 | 0.591 | 0.319 | 0.319 |
| | 12 | 0.566 | 0.301 | 0.301 |
| XLNet | 0 | 0.999 | 0.996 | 0.996 |
| | 1 | 1.000 | 1.000 | 1.000 |
| | 2 | 1.000 | 0.987 | 0.987 |
| | 3 | 0.993 | 0.992 | 0.992 |
| | 4 | 0.983 | 0.978 | 0.978 |
| | 5 | 0.903 | 0.896 | 0.896 |
| | 6 | 0.481 | 0.470 | 0.470 |
| | 7 | 0.432 | 0.426 | 0.426 |
| | 8 | 0.235 | 0.236 | 0.236 |
| | 9 | 0.321 | 0.323 | 0.323 |
| | 10 | 0.308 | 0.307 | 0.307 |
| | 11 | 0.124 | 0.150 | 0.150 |
| | 12 | 0.028 | 0.024 | 0.024 |
+
+Table 5: Proportion of variance in cosine similarity ($r^2$) explained by cosine similarity when the top $k$ dimensions (ranked by cosine similarity contribution) are removed. Layer 0 is the static embedding layer.
\ No newline at end of file
diff --git a/allbarkandnobiteroguedimensionsintransformerlanguagemodelsobscurerepresentationalquality/images.zip b/allbarkandnobiteroguedimensionsintransformerlanguagemodelsobscurerepresentationalquality/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..50aa18090d5d069d4c92d3e9acf05a6a0726aa00
--- /dev/null
+++ b/allbarkandnobiteroguedimensionsintransformerlanguagemodelsobscurerepresentationalquality/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bb13e17fa2f7182d7aee554abc207214ab511783a62507ead0dfa59915498613
+size 1616968
diff --git a/allbarkandnobiteroguedimensionsintransformerlanguagemodelsobscurerepresentationalquality/layout.json b/allbarkandnobiteroguedimensionsintransformerlanguagemodelsobscurerepresentationalquality/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..2e827f5efa98a11a439b0656ec31b0a9b47dd9a4
--- /dev/null
+++ b/allbarkandnobiteroguedimensionsintransformerlanguagemodelsobscurerepresentationalquality/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3d47a5bb0e791bc30bc55e4cd7f76aa306c21dd017e1326827cca49756f522f0
+size 585452
diff --git a/allocatinglargevocabularycapacityforcrosslinguallanguagemodelpretraining/93dba626-5f56-42b8-b820-220fb2b217ec_content_list.json b/allocatinglargevocabularycapacityforcrosslinguallanguagemodelpretraining/93dba626-5f56-42b8-b820-220fb2b217ec_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..74771a83440b346b4b7c59b23d8e94b0e7421fc7
--- /dev/null
+++ b/allocatinglargevocabularycapacityforcrosslinguallanguagemodelpretraining/93dba626-5f56-42b8-b820-220fb2b217ec_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:83f04f70de74b6d25b5511d90dd3921f6a56a9166da97702230eabcac8045c9d
+size 90098
diff --git a/allocatinglargevocabularycapacityforcrosslinguallanguagemodelpretraining/93dba626-5f56-42b8-b820-220fb2b217ec_model.json b/allocatinglargevocabularycapacityforcrosslinguallanguagemodelpretraining/93dba626-5f56-42b8-b820-220fb2b217ec_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..e104ea863c78ca582fc61802db7f6be47ab7b0cf
--- /dev/null
+++ b/allocatinglargevocabularycapacityforcrosslinguallanguagemodelpretraining/93dba626-5f56-42b8-b820-220fb2b217ec_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f0f91509e40b1453c5d5266aa494f9a1257c85b711fc0e4aded806267eb853bd
+size 107474
diff --git a/allocatinglargevocabularycapacityforcrosslinguallanguagemodelpretraining/93dba626-5f56-42b8-b820-220fb2b217ec_origin.pdf b/allocatinglargevocabularycapacityforcrosslinguallanguagemodelpretraining/93dba626-5f56-42b8-b820-220fb2b217ec_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..ba5c17e08f088f1bb98e006a90ffd84304d34b9d
--- /dev/null
+++ b/allocatinglargevocabularycapacityforcrosslinguallanguagemodelpretraining/93dba626-5f56-42b8-b820-220fb2b217ec_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3c648b6f740509524bc95532783482bf87d1adfd73f83fa8a52f8cfcb560e808
+size 684877
diff --git a/allocatinglargevocabularycapacityforcrosslinguallanguagemodelpretraining/full.md b/allocatinglargevocabularycapacityforcrosslinguallanguagemodelpretraining/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..bdbe3b1e23cc4b5216be88e95d55aa79cd63e027
--- /dev/null
+++ b/allocatinglargevocabularycapacityforcrosslinguallanguagemodelpretraining/full.md
@@ -0,0 +1,342 @@
+# Allocating Large Vocabulary Capacity for Cross-lingual Language Model Pre-training
+
+Bo Zheng†, Li Dong‡, Shaohan Huang‡, Saksham Singhal‡, Wanxiang Che†, Ting Liu†, Xia Song‡, Furu Wei‡
+†Harbin Institute of Technology
+‡Microsoft Corporation
+{bzheng, car, tliu}@ir.hit.edu.cn
+{lidong1, shaohanh, saksingh, xiao, fuwei}@microsoft.com
+
+# Abstract
+
+Compared to monolingual models, cross-lingual models usually require a more expressive vocabulary to represent all languages adequately. We find that many languages are under-represented in recent cross-lingual language models due to the limited vocabulary capacity. To this end, we propose an algorithm, VOCAP, to determine the desired vocabulary capacity of each language. However, increasing the vocabulary size significantly slows down the pre-training speed. To address this issue, we propose $k$ -NN-based target sampling to accelerate the expensive softmax. Our experiments show that the multilingual vocabulary learned with VOCAP benefits cross-lingual language model pre-training. Moreover, $k$ -NN-based target sampling mitigates the side effects of increasing the vocabulary size while achieving comparable performance and faster pre-training speed. The code and the pretrained multilingual vocabularies are available at https://github.com/bozheng-hit/VoCapXLM.
+
+# 1 Introduction
+
+Pretrained cross-lingual language models (Conneau and Lample, 2019; Conneau et al., 2020; Chi et al., 2021b; Xue et al., 2020) have recently shown great success in improving cross-lingual transferability. These models encode texts from different languages into universal representations with a shared multilingual vocabulary and a shared Transformer encoder (Vaswani et al., 2017). By pre-training cross-lingual language models on a large-scale multilingual corpus, the models achieve state-of-the-art performance on various downstream tasks, e.g., cross-lingual question answering and cross-lingual sentence classification.
+
+Although the Transformer architectures used in most pretrained monolingual and cross-lingual language models are almost identical, the vocabularies are quite different. The vocabulary sizes in existing pretrained monolingual language models typically range from 30K to 60K subword units (Devlin et al., 2019; Liu et al., 2019; Dong et al., 2019; Bao et al., 2020). Meanwhile, state-of-the-art pretrained cross-lingual language models use a shared multilingual vocabulary of 250K subword units to represent more than 100 languages (Conneau et al., 2020; Chi et al., 2021b; Xue et al., 2020). Although some subword units are shared across languages, no more than 2.5K language-specific subword units on average are allocated to each language, which is still relatively small. Besides, the multilingual vocabulary is trained on the combined multilingual corpus with subword segmentation algorithms like BPE (Sennrich et al., 2015) and the unigram language model (Kudo, 2018). During vocabulary construction, these algorithms tend to select more subword units shared across languages with common scripts like Latin and Cyrillic (Chung et al., 2020b), but have a lower chance of selecting language-specific subword units. It is hard to determine how much vocabulary capacity a particular language requires and whether the shared multilingual vocabulary has allocated enough capacity to represent the language.
+
+In this paper, we propose VOCAP, an algorithm to allocate a large vocabulary for cross-lingual language models by separately evaluating the required vocabulary capacity of each language. First, we use the average log probability (ALP) to evaluate the ability of a vocabulary to represent a particular language. We find that ALP is highly correlated with downstream task performance, and we use it as an indicator to allocate language-specific vocabulary capacity. In addition, the language-specific pre-training corpus size should also be considered, since the pretrained model can only learn limited knowledge from low-resource languages where pre-training data is scarce; allocating too much vocabulary capacity to low-resource languages is therefore inefficient. VOCAP leverages both ALP and pre-training corpus size to evaluate the required vocabulary capacity of each language. We finally allocate a multilingual vocabulary with 500K subword units with VOCAP and show that it significantly improves model performance.
+
+However, increasing the vocabulary size has two practical drawbacks: slow pre-training speed and heavy model size. To address the pre-training speed issue, we propose $k$ -NN-based target sampling, an approximate algorithm that improves the computing efficiency of the expensive softmax caused by the large vocabulary. We pre-train the model with a small subset of the entire vocabulary, constructed from the $k$ nearest neighbors of the target words in the current mini-batch, measured by the inner product of subword embeddings. As for the model size, we halve the embedding dimension and draw a different conclusion from Conneau et al. (2020): increasing the vocabulary from 250K to 500K under a fixed model capacity can also improve the performance.
+
+Our contributions are summarized as follows:
+
+- We propose VOCAP, an algorithm to allocate appropriate vocabulary capacity for each language in the shared multilingual vocabulary of cross-lingual language models.
+- We propose $k$ -NN-based target sampling, a softmax approximation algorithm to improve the computing efficiency during cross-lingual language model pre-training.
+- We evaluate our methods on the XTREME benchmark (Hu et al., 2020), including three different tasks on seven datasets. Experiments show that VOCAP consistently outperforms previous vocabulary construction methods. Meanwhile, our $k$ -NN-based target sampling enables effective acceleration while achieving comparable performance.
+
+# 2 VOCAP: Language-Specific Vocabulary Capacity Allocation
+
+We attribute the main factors that affect the performance of a particular language in a cross-lingual language model to the language-specific pre-training corpus size and vocabulary capacity. While previous work adjusts the pre-training corpus size with an exponentially smoothed sampling distribution (Conneau and Lample, 2019; Conneau et al., 2020), few existing works have explored the effect of language-specific vocabulary capacity in pretrained cross-lingual language models.
+
+In this section, we first investigate the correlation between the language-specific vocabulary capacity and downstream task performance through experiments. Then we introduce our proposed multilingual vocabulary allocation algorithm VOCAP.
+
+# 2.1 Investigating Language-Specific Vocabulary Capacity
+
+We start by introducing average log probability (ALP) to quantify the language-specific vocabulary capacity in the shared multilingual vocabulary for a specific language. Given a monolingual corpus composed of sentences $\mathcal{D}_i = \{s_1, \dots, s_{|\mathcal{D}_i|}\}$ from the $i$ -th language and tokenized with vocabulary $V$ , the average log probability is defined as follows:
+
+$$
+\mathrm{ALP}\left(\mathcal{D}_i, V\right) = \frac{1}{\left|\mathcal{D}_i\right|} \sum_{j=1}^{\left|\mathcal{D}_i\right|} \sum_{k=1}^{\left|s_j\right|} \log p_{uni}\left(s_j^k\right) \tag{1}
+$$
+
+where $s_j^k$ is the $k$ -th subword of the sentence $s_j$ , and $p_{uni}(\cdot)$ is the unigram distribution counted on the monolingual corpus $\mathcal{D}_i$ . Counting the language-specific subword units in a multilingual vocabulary directly is difficult because the raw text contains a lot of code-switched data. By contrast, ALP is a more convenient indicator of language-specific vocabulary capacity, and it penalizes low-frequency subword units.
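As a concrete sketch, Equation (1) can be computed from a tokenized corpus alone (the function name is ours; the paper counts $p_{uni}$ on the CommonCrawl corpus rather than on a toy list):

```python
import math
from collections import Counter

def alp(corpus_tokens):
    """Average log probability (Eq. 1). `corpus_tokens` is a list of
    sentences, each already segmented into subword units with the
    vocabulary V; the unigram distribution p_uni is counted on the
    same tokenized corpus."""
    counts = Counter(tok for sent in corpus_tokens for tok in sent)
    total = sum(counts.values())
    # Per-sentence sum of subword log-probabilities, averaged over sentences.
    return sum(
        sum(math.log(counts[tok] / total) for tok in sent)
        for sent in corpus_tokens
    ) / len(corpus_tokens)
```

A vocabulary that segments text into fewer, higher-probability subword units yields a higher (less negative) ALP.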
+
+To investigate the impact of language-specific vocabulary capacity, we first learn monolingual vocabularies in different sizes to obtain vocabularies with different ALP, i.e., language-specific vocabulary capacity. Then we conduct pre-training with these monolingual vocabularies on their corresponding monolingual corpora. Finally, we evaluate these monolingual models on downstream tasks and study the correlation between language-specific vocabulary capacity and downstream task performance.
+
+# 2.1.1 Setup
+
+To alleviate bias from the characteristics of individual languages, we select four languages with different pre-training corpus sizes from different language families: Hindi (hi), Persian (fa), Italian (it), and Russian (ru). We then learn thirty monolingual
+
+
+Figure 1: ALP of different monolingual vocabularies with different vocabulary sizes.
+
+
+Figure 3: F1 score on NER task with different vocabularies versus their ALP on the monolingual corpus.
+
+vocabularies for each language on the corresponding monolingual corpus, with vocabulary sizes ranging from 1K to 30K. Then we pretrain monolingual language models with the corresponding monolingual vocabularies. We evaluate these pretrained models on two downstream tasks from the XTREME benchmark, NER (Pan et al., 2017) and POS (Zeman et al., 2019), since they provide annotated task data for a large number of languages. The vocabularies are learned on the reconstructed CommonCrawl corpus (Chi et al., 2021b; Conneau et al., 2020) using SentencePiece (Kudo and Richardson, 2018) with the unigram language model (Kudo, 2018). The unigram distributions are also counted on the CommonCrawl corpus. The Wikipedia corpus is used for all pre-training experiments in this paper since its smaller size makes experiments easier to run. More details about the pre-training data can be found in the appendix.
+
+# 2.1.2 Observations
+
+Increasing vocabulary size affects ALP of different languages to varying degrees. In Figure 1, we show the correlation between vocabulary size and ALP for four different languages. We observe that ALP varies across languages, mainly because ALP correlates with the lexical granularity of the language, i.e., the average number of tokens per sentence. Besides, when the vocabulary size is larger than 10,000, the gains from increasing the monolingual vocabulary size are smaller for hi and fa than for it and ru. We attribute this to hi and fa not having extensive compounding. Another observation is that for each language, each additional 1K of vocabulary size yields a monotonically decreasing increment in ALP.
+
+Figure 2: F1 score on POS task with different vocabularies versus their ALP on the monolingual corpus.
+
+Figure 4: Comparison of vocabulary capacity of different-resourced languages. Shorter bars indicate larger vocabulary capacity.
+
+ALP correlates positively with downstream task performance. In Figure 2 and Figure 3, we illustrate the downstream task performance of models pretrained with monolingual vocabularies on the corresponding monolingual corpora. We observe that ALP correlates positively with downstream task performance, making language-specific ALP a valid indicator for allocating the multilingual vocabulary. Another natural option for allocating the multilingual vocabulary is to use monolingual vocabulary size directly to indicate language-specific vocabulary capacity. We compare ALP against vocabulary size and observe that ALP correlates better with downstream task performance. Besides, ALP reflects language-specific characteristics, while vocabulary size does not. The detailed comparison is shown in the appendix.
+
+Algorithm 1 Allocating Multilingual Vocabulary with VOCAP
+Input: size of the target multilingual vocabulary $T$ ; monolingual vocabularies of $N$ languages $\{V_{t_i}^i\}_{i = 1}^N$ ; monolingual corpora of $N$ languages $\{D_i\}_{i = 1}^N$
+Output: multilingual vocabulary $V$
+1: for $i\gets 1$ to $N$ do
+2:  for $j\gets 1$ to 50 do
+3:   $a_{i,j\times 1000}\gets q_i^{\beta}\,\mathrm{ALP}(D_i,V_{j\times 1000}^i)$ $\triangleright$ Weighted ALP from Equation (2).
+4:  $t_i\gets 0$
+5:  $a_{i,0}\gets -\infty$
+6: do
+7:  $j\gets 0$
+8:  $\delta \gets 0$
+9:  for $i\gets 1$ to $N$ do
+10:   if $\delta < a_{i,t_i + 1000} - a_{i,t_i}$ then
+11:    $\delta \gets a_{i,t_i + 1000} - a_{i,t_i}$
+12:    $j\gets i$
+13:  $t_j\gets t_j + 1000$
+14:  $V\gets \bigcup_{i = 1}^{N}V_{t_i}^i$
+15: while $|V| < T$
+16: if $|V| > T$ then
+17:  Clip the size of $V$ to $T$
+
+# 2.2 Allocating Multilingual Vocabulary with VOCAP
+
+Based on the observations in Section 2.1.2, we first give the implementation of our proposed vocabulary allocation algorithm VOCAP. Then we compare the multilingual vocabulary learned with VOCAP against the one directly learned with SentencePiece on the multilingual corpus.
+
+# 2.2.1 VOCAP Implementation
+
+We formulate the vocabulary construction of VOCAP as the problem of finding the optimal allocation of language-specific vocabulary sizes such that the overall ALP of all languages is maximized. In addition to the language-specific vocabulary capacity measured with ALP from Equation (1), the language-specific pre-training corpus size also affects the downstream task performance. Considering the two factors, the procedure of VOCAP can be formulated as follows:
+
+$$
+\operatorname*{argmax}_{t_1, \dots, t_N} \sum_{i=1}^{N} q_i^{\beta} \, \mathrm{ALP}\left(D_i, V_{t_i}^{i}\right) \quad \text{s.t.} \quad \left| \bigcup_{i=1}^{N} V_{t_i}^{i} \right| = T \tag{2}
+$$
+
+where $t_i \in \{x \times 1000 \mid x \leq 50, x \in \mathbb{N}^+\}$ is the number of subword units allocated to the $i$ -th language, $\beta$ is a rescaling factor, $V_{t_i}^i$ is the vocabulary of the $i$ -th language with $t_i$ subword units, $T$ is the size of the target multilingual vocabulary, and $q_i$ is the probability of sampling training instances from the $i$ -th language during pre-training (Conneau and Lample, 2019; Conneau et al., 2020):
+
+$$
+q_i = \frac{f_i^{\alpha}}{\sum_{j=1}^{N} f_j^{\alpha}} \quad \text{with} \quad f_i = \frac{n_i}{\sum_{k=1}^{N} n_k} \tag{3}
+$$
+
+where $n_i$ is the number of instances in the $i$ -th language, and $\alpha$ is a rescaling factor used to alleviate the bias towards high-resource languages. Since the increment in ALP from each fixed-size increase of the vocabulary is monotonically decreasing, Equation (2) can be solved with the greedy procedure shown in Algorithm 1.
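The greedy step of Algorithm 1 can be sketched as follows. This is a simplified illustration with hypothetical inputs: `alp_table[i]` maps a vocabulary size to the precomputed ALP of language $i$ (size 0 mapped to $-\infty$ so every language receives at least 1,000 units first), and `union_size` stands in for computing $|\bigcup_i V_{t_i}^i|$, which in practice must account for subwords shared across languages:

```python
def allocate_vocab(alp_table, q, beta, target_size, union_size):
    """Greedy solver for Eq. (2): repeatedly grant 1,000 more subword
    units to the language with the largest weighted ALP gain, until
    the union of the per-language vocabularies reaches the target."""
    n = len(alp_table)
    alloc = [0] * n
    while union_size(alloc) < target_size:
        # Weighted marginal ALP gain of adding 1,000 units to each language.
        gains = [
            (q[i] ** beta) * (alp_table[i][alloc[i] + 1000] - alp_table[i][alloc[i]])
            for i in range(n)
        ]
        best = max(range(n), key=lambda i: gains[i])
        alloc[best] += 1000
    return alloc  # clip the merged vocabulary to target_size if it overshoots
```

Because each language's marginal ALP gain is monotonically decreasing (Section 2.1.2), this greedy choice is sufficient to maximize the objective.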
+
+# 2.2.2 Intrinsic Analysis
+
+We compare the multilingual vocabulary learned with VOCAP against the one directly learned with SentencePiece on the multilingual corpus. The multilingual corpus used to learn vocabularies in this paper is the concatenation of sentences sampled randomly from the monolingual corpora. Sentences from the $i$ -th language are sampled with probability $q_{i}$ from Equation (3), with $\alpha = 0.7$ . We keep the languages whose corpus size is larger than 0.1 GB, resulting in 86 languages.
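The exponentially smoothed sampling distribution of Equation (3) can be sketched as follows (the function name is illustrative):

```python
def sampling_probs(sizes, alpha=0.7):
    """Eq. (3): f_i is each language's share of the corpus; raising it
    to alpha < 1 and renormalizing upweights low-resource languages."""
    total = sum(sizes)
    f = [n / total for n in sizes]
    z = sum(x ** alpha for x in f)
    return [x ** alpha / z for x in f]
```

With `alpha = 1` the distribution equals the raw corpus shares, with `alpha = 0` it is uniform, and `alpha = 0.7` sits in between.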
+
+We evaluate the multilingual vocabularies with their ALP on each language's monolingual corpus, and show results for different-resourced languages in Figure 4. We refer to languages with less than 1GB and more than 10GB of pre-training corpus in the reconstructed CommonCrawl as low-resource and high-resource languages, respectively; the rest are mid-resource languages. When directly learning the vocabulary on the multilingual corpus using SentencePiece, the vocabulary with 500K subword units (JOINT $_{500\mathrm{K}}$ ) brings only a negligible improvement over the vocabulary with 250K subword units (JOINT $_{250\mathrm{K}}$ ). Meanwhile, our method $\mathrm{(VOCAP_{500K})}$ consistently outperforms $\mathrm{JOINT}_{500\mathrm{K}}$ across different-resourced languages, especially mid- and low-resource ones. The statistics of the allocated vocabulary size for each language in $\mathrm{VOCAP}_{500\mathrm{K}}$ are shown in the appendix.
+
+# 3 Accelerate Large-Vocabulary Language Model Pre-Training
+
+Although extending the multilingual vocabulary benefits cross-lingual language models, pre-training with such large vocabularies brings two practical issues: slow pre-training speed and heavy model size. To tackle these issues, we first introduce $k$ -NN-based target sampling in Section 3.1, a softmax approximation algorithm that improves computing efficiency. Then we describe how we reallocate the model parameters to keep the model size fixed in Section 3.2.
+
+# 3.1 $k$ -NN-Based Target Sampling
+
+To reduce the expensive computation cost of the softmax function, we propose $k$ -NN-based target sampling to approximate it. The original masked language modeling objective minimizes the cross-entropy loss for every masked subword $w_{i}$ over the extensive multilingual vocabulary $V$ . The proposed $k$ -NN-based target sampling instead uses a smaller vocabulary subset $V'$ . The approximation of the masked language modeling loss for the masked subword $w_{i}$ is defined as follows:
+
+$$
+\mathcal{L}\left(w_i\right) = -\log \frac{\exp\left(h^{\mathrm{T}} v_{w_i} + b_{w_i}\right)}{\sum_{w_j \in V'} \exp\left(h^{\mathrm{T}} v_{w_j} + b_{w_j}\right)} \tag{4}
+$$
+
+where $h$ is the corresponding output vector of the penultimate network layer, i.e., the output vector of the Transformer encoder, $v_{w_i}$ is the embedding of the subword unit $w_i$ , and $b_{w_i}$ is a bias term. We formulate the construction of the vocabulary subset $V'$ as follows:
+
+$$
+V' = \bigcup_{w_i \in \mathcal{W}} \mathcal{I}_k\left(w_i\right) \tag{5}
+$$
+
+$$
+\mathcal{I}_k(w_i) = \operatorname{top\text{-}k}\left(\left\{ v_{w_i}^{\mathrm{T}} v_{w_j} \mid w_j \in V \right\}\right) \tag{6}
+$$
+
+where $\mathcal{W}$ denotes the set of target masked subword units in the current mini-batch, and $\mathcal{I}_k(w_i)$ denotes the $k$ most similar subwords measured with the inner product of the subword embedding $v_{w_i}$ and $v_{w_j}$ .
+
+However, retrieving $\mathcal{I}_k(w_i)$ at every training step for every subword unit $w_{i}\in \mathcal{W}$ requires as much computation as the full softmax, which is unaffordable. As an alternative, we recompute $\mathcal{I}_k(w_i)$ for every subword $w_{i}\in V$ from the current subword embeddings once every $n$ training steps, replacing the previous version of $\mathcal{I}_k(w_i)$ with the new one. We choose the value of $n$ such that $|V|\ll n\times |\mathcal{W}|$ , so the amortized retrieval cost is negligible. We illustrate the pre-training procedure with $k$ -NN-based target sampling in Algorithm 2.
+
+Algorithm 2 Pre-training with $k$ -NN-based target sampling
+Input: multilingual corpus $\mathcal{D}_{\mathrm{m}}$ ; size $k$ of $k$ -NN-based target sampling; multilingual vocabulary $V$ ; learning rate $\tau$
+Output: model parameters $\theta$
+1: while not converged do
+2:  Sample $n$ mini-batches $\{\mathcal{X}^{(t)},\mathcal{W}^{(t)}\}_{t = 1}^{n}\sim \mathcal{D}_{\mathrm{m}}$ $\triangleright$ $\mathcal{X}^{(t)}$ is a mini-batch of monolingual text, and $\mathcal{W}^{(t)}$ is the set of masked subwords.
+3:  Update $\mathcal{I}_k(w_i)$ for every $w_{i}\in V$
+4:  for $t\gets 1$ to $n$ do $\triangleright$ Train the model for $n$ steps.
+5:   $V^{\prime}\gets \bigcup_{w_{i}\in \mathcal{W}^{(t)}}\mathcal{I}_{k}(w_{i})$
+6:   $\pmb{g} \gets \sum_{w_i \in \mathcal{W}^{(t)}} \nabla_{\pmb{\theta}} \mathcal{L}(w_i)$
+7:   $\pmb{\theta}\gets \pmb{\theta} - \tau \pmb{g}$
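A minimal NumPy sketch of Equations (4)–(6) follows. This is illustrative only: in practice the index is rebuilt every $n$ steps, an approximate-nearest-neighbor library would replace the brute-force search, and we explicitly add the targets to $V'$ so the gold label is always in the candidate set (an assumption on our part):

```python
import numpy as np

def knn_index(emb, k):
    """I_k(w) for every subword w: the k subwords with the largest
    embedding inner product (Eq. 6), via brute-force search."""
    scores = emb @ emb.T                        # (|V|, |V|) inner products
    return np.argsort(-scores, axis=1)[:, :k]

def sampled_mlm_loss(h, targets, emb, bias, index):
    """Approximate MLM loss (Eq. 4): softmax taken over V' (Eq. 5),
    the union of the k-NN lists of the batch's masked targets."""
    v_sub = np.unique(np.concatenate([index[targets].ravel(), targets]))
    logits = h @ emb[v_sub].T + bias[v_sub]     # (batch, |V'|)
    log_z = np.log(np.exp(logits).sum(axis=1))  # log partition over V' only
    gold = np.einsum('bd,bd->b', h, emb[targets]) + bias[targets]
    return float(np.mean(log_z - gold))
```

With $k = |V|$ this reduces exactly to the full softmax loss; smaller $k$ trades a looser partition-function estimate for speed.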
+
+From a practical point of view in the cross-lingual setting, previous sampling-based softmax approximations either sample subwords from recent mini-batches or sample them from the unigram distribution. This makes the task simpler, since a considerable part of the sampled subwords comes from languages other than the target's. Meanwhile, our $k$ -NN-based target sampling uses subwords with similar representations, such as synonyms, which forces the model to discriminate the ground-truth subword from a set of noise samples that are not easy to distinguish. When using an approximate objective, the key is to retain as much of the difficult part of the original masked language modeling objective as possible.
+
+# 3.2 Reducing the Embedding Dimension
+
+In order to keep the number of model parameters fixed while increasing the vocabulary size, we follow Lan et al. (2020) and Chung et al. (2020a) in reducing both the input and output embedding dimensions and linearly projecting the embeddings to the hidden dimension of the Transformer blocks. More precisely, we halve the embedding dimension when the vocabulary size is doubled. This rebalancing strategy only slightly degrades model performance while improving pre-training speed and decreasing model size.
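A sketch of the rebalancing, following the factorized embedding of Lan et al. (2020) (the shapes below are our own illustration): the lookup table shrinks to an $e$-dimensional embedding with $e = d/2$, followed by a learned linear projection up to the Transformer hidden size $d$:

```python
import numpy as np

def factorized_embed(token_ids, emb, proj):
    """Look up e-dim embeddings and project them to the d-dim hidden
    size; only the small (e, d) projection is added on top of the table."""
    return emb[token_ids] @ proj  # (batch, e) @ (e, d) -> (batch, d)
```

Parameter check: 500K x 384 = 250K x 768 = 192M, so doubling the vocabulary at half the embedding dimension keeps the embedding parameter count unchanged, plus a 384 x 768 projection of roughly 0.3M parameters.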
+
+Conneau et al. (2020) also studied the relation between the size of the shared multilingual vocabulary and downstream task performance with multilingual models of a fixed number of parameters. They keep the overall number of parameters constant by adjusting the width (i.e., hidden size) of the Transformer. Notice that we only reduce the embedding dimension while keeping the Transformer blocks untouched.
+
+| Model | # Params | Speed | XNLI (Acc.) | PAWS-X (Acc.) | POS (F1) | NER (F1) | XQuAD (F1/EM) | MLQA (F1/EM) | TyDiQA (F1/EM) | Avg. |
+|---|---|---|---|---|---|---|---|---|---|---|
+| XLM-R250K | 265M | 1.00x | 68.7 | 82.6 | 72.1 | 60.6 | 63.4/47.4 | 57.2/39.6 | 45.2/29.6 | 60.7 |
+| JOINT250K | 265M | 1.00x | 69.2 | 83.3 | 72.4 | 59.7 | 63.9/47.9 | 58.9/40.7 | 45.4/29.6 | 61.1 |
+| JOINT500K | 448M | 0.72x | 69.4 | 82.2 | 72.1 | 60.5 | 64.7/48.0 | 58.2/40.3 | 48.0/32.6 | 61.4 |
+| VOCAP250K | 265M | 1.00x | 69.3 | 82.0 | 71.4 | 60.0 | 66.2/50.3 | 60.1/42.6 | 45.6/30.6 | 61.5 |
+| VOCAP500K | 448M | 0.72x | 70.5 | 83.0 | 72.9 | 62.7 | 66.8/50.6 | 60.9/42.9 | 50.0/34.5 | 63.1 |
+| + k-NN | 448M | 1.18x | 70.8 | 82.6 | 72.5 | 61.8 | 67.1/49.8 | 61.4/42.5 | 56.3/39.3 | 63.7 |
+| + half emb | 265M | 0.94x | 70.3 | 83.0 | 72.0 | 61.7 | 65.8/49.0 | 61.0/42.3 | 49.3/33.0 | 62.5 |
+| + k-NN & half emb | 265M | 1.35x | 69.8 | 83.4 | 72.1 | 60.1 | 66.6/49.5 | 60.8/42.7 | 50.2/33.9 | 62.5 |
+
+Table 1: Evaluation results on the XTREME benchmark, grouped into pair-sentence classification (XNLI, PAWS-X), structure prediction (POS, NER), and question answering (XQuAD, MLQA, TyDiQA). “XLM-R $_{250\mathrm{K}}$ ” denotes using the XLM-R (Conneau et al., 2020) vocabulary with 250K subword units. “ $k$ -NN” and “half emb” denote our $k$ -NN-based target sampling method and using half embedding dimension, respectively.
+
+# 4 Experiment
+
+# 4.1 Setup
+
+Fine-Tuning Datasets To validate the effectiveness of our methods, we conduct experiments on three types of cross-lingual understanding tasks from the XTREME benchmark (Hu et al., 2020), including two classification datasets, XNLI (Conneau et al., 2018) and PAWS-X (Yang et al., 2019); three span extraction datasets, XQuAD (Artetxe et al., 2020), MLQA (Lewis et al., 2020), and TyDiQA-GoldP (Clark et al., 2020); and two sequence labeling datasets, NER (Pan et al., 2017) and POS (Zeman et al., 2019). The statistics of the datasets are shown in the appendix.
+
+Implementation Details We adopt the Transformer architecture from the base model setting of Conneau et al. (2020), i.e., 12 layers and a hidden size of 768. We use the masked language modeling objective to train our models for 1 million updates on eight 32GB Nvidia V100 GPUs with a batch size of 256. We update the top- $k$ indices for every word in the multilingual vocabulary every 1,000 training steps and use $k = 50$ in $k$ -NN-based target sampling. The learning rate is scheduled with polynomial decay with 10K warmup steps, where the peak learning rate is set to 0.0001. We adopt the other pre-training hyper-parameters from Chi et al. (2021b). All fine-tuning results are averaged over five random seeds. The fine-tuning pipeline is based on the code base of Zheng et al. (2021). The fine-tuning implementation details are given in the appendix.
+
+# 4.2 Results
+
+Table 1 shows XTREME fine-tuning results with models pretrained using different vocabularies and acceleration strategies. Compared to vocabularies directly learned on the multilingual corpus with SentencePiece, i.e., XLM- $\mathbf{R}_{250\mathrm{K}}$ and $\mathsf{JOINT}_{250\mathrm{K}}$ , our $\mathsf{VOCAP}_{250\mathrm{K}}$ improves on the question answering datasets but degrades on PAWS-X, POS, and NER. Increasing the vocabulary from $\mathsf{VOCAP}_{250\mathrm{K}}$ to $\mathsf{VOCAP}_{500\mathrm{K}}$ closes this gap and brings improvements on six datasets, the exception being PAWS-X, which only includes seven high-resource languages. However, increasing the size of the vocabulary directly learned with SentencePiece from $\mathsf{JOINT}_{250\mathrm{K}}$ to $\mathsf{JOINT}_{500\mathrm{K}}$ does not improve performance the way our VOCAP method does, showing the importance of selecting language-specific subword units and of estimating how much vocabulary capacity each language requires.
+
+Since increasing the vocabulary size brings issues of model size and pre-training speed, we study the proposed methods to accelerate pre-training: $k$ -NN-based target sampling ( $k$ -NN) and halving the embedding dimension (half emb). Our $k$ -NN method improves pre-training speed with a 500K vocabulary, reaching 1.18 times the speed of vanilla pre-training with a 250K vocabulary. Meanwhile, pre-training with our $k$ -NN method does not significantly degrade performance; it even brings improvements on XNLI, MLQA, and TyDiQA. We then halve the embedding dimension of the models with the 500K vocabulary, resulting in a number of parameters similar to the models with the 250K vocabulary. The overall performance degrades by 0.6 points but still consistently improves over models with 250K vocabularies at comparable speed. Combining the two methods, we achieve a 1.35-times speed-up and more than 1 point of improvement with a similar model size compared to models with 250K vocabularies.
+
+| Method | XNLI | POS | MLQA | Speed |
+|---|---|---|---|---|
+| VOCAP500K | 69.2 | 72.9 | 59.9/41.7 | 1.00x |
+| + k-NN | 69.3 | 72.1 | 59.6/40.3 | 1.64x |
+| + target sampling | 68.8 | 71.3 | 57.6/38.8 | 1.56x |
+| + NCE | 56.0 | 61.8 | 41.1/26.2 | 1.40x |
+| + NEG | 56.5 | 62.9 | 40.1/25.6 | 1.40x |
+
+Table 2: Comparison between different sampling-based softmax approximation approaches with vocabulary $\mathrm{VOCAP}_{500\mathrm{K}}$ . Models are pretrained for 0.5M steps.
+
+| Method | XNLI | POS | MLQA | Speed |
+|---|---|---|---|---|
+| VOCAP500K | 69.2 | 71.8 | 59.9/41.7 | 1.00x |
+| + k-NN (k=5) | 68.5 | 71.3 | 58.6/40.0 | 1.76x |
+| + k-NN (k=10) | 69.3 | 71.4 | 58.9/39.6 | 1.74x |
+| + k-NN (k=25) | 69.2 | 71.7 | 59.8/40.9 | 1.69x |
+| + k-NN (k=50) | 69.3 | 72.1 | 59.6/40.3 | 1.64x |
+| + k-NN (k=100) | 69.5 | 72.1 | 60.0/41.3 | 1.57x |
+
+Table 3: Comparison between different $k$ values in the $k$ -NN-based sampling method. Models are pretrained for 0.5M steps.
+
+# 4.3 Analysis and Discussion
+
+We conduct a thorough analysis to understand the impact of our proposed methods on cross-lingual language models. To reduce the computational load, we pre-train the cross-lingual language models for only 500K steps in some of the settings.
+
+$k$ -NN-based target sampling outperforms previous sampling-based approaches. To verify the effectiveness of our proposed $k$ -NN-based sampling method, we compare it against previous sampling-based approaches for approximating softmax: target sampling (Jean et al., 2015), noise contrastive estimation (NCE; Mnih and Teh, 2012), and negative sampling (NEG; Mikolov et al., 2013). The results are shown in Table 2. For a fair comparison, since our $k$ -NN-based sampling method with $k = 50$ samples a vocabulary subset with fewer than 50,000 subword units per batch on average, we sample 50,000 negative subword units per batch for target sampling, NCE, and NEG. Among the four methods, NCE and NEG are significantly worse than $k$ -NN and target sampling. We attribute this to NCE and NEG needing more training steps to converge (Mnih and Teh, 2012). Besides, the original NCE typically samples different negative samples for every target word, whereas we use the same 50,000 negative samples for all target words in the current mini-batch, which is more efficient on GPUs.
+
+| Method | XNLI | POS | NER | MLQA |
+|---|---|---|---|---|
+| β=0 | 66.9 | 71.8 | 61.5 | 58.6/41.0 |
+| β=0.3 | 69.0 | 71.7 | 61.6 | 59.2/40.1 |
+| β=0.7 | 69.2 | 71.8 | 61.5 | 59.9/41.7 |
+| β=1.0 | 69.5 | 71.8 | 60.9 | 58.4/40.3 |
+
+Table 4: Impact of adjusting the high-resource versus low-resource vocabulary capacity trade-off with $\beta$ . $\beta = 0$ indicates the vocabulary is allocated without considering pre-training corpus size. Models are pretrained for 0.5M steps.
+
+Effect of the value of $k$ in $k$ -NN-based target sampling. We illustrate the downstream task performance for different values of $k$ in our $k$ -NN-based target sampling in Table 3. While a smaller $k$ means faster pre-training, we observe that even with a value as small as 5, the results do not significantly degrade compared to the original softmax. We attribute this to the fact that, by retrieving the subword samples most similar to the target subword, the model can focus on the difficult part of the original masked language modeling objective: discriminating the ground-truth subword from a set of noise samples that are not easy to distinguish. Considering the overall performance, the pre-training speed, and the memory needed to store the $k$ -NN indices, we use $k = 50$ in all our experiments.
+
+Language-specific pre-training corpus size should also be considered when allocating vocabulary capacity. The pre-training corpus size varies across languages. It is inefficient to allocate a large vocabulary capacity to low-resource languages with scarce pre-training data, since the pretrained model can only learn limited knowledge from these languages. In Table 4, we study the effect of the rescaling factor $\beta$ from Equation (2) on multilingual vocabulary construction. The rescaling factor $\beta$ controls the number of selected language-specific subword units. Increasing the value of $\beta$ improves performance on XNLI, where most languages are high-resource. However, it degrades performance on NER, where more
+
+
+Figure 5: Performance on XNLI and MLQA versus the cross-lingual language models' pre-training cost.
+
+low-resource languages exist. Considering the overall performance, we use $\beta = 0.7$ in our experiments.
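As a hypothetical illustration of the role of $\beta$ alone (the actual Equation (2) also involves the ALP-based capacity term, which is omitted here), one can think of the vocabulary budget as split across languages with weights proportional to corpus size raised to the power $\beta$: $\beta = 0$ gives every language an equal share regardless of corpus size, while $\beta = 1$ is fully size-proportional.

```python
def allocate_vocab(corpus_sizes_gb, total_budget, beta):
    """Hypothetical sketch: split a vocabulary budget across languages
    with weights (corpus size)**beta.  Not the paper's Equation (2),
    which additionally uses the ALP-based capacity of each language."""
    weights = {lang: size ** beta for lang, size in corpus_sizes_gb.items()}
    z = sum(weights.values())
    return {lang: round(total_budget * w / z) for lang, w in weights.items()}
```

Intermediate values of $\beta$ interpolate between the two extremes, which is exactly the trade-off Table 4 sweeps over.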
+
+The proposed acceleration strategies significantly improve downstream task performance under the same pre-training cost. Increasing the vocabulary size slows down pre-training, even though there is almost no difference in fine-tuning speed. We study the relationship between downstream task performance and pre-training cost under different model settings in Figure 5. We observe that $\mathrm{VOCAP}_{500\mathrm{K}} + k$-NN achieves the best performance. Models trained with the $500\mathrm{K}$ vocabulary consistently outperform those with the $250\mathrm{K}$ vocabulary on XNLI. Besides, the MLQA performance of the model trained with the $250\mathrm{K}$ vocabulary degrades as training continues, while that of the models trained with the $500\mathrm{K}$ vocabulary does not, indicating that sufficient vocabulary capacity is essential for the question answering task.
+
+VOCAP gains more improvement on mid- and low-resource languages than on high-resource languages. In Figure 4 in Section 2, we show that the vocabulary learned with VOCAP benefits the vocabulary capacity of low-resource languages more than that of high-resource languages, indicating that the improvements should mainly come from low-resource languages. To verify this, we compare VOCAP against the SentencePiece baseline on the performance of different-resourced languages on XNLI and NER in Figure 6. We observe that the vocabulary learned with VOCAP significantly outperforms the vocabularies directly learned with SentencePiece on mid- and low-resource languages. This observation is also consistent with the ALP results in Figure 4.
+
+
+Figure 6: Impact of VOCAP on the performance of different-resourced languages on XNLI and NER.
+
+# 5 Related Work
+
+Pretrained Cross-Lingual Language Models Recent work pre-trains Transformer models (Vaswani et al., 2017) on large-scale multilingual corpora to obtain pretrained cross-lingual language models (Conneau and Lample, 2019; Conneau et al., 2020; Chi et al., 2020, 2021a,b,c,d; Chung et al., 2020a; Xue et al., 2020; Ma et al., 2020, 2021). These models are capable of encoding texts from different languages into universal representations and significantly improve cross-lingual transferability.
+
+Multilingual Vocabulary Construction Cross-lingual language models need large vocabularies to ensure all languages are adequately represented. Recent research work on constructing multilingual vocabulary for cross-lingual language models can be categorized into two groups. mBERT (Devlin et al., 2019), XLM (Conneau and Lample, 2019), and XLM-R (Conneau et al., 2020) learn vocabularies on a combined multilingual corpus with WordPiece (Wu et al., 2016), BPE (Sennrich et al., 2015), and unigram language model (Kudo, 2018) from SentencePiece (Kudo and Richardson, 2018), respectively. Chung et al. (2020b) propose to balance the trade-off between optimizing for cross-lingual subword sharing and the need for robust representation of individual languages. They first group languages into clusters and learn vocabularies individually on each cluster, then combine all cluster-vocabularies to form a single unified multilingual vocabulary. Compared to Chung et al. (2020b), our advantage is that we separately quantify the vocabulary capacity each language needs with average log probability and balance the construction procedure with pre-training corpus size.
+
+Softmax Approximation Approximating the softmax was once a core problem in training NLP models with large vocabularies, e.g., for neural machine translation and language modeling. With the rise of subword representations (Sennrich et al., 2015; Wu et al., 2016; Kudo, 2018), vocabulary sizes decreased significantly, and the problem has been less studied recently. Nevertheless, the need to train cross-lingual language models with a large multilingual vocabulary draws our attention back to softmax approximation. Existing approaches can be grouped into softmax-based and sampling-based methods. Softmax-based approaches include hierarchical softmax (Morin and Bengio, 2005), differentiated softmax (Chen et al., 2016), and CNN-softmax (Kim et al., 2016). However, these approaches improve softmax efficiency by changing its architecture, which is unsuitable either for training on GPUs or for multilingual settings. Sampling-based approaches instead optimize an easier-to-compute loss function that approximates the original softmax, including target sampling (Jean et al., 2015), noise contrastive estimation (Mnih and Teh, 2012), and negative sampling (Mikolov et al., 2013). Our $k$-NN-based target sampling is also a sampling-based approach.
+
+# 6 Conclusion
+
+In this paper, we study pre-training cross-lingual language models with large vocabulary capacity. First, we propose VOCAP to construct large multilingual vocabularies for cross-lingual language models. We conduct a quantitative analysis to show that average log probability is a valid indicator of the vocabulary capacity for a particular language, which also correlates with downstream task performance in that language. VOCAP uses the language-specific average log probability and the pre-training corpus size to allocate appropriate vocabulary capacity to each language in the multilingual vocabulary. Moreover, we propose $k$-NN-based target sampling to accelerate pre-training with the resulting large multilingual vocabulary by approximating the expensive softmax. We also show that reducing the embedding dimension is an effective way to keep the improvement brought by the large vocabulary without increasing the number of model parameters. Our experiments demonstrate the effectiveness of the proposed vocabulary construction method as well as the acceleration methods.
+
+# Acknowledgments
+
+We would like to thank Zewen Chi and Shuming Ma for the helpful discussions. This work was supported by the National Key R&D Program of China via grant 2020AAA0106501 and the National Natural Science Foundation of China (NSFC) via grants 61976072 and 61772153. Wanxiang Che is the corresponding author.
+
+# References
+
+Mikel Artetxe, Sebastian Ruder, and Dani Yogatama. 2020. On the cross-lingual transferability of monolingual representations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4623–4637, Online. Association for Computational Linguistics.
+Hangbo Bao, Li Dong, Furu Wei, Wenhui Wang, Nan Yang, Xiaodong Liu, Yu Wang, Jianfeng Gao, Songhao Piao, Ming Zhou, and Hsiao-Wuen Hon. 2020. UniLMv2: Pseudo-masked language models for unified language model pre-training. In Proceedings of the 37th International Conference on Machine Learning, pages 7006-7016.
+Wenlin Chen, David Grangier, and Michael Auli. 2016. Strategies for training large vocabulary neural language models. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. The Association for Computer Linguistics.
+Zewen Chi, Li Dong, Shuming Ma, Shaohan Huang, Xian-Ling Mao, Heyan Huang, and Furu Wei. 2021a. mT6: Multilingual pretrained text-to-text transformer with translation pairs. arXiv preprint arXiv:2104.08692.
+Zewen Chi, Li Dong, Furu Wei, Wenhui Wang, Xian-Ling Mao, and Heyan Huang. 2020. Cross-lingual natural language generation via pre-training. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, New York, NY, USA, February 7-12, 2020, pages 7570-7577. AAAI Press.
+Zewen Chi, Li Dong, Furu Wei, Nan Yang, Saksham Singhal, Wenhui Wang, Xia Song, Xian-Ling Mao, Heyan Huang, and Ming Zhou. 2021b. InfoXLM: An information-theoretic framework for cross-lingual language model pre-training. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3576-3588, Online. Association for Computational Linguistics.
+Zewen Chi, Li Dong, Bo Zheng, Shaohan Huang, XianLing Mao, Heyan Huang, and Furu Wei. 2021c.
+
+Improving pretrained cross-lingual language models via self-labeled word alignment. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3418-3430, Online. Association for Computational Linguistics.
+Zewen Chi, Shaohan Huang, Li Dong, Shuming Ma, Saksham Singhal, Payal Bajaj, Xia Song, and Furu Wei. 2021d. XLM-E: Cross-lingual language model pre-training via ELECTRA. ArXiv, abs/2106.16138.
+Hyung Won Chung, Thibault Févry, Henry Tsai, Melvin Johnson, and Sebastian Ruder. 2020a. Rethinking embedding coupling in pre-trained language models. CoRR, abs/2010.12821.
+Hyung Won Chung, Dan Garrette, Kiat Chuan Tan, and Jason Riesa. 2020b. Improving multilingual models with language-clustered vocabularies. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 4536-4546. Association for Computational Linguistics.
+Jonathan H. Clark, Eunsol Choi, Michael Collins, Dan Garrette, Tom Kwiatkowski, Vitaly Nikolaev, and Jennimaria Palomaki. 2020. TyDi QA: A benchmark for information-seeking question answering in typologically diverse languages. Transactions of the Association for Computational Linguistics, 8:454-470.
+Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440-8451, Online. Association for Computational Linguistics.
+Alexis Conneau and Guillaume Lample. 2019. Crosslingual language model pretraining. In Advances in Neural Information Processing Systems, pages 7057-7067. Curran Associates, Inc.
+Alexis Conneau, Rudy Rinott, Guillaume Lample, Adina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: Evaluating cross-lingual sentence representations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2475-2485, Brussels, Belgium. Association for Computational Linguistics.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language
+
+Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
+Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understanding and generation. In Advances in Neural Information Processing Systems, pages 13063-13075. Curran Associates, Inc.
+Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson. 2020. XTREME: A massively multilingual multitask benchmark for evaluating cross-lingual generalization. arXiv preprint arXiv:2003.11080.
+Sebastien Jean, KyungHyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On using very large target vocabulary for neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL 2015, July 26-31, 2015, Beijing, China, Volume 1: Long Papers, pages 1-10. The Association for Computer Linguistics.
+Yoon Kim, Yacine Jernite, David A. Sontag, and Alexander M. Rush. 2016. Character-aware neural language models. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, February 12-17, 2016, Phoenix, Arizona, USA, pages 2741-2749. AAAI Press.
+Taku Kudo. 2018. Subword regularization: Improving neural network translation models with multiple subword candidates. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 66-75. Association for Computational Linguistics.
+Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP 2018: System Demonstrations, Brussels, Belgium, October 31 - November 4, 2018, pages 66-71. Association for Computational Linguistics.
+Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: A lite BERT for self-supervised learning of language representations. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
+Patrick Lewis, Barlas Oguz, Rudy Rinott, Sebastian Riedel, and Holger Schwenk. 2020. MLQA: Evaluating cross-lingual extractive question answering. In
+
+Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7315-7330, Online. Association for Computational Linguistics.
+Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
+Shuming Ma, Li Dong, Shaohan Huang, Dongdong Zhang, Alexandre Muzio, Saksham Singhal, Hany Hassan Awadalla, Xia Song, and Furu Wei. 2021. DeltaLM: Encoder-decoder pre-training for language generation and translation by augmenting pretrained multilingual encoders. ArXiv, abs/2106.13736.
+Shuming Ma, Jian Yang, H. Huang, Zewen Chi, Li Dong, Dongdong Zhang, Hany Hassan Awadalla, Alexandre Muzio, Akiko Eriguchi, Saksham Singhal, Xia Song, Arul Menezes, and Furu Wei. 2020. XLM-T: Scaling up multilingual machine translation with pretrained cross-lingual transformer encoders. ArXiv, abs/2012.15547.
+Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111-3119.
+Andriy Mnih and Yee Whye Teh. 2012. A fast and simple algorithm for training neural probabilistic language models. In Proceedings of the 29th International Conference on Machine Learning, ICML 2012, Edinburgh, Scotland, UK, June 26 - July 1, 2012. icml.cc / Omnipress.
+Frederic Morin and Yoshua Bengio. 2005. Hierarchical probabilistic neural network language model. In Proceedings of the Tenth International Workshop on Artificial Intelligence and Statistics, AISTATS 2005, Bridgetown, Barbados, January 6-8, 2005. Society for Artificial Intelligence and Statistics.
+Xiaoman Pan, Boliang Zhang, Jonathan May, Joel Nothman, Kevin Knight, and Heng Ji. 2017. Cross-lingual name tagging and linking for 282 languages. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1946-1958, Vancouver, Canada. Association for Computational Linguistics.
+Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008. Curran Associates, Inc.
+
+Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144.
+Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2020. mT5: A massively multilingual pre-trained text-to-text transformer. arXiv preprint arXiv:2010.11934.
+Yinfei Yang, Yuan Zhang, Chris Tar, and Jason Baldridge. 2019. PAWS-X: A cross-lingual adversarial dataset for paraphrase identification. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3687-3692, Hong Kong, China. Association for Computational Linguistics.
+Daniel Zeman, Joakim Nivre, Mitchell Abrams, and et al. 2019. Universal dependencies 2.5. LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (ÚFAL), Faculty of Mathematics and Physics, Charles University.
+Bo Zheng, Li Dong, Shaohan Huang, Wenhui Wang, Zewen Chi, Saksham Singhal, Wanxiang Che, Ting Liu, Xia Song, and Furu Wei. 2021. Consistency regularization for cross-lingual fine-tuning. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3403-3417, Online. Association for Computational Linguistics.
+
+# A Correlation between Language-Specific Vocabulary Capacity and Task Performance
+
+We compare the Pearson correlation coefficients between ALP and downstream task performance with the coefficients between vocabulary size and downstream task performance in Table 5. The results show that ALP correlates better than vocabulary size with downstream task performance.
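The Pearson coefficients reported in Table 5 can be computed directly from the paired per-setting scores; a small self-contained helper (our own code, not the paper's):

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation between two paired score lists, e.g. the
    ALP of a language under different vocabularies vs. downstream F1."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))
```

For example, `pearson(alp_scores, f1_scores)` returns a value in [-1, 1], with larger absolute values indicating a stronger linear relationship.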
+
+# B Statistics of XTREME Datasets
+
+| Language | Task | ρ(ALP, F1) | ρ(\|V\|, F1) |
| --- | --- | --- | --- |
| hi | POS | 0.922 | 0.787 |
| hi | NER | 0.879 | 0.890 |
| fa | POS | 0.905 | 0.700 |
| fa | NER | 0.912 | 0.872 |
| it | POS | 0.665 | 0.422 |
| it | NER | 0.899 | 0.900 |
| ru | POS | 0.423 | 0.327 |
| ru | NER | 0.872 | 0.833 |
+
+Table 5: Pearson correlation coefficients between ALP and downstream task performance and between vocabulary size and downstream task performance.
+
+| Task | Dataset | \|Train\| | \|Lang\| |
| --- | --- | --- | --- |
| Classification | XNLI | 392K | 15 |
| Classification | PAWS-X | 49.4K | 7 |
| Structured Prediction | POS | 21K | 33 |
| Structured Prediction | NER | 20K | 40 |
| Question Answering | XQuAD | 87K | 11 |
| Question Answering | MLQA | 87K | 7 |
| Question Answering | TyDiQA | 3.7K | 9 |
+
+# C Fine-tuning Settings
+
+Implementation Details For the POS dataset, we use the average-pooling strategy over subwords to obtain word representations, since part-of-speech can be signaled by different parts of a word, depending on the language. We tune the hyper-parameters and select the model with the best average result over the development sets of all languages. Two of the datasets provide no multilingual development sets. For XQuAD, we tune the hyper-parameters on the development set of MLQA, since the two share the same training set and have a high degree of overlap in languages. For TyDiQA-GoldP, we use the English test set as the development set.
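The average-pooling step can be sketched as follows: given the encoder's subword hidden states and a map from each subword position to its word index, every word representation is the mean of its subwords' states (names and shapes below are our own):

```python
import numpy as np

def pool_word_reprs(subword_states, word_ids):
    """Average the hidden states of the subwords that make up each word,
    yielding one vector per word for POS tagging."""
    n_words = max(word_ids) + 1
    summed = np.zeros((n_words, subword_states.shape[1]))
    counts = np.zeros(n_words)
    for state, w in zip(subword_states, word_ids):
        summed[w] += state
        counts[w] += 1
    return summed / counts[:, None]
```

For instance, if a word is split into two subwords, its representation is simply the element-wise mean of the two subword vectors.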
+
+Hyper-Parameters For XNLI, PAWS-X, POS, and NER, we fine-tune for 10 epochs. For XQuAD and MLQA, we fine-tune for 4 epochs. For TyDiQA-GoldP, we fine-tune for 6 or 8 epochs and select the better number of epochs using the English test set as the development set. For the learning rate, we select
+
+Table 6: Statistics for the datasets in the XTREME benchmark. We report the number of training examples (|Train|) and the number of languages (|Lang|).
+
+| Code | Size (GB) | Code | Size (GB) | Code | Size (GB) |
| --- | --- | --- | --- | --- | --- |
| af | 0.2 | hu | 9.5 | pl | 28.6 |
| am | 0.4 | hy | 0.7 | ps | 0.4 |
| ar | 16.1 | id | 17.2 | pt | 39.4 |
| as | 0.1 | is | 0.5 | ro | 11.0 |
| az | 0.8 | it | 47.2 | ru | 253.3 |
| ba | 0.2 | ja | 86.8 | sa | 0.2 |
| be | 0.5 | ka | 1.0 | sd | 0.2 |
| bg | 7.0 | kk | 0.6 | si | 1.3 |
| bn | 5.5 | km | 0.2 | sk | 13.6 |
| ca | 3.0 | kn | 0.3 | sl | 6.2 |
| cs | 14.9 | ko | 40.0 | sq | 3.0 |
| cy | 0.4 | ky | 0.5 | sr | 7.2 |
| da | 6.9 | la | 0.3 | sv | 60.4 |
| de | 99.0 | lo | 0.2 | sw | 0.3 |
| el | 13.1 | lt | 2.3 | ta | 7.9 |
| en | 731.6 | lv | 1.3 | te | 2.3 |
| eo | 0.5 | mk | 0.6 | tg | 0.7 |
| es | 85.6 | ml | 1.3 | th | 33.0 |
| et | 1.4 | mn | 0.4 | tl | 1.2 |
| eu | 1.0 | mr | 0.5 | tr | 56.4 |
| fa | 19.0 | ms | 0.7 | tt | 0.6 |
| fi | 5.9 | mt | 0.2 | ug | 0.2 |
| fr | 89.9 | my | 0.4 | uk | 13.4 |
| ga | 0.2 | ne | 0.6 | ur | 3.0 |
| gl | 1.5 | nl | 25.9 | uz | 0.1 |
| gu | 0.3 | nn | 0.4 | vi | 74.5 |
| he | 4.4 | no | 5.5 | yi | 0.3 |
| hi | 5.0 | or | 0.3 | zh | 96.8 |
| hr | 1.4 | pa | 0.8 | | |
+
+Table 7: The statistics of the reconstructed Common-Crawl corpus for learning vocabularies.
+
+in [7e-6, 1e-5] for XNLI and PAWS-X, [1e-5, 2e-5] for POS and NER, and [2e-5, 3e-5] for XQuAD, MLQA, and TyDiQA-GoldP.
+
+# D Pre-Training Data
+
+We use the reconstructed CommonCrawl corpus from Chi et al. (2021b) to learn vocabularies in our paper. Because tokenizing the pre-training data is time-consuming, we instead conduct our pre-training on Wikipedia, which is smaller. We only consider the languages shared by the reconstructed CommonCrawl corpus and Wikipedia. The statistics of the Wikipedia corpus and the reconstructed CommonCrawl corpus are listed in Table 8 and Table 7, respectively.
+
+| Code | Size (GB) | Code | Size (GB) | Code | Size (GB) |
| --- | --- | --- | --- | --- | --- |
| af | 0.12 | hu | 0.8 | pl | 1.55 |
| am | 0.01 | hy | 0.6 | ps | 0.04 |
| ar | 1.29 | id | 0.52 | pt | 1.5 |
| as | 0.04 | is | 0.05 | ro | 0.42 |
| az | 0.24 | it | 2.69 | ru | 5.63 |
| ba | 0.13 | ja | 2.65 | sa | 0.04 |
| be | 0.31 | ka | 0.37 | sd | 0.02 |
| bg | 0.62 | kk | 0.29 | si | 0.09 |
| bn | 0.41 | km | 0.12 | sk | 0.21 |
| ca | 1.1 | kn | 0.25 | sl | 0.21 |
| cs | 0.8 | ko | 0.56 | sq | 0.1 |
| cy | 0.06 | ky | 0.1 | sr | 0.74 |
| da | 0.33 | la | 0.05 | sv | 1.7 |
| de | 5.43 | lo | 0.01 | sw | 0.03 |
| el | 0.73 | lt | 0.19 | ta | 0.46 |
| en | 12.58 | lv | 0.12 | te | 0.44 |
| eo | 0.25 | mk | 0.34 | tg | 0.04 |
| es | 3.38 | ml | 0.28 | th | 0.52 |
| et | 0.23 | mn | 0.05 | tl | 0.04 |
| eu | 0.24 | mr | 0.1 | tr | 0.43 |
| fa | 0.66 | ms | 0.2 | tt | 0.09 |
| fi | 0.68 | mt | 0.01 | ug | 0.03 |
| fr | 4.0 | my | 0.15 | uk | 2.43 |
| ga | 0.03 | ne | 0.06 | ur | 0.13 |
| gl | 0.27 | nl | 1.38 | uz | 0.06 |
| gu | 0.09 | nn | 0.13 | vi | 0.76 |
| he | 1.11 | no | 0.54 | yi | 0.02 |
| hi | 0.38 | or | 0.04 | zh | 1.08 |
| hr | 0.28 | pa | 0.1 | | |
+
+Table 8: The statistics of the Wikipedia corpus used for pre-training.
+
+| Code | Size (K) | Code | Size (K) | Code | Size (K) |
| --- | --- | --- | --- | --- | --- |
| af | 2 | hu | 12 | pl | 20 |
| am | 3 | hy | 5 | ps | 3 |
| ar | 15 | id | 13 | pt | 20 |
| as | 2 | is | 3 | ro | 13 |
| az | 5 | it | 22 | ru | 34 |
| ba | 2 | ja | 23 | sa | 1 |
| be | 3 | ka | 4 | sd | 2 |
| bg | 9 | kk | 4 | si | 3 |
| bn | 6 | km | 4 | sk | 11 |
| ca | 8 | kn | 2 | sl | 8 |
| cs | 14 | ko | 17 | sq | 7 |
| cy | 3 | ky | 3 | sr | 10 |
| da | 9 | la | 3 | sv | 18 |
| de | 24 | lo | 2 | sw | 3 |
| el | 17 | lt | 7 | ta | 6 |
| en | 23 | lv | 6 | te | 4 |
| eo | 4 | mk | 4 | tg | 5 |
| es | 26 | ml | 3 | th | 14 |
| et | 5 | mn | 3 | tl | 4 |
| eu | 4 | mr | 3 | tr | 18 |
| fa | 9 | ms | 4 | tt | 3 |
| fi | 9 | mt | 3 | ug | 3 |
| fr | 25 | my | 2 | uk | 12 |
| ga | 2 | ne | 3 | ur | 5 |
| gl | 5 | nl | 14 | uz | 2 |
| gu | 2 | nn | 3 | vi | 12 |
| he | 6 | no | 7 | yi | 2 |
| hi | 6 | or | 2 | zh | 30 |
| hr | 6 | pa | 3 | | |
+
+Table 9: The statistics of the allocated vocabulary size for each language.
\ No newline at end of file
diff --git a/allocatinglargevocabularycapacityforcrosslinguallanguagemodelpretraining/images.zip b/allocatinglargevocabularycapacityforcrosslinguallanguagemodelpretraining/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..0b71be795a867b9fdbe9fa599bba8f53d645e9cd
--- /dev/null
+++ b/allocatinglargevocabularycapacityforcrosslinguallanguagemodelpretraining/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2aa2a4d490802626b984d711d9be6ac76a763e42bf14a0f8c0c5231277ba66f4
+size 499286
diff --git a/allocatinglargevocabularycapacityforcrosslinguallanguagemodelpretraining/layout.json b/allocatinglargevocabularycapacityforcrosslinguallanguagemodelpretraining/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..7c636f7ed352f4fbd5d48dd651bddd10a872f09c
--- /dev/null
+++ b/allocatinglargevocabularycapacityforcrosslinguallanguagemodelpretraining/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1e059c22af7d4c71f426544ee954389d896027ad4faae0559779322ebde14caa
+size 448571
diff --git a/am2icoevaluatingwordmeaningincontextacrosslowresourcelanguageswithadversarialexamples/1730de68-621c-4246-86c3-da7956bb0fc9_content_list.json b/am2icoevaluatingwordmeaningincontextacrosslowresourcelanguageswithadversarialexamples/1730de68-621c-4246-86c3-da7956bb0fc9_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..926a00f9fa6cd0fd7310a009754be22112a59821
--- /dev/null
+++ b/am2icoevaluatingwordmeaningincontextacrosslowresourcelanguageswithadversarialexamples/1730de68-621c-4246-86c3-da7956bb0fc9_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:385c455c5f374a673d59798cc30fb5c95805c51598ba9419b4dd5514c1348452
+size 82087
diff --git a/am2icoevaluatingwordmeaningincontextacrosslowresourcelanguageswithadversarialexamples/1730de68-621c-4246-86c3-da7956bb0fc9_model.json b/am2icoevaluatingwordmeaningincontextacrosslowresourcelanguageswithadversarialexamples/1730de68-621c-4246-86c3-da7956bb0fc9_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..8ab749427e219ae9223b548c67a2ad51ff6f1f41
--- /dev/null
+++ b/am2icoevaluatingwordmeaningincontextacrosslowresourcelanguageswithadversarialexamples/1730de68-621c-4246-86c3-da7956bb0fc9_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3f27e57c1d3bb381cf7c2c751e55596cb7eaf78e4c3df597745dac387fd23089
+size 96683
diff --git a/am2icoevaluatingwordmeaningincontextacrosslowresourcelanguageswithadversarialexamples/1730de68-621c-4246-86c3-da7956bb0fc9_origin.pdf b/am2icoevaluatingwordmeaningincontextacrosslowresourcelanguageswithadversarialexamples/1730de68-621c-4246-86c3-da7956bb0fc9_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..2dc8513b4ad642e3e5916dafbb7ec6845ca5014b
--- /dev/null
+++ b/am2icoevaluatingwordmeaningincontextacrosslowresourcelanguageswithadversarialexamples/1730de68-621c-4246-86c3-da7956bb0fc9_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e785bf0933f422cbf8455a7b4909265d952cd1a47bed3c324119dc7e9216bc16
+size 509741
diff --git a/am2icoevaluatingwordmeaningincontextacrosslowresourcelanguageswithadversarialexamples/full.md b/am2icoevaluatingwordmeaningincontextacrosslowresourcelanguageswithadversarialexamples/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..0a5bdc1a0894b494a6741771ed15343a5e6fb056
--- /dev/null
+++ b/am2icoevaluatingwordmeaningincontextacrosslowresourcelanguageswithadversarialexamples/full.md
@@ -0,0 +1,257 @@
+# AM $^2$ ICO: Evaluating Word Meaning in Context across Low-Resource Languages with Adversarial Examples
+
+Qianchu Liu1, Edoardo M. Ponti2, Diana McCarthy1, Ivan Vulić1, Anna Korhonen1
+
+$^{1}$ Language Technology Lab, TAL, University of Cambridge, UK
+
+$^{2}$ Mila/McGill University, Montreal, Canada
+
+{ql261,ep490,iv250,alk23}@cam.ac.uk
+
+diana@dianamccarthy.co.uk
+
+# Abstract
+
+Capturing word meaning in context and distinguishing between correspondences and variations across languages is key to building successful multilingual and cross-lingual text representation models. However, existing multilingual evaluation datasets that evaluate lexical semantics "in-context" have various limitations. In particular, 1) their language coverage is restricted to high-resource languages and skewed in favor of only a few language families and areas; 2) their design makes the task solvable via superficial cues, which results in artificially inflated (and sometimes super-human) performance of pretrained encoders; and 3) they offer no support for cross-lingual evaluation. In order to address these gaps, we present $\mathrm{AM}^2\mathrm{ICo}$ (Adversarial and Multilingual Meaning in Context), a wide-coverage cross-lingual and multilingual evaluation set; it aims to faithfully assess the ability of state-of-the-art (SotA) representation models to understand the identity of word meaning in cross-lingual contexts for 14 language pairs. We conduct a series of experiments in a wide range of setups and demonstrate the challenging nature of $\mathrm{AM}^2\mathrm{ICo}$ . The results reveal that current SotA pretrained encoders substantially lag behind human performance, and the largest gaps are observed for low-resource languages and languages dissimilar to English.
+
+# 1 Introduction
+
+Pretrained language models (LMs) such as BERT (Devlin et al., 2019) and XLM-R (Conneau et al., 2020) offer a natural way to distinguish different word meanings in context without performing explicit sense disambiguation. This property of "meaning contextualization" is typically evaluated either via standard entity linking (Rao et al., 2013; Shen et al., 2014) and Word Sense Disambiguation (WSD) tasks (Navigli, 2009; Moro et al., 2014; Raganato et al., 2017) or, recently, via the Word-in-Context (WiC) evaluation paradigm (Pilehvar and
+
+Camacho-Collados, 2019; Raganato et al., 2020).
+
+Although monolingual evaluation in English is still predominant, a need has been recognized to construct similar resources for other languages to support cross-lingual evaluation and model diagnostics. This includes multilingual and cross-lingual WSD benchmarks (Navigli and Ponzetto, 2012; Navigli et al., 2013; Scarlini et al., 2020; Barba et al., 2020, inter alia), cross-lingual entity linking (Tsai and Roth, 2016; Raiman and Raiman, 2018; Upadhyay et al., 2018) and, most recently, multilingual WiC (termed XL-WiC) spanning 12 languages (Raganato et al., 2020).
+
+This most recent WiC evaluation approach is particularly attractive as 1) it bypasses the dependence on modeling predefined ontologies (entity linking) and explicit sense inventories (WSD), and 2) it is framed as a simple binary classification task: for a target word $w$ appearing in two different contexts $c_{1}$ and $c_{2}$ , the system must decide whether $w$ conveys the same meaning in both contexts, or not.
+
+However, the current WiC evaluation still allows ample room for improvement: 1) current language coverage is limited, and biased towards resource-rich Indo-European languages; 2) coverage of lexical concepts, due to their paucity in language-specific WordNets, is also limited; 3) XL-WiC is a monolingual resource available in different languages, i.e., it does not support cross-lingual assessments. Further, 4) the current WiC datasets offer low human upper bounds and inflated (even superhuman) system performance for some languages. This stems from superficial cues: 5) many examples in the current WiC datasets can be resolved relying either on the target word alone, without any context, or on the context alone, which prevents the evaluation from honing in on the interplay between target words and their corresponding contexts.
+
+In order to address these limitations and provide a more comprehensive evaluation framework, we present AM$^2$ICo (Adversarial and Multilingual Meaning in Context), a novel multilingual and cross-lingual WiC task and resource. It covers a typologically diverse set of 15 languages (see Table 2). Based on Wikipedia in lieu of WordNet, AM$^2$ICo covers a wider set of ambiguous words; in particular, it complements WiC on the long tail of entity names and poses generalization challenges over a larger vocabulary than a restricted set of common words. More importantly, the use of Wikipedia enables WiC evaluation for low-resource languages (e.g., Basque, Georgian, Bengali, Kazakh). We also improve the WiC resource design: it now 1) includes adversarial examples and careful data extraction procedures that prevent models from backing off to superficial cues, 2) results in a more challenging benchmark with truer and much wider gaps between current SotA pretrained encoders and human capability (see §2.3), and 3) enables cross-lingual evaluation and analysis.
+
+The ample and diverse data in $\mathrm{AM}^2\mathrm{ICO}$ enables a wide spectrum of experiments and analyses in different scenarios. We evaluate SotA pretrained encoders, multilingual BERT and XLM-R, both off-the-shelf using a metric-based approach (i.e., without any task adaptation) and after task-specific fine-tuning. With fine-tuned models, we investigate zero-shot cross-lingual transfer as well as transfer from multiple source languages. In general, our results across these diverse scenarios firmly indicate a large gap between human and system performance across the board, which is even more prominent when dealing with resource-poor languages and languages dissimilar to English, holding promise to guide modeling improvements in the future.
+
+In the hope that $\mathrm{AM}^2\mathrm{ICo}$ will be a challenging and valuable diagnostic and evaluation asset for future work in multilingual and cross-lingual representation learning, we release the data along with the full guidelines at https://github.com/cambridgeltl/AM2iCo.
+
+# 2 AM$^2$ICo: Cross-Lingual Word-in-Context Evaluation
+
+Task Definition. AM$^2$ICo is a standard binary classification task on pairs of word-in-context instances. Each pair consists of a target word with its context in English and a target word with its context in a target language. Formally, each dataset of AM$^2$ICo spans a set of $N$ examples $\hat{x}_i$, $i = 1,\dots,N$ for a language pair. Each example $\hat{x}_i$ is in fact a pair of items $\hat{x}_i = (x_{i,src},x_{i,trg})$, where the item $x_{i,src}$ is provided in the source language $L_{src}$ and the item $x_{i,trg}$ is in the target language $L_{trg}$. The item $x_{i,src}$ in turn is another pair $x_{i,src} = (w_{i,src},c_{i,src})$; it contains a target word $w_{i,src}$ from $L_{src}$ and its (wider) context $c_{i,src}$ (also in $L_{src}$) in which that word appears (see Table 1); the same holds for $x_{i,trg}$. The classification task is then to judge whether the words $w_{i,src}$ and $w_{i,trg}$, occurring in the respective contexts $c_{i,src}$ and $c_{i,trg}$, have the same sense/meaning (i.e., whether they refer to the same entity/concept), or not.
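The structure of a single example can be sketched with a pair of small data classes. This is a hypothetical representation for illustration only; the field names are ours, not the released data format.

```python
from dataclasses import dataclass

@dataclass
class Item:
    word: str     # target word w_{i,*}
    context: str  # wider context c_{i,*} in which the word appears

@dataclass
class Example:
    src: Item     # item in L_src (English)
    trg: Item     # item in L_trg
    label: bool   # True iff both target words denote the same concept

# Example 1 from Table 1, abbreviated:
ex = Example(
    src=Item("Apollo", "... the six Apollo Moon landings ..."),
    trg=Item("阿波罗", "... 阿波罗 中飞船服务舱的静态试车 ..."),
    label=True,
)
```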
+
+Final Resource. The full $\mathrm{AM}^2\mathrm{ICo}$ resource comprises datasets for 14 language pairs, where English is paired with 14 target languages. For brevity, in the rest of the paper we refer to the dataset of each language pair simply with the $L_{trg}$ language code (e.g., ZH instead of EN-ZH); languages and codes are provided in Table 2.
+
+As illustrative examples, we show a positive pair (label 'T') and a negative pair (label 'F') from the ZH AM$^2$ICo dataset in Table 1 (Examples 1 and 2). In the positive example, both target words 'Apollo' and '阿波罗' in their contexts refer to the same concept: the Apollo spaceflight program. In the negative example, the Chinese target word '阿波罗' still refers to the Apollo spacecraft, but the English target word 'Apollo' now refers to the Greek god.
+
+In what follows we describe the creation of $\mathrm{AM}^2\mathrm{ICO}$ . We also demonstrate the benefits of $\mathrm{AM}^2\mathrm{ICO}$ and its challenging nature.
+
+# 2.1 Data Creation
+
+Wikipedia is a rich source of disambiguated contexts for multiple languages, and its cross-lingual links provide a direct way to identify cross-lingual concept correspondence. The items $x_{i,src}$ and $x_{i,trg}$ are extracted by taking the surrounding (sentential) context of a hyperlinked word in a Wikipedia article. We balance the context length by (i) discarding items longer than 100 words, and (ii) extending the context with the preceding and following sentences when the core sentence is shorter than 30 words. Using the Wikipedia dumps of our 15 languages (see Table 2), we create monolingual items $x$ for each language. We select only ambiguous target words $w$, that is, words that link to at least two different Wikipedia pages.
+
+| no. | English $x_{i,src}$ | Chinese $x_{i,trg}$ | Label |
+| 1 | Bill Kaysing ( July 31 , 1922 – April 21 , 2005 ) was an American writer who claimed that the six Apollo Moon landings between July 1969 and December 1972 were hoaxes , and so a founder of the Moon hoax movement . | 泰坦系列导弹的发射任务结束后 , LC-16被移交给NASA用做双子座计划的航天员训练及 阿波罗 中飞船服务舱的静态试车。[...] (After the launch of the Titan missiles, LC-16 was handed over to NASA for the training of the astronauts in the Gemini program and the static test run of the service module of the spacecraft in Apollo [...]) | T |
+| 2 | Nearer the house , screening the service wing from view , is a Roman triumphal arch , the " Temple of Apollo " , also known ( because of its former use as a venue for cock fighting ) as " Cockpit Arch " , which holds a copy of the famed Apollo Belvedere . | 阿波罗-联盟测试计划中 , 美国的 阿波罗 航天器和苏联的联盟航天器在地球轨道中对接。... (In the Apollo-Soyuz test plan, America's Apollo spacecraft and the Soviet Union's Soyuz spacecraft are docked in the Earth orbit [...]) | F |
+| 3 | Bill Kaysing ( July 31 , 1922 – April 21 , 2005 ) was an American writer who claimed that the six Apollo Moon landings between July 1969 and December 1972 were hoaxes , and so a founder of the Moon hoax movement . | 泰坦系列导弹的发射任务结束后 , LC-16被移交给 NASA 用做双子座计划的航天员训练及阿波罗中飞船服务舱的静态试车。[...] (After the launch of the Titan missiles, LC-16 was handed over to NASA for the training of the astronauts in the Gemini program and the static test run of the service module of the spacecraft in Apollo [...]) | F |
+
+Table 1: Positive (1), negative (2), and adversarial negative (3) examples from ZH AM$^2$ICo. Target words are provided in boldface and with a gray background. Translations of the ZH items are provided in italics.
+
+| split | DE | RU | JA | ZH | AR | KO | FI | TR | ID | EU | KA | BN | KK | UR |
+| train | 50,000 | 28,286 | 16,142 | 13,154 | 9,622 | 7,070 | 6,322 | 3,904 | 1,598 | 978 | - | - | - | - |
+| dev | 500 | 500 | 500 | 500 | 500 | 500 | 500 | 500 | 500 | 500 | 500 | 332 | 276 | 108 |
+| test | 1,000 | 1,000 | 1,000 | 1,000 | 1,000 | 1,000 | 1,000 | 1,000 | 1,000 | 1,000 | 1,000 | 700 | 400 | 400 |
+
+Table 2: Data sizes for AM$^2$ICo across 14 language pairs. We also provide larger dev and test sets for DE and RU spanning 5,000 and 10,000 examples, respectively. EN=English; DE=German; RU=Russian; JA=Japanese; ZH=Chinese; AR=Arabic; KO=Korean; FI=Finnish; TR=Turkish; ID=Indonesian; EU=Basque; KA=Georgian; BN=Bengali; KK=Kazakh; UR=Urdu.
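The context-balancing heuristic from §2.1 could be sketched as follows. This is an illustrative implementation of the two rules; the function and parameter names are our own.

```python
def balance_context(sentences, idx, max_len=100, min_len=30):
    """Build the context around sentences[idx], the sentence containing
    the hyperlinked target word.

    (i)  items whose context exceeds max_len words are discarded;
    (ii) if the core sentence is shorter than min_len words, the preceding
         and following sentences are added to the context.
    """
    context = sentences[idx]
    if len(context.split()) < min_len:
        before = sentences[idx - 1] if idx > 0 else ""
        after = sentences[idx + 1] if idx + 1 < len(sentences) else ""
        context = " ".join(s for s in (before, context, after) if s)
    if len(context.split()) > max_len:
        return None  # rule (i): item discarded
    return context
```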
+
+For each word, we then create monolingual positive examples by pairing two items (i.e., word-context pairs) $x_{i}$ and $x_{j}$ in which the same target word $w$ is linked to the same Wikipedia page, signaling the same meaning. In a similar fashion, monolingual negative examples are created by pairing two items where the same target word $w$ is linked to two different Wikipedia pages. We ensure that there is roughly an equal number of positive and negative examples for each target word.
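The pairing of items into monolingual positive and negative examples might look like this. The `(word, page, context)` occurrence representation is an assumption made for the sketch.

```python
import itertools
from collections import defaultdict

def make_monolingual_pairs(occurrences):
    """occurrences: (target_word, linked_page, context) tuples.

    A positive example pairs two contexts in which the same word links to
    the same Wikipedia page; a negative example pairs two contexts in which
    it links to two different pages.
    """
    by_word = defaultdict(list)
    for word, page, ctx in occurrences:
        by_word[word].append((page, ctx))
    positives, negatives = [], []
    for word, occs in by_word.items():
        for (p1, c1), (p2, c2) in itertools.combinations(occs, 2):
            (positives if p1 == p2 else negatives).append((word, c1, c2))
    return positives, negatives
```

In the real pipeline an additional balancing step would downsample one side so that each target word contributes roughly as many positive as negative examples.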
+
+Now, each monolingual example (i.e., pair of items) $\hat{x}$ contains the same word occurring in two different contexts. In order to create a cross-lingual dataset, we leverage the Wikipedia cross-lingual links: we simply (i) replace one of the two items from each English pair with an item in the target language, and (ii) replace one of the two items from each target-language pair with an English item, where each cross-lingual replacement points to the same Wikipedia page as indicated by the cross-lingual Wiki links. Through this procedure, the final datasets cover a sufficient (and roughly comparable) number of examples containing ambiguous words both in English and in $L_{trg}$. We also rely on data selection heuristics that improve the final data quality, discussed in §2.2 and §2.3.
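The cross-lingual replacement step can be sketched as below, assuming a simplified `langlinks` mapping with one target-language item per English page; the real pipeline works over full Wikipedia dumps in both directions.

```python
def to_crosslingual(item_keep, item_swap, langlinks):
    """Turn a monolingual English example into a cross-lingual one.

    item_keep, item_swap: (word, linked_page, context) English items;
    langlinks: {english_page: (trg_word, trg_context)}, derived from
    Wikipedia cross-lingual links (simplified here to one item per page).
    """
    _, page, _ = item_swap
    if page not in langlinks:
        return None  # no cross-lingual link: the example is dropped
    trg_word, trg_ctx = langlinks[page]
    word_en, _, ctx_en = item_keep
    # the replacement points to the same page, so the label is unchanged
    return (word_en, ctx_en), (trg_word, trg_ctx)
```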
+
+Finally, in each cross-lingual dataset we reserve 1,000 examples for testing and 500 examples as dev data; the rest is used for training. The exceptions are the 4 resource-poor languages without training data (KA, BN, KK, and UR; see Table 2), where all examples are divided between dev and test. All data portions in all datasets are balanced, and we ensure zero overlap between the train, dev, and test portions. The final AM$^2$ICo statistics are given in Table 2.
+
+Human Validation. We employ human annotators to assess the quality of AM$^2$ICo. For each dataset, we recruit two annotators who each validate a random sample of 100 examples; 50 examples are shared between the two samples and are used to compute inter-rater agreement.
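On the 50 shared examples, agreement can be computed as raw percentage agreement; this is an assumption on our part, as the exact agreement metric is not spelled out here beyond the reported percentages.

```python
def raw_agreement(labels_a, labels_b):
    """Fraction of the shared examples on which both annotators agree."""
    assert len(labels_a) == len(labels_b)
    return sum(a == b for a, b in zip(labels_a, labels_b)) / len(labels_a)
```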
+
+# 2.2 Data Selection Heuristics
+
+One critical requirement for AM$^2$ICo is ensuring a high human upper bound. In the initial data creation phase, we observed several sources of confusion among human raters, typically related to negative pairs being frequently labeled as positive; we identified two causes of this discrepancy and mitigated both through data selection heuristics.
+
+First, some common monosemous words may still get linked to multiple different Wikipedia pages, thus creating confusing negative pairs. For instance, some pronouns (e.g., 'he', 'it') and common nouns (e.g., 'daughter', 'son') may link to different entities as a result of coreference resolution. Truly ambiguous words, however, are typically listed directly on Wikipedia Disambiguation pages; we thus keep only the negative pairs whose links appear as separate entries on the Wikipedia Disambiguation pages. The second issue concerns concept granularity, as Wikipedia sometimes draws overly fine-grained distinctions between concepts, e.g., by setting up separate pages for a country's name in different time periods. We mitigate this issue by requiring that the negative pairs do not share common or parent Wikipedia categories.
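The two filters could be combined into a single predicate over candidate negative pairs; the data structures below are illustrative, not the actual pipeline's.

```python
def keep_negative_pair(page_a, page_b, disambig_entries, categories):
    """Decide whether a candidate negative pair survives the two filters.

    disambig_entries: set of pages listed on Wikipedia Disambiguation pages;
    categories: {page: set of common and parent Wikipedia categories}.
    """
    # Filter 1: both senses must be genuine disambiguation entries
    # (drops coreference artefacts such as 'he' or 'daughter').
    if page_a not in disambig_entries or page_b not in disambig_entries:
        return False
    # Filter 2: no shared category (drops overly fine-grained distinctions,
    # e.g. the same country in different time periods).
    return not (categories.get(page_a, set()) & categories.get(page_b, set()))
```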
+
+The application of these heuristics during data creation (see §2.1) yields a substantial boost in human performance: e.g., the scores increase from $74\%$ to $88\%$ for ZH, and from $76\%$ to $94\%$ for DE.
+
+# 2.3 Adversarial Examples
+
+Another requirement is assessing to what extent models grasp the meaning of a target word through its (complex) interaction with the context. However, it was recently shown that SotA pretrained LMs exploit superficial cues when solving language understanding tasks, owing to spurious correlations seeping into the datasets (Gururangan et al., 2018; Niven and Kao, 2019). This hinders generalization beyond the particular datasets and makes the models brittle to minor changes in the input space (Jia and Liang, 2017; Iyyer et al., 2018). As verified later in §4, we found this to be the case also for the existing WiC datasets: just considering the target word and neglecting the context (or vice versa) is sufficient to achieve high performance.
+
+| | WiC | XL-WiC | MCL-WiC | AM$^2$ICo |
+| examples (mean) | 7,466 | 14,510 | 3,600 | 13,074 |
+| examples (median) | 7,466 | 1,676 | 2,000 | 8,570 |
+| word types (mean) | 4,130 | 7,255 | 2,766 | 9,868 |
+| word types (median) | 4,130 | 1,201 | 2,072 | 8,520 |
+| context length | 17 | 22.7 | 26.13 | 53.5 |
+| human accuracy | 80 | 81.8 | - | 90.6 |
+| human agreement | 80 | - | 94.2 | 88.4 |
+| languages | 1 | 12 | 5 | 15 |
+| language families | 1 | 5 | 3 | 10 |
+
+Table 3: Comparison of the most salient data statistics of AM$^2$ICo versus WiC, XL-WiC and MCL-WiC.
+
+To remedy this issue, we already ensured in §2.1 that models cannot rely solely on target words, by including both positive and negative examples for each ambiguous word in different contexts. Further, we now introduce adversarial negative examples in AM$^2$ICo to penalize models that rely only on the context without considering the target words. To create such a negative example, we sample a positive pair $x_{i}$ and, instead of the original target word $w_{i}$, take another related word $\tilde{w}_i$ occurring in the same context $c_{i}$ as the new target word.
+
+We define a related word as a hyperlinked mention sharing the same parent Wiki category as the original target word: e.g., in Table 1 we change the target word '阿波罗' (Apollo) from Example 1 into the related word 'NASA', resulting in Example 3. Both words share a common Wiki parent category, 美国国家航空航天局 (NASA). The contexts of both examples deal with spaceships; hence, only a fine-grained understanding of the lexical differences between the target words enables recognizing 'Apollo' as identical to '阿波罗' but different from 'NASA'. Overall, adversarial examples amount to roughly $1/4$ of our dataset.
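The adversarial substitution can be sketched as follows, with illustrative data structures; the real procedure operates on Wikipedia hyperlink and category annotations.

```python
def make_adversarial(item, other_mentions, categories):
    """Create an adversarial negative from one item of a positive pair.

    item: (target_word, linked_page, context);
    other_mentions: other hyperlinked mentions in the same context,
                    as (word, linked_page) tuples;
    categories: {page: set of parent Wikipedia categories}.
    """
    word, page, context = item
    for other_word, other_page in other_mentions:
        if other_word == word:
            continue
        # a related word shares a parent category with the original target
        if categories.get(page, set()) & categories.get(other_page, set()):
            return (other_word, context)  # same context, new target word
    return None
```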
+
+# 2.4 Data Statistics and Language Coverage
+
+We summarize the main properties of AM$^2$ICo and compare against the previous word-in-context datasets WiC, XL-WiC and MCL-WiC in Table 3. More detailed per-language scores are listed in Table 4. First, we emphasize the increased reliability of AM$^2$ICo: both human accuracy and inter-annotator agreement are substantially higher than for WiC and XL-WiC (i.e., rising by $\sim 10$ points).
+
+Second, for a comparable overall dataset size, we increase the number of examples and word types in resource-poor languages. Considering medians across languages, AM$^2$ICo has 8,570 examples and 8,520 word types, around four times more than XL-WiC (1,676 and 1,201) and MCL-WiC (2,000 and 2,072). XL-WiC is heavily skewed towards a small number of languages, namely German and French, providing large datasets only for those; MCL-WiC offers training data only for English. In contrast, AM$^2$ICo provides a more balanced representation of its languages. Third, in AM$^2$ICo we deliberately include longer contexts. While the data in WiC, XL-WiC and MCL-WiC are derived from concise dictionary examples, AM$^2$ICo data reflect natural text, where key information may be spread across a much wider context.
+
+Our selection of languages is guided by the recent initiatives to cover a typologically diverse language sample (Ponti et al., 2020). In particular, $\mathrm{AM}^2\mathrm{ICo}$ covers 15 languages, more than XL-WiC (12 languages) and MCL-WiC (5 languages). Diversity can be measured along multiple axes, such as family, geographic areas, and scripts (Ponti et al., 2019). $\mathrm{AM}^2\mathrm{ICo}$ includes 10 language families, namely: Afro-Asiatic (1 language), Austronesian (1), Basque (1), Indo-European (5), Japonic (1), Kartvelian (1), Koreanic (1), Sino-Tibetan (1), Turkic (2), Uralic (1). This provides a more balanced sample of the cross-lingual variation compared to XL-WiC (5 families) and MCL-WiC (3 families). Regarding geography, in addition to the areas covered by XL-WiC and MCL-WiC (mostly Europe and Eastern Asia), we also represent South-East Asia (with ID), the Middle East (TR), the Caucasus (KA), the Indian subcontinent (UR and BN), as well as central Asia (KK). Finally, $\mathrm{AM}^2\mathrm{ICo}$ also introduces scripts that were absent in other datasets, namely the Georgian alphabet and the Bengali script (a Northern Indian abugida), for a total of 8 distinct scripts.
+
+# 3 Experimental Setup
+
+We now establish a series of baselines on $\mathrm{AM}^2\mathrm{ICO}$ to measure the gap between current SotA models and human performance.
+
+Pretrained Encoders. Multilingual contextualized representations $\mathbf{e}_i\in \mathbb{R}^d$ for each target word are obtained via the BASE variants of cased multilingual BERT (MBERT, Devlin et al., 2019) and XLM-R (Conneau et al., 2020), available in the HuggingFace repository (Wolf et al., 2020).
+
+Classification. Given two contextualized representations $\mathbf{e}_{i,src}$ and $\mathbf{e}_{i,trg}$ for a pair of target words, we consider two setups for making predictions. The first, metric-based, setup is non-parametric: following Pilehvar and Camacho-Collados (2019), we score the cosine distance $\delta$ between the two representations. A threshold $t$ is tuned on the development set via grid search over $[0, 1]$ with a step of 0.02; if $\delta (\mathbf{e}_{i,src},\mathbf{e}_{i,trg})\geq t$, the pair is classified as negative, and as positive otherwise. The fine-tuning setup, on the other hand, is parametric: following Raganato et al. (2020), we train a logistic regression classifier that takes the concatenation of the contextualized representations $[\mathbf{e}_{i,src}\oplus \mathbf{e}_{i,trg}]$ as input. The entire model (both the encoder and the classifier) is then fine-tuned to minimize the cross-entropy loss on the training examples with Adam (Kingma and Ba, 2015). We perform grid search over the learning rates $\{5\mathrm{e}{-}6, 1\mathrm{e}{-}5, 3\mathrm{e}{-}5\}$, train for 20 epochs, and select the checkpoint with the best performance on the dev set.
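The metric-based setup can be sketched in a few lines of pure Python; in practice the vectors $\mathbf{e}_{i,src}$ and $\mathbf{e}_{i,trg}$ would come from the pretrained encoder.

```python
import math

def cosine_distance(a, b):
    """Cosine distance delta = 1 - cos(a, b)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def tune_threshold(dev_pairs, dev_labels):
    """Grid-search the threshold t over [0, 1] in steps of 0.02 on dev data.
    A pair with distance below t is predicted positive (same meaning)."""
    best_t, best_acc = 0.0, -1.0
    for step in range(51):  # t = 0.00, 0.02, ..., 1.00
        t = step * 0.02
        preds = [cosine_distance(a, b) < t for a, b in dev_pairs]
        acc = sum(p == y for p, y in zip(preds, dev_labels)) / len(dev_labels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc
```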
+
+Cross-lingual Transfer. In addition to supervised learning, we also carry out cross-lingual transfer experiments, where data splits may belong to different language pairs. The goal is to transfer knowledge from a source language pair $\ell_s$ to a target language pair $\ell_t$. To simulate different scenarios of data paucity, in the fine-tuning setup we consider: 1) zero-shot transfer, where the train and development sets belong to $\ell_s$ and the test set to $\ell_t$; 2) zero-shot + TLD transfer, identical except that the dev set is given in $\ell_t$; 3) few-shot transfer, where, on top of zero-shot + TLD, we provide a small amount of training examples in $\ell_t$; 4) the joint multilingual setup, where we train a single model on the concatenation of the train sets of all language pairs and select the hyper-parameters on the development set of $\ell_t$.
+
+# 4 Results and Discussion
+
+Metric-based vs Fine-tuning. We report the results for the supervised learning setting (where all data splits belong to the same language pair) in Table 4.
+
+| | | DE | RU | JA | ZH | AR | KO | FI | TR | ID | EU | KA | BN | KK | UR |
+| MTR | MBERT | 67.1 | 65.0 | 62.3 | 65.8 | 63.9 | 62.1 | 61.7 | 57.1 | 66.3 | 64.1 | 60.4 | 60.0 | 59.2 | 58.8 |
+| | XLM-R | 65.0 | 63.1 | 56.7 | 56.7 | 58.4 | 57.5 | 64.1 | 62.4 | 65.7 | 62.9 | 58.3 | 56.2 | 58.0 | 55.5 |
+| FT | MBERT | 80.0 | 77.4 | 73.9 | 71.0 | 67.4 | 68.2 | 71.6 | 69.3 | 64.6 | 62.2 | - | - | - | - |
+| | XLM-R | 77.4 | 76.1 | 75.9 | 68.9 | 65.9 | 65.3 | 68.4 | 64.4 | 54.6 | 55.8 | - | - | - | - |
+| HM | accuracy | 93.5 | 89.5 | 93.0 | 87.5 | 93.5 | 93.5 | 90.5 | 90.5 | 91.5 | 92.5 | 90.0 | 89.5 | 85.5 | 88.0 |
+| | agreement | 90.0 | 78.0 | 90.0 | 94.0 | 100.0 | 92.0 | 88.0 | 96.0 | 92.0 | 84.0 | 94.0 | 80.0 | 80.0 | 80.0 |
+
+Table 4: Accuracy of MBERT and XLM-R on AM$^2$ICo in a supervised learning setting. We report metric-based classification (MTR) results, as well as the scores in the fine-tuning setup (FT). The third group of rows (HM) displays human performance, in terms of both accuracy and inter-rater agreement. Results for the larger test sets for DE and RU are reported in Table 9 in the Appendix.
+
+The metric-based approach achieves consistent scores across all languages, fluctuating within the range [57.1, 67.1] for MBERT and [55.5, 65.0] for XLM-R. This indicates that the pretrained encoder alone already contains some relevant linguistic knowledge. In comparison, fine-tuning, being more data-hungry, yields more unequal results: it performs worse than the metric-based approach on languages with small training sets (e.g., ID and EU in Table 4), whereas it surpasses the metric-based approach on languages with abundant examples (e.g., DE, RU).
+
+XLM-R vs MBERT. Table 4 also reveals that XLM-R is more sensitive to training data size than MBERT, often falling behind in both the metric-based and fine-tuning setups, especially for resource-poorer languages. These findings are in line with what Vulić et al. (2020) report for Multi-SimLex, which is grounded in lexical semantics similarly to AM$^2$ICo. However, they contradict the received wisdom from experiments on other multilingual sentence-level tasks (Ponti et al., 2020; Conneau et al., 2020), where XLM-R outperforms MBERT in cross-lingual transfer. While the exact causes go beyond the scope of this work, we speculate that the two encoders excel in separate aspects of semantics: the lexical and the sentence level, respectively.
+
+Effect of Data Size on Fine-Tuning. To further investigate the effect of training data size on fine-tuning, we perform an in-depth analysis on selected languages (DE, RU and JA); note that we use the larger dev and test sets for DE and RU in this experiment. We study how performance changes as we vary the number of training examples from 500 to the full set. The results in Figure 1 indicate that, while fine-tuning starts below the metric-based baseline, it grows steadily and takes the lead from around 2,500 training examples.
+
+Figure 1: Impact of training data size (for fine-tuning) on performance in DE, RU (larger test sets used), and JA from AM$^2$ICo; panels: (a) MBERT, (b) XLM-R. X axis is in the log scale.
+
+Zero-shot Transfer. The results are presented in Table 5. We select the training data of each of the five languages with most data (DE, RU, JA, ZH, AR) in turn for source-language fine-tuning. Subsequently, we report the average prediction performance across the 9 remaining target languages.
+
+First, we note that the TLD variant for hyper-parameter selection does not yield gains. Second, the best choice of a source language appears to be German across the board, achieving an average score of 71.2 with MBERT and 72.0 with XLM-R. Nevertheless, this is simply due to its ample number of training examples (50k): when controlling for this variable by equalizing the size of each train split to 10k (see the bottom half of Table 5), all source languages perform comparably.
+
+| $\ell_s$ | Zero-shot MBERT | Zero-shot XLM-R | +TLD MBERT | +TLD XLM-R |
+| DE (all) | 71.2 | 72.0 | 71.5 | 71.7 |
+| RU (all) | 71.1 | 69.8 | 71.0 | 69.9 |
+| JA (all) | 68.1 | 61.9 | 68.6 | 63.2 |
+| ZH (all) | 66.2 | 60.3 | 66.6 | 62.1 |
+| AR (all) | 67.7 | 61.8 | 67.1 | 62.1 |
+| DE (10k) | 65.4 | 62.4 | 65.9 | 62.5 |
+| RU (10k) | 66.4 | 64.7 | 66.1 | 64.6 |
+| JA (10k) | 67.5 | 61.3 | 67.2 | 61.7 |
+| ZH (10k) | 66.0 | 62.8 | 65.8 | 62.1 |
+| AR (10k) | 67.7 | 61.8 | 67.1 | 62.1 |
+
+Table 5: Zero-shot transfer from 5 high-resource source languages to the remaining 9 languages in AM$^2$ICo. The parentheses contain (approximate) train data sizes.
+
+Breaking down the average results into individual languages in Table 6 (top section), however, reveals an even more intricate picture. In particular, the best source language for KK is RU, and for JA it is ZH, rather than DE. This can be explained by the fact that these pairs share their scripts, Cyrillic and Kanji/Hanzi, respectively, at least in part. It indicates that a resource-leaner but related language may sometimes be a more effective source language than a resource-rich one. It is also noteworthy that zero-shot transfer from DE outperforms supervised learning in most languages, except for those that are both resource-rich and distant (JA, ZH and AR).
+
+Few-shot Transfer. To study the differences between training on $\ell_s$ and $\ell_t$ with controlled training data size, we plot the model performance on two target languages (RU and JA) as a function of the amount of available examples under different transfer conditions in Figure 2. Comparing supervised learning (based on target-language data) with zero-shot learning (based on DE data), the former is always superior for the same number of examples. However, zero-shot learning may eventually surpass the peak performance of supervised learning by taking advantage of a larger pool of examples: this is the case in RU, but not in JA. This illustrates a trade-off between quality (in-domain but possibly scarce data) and quantity (abundant but possibly out-of-domain data).
+
+Few-shot learning combines the desirable properties of both approaches. After pre-training a model on DE, it can be adapted on a small amount of target-language examples. Performance continues to grow with more shots; with as few as 1k JA examples it is comparable to supervised learning on 15k examples. Few-shot learning thus not only achieves the highest scores, but also leverages costly target-language data in a sample-efficient fashion.
+
+
+Figure 2: MBERT performance in two target languages, RU (larger test set) and JA, across different amounts of training examples under different settings: supervised learning (examples directly from $L_{trg}$), few-shot transfer (examples from DE as $L_{src}$ plus $L_{trg}$ examples), and zero-shot transfer (+TLD) from DE.
+
+Joint Multilingual Learning. The results are shown in Table 6 (bottom section). We observe a substantial boost in performance across all languages compared to both zero-shot transfer from any individual language and supervised learning (cf. Table 4), including high-resource languages such as DE and RU. Low-resource languages enjoy the largest gains: with MBERT as the encoder, UR improves by 4.8 points, KK by 4.3, BN by 5.7, and KA by 6.3. However, this is still insufficient to equalize performance across the board, as the latter group of languages continues to lag behind: a gap of 7.2 points remains between DE and UR. We speculate that the reason for this asymmetry is that, in addition to being resource-poor, UR, KK, BN, and KA are also typologically distant from the languages where most of the examples are concentrated. Overall, these findings suggest that leveraging multiple sources is better than a single one, by virtue of the transfer capabilities of massively multilingual encoders, as previously demonstrated (Wu and Dredze, 2019; Ponti et al., 2021).
+
+Adversarial Examples. Finally, we investigate whether the inclusion of adversarial examples (see §2.3) makes AM$^2$ICo less likely to be solved by relying on superficial cues. In Table 7, we compare the performance of MBERT trained on the full input (which we label FULL) with two adversarial baselines: the TGT variant feeds only the target word to the classification model, and the CTX variant replaces the target word with a '[MASK]' token. We perform this analysis across AM$^2$ICo, WiC, XL-WiC and MCL-WiC.
+
+| $\ell_s$ | Model | DE | RU | JA | ZH | AR | KO | FI | TR | ID | EU | KA | BN | KK | UR |
+| Zero-shot Transfer (+TLD) |
+| DE | MBERT | - | 75.1 | 71.1 | 68.9 | 64.8 | 71.1 | 78.0 | 75.7 | 75.4 | 74.2 | 70.2 | 68.0 | 62.8 | 68.5 |
+| | XLM-R | - | 77.5 | 70.1 | 67.7 | 67.4 | 70.2 | 77.2 | 76.6 | 74.8 | 70.7 | 71.9 | 70.6 | 66.5 | 67.0 |
+| RU | MBERT | 74.4 | - | 71.7 | 68.4 | 65.1 | 71.0 | 74.8 | 72.6 | 74.5 | 71.2 | 69.3 | 70.3 | 67.0 | 67.8 |
+| | XLM-R | 72.4 | - | 68.2 | 64.4 | 69.5 | 72.3 | 73.7 | 71.2 | 69.7 | 68.8 | 69.6 | 68.3 | 69.0 | 66.7 |
+| JA | MBERT | 70.9 | 70.7 | - | 68.9 | 63.2 | 72.5 | 70.9 | 72.5 | 70.1 | 67.7 | 65.3 | 67.9 | 65.8 | 65.0 |
+| | XLM-R | 64.7 | 65.7 | - | 70.4 | 63.2 | 67.1 | 66.5 | 64.9 | 62.5 | 62.5 | 61.7 | 66.7 | 57.8 | 59.3 |
+| ZH | MBERT | 71.5 | 69.2 | 68.7 | - | 61.6 | 70.6 | 68.6 | 70.8 | 66.9 | 68.0 | 64.9 | 65.6 | 61.5 | 62.8 |
+| | XLM-R | 66.5 | 67.1 | 72.1 | - | 62.6 | 62.9 | 66.2 | 63.8 | 63.4 | 61.6 | 59.8 | 61.1 | 59.8 | 60.3 |
+| AR | MBERT | 69.2 | 67.6 | 68.5 | 66.3 | - | 67.1 | 68.6 | 69.5 | 68.2 | 67.6 | 66.9 | 68.1 | 63.8 | 63.8 |
+| | XLM-R | 62.0 | 63.0 | 62.2 | 62.0 | - | 61.9 | 62.2 | 63.6 | 63.9 | 61.3 | 62.1 | 64.9 | 60.8 | 58.3 |
+| Joint Multilingual Learning |
+| all | MBERT | 80.4 | 82.1 | 78.2 | 75.2 | 73.3 | 75.8 | 81.2 | 80.6 | 78.4 | 75.9 | 76.5 | 76.0 | 71.3 | 73.3 |
+| | XLM-R | 79.4 | 80.9 | 79.4 | 76.1 | 73.6 | 76.0 | 81.2 | 80.5 | 77.9 | 74.2 | 77.7 | 73.0 | 74.5 | 72.5 |
+
+Table 6: Results for zero-shot transfer from a single source (top section) and joint transfer from multiple sources (bottom section) in AM$^2$ICo. The best scores for each setup are in bold. Results for the larger test sets of DE and RU are reported in Table 10.
+
+| Dataset | $L$ | FULL | CTX | TGT | HM |
+| WiC | EN | 67.1 | 65.1 | 55.2 | 80.0 |
+| XL-WiC | DE | 81.0 | 75.0 | 80.0 | 74.0 |
+| MCL-WiC | EN | 83.3 | 82.6 | 53.4 | 96.8 |
+| AM$^2$ICo -A | DE | 84.2 | 83.6 | 50.0 | 93.0 |
+| AM$^2$ICo +A | DE | 80.0 | 73.6 | 66.9 | 93.5 |
+| AM$^2$ICo -A | ZH | 79.8 | 78.2 | 49.0 | 86.0 |
+| AM$^2$ICo +A | ZH | 71.0 | 66.0 | 61.6 | 87.5 |
+
+Table 7: The impact of adversarial examples on MBERT performance. +A indicates the presence of adversarial examples, -A their absence. CTX represents training a model on the context only, TGT on the target word only, and FULL on the whole input. HM stands for human accuracy.
+
+In previous datasets, at least one of the adversarial baselines reaches performance close to the FULL model: in WiC (EN), CTX has a gap of only 2 points; in XL-WiC (DE), TGT is only 1 point away from FULL; in MCL-WiC (EN), the gap between CTX and FULL is even below 1 point. This would also be the case in AM$^2$ICo were it not for the extra adversarial examples (rows +A): by virtue of this change, the distance between FULL and the best adversarial baseline is 6.4 points in DE and 5.0 in ZH. Therefore, it is safe to conclude that a higher score on AM$^2$ICo better reflects a deep semantic understanding by the model. Moreover, the last column of Table 7 also includes reference human accuracy. While the best baseline in XL-WiC even surpasses the human upper bound, the addition of adversarial examples in AM$^2$ICo, combined with higher human accuracy, drastically increases the gap between the two: 13.5 points in DE and 16.5 in ZH. Overall, this results in a much more challenging evaluation benchmark.
+
+| | DE | RU | JA | ZH | AR |
+| Adversarial | 73.7 | 66.5 | 65.3 | 64.7 | 55.0 |
+| Non-adversarial | 83.1 | 77.9 | 76.5 | 75.5 | 70.5 |
+
+Table 8: MBERT performance on adversarial and non-adversarial examples across five datasets in AM$^2$ICo.
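The three input variants used in this probing analysis could be constructed roughly as follows; this is a sketch, and the exact masking procedure may differ from the actual implementation.

```python
def make_baseline_input(word, context, variant):
    """Build the model input for the three probing variants:
    FULL = the whole input (context containing the target word),
    TGT  = the target word alone,
    CTX  = the context with the target word replaced by '[MASK]'.
    """
    if variant == "FULL":
        return context
    if variant == "TGT":
        return word
    if variant == "CTX":
        return context.replace(word, "[MASK]")
    raise ValueError(f"unknown variant: {variant}")
```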
+
+In addition, we report separate results on these adversarial examples, and compare with model performance on non-adversarial examples for DE, RU, JA, ZH and AR within the supervised setting (Table 8). The adversarial examples clearly pose a much greater challenge for the model, with overall much lower scores. This is expected: on adversarial examples, the model must have a finer-grained and more accurate understanding of both the target word semantics and the surrounding (sentential) context.
+
+# 5 Related Work
+
+Cross-Lingual Evaluation of Word Meaning in Context. Going beyond the readily available sense inventories required for WSD-style evaluations, comprehensive benchmarks for evaluating word meaning in context cross-lingually are still few and far between. XL-WiC (Raganato et al., 2020) extends the original English WiC framework of Pilehvar and Camacho-Collados (2019) to 12 other languages, but supports only monolingual evaluation and suffers from issues such as small gaps between human and system performance. The SemEval-2021 shared task MCL-WiC does focus on cross-lingual WiC, but covers only five high-resource languages from three language families (English, French, Chinese, Arabic, Russian). Both XL-WiC and MCL-WiC mainly focus on common words and do not include less frequent concepts (e.g., named entities). Further, their language coverage and data availability are heavily skewed towards Indo-European languages.
+
+There are several other 'non-WiC' datasets designed to evaluate cross-lingual context-aware lexical representations. Bilingual Contextual Word Similarity (BCWS) (Chi and Chen, 2018) challenges a model to predict graded similarity of cross-lingual word pairs given sentential context, one in each language. In the Bilingual Token-level Sense Retrieval (BTSR) task (Liu et al., 2019), given a query word in a source language context, a system must retrieve a meaning-equivalent target language word within a target language context.[10] However, both BCWS and BTSR are again very restricted in terms of language coverage: BCWS covers only one language pair (EN-ZH), while BTSR contains two pairs (EN-ZH/ES). Further, they provide only test data: as such, they can merely be used as general intrinsic probes for pretrained models, but cannot support fine-tuning experiments and cannot fully expose the relevance of information available in pretrained models for downstream applications. This is problematic as intrinsic tasks in general do not necessarily correlate well with downstream performance (Chiu et al., 2016; Glavaš et al., 2019).
+
+$\mathrm{AM}^2\mathrm{ICo}$ vs. Entity Linking. Our work is related to the entity linking (EL) task (Rao et al., 2013; Cornolti et al., 2013; Shen et al., 2014) similarly to how the original WiC (based on WordNet knowledge) is related to WSD. EL systems must map entities in context to a predefined knowledge base (KB). While WSD relies on the WordNet sense inventory, the EL task focuses on KBs such as Wikipedia and DBpedia. When each entity mention is mapped to a unique Wiki page, this procedure is termed wikification (Mihalcea and Csomai, 2007). The cross-lingual wikification task (Ji et al., 2015; Tsai and Roth, 2016) grounds multilingual mentions to English Wikipedia pages. Similar to WSD, EL evaluation is tied to a specific KB, and thus faces the same limitation of restricting meanings and their distinctions to those predefined in the inventory. In comparison, $\mathrm{AM}^2\mathrm{ICo}$ leverages Wikipedia only as a convenient resource for extracting the examples, similar to how the original WiC work leverages WordNet. $\mathrm{AM}^2\mathrm{ICo}$ itself is then framed on natural text, without requiring the modeling of the KBs. Also, in comparison with EL, $\mathrm{AM}^2\mathrm{ICo}$ provides higher data quality and a more challenging evaluation of complex word-context interactions, achieved by a carefully designed data extraction and filtering procedure.
+
+# 6 Conclusion
+
+We presented $\mathrm{AM}^2\mathrm{ICo}$, a large-scale and challenging multilingual benchmark for evaluating word meaning in context (WiC) across languages. $\mathrm{AM}^2\mathrm{ICo}$ is constructed by leveraging multilingual Wikipedias, and subsequently validated by humans. It covers 15 typologically diverse languages and a vocabulary substantially larger than all previous WiC datasets. As such, it provides more comprehensive and reliable quality estimates for multilingual encoders. Moreover, $\mathrm{AM}^2\mathrm{ICo}$ includes adversarial examples: resolving such examples requires genuine lexical understanding, as opposed to relying on spurious correlations from partial input. Finally, $\mathrm{AM}^2\mathrm{ICo}$ offers the possibility of cross-lingual evaluation, pairing contexts between different languages. We also explored the impact of language relatedness on model performance by transferring knowledge from multiple source languages. We established a series of baselines on $\mathrm{AM}^2\mathrm{ICo}$, based on SotA multilingual models, revealing that the task is far from being 'solved' even with abundant training data. All models struggle especially when transferring to distant and resource-lean target languages. We hope that $\mathrm{AM}^2\mathrm{ICo}$ will guide and foster further research on effective representation learning across different languages.
+
+# Acknowledgments
+
+We thank the anonymous reviewers for their helpful feedback. We acknowledge Peterhouse College at University of Cambridge for funding Qianchu Liu's PhD research. The work was also supported by the ERC Consolidator Grant LEXICAL: Lexical Acquisition Across Languages (no 648909) awarded to Anna Korhonen. We also appreciate many helpful discussions and feedback from our colleagues in the Language Technology Lab.
+
+# References
+
+Edoardo Barba, Luigi Procopio, Niccolò Campolungo, Tommaso Pasini, and Roberto Navigli. 2020. MuLaN: Multilingual label propagation for word sense disambiguation. In Proceedings of IJCAI 2020, pages 3837-3844.
+Ta-Chung Chi and Yun-Nung Chen. 2018. CLUSE: Cross-lingual unsupervised sense embeddings. In Proceedings of EMNLP 2018, pages 271-281.
+Billy Chiu, Anna Korhonen, and Sampo Pyysalo. 2016. Intrinsic evaluation of word vectors fails to predict extrinsic performance. In Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP, pages 1-6.
+Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of ACL 2020, pages 8440-8451.
+Marco Cornolti, Paolo Ferragina, and Massimiliano Ciaramita. 2013. A framework for benchmarking entity-annotation systems. In Proceedings of WWW 2013, pages 249-260.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT 2019, pages 4171-4186.
+Goran Glavaš, Robert Litschko, Sebastian Ruder, and Ivan Vulić. 2019. How to (properly) evaluate cross-lingual word embeddings: On strong baselines, comparative analyses, and some misconceptions. In Proceedings of ACL 2019, pages 710-721.
+Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural language inference data. In Proceedings of NAACL-HLT 2018, pages 107-112.
+Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. Adversarial example generation with syntactically controlled paraphrase networks. In Proceedings of NAACL-HLT 2018, pages 1875-1885.
+Heng Ji, Joel Nothman, Ben Hachey, and Radu Florian. 2015. Overview of TAC-KBP2015 tri-lingual entity discovery and linking. In Proceedings of TAC 2015.
+Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In Proceedings of EMNLP 2017, pages 2021-2031.
+Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of ICLR 2015.
+
+Qianchu Liu, Diana McCarthy, Ivan Vulic, and Anna Korhonen. 2019. Investigating cross-lingual alignment methods for contextualized embeddings with token-level evaluation. In Proceedings of CoNLL 2019, pages 33-43.
+Rada Mihalcea and Andras Csomai. 2007. Wikify! Linking documents to encyclopedic knowledge. In Proceedings of CIKM 2007, pages 233-242.
+Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. 2013. Exploiting similarities among languages for machine translation. arXiv preprint, CoRR, abs/1309.4168.
+Andrea Moro, Alessandro Raganato, and Roberto Navigli. 2014. Entity linking meets word sense disambiguation: a unified approach. Transactions of the ACL, 2:231-244.
+Roberto Navigli. 2009. Word sense disambiguation: A survey. ACM Computing Surveys, 41(2):1-69.
+Roberto Navigli, David Jurgens, and Daniele Vannella. 2013. Semeval-2013 task 12: Multilingual word sense disambiguation. In Proceedings of SEMEVAL 2013, pages 222-231.
+Roberto Navigli and Simone Paolo Ponzetto. 2012. Joining forces pays off: Multilingual joint word sense disambiguation. In Proceedings of EMNLP-ConNLL 2012, pages 1399-1410.
+Timothy Niven and Hung-Yu Kao. 2019. Probing neural network comprehension of natural language arguments. In Proceedings of ACL 2019, pages 4658-4664.
+Mohammad Taher Pilehvar and Jose Camacho-Collados. 2019. WiC: the word-in-context dataset for evaluating context-sensitive meaning representations. In Proceedings of NAACL-HLT 2019, pages 1267-1273.
+Edoardo Maria Ponti, Goran Glavaš, Olga Majewska, Qianchu Liu, Ivan Vulić, and Anna Korhonen. 2020. XCOPA: A multilingual dataset for causal commonsense reasoning. In Proceedings of EMNLP 2020, pages 2362-2376.
+Edoardo Maria Ponti, Helen O'horan, Yevgeni Berzak, Ivan Vulić, Roi Reichart, Thierry Poibeau, Ekaterina Shutova, and Anna Korhonen. 2019. Modeling language variation and universals: A survey on typological linguistics for natural language processing. Computational Linguistics, 45(3):559-601.
+Edoardo Maria Ponti, Ivan Vulic, Ryan Cotterell, Marinela Parovic, Roi Reichart, and Anna Korhonen. 2021. Parameter space factorization for zero-shot learning across tasks and languages. Transactions of the ACL, 9:410-428.
+Alessandro Raganato, José Camacho-Collados, and Roberto Navigli. 2017. Word sense disambiguation:
+
+A unified evaluation framework and empirical comparison. In Proceedings of EACL 2017, pages 99-110.
+Alessandro Raganato, Tommaso Pasini, Jose Camacho-Collados, and Mohammad Taher Pilehvar. 2020. XL-WiC: A multilingual benchmark for evaluating semantic contextualization. In Proceedings of EMNLP 2020, pages 7193-7206.
+Jonathan Raiman and Olivier Raiman. 2018. DeepType: Multilingual entity linking by neural type system evolution. In Proceedings of AAAI 2018, pages 5406-5413.
+Delip Rao, Paul McNamee, and Mark Dredze. 2013. Entity linking: Finding extracted entities in a knowledge base. In Multi-source, multilingual information extraction and summarization, pages 93-115. Springer.
+Sebastian Ruder, Ivan Vulic, and Anders Søgaard. 2019. A survey of cross-lingual embedding models. Journal of Artificial Intelligence Research, 65:569-631.
+Bianca Scarlini, Tommaso Pasini, and Roberto Navigli. 2020. Sense-annotated corpora for word sense disambiguation in multiple languages and domains. In Proceedings of LREC 2020, pages 5905-5911.
+Wei Shen, Jianyong Wang, and Jiawei Han. 2014. Entity linking with a knowledge base: Issues, techniques, and solutions. IEEE Transactions on Knowledge and Data Engineering, 27(2):443-460.
+Anders Søgaard, Sebastian Ruder, and Ivan Vulić. 2018. On the limitations of unsupervised bilingual dictionary induction. In Proceedings of ACL 2018, pages 778-788.
+Chen-Tse Tsai and Dan Roth. 2016. Cross-lingual wikification using multilingual embeddings. In Proceedings of NAACL-HLT 2016, pages 589-598.
+Shyam Upadhyay, Nitish Gupta, and Dan Roth. 2018. Joint multilingual supervision for cross-lingual entity linking. In Proceedings of EMNLP 2018, pages 2486-2495.
+Ivan Vulic, Simon Baker, Edoardo Maria Ponti, Ulla Petti, Ira Leviant, Kelly Wing, Olga Majewska, Eden Bar, Matt Malone, Thierry Poibeanu, et al. 2020. Multi-SimLex: A large-scale evaluation of multilingual and cross-lingual lexical semantic similarity. Computational Linguistics, 46(4):847-897.
+Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of EMNLP 2020: System Demonstrations, pages 38-45.
+
+Shijie Wu and Mark Dredze. 2019. Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT. In Proceedings of EMNLP-IJCNLP 2019, pages 833-844.
+
+# A Results on the larger test set in DE and RU
+
+Table 9 and Table 10 list results for the larger test sets of DE and RU in $\mathrm{AM}^2\mathrm{ICo}$.
+
+| | | DE large | RU large |
+| --- | --- | --- | --- |
+| MTR | MBERT | 66.1 | 65.3 |
+| | XLM-R | 64.5 | 63.2 |
+| FT | MBERT | 80.7 | 75.7 |
+| | XLM-R | 77.7 | 75.5 |
+
+Table 9: Accuracy of MBERT and XLM-R on larger test sets of DE and RU in $\mathrm{AM}^2\mathrm{ICo}$ in a supervised learning setting. We report metric-based classification (MTR) results, as well as the scores in the fine-tuning setup (FT).
+
+| $\ell_s$ | model | DE large | RU large |
+| --- | --- | --- | --- |
+| Zero-shot Transfer (+TLD) | | | |
+| DE | MBERT | - | 77.1 |
+| | XLM-R | - | 76.4 |
+| RU | MBERT | 76.1 | - |
+| | XLM-R | 72.5 | - |
+| JA | MBERT | 72.3 | 72.2 |
+| | XLM-R | 65.5 | 66.0 |
+| ZH | MBERT | 70.8 | 70.0 |
+| | XLM-R | 64.7 | 64.4 |
+| Joint Multilingual Learning | | | |
+| all | MBERT | 81.9 | 81.2 |
+| | XLM-R | 80.5 | 80.7 |
+
+Table 10: $\mathrm{AM}^2\mathrm{ICo}$ transfer results for the larger test sets of DE and RU. Performance from zero-shot transfer from a single source is in the top section; joint transfer from multiple sources is in the bottom section.
\ No newline at end of file
diff --git a/am2icoevaluatingwordmeaningincontextacrosslowresourcelanguageswithadversarialexamples/images.zip b/am2icoevaluatingwordmeaningincontextacrosslowresourcelanguageswithadversarialexamples/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..f56ba4365cbce5d4a495569e561803909a07b5e9
--- /dev/null
+++ b/am2icoevaluatingwordmeaningincontextacrosslowresourcelanguageswithadversarialexamples/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:163614853f912860332df66d59053c9e6879c9c4a42efe3bfa61dcde497ab87e
+size 550393
diff --git a/am2icoevaluatingwordmeaningincontextacrosslowresourcelanguageswithadversarialexamples/layout.json b/am2icoevaluatingwordmeaningincontextacrosslowresourcelanguageswithadversarialexamples/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..d897bb59132f60547fbebd5e5b99f5f93a6872c7
--- /dev/null
+++ b/am2icoevaluatingwordmeaningincontextacrosslowresourcelanguageswithadversarialexamples/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:50471d137c42077745cbdf9b45a642dbcd91a6392b0cbd0352faf2850ec32f7f
+size 384946
diff --git a/amassivelymultilingualanalysisofcrosslingualityinsharedembeddingspace/d038a9ce-0a35-4cf4-bcee-980997ca14e5_content_list.json b/amassivelymultilingualanalysisofcrosslingualityinsharedembeddingspace/d038a9ce-0a35-4cf4-bcee-980997ca14e5_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..e9e0005d8252397626f4e60c1489dcd8cee95ba7
--- /dev/null
+++ b/amassivelymultilingualanalysisofcrosslingualityinsharedembeddingspace/d038a9ce-0a35-4cf4-bcee-980997ca14e5_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:17bbc461aab18de65b30f385c55285409a136f22e0ec2fbcdc64391484e9fd38
+size 111291
diff --git a/amassivelymultilingualanalysisofcrosslingualityinsharedembeddingspace/d038a9ce-0a35-4cf4-bcee-980997ca14e5_model.json b/amassivelymultilingualanalysisofcrosslingualityinsharedembeddingspace/d038a9ce-0a35-4cf4-bcee-980997ca14e5_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..5e1f4ab5c8a79a931f70e610910a82740eb9b6cf
--- /dev/null
+++ b/amassivelymultilingualanalysisofcrosslingualityinsharedembeddingspace/d038a9ce-0a35-4cf4-bcee-980997ca14e5_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:71578d3869d6577b22acc0cf59aaf7237807319a233df53997576fddcb480668
+size 135822
diff --git a/amassivelymultilingualanalysisofcrosslingualityinsharedembeddingspace/d038a9ce-0a35-4cf4-bcee-980997ca14e5_origin.pdf b/amassivelymultilingualanalysisofcrosslingualityinsharedembeddingspace/d038a9ce-0a35-4cf4-bcee-980997ca14e5_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..4299f94ad508c8c59938abcb467992d77b053c41
--- /dev/null
+++ b/amassivelymultilingualanalysisofcrosslingualityinsharedembeddingspace/d038a9ce-0a35-4cf4-bcee-980997ca14e5_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1a2aea4cb646cc96eaa4376808d1416b9122fbb3ad1930a3e7eebb36d0b27122
+size 885320
diff --git a/amassivelymultilingualanalysisofcrosslingualityinsharedembeddingspace/full.md b/amassivelymultilingualanalysisofcrosslingualityinsharedembeddingspace/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..d834325d28d7848449b1ff5218fc08a7b44ebc22
--- /dev/null
+++ b/amassivelymultilingualanalysisofcrosslingualityinsharedembeddingspace/full.md
@@ -0,0 +1,471 @@
+# A Massively Multilingual Analysis of Cross-linguality in Shared Embedding Space
+
+Alex Jones
+
+Dartmouth College
+
+alexander.g.jones.23@dartmouth.edu
+
+# William Yang Wang
+
+University of California, Santa Barbara
+
+william@cs.ucsb.edu
+
+# Kyle Mahowald
+
+University of Texas at Austin
+
+mahowald@utexas.edu
+
+# Abstract
+
+In cross-lingual language models, representations for many different languages live in the same space. Here, we investigate the linguistic and non-linguistic factors affecting sentence-level alignment in cross-lingual pretrained language models for 101 languages and 5,050 language pairs. Using BERT-based LaBSE and BiLSTM-based LASER as our models, and the Bible as our corpus, we compute a task-based measure of cross-lingual alignment in the form of bitext retrieval performance, as well as four intrinsic measures of vector space alignment and isomorphism. We then examine a range of linguistic, quasi-linguistic, and training-related features as potential predictors of these alignment metrics. The results of our analyses show that word order agreement and agreement in morphological complexity are two of the strongest linguistic predictors of cross-linguality. We also note in-family training data as a stronger predictor than language-specific training data across the board. We verify some of our linguistic findings by looking at the effect of morphological segmentation on English-Inuktitut alignment, in addition to examining the effect of word order agreement on isomorphism for 66 zero-shot language pairs from a different corpus. We make the data and code for our experiments publicly available. $^{1}$
+
+# 1 Introduction
+
+Cross-lingual language models are polyglots insofar as they house representations for many different languages in the same space. But to what extent are they good polyglots? The answer depends, in part, on how well-aligned and isomorphic the representations are, and not all language pairs are equally well-aligned. What determines the quality of the alignment? Are language pairs from the same family (e.g., Spanish and French) better
+
+
+Figure 1: A look at some of the strongest features for predicting cross-linguality, according to their number of occurrences in best-feature regression searches across all our dependent variables (see Section 6.2).
+
+aligned than languages from two unrelated families (e.g., Japanese and Swahili)? Are languages which are geographically closer or share an alphabet better aligned? How do factors from linguistic typology (like word order and morphological marking) affect alignment?
+
+Recent work has looked at the typological and training-related factors affecting cross-lingual alignment in monolingual embedding space (Vulic et al., 2020; Dubossarsky et al., 2020), assessed the cross-linguality of pretrained language models using probing tasks and downstream performance measures (Conneau et al., 2020; Wu and Dredze, 2019, 2020; Pires et al., 2019; Groenwold et al., 2020), and probed Transformer models (Wolf et al., 2020) for linguistic structure (see Rogers et al. 2020 for an overview of over 150 studies). However, a gap in the research exists regarding the following question: What are the linguistic, quasi-linguistic, and training-related factors determining the cross-linguality of sentence representations in shared embedding space, and what are the relative weights of these factors?
+
+We argue that, given the importance of alignment in multilingual model performance, gaining fundamental insight into what affects inter-language alignment and isomorphism (specifically, which linguistic factors matter for inter-language alignment) will make it possible to leverage existing information on linguistic typology to improve alignment (and thereby task performance) for low-resource languages.
+
+Our contributions are as follows:
+
+- We provide a characterization of cross-linguality for 101 languages (29 language families) in two massively multilingual sentence embedding models with different architectures (LaBSE and LASER), attacking the question from the vantage of vector space analysis—using four measures of alignment and isomorphism—and downstream task performance (namely bitext retrieval).
+- We present over a dozen linguistic, quasi-linguistic, and training-related factors as potential predictors of cross-linguality, and examine their relationship with the above metrics using diverse statistical analyses.
+- We uncover novel and pronounced effects of morphology agreement and word order agreement on cross-linguality, demonstrate the importance of in-family training data in ensuring multilinguality, and validate our linguistic findings with two empirical case studies on low-resource languages.
+
+# 2 Related Work
+
+Various studies have assessed the cross-linguality of pretrained language models. Recent efforts have approached this question via performance on an array of downstream NLP tasks (Conneau et al., 2020; Wu and Dredze, 2019, 2020; Karthikeyan et al., 2020; Pires et al., 2019; Groenwold et al., 2020), and others have proposed methods for better cross-lingual alignment in light of systematic cross-lingual deficiencies (Zhang et al., 2019; Xia et al., 2021). Our study hews closest methodologically to Vulic et al. (2020) and Dubossarsky et al. (2020), who investigate the determinants of cross-lingual isomorphism using monolingual fastText embeddings (Bojanowski et al., 2016; Joulin et al., 2016; Mikolov et al., 2013).
+
+Findings from these studies have been mixed, but some patterns emerge. Pires et al. (2019) and Conneau et al. (2020) find that cross-lingual transfer works best between typologically similar language pairs, in particular between languages that share word order features. Wu and Dredze (2019)
+
+approach cross-linguality by focusing on zero-shot cross-lingual transfer in mBERT, and show that each mBERT layer retains language-specific information and that token overlap correlates with cross-lingual performance. Wu and Dredze (2020) home in on low-resource languages, finding that they often fail to reap the benefits of massively multilingual joint training but that their performance can be boosted by providing similar-language training data. Somewhat contrary to others' results (including ours), Karthikeyan et al. (2020) find that lexical overlap factors in negligibly to cross-lingual transfer, while the depth of the network is integrally important. Vulić et al. (2020) and Dubossarsky et al. (2020) look at how typological features, training-related factors, and measures of vector space isomorphism predict cross-lingual performance between monolingual word embeddings. Vulić et al. (2020) find in their experiments that cross-lingual performance depends mostly on training data and regimes, while Dubossarsky et al. (2020) see more mixed results from their experiments: They show that linguistic typology is important, but not deterministic, for predicting cross-lingual performance.
+
+Our work not only replicates these findings for monolingual spaces in the multilingual embedding space (e.g. on word order similarity, related training data, typological distance, subword overlap), but extends that work through: (1) The scale (101 languages, 5,050 language pairs in the main analysis); (2) The quantity and diversity of predictors (13 linguistic, quasi-linguistic, and training-related features); (3) The models (cross-lingual sentence encoders with different architectures); and (4) The analytic methods (a blend of prediction-based and classical statistical techniques, supplemented by performance-based case studies on extremely low-resource languages).
+
+# 3 Bible Corpus
+
+The source of the bitexts we evaluate on is the superparallel Bible corpus$^{2}$ from Christodouloupoulos and Steedman (2014), whence we gather texts for 101 languages and bitexts for 5,050 language pairs.$^{3}$ We evaluate on the Books of Matthew and John in the New Testament separately and average the results, as these parts are available for all 101 languages. In doing so, we avoid the pitfalls of relying on a single set of bitexts for our analysis. Each document contains 800-1000 sentences.
+
+# 4 Measures of Cross-lingual Alignment & Isomorphism
+
+We formulate alignment metrics in two distinct ways: over language pairs and over individual languages. The latter group is computed from the former by averaging over all pairs in which a language appears. For example, to derive the average F1-score for Chinese, we average over the F1-scores for Chinese-German, Chinese-Amuzgo, etc.
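As a minimal illustration of this averaging (with made-up pair scores and language codes, not figures from the experiments), a per-language metric can be derived from pair-level metrics as follows:

```python
from collections import defaultdict

# Hypothetical pairwise F1-scores (the study covers 5,050 pairs).
pair_f1 = {
    ("zh", "de"): 0.70,
    ("zh", "amu"): 0.40,
    ("de", "amu"): 0.50,
}

# Each language's score is the mean over every pair it appears in.
scores = defaultdict(list)
for (a, b), f1 in pair_f1.items():
    scores[a].append(f1)
    scores[b].append(f1)

lang_f1 = {lang: sum(v) / len(v) for lang, v in scores.items()}
print(round(lang_f1["zh"], 2))  # -> 0.55, i.e. (0.70 + 0.40) / 2
```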
+
+Some metrics we use are measures of vector subspace isomorphism (i.e. those examined in Dubossarsky et al. 2020), while others are measures of alignment (namely those pertaining to bitext retrieval). Vector spaces may be isomorphic without being well-aligned, so we quantify multilinguality in diverse ways.
+
+# 4.1 Bitext Retrieval Task
+
+The bitext retrieval task consists of finding all sentences in a paired set of documents that are translations of each other. This process can be carried out between two comparable corpora, such as Wikipedia ("bitext mining"), but we use the 5,050 bitexts collected from the Bible corpus. We mine in two directions: for each sentence in document $\mathcal{X}$ , we find a match in document $\mathcal{Y}$ , and vice versa. We then take the intersection of those two searches, which has proven to be a useful heuristic (Artetxe and Schwenk, 2019a; Jones and Wijaya, 2021). Note that this task can be thought of as the sentence-level analog of the bilingual lexicon induction (BLI) task used in Vulić et al. (2020) and Dubossarsky et al. (2020).
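The bidirectional search with intersection can be sketched as follows over toy embeddings. This is an illustrative simplification using plain cosine similarity and exhaustive search; the actual experiments use margin scoring and Faiss, and `mutual_retrieval` is our own name for the heuristic:

```python
import numpy as np

def mutual_retrieval(X, Y):
    """Keep only the sentence pairs retrieved in both search directions.

    X, Y: (n, d) arrays of L2-normalized sentence embeddings, so the
    dot product equals cosine similarity.
    """
    sims = X @ Y.T
    fwd = {(i, int(sims[i].argmax())) for i in range(len(X))}     # X -> Y
    bwd = {(int(sims[:, j].argmax()), j) for j in range(len(Y))}  # Y -> X
    return fwd & bwd

# Toy data: "translations" are small perturbations of the source sentences.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 8))
Y = X + 0.01 * rng.normal(size=(3, 8))
X /= np.linalg.norm(X, axis=1, keepdims=True)
Y /= np.linalg.norm(Y, axis=1, keepdims=True)
print(mutual_retrieval(X, Y))  # the three ground-truth pairs (0,0), (1,1), (2,2)
```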
+
+Task performance Margin scoring, introduced by Artetxe and Schwenk (2019a), has shown success on the bitext retrieval task (Schwenk et al., 2021; Schwenk et al., 2019; Keung et al., 2021; Tran et al., 2020; Fan et al., 2020; Jones and Wijaya, 2021). Margin score may be thought of as "relativized" cosine similarity, in that it selects vectors that "stand out" most among their neighbors in terms of proximity, rather than ones that are simply closest together. The method requires initially finding the $k$ -nearest neighbors of each source and target sentence, which we do efficiently with Faiss (Johnson et al., 2017a). The sentence pair $(x,y)$ is then chosen to maximize the margin score between
+
+$x$ and $y$ , namely
+
+$$
+\operatorname{score}_{\mathrm{margin}}(x, y) = \frac{2k \cos(x, y)}{\sum_{z \in \mathrm{NN}_k(x)} \cos(x, z) + \sum_{z \in \mathrm{NN}_k(y)} \cos(y, z)}
+$$
+
+After retrieving sentence pairs in both directions and keeping the intersection, we compute standard F1-score against ground-truth alignments.
+
+Average margin score We also introduce a novel alignment metric in the form of the average margin score across ground-truth sentence alignments. Namely, given aligned sentence embedding matrices $\mathcal{X}$ and $\mathcal{Y}$ with $N$ embeddings each, the average margin score is computed as
+
+$$
+\operatorname{margin}_{\mathrm{avg}}(\mathcal{X}, \mathcal{Y}) = \frac{1}{N} \sum_{i=1}^{N} \operatorname{score}_{\mathrm{margin}}(\mathcal{X}_i, \mathcal{Y}_i), \quad \mathcal{X}, \mathcal{Y} \in \mathbb{R}^{N \times emb\_dim}
+$$
+
+This provides a continuous measure of crosslingual alignment that is correlated with, but not equivalent to, the F1-score on this task.
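Both quantities can be sketched in a few lines, assuming L2-normalized embeddings and, following Artetxe and Schwenk (2019a), that each side's $k$ nearest neighbors are drawn from the opposite language's pool; the function names are ours, not part of any library:

```python
import numpy as np

def margin_score(x, y, X_pool, Y_pool, k=2):
    """Ratio margin score for a candidate pair (x, y).

    Embeddings are L2-normalized, so dot product = cosine. NN_k(x) is
    taken from the target-side pool, NN_k(y) from the source-side pool.
    """
    nn_x = np.sort(Y_pool @ x)[-k:]  # cosines to x's k nearest target neighbors
    nn_y = np.sort(X_pool @ y)[-k:]  # cosines to y's k nearest source neighbors
    return 2 * k * (x @ y) / (nn_x.sum() + nn_y.sum())

def margin_avg(X, Y, k=2):
    """Average margin score over ground-truth alignments X[i] <-> Y[i]."""
    return float(np.mean([margin_score(X[i], Y[i], X, Y, k) for i in range(len(X))]))

# Toy check: three orthonormal embeddings, perfectly aligned across languages.
X = np.eye(3)
Y = np.eye(3)
print(margin_score(X[0], Y[0], X, Y))  # -> 2.0
print(margin_avg(X, Y))                # -> 2.0
```

Note that a matched pair's own cosine appears among its neighbor cosines in the denominator, which is why even a perfect alignment yields a finite score rather than diverging.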
+
+# 4.2 Approximate Isomorphism
+
+Vulic et al. (2020) and Dubossarsky et al. (2020) introduce various ways of quantifying the degree of isomorphism between two vector spaces, of which we use three. Note that unlike Vulic et al. (2020) and Dubossarsky et al. (2020), who investigate isomorphism between monolingual spaces, we examine cross-lingual isomorphism within shared embedding space. These metrics thus technically quantify vector subspace isomorphism, where each subspace comprises embeddings in a particular language.
+
+Gromov-Hausdorff distance The Hausdorff distance between two metric spaces $\mathcal{X}$ and $\mathcal{Y}$ , given by
+
+$$
+\mathcal{H}(\mathcal{X}, \mathcal{Y}) = \max\left[ \sup_{x \in \mathcal{X}} \inf_{y \in \mathcal{Y}} d(x, y), \; \sup_{y \in \mathcal{Y}} \inf_{x \in \mathcal{X}} d(x, y) \right]
+$$
+
+intuitively measures the worst-case distance between the nearest neighbors of $\mathcal{X}$ and $\mathcal{Y}$ (Vulic et al., 2020). The Gromov-Hausdorff distance then minimizes this distance over all isometric transforms $f$ and $g$ :
+
+$$
+\mathcal{GH}(\mathcal{X}, \mathcal{Y}) = \inf_{f, g} \mathcal{H}(f(\mathcal{X}), g(\mathcal{Y}))
+$$
+
+In practice, the Gromov-Hausdorff distance is approximated by computing the Bottleneck distance between $\mathcal{X}$ and $\mathcal{Y}$ (Dubossarsky et al., 2020; Chazal et al., 2009).
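The minimization over isometries (and its Bottleneck-distance approximation) is beyond a short sketch, but the underlying symmetric Hausdorff distance itself is straightforward to compute for finite point sets:

```python
import numpy as np

def hausdorff(X, Y):
    """Symmetric Hausdorff distance between point sets X and Y (Euclidean).

    Takes the max of the two directed distances sup-inf d(x, y).
    """
    D = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)  # pairwise distances
    return float(max(D.min(axis=1).max(), D.min(axis=0).max()))

# Toy point sets on a line: the worst-case nearest-neighbor gap is 2.
X = np.array([[0.0, 0.0], [1.0, 0.0]])
Y = np.array([[0.0, 0.0], [3.0, 0.0]])
print(hausdorff(X, Y))  # -> 2.0 (the point (3, 0) is 2 away from its neighbor)
```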
+
+Singular value gap Given cross-lingual aligned sentence embeddings stored in matrices $\mathcal{X}$ and $\mathcal{Y}$ , each with $n$ singular values $\sigma_{1},\sigma_{2},\ldots,\sigma_{n}$ sorted in descending order, the singular value gap (Dubossarsky et al., 2020) between $\mathcal{X}$ and $\mathcal{Y}$ is defined as
+
+$$
+\operatorname{SVG}(\mathcal{X}, \mathcal{Y}) = \sum_{i=1}^{n} (\log \sigma_i^{\mathcal{X}} - \log \sigma_i^{\mathcal{Y}})^2
+$$
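A direct NumPy sketch of SVG, assuming $\mathcal{X}$ and $\mathcal{Y}$ hold equally many embeddings so that their singular value lists have the same length $n$:

```python
import numpy as np

def svg(X, Y):
    """Singular value gap between equally-shaped embedding matrices X and Y."""
    sx = np.linalg.svd(X, compute_uv=False)  # returned in descending order
    sy = np.linalg.svd(Y, compute_uv=False)
    return float(np.sum((np.log(sx) - np.log(sy)) ** 2))

# Toy check: Y's spectrum is X's scaled by 2, so each log-gap is log(2).
X = np.diag([4.0, 2.0, 1.0])
Y = np.diag([8.0, 4.0, 2.0])
print(svg(X, Y))  # -> 3 * (log 2)^2
```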
+
+Effective condition number The effective condition number (Dubossarsky et al., 2020) of a matrix $\mathcal{X}$ intuitively captures the extent to which small perturbations in $\mathcal{X}$ are amplified as a result of arbitrary transformations $\phi(\mathcal{X})$ . The lower the (effective) condition number of an embedding space, the more robust it is to transformations (e.g. transfer functions mapping one embedding space to another).
+
+Dubossarsky et al. (2020) reason that monolingual embedding spaces with lower (effective) condition numbers map better to other spaces. They further show that taking the harmonic mean of the effective condition numbers (ECOND-HM) of two embedding spaces provides a reliable measure of approximate isomorphism between those spaces.$^{4}$ We use ECOND-HM in a similar fashion to gauge the approximate isomorphism, or "mappability," of cross-lingual embedding subspaces, where a lower ECOND-HM indicates greater isomorphism.
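As a rough illustration only, the sketch below uses the ordinary 2-norm condition number $\sigma_{\max}/\sigma_{\min}$ as a stand-in (the effective condition number of Dubossarsky et al. (2020) is a variant of this quantity), combined by harmonic mean:

```python
import numpy as np

def cond_hm(X, Y):
    """Harmonic mean of two subspaces' condition numbers.

    A stand-in for ECOND-HM: np.linalg.cond gives the plain 2-norm
    condition number sigma_max / sigma_min, not the "effective" variant.
    """
    cx, cy = np.linalg.cond(X), np.linalg.cond(Y)
    return 2 * cx * cy / (cx + cy)

X = np.diag([4.0, 1.0])  # condition number 4
Y = np.diag([2.0, 1.0])  # condition number 2
print(cond_hm(X, Y))     # -> 16/6, i.e. about 2.667
```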
+
+# 5 Predictors
+
+# 5.1 Linguistic Features
+
+Similarly to the alignment metrics, we define separate sets of features pertaining to language pairs and pertaining to individual languages. We take note of this in our descriptions below.
+
+Phylogeny For individual languages (all languages in the New Testament corpus), we use both language family and subfamily as categorical features. For language pairs, we define two binary variables: same family and same subfamily, corresponding to whether two languages are in the same family or subfamily, respectively.
+
+We include subfamily as a feature in order to investigate finer-grained typological and phylogenetic differences that may affect cross-lingual alignment or isomorphism.
+
+Word order typology For individual languages, we include basic word order as a feature, using the canonical six-way taxonomy (i.e. permutations of $\{\mathrm{S},\mathrm{O},\mathrm{V}\}$ ). For language pairs, we define the binary feature same word order analogously to the binary features above. We consult the WALS database (Dryer and Haspelmath, 2013) and Glottolog (Hammarström et al., 2020) to assign dominant word orders.
+
+Morphological typology Though it is possible to make fine-grained distinctions in morphological typology in theory, we simply draw a binary distinction between languages that are widely considered polysynthetic (mostly Amerindian languages) and all other languages. Even more so than word order, morphological complexity is gradient (Cotterell et al., 2019). But we argue that polysynthetic languages pose a unique challenge for NLP systems and so perform one-vs-all binary coding such that individual languages are associated with a polysynthesis status and language pairs are associated with the feature same polysynthesis status. We classify 17 languages in the corpus as polysynthetic.
+
+Typological distance We also use typological word vectors from lang2vec $^{7}$ (Malaviya et al., 2017), based on the URIEL $^{8}$ typological database (Littell et al., 2017) to compute the distance between languages on the basis of aggregated linguistic features. Specifically, we compute:
+
+1. Syntactic distance using KNN-based syntax vectors
+2. Phonological distance using KNN-based phonology vectors
+3. Inventory distance using KNN-based phonological inventory vectors (distinct from phonological distance)
+4. Geographic distance using geographic location vectors
+
+All distances are computed as cosine distances.
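
Each of the four distances reduces to a cosine distance between two typological feature vectors. A minimal sketch follows; the vectors `vec_a` and `vec_b` are hypothetical stand-ins for the KNN-imputed vectors that lang2vec provides.

```python
import numpy as np

def cosine_distance(u, v) -> float:
    """1 minus cosine similarity; 0 means identical direction."""
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    return 1.0 - (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Hypothetical binary typological vectors for two languages; in practice these
# would be lang2vec's KNN-imputed syntax/phonology/inventory feature vectors.
vec_a = [1, 0, 1, 1, 0]
vec_b = [1, 0, 0, 1, 1]
syntactic_distance = cosine_distance(vec_a, vec_b)
```

Geographic distance is computed the same way, over the location vectors rather than feature vectors.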
+
+Character- & token-level overlap The standard Jaccard similarity coefficient quantifies the overlap between sets $\mathbf{A}$ and $\mathbf{B}$ as:
+
+$$
+J(\mathbf{A}, \mathbf{B}) = \frac{|\mathbf{A} \cap \mathbf{B}|}{|\mathbf{A} \cup \mathbf{B}|}
+$$
+
+However, this measure fails to take into account the frequency of the items (here, characters) in each set. What we really want is the weighted, or multiset, version of the Jaccard coefficient. For our purposes, it suffices to reformulate $J$ as:
+
+$$
+J_M(\mathcal{X}, \mathcal{Y}) = \frac{\left|\mathrm{chr}(\mathcal{X}_M) \cap \mathrm{chr}(\mathcal{Y}_M)\right|}{\left|\mathrm{chr}(\mathcal{X}_M) \cup \mathrm{chr}(\mathcal{Y}_M)\right|} \quad \forall\, \mathcal{X}, \mathcal{Y} \in \mathbf{C}
+$$
+
+where $\text{chr}(\mathcal{D}_M)$ represents the multiset of characters in document $\mathcal{D}$ , and $\mathbf{C}$ is the corpus of bitexts we're working with. For convenience and to avoid redundancy, we compute $J_M$ (character-level overlap) only on aligned texts in the Book of Matthew. Token-level overlap is computed analogously, using the wordpiece (Wu et al., 2016) tokenization method employed by LaBSE $^{9}$ . This measure is only computed on texts in the Book of John.
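
The multiset Jaccard computation can be sketched with `collections.Counter` (character-level shown; the token-level variant is identical with wordpieces in place of characters):

```python
from collections import Counter

def multiset_jaccard(x: str, y: str) -> float:
    """Weighted (multiset) Jaccard overlap between the characters of two texts."""
    cx, cy = Counter(x), Counter(y)
    intersection = sum((cx & cy).values())  # per-character minimum counts
    union = sum((cx | cy).values())         # per-character maximum counts
    return intersection / union
```

Unlike the plain set version, repeated characters contribute proportionally to their frequency, so "aab" and "ab" overlap at 2/3 rather than 1.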
+
+# 5.2 Training-related Features
+
+The aim of our analysis is to understand the effect of each of the previously described features on cross-lingual alignment and isomorphism when training factors are controlled for. To this end, we control for several (pre)training data quantities for the models tested.
+
+First, we account for language-specific training data for individual languages. However, we also account for combined language-specific training data for language pairs, i.e. the amount of data for $x$ plus the amount of data for $y$ , where $(x, y)$ is a language pair. We then take it a step further and record (combined) in-family training data and (combined) in-subfamily training data, taking inspiration from gains made using transfer languages for cross-lingual learning (Johnson et al., 2017b; Littell et al., 2019; Lin et al., 2019).
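
As an illustration, the combined in-family quantity for a language pair can be derived from per-language sentence counts and family labels. The counts and labels below are toy values; the real quantities come from the models' reported pretraining statistics.

```python
def in_family_total(counts: dict, family: dict, lang: str) -> int:
    """Total training sentences across all languages in lang's family."""
    return sum(n for l, n in counts.items() if family[l] == family[lang])

def combined_in_family(counts: dict, family: dict, pair: tuple) -> int:
    """Combined in-family training data for a language pair (x, y)."""
    x, y = pair
    return in_family_total(counts, family, x) + in_family_total(counts, family, y)
```

The in-subfamily variant is identical with subfamily labels substituted for family labels.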
+
+By considering these broader training-related statistics, we are able to better control for and observe the role higher-level typological information (e.g. at the family or subfamily level) plays in training these models.
+
+# 6 Analysis
+
+# 6.1 Simple Correlations
+
+Training data We first look at simple correlations between the training data quantities and the dependent variables (measures of alignment/isomorphism). Results for language pairs are given across all dependent variables for LaBSE and LASER in Table 1. The most striking observation is that combined in-family training data is more highly correlated $^{10}$ with the dependent variables than simple combined data or combined in-subfamily data for all dependent variables, for both LaBSE and LASER $^{11}$ $(0.12 \leq |r| \leq 0.57)^{12}$ . At the individual language level, results are similar (i.e. in-family data is most significant), but with weaker correlations across the board $(0.02 \leq |r| \leq 0.18)$ . Based on these preliminary results, we highlight combined in-family training data as a moderately strong predictor of alignment/isomorphism for a given language pair, one that is in fact better than language-specific data for making predictions about massively multilingual sentence models.
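
The correlations reported throughout this section are plain Pearson coefficients; a small self-contained sketch:

```python
import numpy as np

def pearson_r(x, y) -> float:
    """Pearson correlation between two sequences of paired observations."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))
```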
+
+(Quasi)-linguistic Features Among the predictors, there were several noteworthy correlations. Same family was moderately correlated with better alignment/isomorphism in both LaBSE and LASER (generally $0.2 < |r| < 0.45$ ), while same subfamily was somewhat less correlated. This informs us as to the level at which related-language data is useful for building massively cross-lingual models. Same word order and same polysynthesis status had comparable relationships with the dependent variables as did same family. Token-level overlap was moderately but inconsistently correlated with dependent variables ( $\approx 0.05 < |r| < 0.5$ ), while character-level overlap was somewhat more weakly correlated. The typological distance features were weakly but non-negligibly correlated with dependent variables ( $\approx 0.1 < |r| < 0.3$ ), with one outlier: syntactic distance was correlated at $r = -0.44$ with bitext retrieval F1-score for LASER. The typological distance features were moderately correlated with one another.
+
+# 6.2 Feature Search and Ablation
+
+| Metric | Comb. sentences (LaBSE) | Comb. sentences (LASER) | Comb. in-family (LaBSE) | Comb. in-family (LASER) | Comb. in-subfamily (LaBSE) | Comb. in-subfamily (LASER) |
+| --- | --- | --- | --- | --- | --- | --- |
+| Bitext retrieval (F1) | 0.34 | 0.13 | 0.49 | 0.57 | 0.46 | 0.35 |
+| Avg. margin score | 0.30 | -0.03 | 0.40 | 0.14 | 0.37 | 0.07 |
+| SVG | -0.08 | -0.04 | -0.12 | -0.13 | -0.11 | -0.08 |
+| ECOND-HM | -0.03 | 0.07 | -0.38 | -0.30 | -0.31 | -0.11 |
+| Gromov-Hausdorff dist. | -0.13 | -0.07 | -0.20 | -0.20 | -0.18 | -0.10 |
+
+Table 1: Correlations (Pearson's $r$ ) between training data quantities and alignment/isomorphism metrics for language pairs.
+
+Exhaustive Feature Selection We look at the optimal set of language-pair-specific features for predicting the five measures of alignment and isomorphism. To do so, we perform exhaustive feature search on linear regression models with each of the dependent variables being used separately as the regressand. To counter overfitting, we run ten-fold cross-validation $^{13}$ and use adjusted $r^2$ as the fit criterion, which further penalizes for additional predictors. Adjusted $r^2$ is given by
+
+$$
+r^2_{adj} = 1 - \frac{(1 - r^2)(n - 1)}{n - k - 1}
+$$
+
+where $n$ is the sample size (here, $n = 5050$ ) and $k$ is the number of predictors in the model (here, $1 \leq k \leq 13$ ). In total, we fit $2^{|F|} = 2^{13} = 8192$ regression models for LaBSE and LASER separately, where $F$ is our feature space.
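
The search can be sketched as follows. This is a simplified illustration: cross-validation is omitted for brevity, and `r2_linear` is an assumed helper that fits ordinary least squares with an intercept.

```python
from itertools import combinations
import numpy as np

def adjusted_r2(r2: float, n: int, k: int) -> float:
    """Adjusted r^2, penalizing for the number of predictors k."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)

def r2_linear(X: np.ndarray, y: np.ndarray) -> float:
    """Plain least-squares r^2 with an intercept term."""
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    tot = y - y.mean()
    return 1.0 - (resid @ resid) / (tot @ tot)

def best_subset(X: np.ndarray, y: np.ndarray):
    """Score every non-empty feature subset by adjusted r^2; return the best."""
    n, p = X.shape
    best, best_score = None, -np.inf
    for k in range(1, p + 1):
        for subset in combinations(range(p), k):
            score = adjusted_r2(r2_linear(X[:, list(subset)], y), n, k)
            if score > best_score:
                best, best_score = subset, score
    return best, best_score
```

With 13 features this enumerates all $2^{13} - 1$ non-empty subsets, which is still cheap for linear models.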
+
+For interpretability, we aggregate results by tallying the frequency with which each feature appears in a best-feature list $^{14}$ —giving model-specific results as well as combined results—which are displayed in Table 2. For the combined (LaBSE+LASER) results, same polysynthesis status and combined in-family sentences are tied as the most popular predictors, with 8/10 best-feature list appearances each. Next in line for combined results is combined sentences (6 appearances), followed by a three-way tie between same word order, token-level overlap, and geographic distance (3 appearances). Results are very similar for each model separately, although same word order is tied for second place for LASER, alongside syntactic distance and phonological distance (3 appearances).
+
+These results show that certain (quasi)-linguistic features (in particular, same polysynthesis status and same word order) are not redundant predictors in the presence of training data quantities. Our next analysis examines individual features in terms of the size of their marginal contribution to the regression model fit.
+
+Single-step Regression To appraise the marginal contribution of each feature to overall regression fit, we perform a single-step ablation experiment in which we eliminate features from a full-feature model one at a time. We fit a regression model with all 13 features using ten-fold cross-validation and obtain a baseline $r^2_{adj_{\mathrm{bsl}}}$. We then compute
+
+$$
+\Delta r^2_{adj_f} = r^2_{adj_{\mathrm{bsl}}} - r^2_{adj_{\mathrm{abl}}},
+$$
+
+where $r^2_{adj_{\mathrm{abl}}}$ is the adjusted $r^2$ of the model fit on the reduced feature set $F \setminus \{f\}$. The value of $\Delta r^2_{adj_f}$ is computed for all features $f$ and with each dependent variable separately as the regressand, for LaBSE and LASER separately. To aggregate results, we look at the average rank of each feature according to the ablation experiment, across all five dependent variables.
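
The ablation loop can be sketched as follows (again without cross-validation; `fit_r2` is an assumed least-squares helper):

```python
import numpy as np

def adjusted_r2(r2: float, n: int, k: int) -> float:
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)

def fit_r2(X: np.ndarray, y: np.ndarray) -> float:
    """Least-squares r^2 with an intercept term."""
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    tot = y - y.mean()
    return 1.0 - (resid @ resid) / (tot @ tot)

def single_step_ablation(X: np.ndarray, y: np.ndarray, names: list) -> dict:
    """Delta adjusted-r^2 incurred by dropping each feature from the full model."""
    n, p = X.shape
    baseline = adjusted_r2(fit_r2(X, y), n, p)
    deltas = {}
    for j, name in enumerate(names):
        keep = [i for i in range(p) if i != j]
        deltas[name] = baseline - adjusted_r2(fit_r2(X[:, keep], y), n, p - 1)
    return deltas
```

A large positive delta means the fit degrades substantially without that feature; a delta near zero (or negative, once the adjustment kicks in) marks a redundant predictor.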
+
+The top three results for LaBSE and LASER are given in Table 3. For LaBSE, same polysynthesis status and combined sentences are tied as the features with the highest predictive contributions (average rank $= 2.4$ ), followed by combined in-family sentences. For LASER, combined in-family sentences tops the list (average rank $= 2.4$ ), followed by same polysynthesis status and same word order. The results of this experiment are similar, but not identical, to those of the previous experiment. They support the same basic conclusion: training data is important, but so are agreement in word order and agreement in morphological complexity, among other features. If training data were a sufficient predictor on its own, then removing the aforementioned features from the regression model would either increase the fit or leave it unchanged, which is clearly not the case.
+
+| Feature | LaBSE | LASER | Total |
+| --- | --- | --- | --- |
+| Comb. sentences | **4** | 2 | **6** |
+| Comb. in-family sentences | **4** | **4** | **8** |
+| Comb. in-subfamily sentences | 1 | 1 | 2 |
+| Same word order | 1 | **3** | 4 |
+| Same polysynthesis status | **4** | **4** | **8** |
+| Same family | 1 | 2 | 3 |
+| Same subfamily | 1 | 1 | 2 |
+| Token overlap | 2 | 2 | 4 |
+| Character overlap | 0 | 0 | 0 |
+| Geographic distance | 2 | 2 | 4 |
+| Syntactic distance | 0 | **3** | 3 |
+| Phonological distance | 0 | **3** | 3 |
+| Inventory distance | 0 | 0 | 0 |
+
+Table 2: The number of times each of the features appeared in the best-feature lists across the five alignment metrics. The top three results (including ties) in each group are in bold.
+
+| Model | Feature | Avg. rank |
+| --- | --- | --- |
+| LaBSE | 1. Same polysynthesis status | 2.4 |
+| LaBSE | 2. Combined sentences | 2.4 |
+| LaBSE | 3. Combined in-family sentences | 3.6 |
+| LASER | 1. Combined in-family sentences | 2.4 |
+| LASER | 2. Same polysynthesis status | 3.4 |
+| LASER | 3. Same word order | 3.8 |
+
+Table 3: Features with the top three average rankings in the single-step regression ablation experiment. Rankings are based on a feature's marginal predictive contribution relative to other features, and were averaged across all five alignment metrics.
+
+# 6.3 Controlling for Training Data
+
+While the previous experiments center around prediction of the dependent variables, we bolster our analysis with classical statistical methods that aim to explicitly control for covariates. $^{15}$ Since we're dealing with categorical features, we use ANCOVA (ANalysis of COVariance).
+
+ANCOVA We run ANCOVAs separately for LaBSE and LASER and for each of the five dependent variables. We examine the language-pair-specific features, and look at same word order and same polysynthesis status separately as our "between" variables, and combined sentences, combined in-family sentences, and combined in-subfamily sentences as our three covariates. Overall, same word order had a statistically significant $(p < 0.05)$ effect for 8/10 ANCOVAs, though effect sizes $(\eta_p^2)$ were generally small $^{16}$ (Cohen, 1988). Same polysynthesis status had a statistically significant effect for 10/10 ANCOVAs, with effect sizes being definitively small except for F1-score and ECOND-HM/average margin score $(\eta_p^2 \approx 0.1 - 0.16$ for LaBSE, $\eta_p^2 \approx 0.05$ for LASER). These results suggest that although same word order and same polysynthesis status are some of the more important features, the determinants of cross-linguality in shared embedding space are multifactorial and most features have a relatively small effect when considered individually.
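
The effect size itself can be recovered from nested regressions. The sketch below computes $\eta_p^2$ for a binary "between" factor while controlling for covariates, using the standard sums-of-squares definition; a real ANCOVA would come from a statistics package, so treat this as an illustration of the quantity only.

```python
import numpy as np

def partial_eta_squared(y, group, covariates) -> float:
    """eta_p^2 = SS_effect / (SS_effect + SS_error) for a binary 'between'
    factor, with covariates partialled out via nested least-squares fits."""
    y = np.asarray(y, dtype=float)

    def rss(A):
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        r = y - A @ coef
        return r @ r

    ones = np.ones(len(y))
    full = np.column_stack([ones, group, covariates])      # factor + covariates
    reduced = np.column_stack([ones, covariates])          # covariates only
    ss_error = rss(full)
    ss_effect = rss(reduced) - ss_error                    # gain from the factor
    return ss_effect / (ss_effect + ss_error)
```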
+
+# 7 Zero-shot Cases
+
+The linguistic diversity of the New Testament Bible corpus, combined with the imperfect overlap between the languages in the corpus and those on which LaBSE and LASER were trained, implies a large number of zero-shot cases for our analysis. We can break these cases into two sub-cases. First, there are languages in the Bible corpus without language-specific training data (35 languages for LaBSE, 45 languages for LASER) $^{17}$ . It follows that there are language pairs XX-YY for which no training data is present for either XX or YY (595 pairs for LaBSE, 990 pairs for LASER); we dub this the "double zero-shot" case.
+
+Simple Zero-shot Case For the simple zero-shot case, we use ANOVA (ANalysis Of VAriance) to investigate differences between group means within categorical variables. ANOVAs revealed large effects $(\eta_p^2 \approx 0.36)$ of basic word order on F1-score and ECOND-HM for LaBSE, with borderline p-values $(p \approx 0.07)$ , perhaps due to the small sample size (35 languages). The breakdown across word orders for zero-shot languages is given in Figure 2. A pairwise Tukey post-hoc test (Salkind, 2017) revealed a borderline-significant difference between SVO and VSO languages, surprisingly in favor of VSO. There were no statistically significant effects of polysynthesis for LaBSE or LASER. Interestingly, this suggests that agreement in morphological complexity may be important for cross-linguality, but morphological complexity itself is not an important factor. More work is needed to validate this conclusion.
+
+ANOVAs also showed large $(\eta_p^2 \approx 1)$ effect sizes for family and subfamily membership, though most results were not statistically significant (again, perhaps due to sample size). This suggests that phylogenetic membership still shapes cross-linguality even when training data is perfectly controlled for, which is an interesting finding.
+
+
+
+
+Figure 2: "Meta-average" performance of zero-shot languages with different word orders on F1-score and negative ECOND-HM (LaBSE).
+
+Double Zero-shot Case Interestingly, the language-pair-specific feature which stood out most in the double zero-shot case was inventory distance, an anomaly in our analyses. Inventory distance was correlated with $r \approx 0.2 - 0.4$ for 4/5 dependent variables for LaBSE and with $r \approx 0.13 - 0.14$ for 2/5 dependent variables for LASER.
+
+However, as inventory distance quantifies phonological distance between languages, it could be confounded with surface-level information. To test this hypothesis, we regress it with character-level overlap and token-level overlap separately. For LaBSE, effects of inventory distance remain significant $(p < 0.05)$ for all dependent variables when regressing with token-level overlap, and for 4/5 variables when regressing with character-level overlap. We wish to verify the importance of this feature in future studies.
+
+# 8 Case Study 1: Morphological Segmentation of Inuktitut
+
+Based on the above results, we conclude that whether a language has the same polysynthesis status as another language will affect their success on a cross-lingual task. However, our observations pertain to correlation, not causality. To test this observation further, we run an experiment in which we introduce a causal intervention. If indeed polysynthesis status matters, then we hypothesize that making a language "less polysynthetic" will improve alignment with a more analytic language like English.
+
+To test this hypothesis, we examine the effect of morphological segmentation of Inuktitut on the bitext retrieval task. Inuktitut is a polysynthetic, indigenous language and is completely zero-shot for both our models, in that not even in-family data is provided during pretraining. The intuition behind our experiment is that by morphologically segmenting a polysynthetic language, the "polysynthesis status" of the segmented Inuktitut is made closer to that of a more analytic language. If our previous findings are correct, we expect Inuktitut to align better with English post-segmentation.
+
+We use the first 10,000 sentences from the Nunavut Hansard Inuktitut-English parallel corpus (Joanis et al., 2020) as our bitext. For the Inuktitut half of the corpus, we use both the "raw" version and a version that has been pre-segmented with the Uqailaut morphological analyzer $^{18}$ .
+
+We then perform bitext retrieval as described in section 4.1 on both bitexts: English aligned with non-segmented Inuktitut and English aligned with segmented Inuktitut. Results in terms of F1-score are displayed in Figure 3. For LaBSE, we see a $+28.7 \ (\approx 5 \times)$ F1-score increase using segmented Inuktitut; for LASER, we see a $+0.04$ $(1.5 \times)$ increase. These empirical results support our earlier statistical findings on the feature same polysynthesis status.
+
+
+
+
+Figure 3: F1-scores on the bitext retrieval task for English-Inuktitut, using raw and morphologically segmented Inuktitut, for LaBSE (top) and LASER (bottom).
+
+# 9 Case Study 2: Word Order & Isomorphism
+
+To test the validity of our findings on the same word order feature, we examine whether embeddings in languages with similar word orders are more isomorphic to each other than those with substantially different word orders, sampling from a different corpus than the one we use in our main analysis. To this end, we select twelve zero-shot $^{19}$ languages from the Universal Declaration of Human Rights (UDHR) parallel corpus (Vatanen et al., 2010). Six of these are canonically verb-initial: K'iche', Mam, Chinanteco, Tzotzil, Mixteco, and Garifuna. The other six are subject-initial: Chickasaw, Quechua, Achuar-Shiwiar, Bambara, Dagaare, and Guarani. We hypothesize that similar-word-order language pairs will be more isomorphic, on average, than pairs of languages with disparate word orders.
+
+We compute SVG and ECOND-HM across all $\binom{12}{2} = 66$ language pairs for LaBSE and LASER separately and group the results based on whether the language pairs have similar word order or different word order. The averages of these groups are given in Table 4.
+
+| Word order | SVG (LaBSE) | SVG (LASER) | ECOND-HM (LaBSE) | ECOND-HM (LASER) |
+| --- | --- | --- | --- | --- |
+| Similar | 5.63 | 3.67 | 18.08 | 18.13 |
+| Different | 6.34 | 4.37 | 18.13 | 18.20 |
+
+Table 4: Average values of SVG and ECOND-HM across 66 double zero-shot language pairs in the UDHR subset with similar or different word orders (based on whether a language is verb-initial or subject-initial). Note that LaBSE and LASER results are not comparable in absolute terms.
+
+Similar-word-order pairs are more isomorphic than their different-word-order counterparts across all metrics and both models.
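
The grouping over the 66 pairs can be sketched as follows; the language codes and the metric here are toy stand-ins, whereas the real values are SVG/ECOND-HM computed over embedding subspaces.

```python
from itertools import combinations
from statistics import mean

def group_averages(langs, order, metric):
    """Average a pairwise metric over same- vs. different-word-order pairs."""
    same, diff = [], []
    for a, b in combinations(langs, 2):
        (same if order[a] == order[b] else diff).append(metric(a, b))
    return mean(same), mean(diff)

# Toy example with four languages: pretend the metric is lower (better)
# for same-word-order pairs.
order = {"quc": "V-initial", "mam": "V-initial", "cic": "S-initial", "grn": "S-initial"}
toy_metric = lambda a, b: 1.0 if order[a] == order[b] else 2.0
same_avg, diff_avg = group_averages(list(order), order, toy_metric)
```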
+
+# 10 Conclusions
+
+We find evidence that linguistic and quasi-linguistic factors continue to play a role in determining the cross-linguality of a model even after training data is accounted for, and validate our findings with two case studies on extremely low-resource languages. Our analysis points to, among other things, the importance of word order agreement (similarly to Pires et al. 2019) and morphology agreement on building aligned and isomorphic cross-lingual subspaces. We also rigorously demonstrate the importance of in-family training data in building massively multilingual models, and show moderate effects of other typological measures on cross-linguality. In the future, we are confident that these insights can be used to improve the cross-linguality of shared embedding spaces, particularly for low-resource languages.
+
+# 11 Acknowledgements
+
+We would like to thank Michael Saxon at the University of California, Santa Barbara for his suggestions regarding methodology; Ivan Vulić at the University of Cambridge for his advice on structuring our analysis; and Fangxiaoyu Feng at Google for providing training data statistics on LaBSE.
+
+This work is partly sponsored by the Office of the Director of National Intelligence/Intelligence Advanced Research Projects Activity (IARPA). The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
+
+# 12 Ethical Considerations
+
+When drawing inferences about multilingual language models, it is crucial to take into account languages that are low-resource, Indigenous, and endangered. Previous works have looked at the challenges facing these sorts of under-resourced and under-studied languages (e.g. Mager et al. 2018; Joshi et al. 2020) and proposed broad solutions and guidelines (e.g. Kann et al. 2019; Bender 2019).
+
+The Bible corpus (Christodouloupoulos and Steedman, 2014) that we use in our analysis includes 35 languages that are zero-shot for LaBSE and 45 that are zero-shot for LASER, all of which could be classified as low-resource or extremely low-resource. This means that, for our case studies, we can test our conclusions on extremely low-resource languages (including Indigenous languages) that are typically underrepresented in NLP.
+
+While the Bible corpus enables us to extend our work to low-resource languages, we also acknowledge that the corpus owes its existence largely to a settler colonial tradition, in which missionaries translated the Bible into Indigenous languages—often without crediting the Indigenous peoples who contributed their knowledge. We acknowledge these Indigenous peoples' contributions to this work.
+
+Studies such as Strubell et al. (2019) and Schwartz et al. (2019) have identified, analyzed, and proposed solutions for the energy consumption, cost, and environmental impact of NLP models, in particular the burdens associated with training and performing inference with large pretrained language models. Though we perform inference with two such models on a considerable amount of input, we note that these are one-time computations, made using a single NVIDIA V100 GPU, and that we plan to release our collected data publicly for reuse in future empirical analyses.
+
+# References
+
+Hervé Abdi. 2007. Part (semi-partial) and Partial Regression Coefficients. Encyclopedia of measurement and statistics, pages 736-740.
+Christopher H. Achen. 2005. Let's Put Garbage-Can Regressions and Garbage-Can Probits Where They Belong. Conflict Management and Peace Science, 22(4):327-339.
+Zeljko Agić and Ivan Vulić. 2019. JW300: A wide-coverage parallel corpus for low-resource languages.
+
+In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3204-3210, Florence, Italy. Association for Computational Linguistics.
+Mikel Artetxe and Holger Schwenk. 2019a. Margin-based parallel corpus mining with multilingual sentence embeddings. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3197-3203, Florence, Italy. Association for Computational Linguistics.
+Mikel Artetxe and Holger Schwenk. 2019b. Massively multilingual sentence embeddings for zero-shot cross-lingual transfer and beyond. Transactions of the Association for Computational Linguistics, 7:597-610.
+Emily Bender. 2019. The #BenderRule: On Naming the Languages We Study and Why It Matters. The Gradient.
+Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching Word Vectors with Subword Information. arXiv e-prints, page arXiv:1607.04606.
+Frédéric Chazal, David Cohen-Steiner, Leonidas J. Guibas, Facundo Mémoli, and Steve Y. Oudot. 2009. Gromov-Hausdorff Stable Signatures for Shapes using Persistence. Computer Graphics Forum, 28(5):1393-1403.
+Christos Christodouloupoulos and Mark Steedman. 2014. A Massively Parallel Corpus: the Bible in 100 Languages. Language Resources and Evaluation, 49(2):375-395.
+Jacob Cohen. 1988. Statistical Power Analysis for the Behavioral Sciences, 2 edition. L. Erlbaum Associates.
+Alexis Conneau, Shijie Wu, Haoran Li, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Emerging crosslingual structure in pretrained language models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6022-6034, Online. Association for Computational Linguistics.
+Ryan Cotterell, Christo Kirov, Mans Hulden, and Jason Eisner. 2019. On the complexity and typology of inflectional morphological systems. Transactions of the Association for Computational Linguistics, 7:327-342.
+Rotem Dror, Gili Baumer, Segev Shlomov, and Roi Reichart. 2018. The hitchhiker's guide to testing statistical significance in natural language processing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1383-1392, Melbourne, Australia. Association for Computational Linguistics.
+Matthew S. Dryer and Martin Haspelmath, editors. 2013. WALS Online. Max Planck Institute for Evolutionary Anthropology, Leipzig.
+
+Haim Dubossarsky, Ivan Vulić, Roi Reichart, and Anna Korhonen. 2020. The secret is in the spectra: Predicting cross-lingual task performance with spectral similarity measures. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2377–2390, Online. Association for Computational Linguistics.
+Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, and Armand Joulin. 2020. Beyond English-Centric Multilingual Machine Translation. arXiv e-prints, page arXiv:2010.11125.
+Fangxiaoyu Feng, Yinfei Yang, Daniel Cer, Naveen Arivazhagan, and Wei Wang. 2020. Language-agnostic BERT Sentence Embedding. arXiv e-prints, page arXiv:2007.01852.
+Sophie Groenwold, Samhita Honnavalli, Lily Ou, Aesha Parekh, Sharon Levy, Diba Mirza, and William Yang Wang. 2020. Evaluating Transformer-Based Multilingual Text Classification. arXiv e-prints, page arXiv:2004.13939.
+Harald Hammarström, Robert Forkel, Martin Haspelmath, and Sebastian Bank. 2020. Glottolog Database 4.3.
+Eric Joanis, Rebecca Knowles, Roland Kuhn, Samuel Larkin, Patrick Littell, Chi-kiu Lo, Darlene Stewart, and Jeffrey Micher. 2020. The Nunavut hansard Inuktitut-English parallel corpus 3.0 with preliminary machine translation results. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 2562-2572, Marseille, France. European Language Resources Association.
+Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2017a. Billion-scale Similarity Search with GPUs. arXiv preprint arXiv:1702.08734.
+Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Viégas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017b. Google's multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339-351.
+Alex Jones and Derry Wijaya. 2021. Majority Voting with Bidirectional Pre-translation For Bitext Retrieval. arXiv e-prints, page arXiv:2103.06369.
+Pratik Joshi, Sebastin Santy, Amar Budhiraja, Kalika Bali, and Monojit Choudhury. 2020. The state and fate of linguistic diversity and inclusion in the NLP world. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6282-6293, Online. Association for Computational Linguistics.
+
+Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2016. Bag of Tricks for Efficient Text Classification. arXiv e-prints, page arXiv:1607.01759.
+Katharina Kann, Kyunghyun Cho, and Samuel R. Bowman. 2019. Towards realistic practices in low-resource natural language processing: The development set. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3342-3349, Hong Kong, China. Association for Computational Linguistics.
+K Karthikeyan, Zihan Wang, Stephen Mayhew, and Dan Roth. 2020. Cross-lingual ability of multilingual bert: An empirical study. In International Conference on Learning Representations.
+Phillip Keung, Julian Salazar, Yichao Lu, and Noah A. Smith. 2021. Unsupervised Bitext Mining and Translation via Self-Trained Contextual Embeddings. Transactions of the Association for Computational Linguistics, 8:828-841.
+Yu-Hsiang Lin, Chian-Yu Chen, Jean Lee, Zirui Li, Yuyan Zhang, Mengzhou Xia, Shruti Rijhwani, Junxian He, Zhisong Zhang, Xuezhe Ma, Antonios Anastasopoulos, Patrick Littell, and Graham Neubig. 2019. Choosing transfer languages for cross-lingual learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3125-3135, Florence, Italy. Association for Computational Linguistics.
+Patrick Littell, Chi-kiu Lo, Samuel Larkin, and Darlene Stewart. 2019. Multi-source transformer for Kazakh-Russian-English neural machine translation. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 267–274, Florence, Italy. Association for Computational Linguistics.
+Patrick Littell, David R. Mortensen, Ke Lin, Katherine Kairis, Carlisle Turner, and Lori Levin. 2017. URIEL and lang2vec: Representing languages as typological, geographical, and phylogenetic vectors. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 8-14, Valencia, Spain. Association for Computational Linguistics.
+Manuel Mager, Ximena Gutierrez-Vasques, Gerardo Sierra, and Ivan Meza-Ruiz. 2018. Challenges of language technologies for the indigenous languages of the Americas. In Proceedings of the 27th International Conference on Computational Linguistics, pages 55-69, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
+Chaitanya Malaviya, Graham Neubig, and Patrick Littell. 2017. Learning language representations for typology prediction. In Proceedings of the 2017
+
+Conference on Empirical Methods in Natural Language Processing, pages 2529-2535, Copenhagen, Denmark. Association for Computational Linguistics.
+Arya D. McCarthy, Rachel Wicks, Dylan Lewis, Aaron Mueller, Winston Wu, Oliver Adams, Garrett Nicolai, Matt Post, and David Yarowsky. 2020. The Johns Hopkins University Bible corpus: 1600+ tongues for typological exploration. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 2884-2892, Marseille, France. European Language Resources Association.
+Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient Estimation of Word Representations in Vector Space. arXiv e-prints, page arXiv:1301.3781.
+Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual BERT? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4996-5001, Florence, Italy. Association for Computational Linguistics.
+Nils Reimers and Iryna Gurevych. 2020. Making monolingual sentence embeddings multilingual using knowledge distillation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4512-4525, Online. Association for Computational Linguistics.
+Anna Rogers, Olga Kovaleva, and Anna Rumshisky. 2020. A primer in BERTology: What we know about how BERT works. Transactions of the Association for Computational Linguistics, 8:842-866.
+Neil J. Salkind. 2017. Post Hoc Tests: Tukey Honestly Significant Difference Test. The SAGE Encyclopedia of Communication Research Methods.
+Roy Schwartz, Jesse Dodge, Noah A. Smith, and Oren Etzioni. 2019. Green AI. arXiv e-prints, page arXiv:1907.10597.
+Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong, and Francisco Guzmán. 2021. WikiMatrix: Mining 135M parallel sentences in 1620 language pairs from Wikipedia. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1351-1361, Online. Association for Computational Linguistics.
+Holger Schwenk, Guillaume Wenzek, Sergey Edunov, Edouard Grave, and Armand Joulin. 2019. CCMatrix: Mining Billions of High-Quality Parallel Sentences on the Web. arXiv e-prints, page arXiv:1911.04944.
+Emma Strubell, Ananya Ganesh, and Andrew McCallum. 2019. Energy and policy considerations for deep learning in NLP. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3645-3650, Florence, Italy. Association for Computational Linguistics.
+
+Jörg Tiedemann. 2020. The tatoeba translation challenge - realistic data sets for low resource and multilingual MT. In Proceedings of the Fifth Conference on Machine Translation, pages 1174-1182, Online. Association for Computational Linguistics.
+Chau Tran, Yuqing Tang, Xian Li, and Jiatao Gu. 2020. Cross-lingual Retrieval for Iterative Self-supervised Training. arXiv e-prints, page arXiv:2006.09526.
+Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing Data using t-SNE. Journal of Machine Learning Research, 9(86):2579-2605.
+Tommi Vatanen, Jaakko J. Väyrynen, and Sami Virpioja. 2010. Language identification of short text segments with n-gram models. In Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10), Valletta, Malta. European Language Resources Association (ELRA).
+Ivan Vulić, Sebastian Ruder, and Anders Søgaard. 2020. Are all good word vector spaces isomorphic? In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3178-3192, Online. Association for Computational Linguistics.
+Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
+Shijie Wu and Mark Dredze. 2019. Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 833-844, Hong Kong, China. Association for Computational Linguistics.
+Shijie Wu and Mark Dredze. 2020. Are all languages created equal in multilingual BERT? In Proceedings of the 5th Workshop on Representation Learning for NLP, pages 120-130, Online. Association for Computational Linguistics.
+Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes,
+
+and Jeffrey Dean. 2016. Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation. CoRR, abs/1609.08144.
+
+Mengzhou Xia, Guoqing Zheng, Subhabrata Mukherjee, Milad Shokouhi, Graham Neubig, and Ahmed Hassan Awadallah. 2021. MetaXL: Meta Representation Transformation for Low-resource Cross-lingual Learning. arXiv e-prints, page arXiv:2104.07908.
+
+Mozhi Zhang, Keyulu Xu, Ken-ichi Kawarabayashi, Stefanie Jegelka, and Jordan Boyd-Graber. 2019. Are girls neko or shojo? cross-lingual alignment of non-isomorphic embeddings with iterative normalization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3180-3189, Florence, Italy. Association for Computational Linguistics.
+
+# A Appendix
+
+# A.1 Issues With Using The Bible as a Corpus
+
+We note several issues with using the Bible for cross-lingual analyses, but defend our decision to use it over other available corpora. The primary concern is the language of the Bible and its translations itself: much of it is archaic and would sound unnatural to modern speakers, and certain translations may suffer from sub-optimal (possibly non-native) translation quality. Furthermore, the relative performance of LaBSE and LASER on these texts was somewhat unrepresentative: LaBSE vastly outperformed LASER, even though the two are closer in performance on more modern, idiomatic texts (e.g. the Tatoeba dataset$^{20}$ from Artetxe and Schwenk (2019b)).
+
+However, the Bible corpus from Christodouloupoulos and Steedman (2014) lends itself to our analysis in the following ways:
+
+- Reliable sentence-level (technically verse-level) alignments
+- Clean, easy-to-parse text
+- Large-scale multilinguality and linguistic diversity
+
+We also consider using JW300 (Agić and Vulić, 2019), the Tatoeba Challenge test data$^{21}$ (Tiedemann, 2020), and the Johns Hopkins University Bible corpus (McCarthy et al., 2020). However:
+
+- JW300 is difficult to download in its entirety and sentence-align into a superparallel corpus in practice, and its alignments may not be as clean as in the Bible corpus
+
+- The Tatoeba Challenge bitexts are not multi-parallel, and so cannot be used for our main analysis
+- The Johns Hopkins Bible corpus, while impressive in size with $1600+$ languages, is overkill for the intended scale of our analysis (and, in practice, the quality of a corpus of this size is difficult to ascertain)
+
+For these reasons, we viewed using the corpus from Christodouloupoulos and Steedman (2014) as a "necessary evil" of sorts to achieve the scale of analysis we were hoping for.
+
+# A.2 Choice of Embedding Models
+
+We opt to use LaBSE (Feng et al., 2020) and LASER (Artetxe and Schwenk, 2019b) as our embedding models primarily because they are state-of-the-art sentence encoders that perform well on the bitext mining task (Reimers and Gurevych, 2020). Using two models with different underlying architectures (Transformer for LaBSE vs BiLSTM for LASER) makes our analysis more robust and generalizable, because any trend observed w.r.t. both models cannot be due to a peculiarity of one model or the other (e.g. training data domain, neural architecture, tokenization technique, etc.).
+
+However, while both models have generally high performance on this task, LaBSE is, on average, superior to LASER (see Reimers and Gurevych (2020), but also our full results$^{22}$ from this paper). On the lowest-resource languages and language pairs, we see an induced floor effect for LASER, where the variance among data points is low and statistical effects are hard to detect. For the same reason, we do not include results from mean-pooled subword embeddings (such as mBERT or XLM-RoBERTa), due to their relatively weak performance on the bitext mining task (Reimers and Gurevych, 2020).
+
+Floor effects do not pose nearly as much of a problem for LaBSE. Thus, by including LaBSE as one of our models, we are able to detect fine-grained differences among low-resource languages and language pairs that we might miss with LASER. For higher-resource cases, our conclusions are made all the more robust for having inferences from two high-performing models.
+
+# A.3 Principal Component Analysis
+
+We also perform principal component analysis (PCA) to determine how many independent components exist in our feature space, and how the loadings of those components break down.
+
+# A.3.1 Principal Component Regression
+
+We run principal component regression (PCR) to determine the optimal number of components in our feature space for predicting the dependent variables. To this end, we first perform PCA on the full set of 13 features (separately for LaBSE and LASER, as the training features are different for each). We then perform PCR (with linear regression) using the first 1 to 13 components in separate runs, with each of the dependent variables being modeled separately as the regressand. As we did before, we measure regression fit using adjusted $r^2$ and average the results from ten-fold cross validation on each run.
+
+We find that for LaBSE, the optimal number of components for predicting the dependent variables averaged 7.2, or roughly half the size of the feature space. For LASER, the average number of optimal components was 6.0.
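The PCR procedure described above can be sketched with scikit-learn. This is a minimal illustration on synthetic stand-in data, not the paper's actual 13 language-pair features; all names and the data-generating process are assumptions for the example:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

def adjusted_r2(r2, n, p):
    # Adjusted r^2 penalizes the fit by the number of predictors p.
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

def pcr_cv_scores(X, y, max_k, n_splits=10, seed=0):
    """Mean adjusted r^2 over k-fold CV for PCR with 1..max_k components."""
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    scores = []
    for k in range(1, max_k + 1):
        fold = []
        for tr, te in kf.split(X):
            pca = PCA(n_components=k).fit(X[tr])
            reg = LinearRegression().fit(pca.transform(X[tr]), y[tr])
            r2 = reg.score(pca.transform(X[te]), y[te])
            fold.append(adjusted_r2(r2, len(te), k))
        scores.append(float(np.mean(fold)))
    return scores

# Synthetic stand-in for the 13 features and one dependent variable.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 13))
y = X @ rng.normal(size=13) * 0.5 + rng.normal(scale=0.1, size=400)

scores = pcr_cv_scores(X, y, max_k=13)
best_k = int(np.argmax(scores)) + 1  # optimal number of components
```

The optimal component count is then read off as the number of components maximizing the cross-validated adjusted $r^2$, separately for each dependent variable.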
+
+# A.3.2 Component Loadings
+
+We also look at how the loadings of the principal components for LaBSE and LASER features break down; the results for the first five components are given in Table 5. For both LaBSE and LASER, the first three components map almost entirely onto training features, while later components are a mixture of the remaining features. However, same word order and same polysynthesis status are next after training-related features in terms of weight: they are the top two features in components 4 and 5 for both systems.
+
+# A.4 Semi-partial Correlations for Typological Distance
+
+For the typological distance features, we use the semi-partial correlation (Abdi, 2007)
+
+$$
+r_{1(2.3)} = \frac{r_{12} - r_{13} r_{23}}{\sqrt{1 - r_{23}^{2}}}
+$$
+
+where $r_{1(2.3)}$ is the correlation between $f_{1}$ and $f_{2}$ such that $f_{3}$ is held constant for $f_{2}$ (in our case, training data features are held constant for the dependent variables). This tells us how the typological distance features correlate with the dependent variables when training data features are modeled as covariates. We compute semi-partial correlations between each typological distance measure and each dependent variable for LaBSE and LASER separately.
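The semi-partial correlation formula can be computed directly. A minimal NumPy sketch follows; the stand-in variables are synthetic, not the paper's features:

```python
import numpy as np

def semipartial_corr(x1, x2, x3):
    """r_{1(2.3)}: correlation of x1 with x2 after partialling x3 out of x2 only."""
    r12 = np.corrcoef(x1, x2)[0, 1]
    r13 = np.corrcoef(x1, x3)[0, 1]
    r23 = np.corrcoef(x2, x3)[0, 1]
    return (r12 - r13 * r23) / np.sqrt(1 - r23 ** 2)

# Toy check: x3 drives part of x2; the semi-partial removes that shared part.
rng = np.random.default_rng(0)
x3 = rng.normal(size=500)               # stand-in: training data feature
x1 = rng.normal(size=500)               # stand-in: dependent variable
x2 = 0.5 * x3 + rng.normal(size=500)    # stand-in: typological distance
r_sp = semipartial_corr(x1, x2, x3)
```

Equivalently, $r_{1(2.3)}$ is the plain correlation between $x_1$ and the residual of $x_2$ after regressing it on $x_3$.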
+
+The typological distance features had noteworthy $(r > 0.1)$ correlations for anywhere from 0/10 (phonological distance) to 5/10 (geographic distance) analyses. However, the $r$ values generally fell into the range $0.1 < |r| < 0.25$ . We conclude that lang2vec distances correlate with cross-linguality weakly but non-negligibly when training data is held constant, somewhat contrary to the stronger relationships observed in Dubossarsky et al. (2020) with monolingual embedding spaces.
+
+# A.5 Visualization from Case Study 2
+
+We visualize approximate isomorphism between select similar-word-order language pairs from section 9 with t-SNE (van der Maaten and Hinton, 2008), with default settings in scikit-learn. Results are displayed in Figure 4.
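The visualization step itself is short. A sketch with random stand-in vectors in place of the LaBSE/LASER sentence embeddings (shapes and names are illustrative):

```python
import numpy as np
from sklearn.manifold import TSNE

# Stand-in for sentence embeddings of one document in two languages;
# real inputs would be LaBSE or LASER vectors.
rng = np.random.default_rng(0)
emb = rng.normal(size=(150, 64)).astype(np.float32)

# scikit-learn defaults, as in the paper; keep the first two t-SNE dimensions.
coords = TSNE(n_components=2, random_state=0).fit_transform(emb)
```

Plotting `coords` for two languages on the same axes then makes approximate isomorphism of the subspaces visible, as in Figure 4.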
+
+# A.6 ECOND-HM Computation
+
+The condition number of a matrix $\mathcal{X}$ with $n$ singular values $\sigma_{1},\sigma_{2},\ldots ,\sigma_{n}$ , sorted in descending order, is defined as:
+
+$$
+\kappa(\mathcal{X}) = \frac{\sigma_{1}}{\sigma_{n}}
+$$
+
+Furthermore, the effective rank of $\mathcal{X}$ is defined as:
+
+$$
+\mathrm{rank}^{*}(\mathcal{X}) = \left\lfloor e^{H(\Sigma)} \right\rfloor
+$$
+
+where $\lfloor \cdot \rfloor$ is the floor function and $H(\Sigma)$ is the entropy of the normalized singular value distribution of $\mathcal{X}$ , namely $H(\Sigma) = -\sum_{i=1}^{n} \bar{\sigma}_i \log \bar{\sigma}_i$ , where $\bar{\sigma}_i = \frac{\sigma_i}{\sum_{j=1}^{n} \sigma_j}$ . Putting the two together, we define the effective condition number of $\mathcal{X}$ as:
+
+$$
+\kappa_{\mathrm{eff}}(\mathcal{X}) = \frac{\sigma_{1}}{\sigma_{\mathrm{rank}^{*}(\mathcal{X})}}
+$$
+
+Finally, we define the effective condition number harmonic mean (Dubossarsky et al., 2020) as:
+
+$$
+\operatorname{ECOND\_HM}(\mathcal{X}, \mathcal{Y}) = \frac{2 \cdot \kappa_{\mathrm{eff}}(\mathcal{X}) \cdot \kappa_{\mathrm{eff}}(\mathcal{Y})}{\kappa_{\mathrm{eff}}(\mathcal{X}) + \kappa_{\mathrm{eff}}(\mathcal{Y})}
+$$
+
+Using the effective rank instead of the standard rank to determine the (effective) condition number is a heuristic method motivated by finding the least singular value that characterizes $\mathcal{X}$ in a significant way, as informed by the entropy associated with the singular value distribution of $\mathcal{X}$ .
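A sketch of the computation under the definitions above (assumes strictly positive singular values; `X` and `Y` are arbitrary embedding matrices):

```python
import numpy as np

def effective_cond(X):
    """Effective condition number: sigma_1 / sigma_{rank*}, where rank* is the
    effective rank floor(exp(H)) and H is the entropy of the normalized
    singular value distribution. Assumes all singular values are positive."""
    s = np.linalg.svd(X, compute_uv=False)   # singular values, descending
    p = s / s.sum()
    H = -np.sum(p * np.log(p))               # entropy of normalized spectrum
    eff_rank = int(np.floor(np.exp(H)))
    return s[0] / s[eff_rank - 1]

def econd_hm(X, Y):
    """Harmonic mean of the two effective condition numbers (ECOND-HM)."""
    kx, ky = effective_cond(X), effective_cond(Y)
    return 2 * kx * ky / (kx + ky)
```

For a well-conditioned matrix (e.g. the identity) the effective condition number is 1; the harmonic mean keeps ECOND-HM between the two per-matrix values.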
+
+Figure 4: The first two t-SNE dimensions of sentence embeddings in the Universal Declaration of Human Rights, in four zero-shot languages (Chickasaw, Quechua, K'iche, and Mam). Languages with similar word order have been plotted together to demonstrate isomorphism of the resulting vector subspaces (LaBSE plots are top, LASER plots are bottom).
+
+| Feature | PC1 | PC2 | PC3 | PC4 | PC5 |
+| --- | --- | --- | --- | --- | --- |
+| Combined sentences | 2.50e-2, 5.28e-2 | -2.30e-1, 2.53e-1 | 9.73e-1, 9.66e-1 | -8.59e-11, -7.83e-10 | 1.12e-11, -5.93e-10 |
+| Combined in-family sentences | 9.78e-1, 9.60e-1 | 2.07e-1, -2.81e-1 | 2.38e-2, 2.13e-2 | -2.11e-11, -8.33e-10 | 2.83e-12, -5.56e-12 |
+| Combined in-subfamily sentences | 2.07e-1, 2.77e-1 | -9.51e-1, 9.26e-1 | -2.30e-1, -2.58e-1 | -5.92e-12, 4.33e-10 | 1.15e-11, 4.40e-10 |
+| Same word order | 1.10e-11, 3.42e-10 | -5.38e-12, -4.97e-10 | 7.59e-11, 1.15e-9 | 7.60e-1, 7.13e-1 | 6.17e-1, 6.73e-1 |
+| Same polysynthesis status | 1.57e-11, 4.30e-10 | -2.21e-11, -8.02e-11 | 6.69e-11, 1.11e-10 | 6.02e-1, 6.54e-1 | -7.81e-1, -7.33e-1 |
+| Same family | 2.44e-11, 6.97e-10 | -1.19e-12, -1.28e-10 | -2.56e-11, -4.76e-11 | 1.67e-1, 1.84e-1 | -6.07e-4, -1.11e-2 |
+| Same subfamily | 3.64e-12, 1.10e-10 | -1.48e-11, 1.19e-10 | -1.81e-11, -7.06e-11 | 7.38e-2, 7.71e-2 | 1.45e-2, 1.65e-2 |
+| Token overlap | 6.68e-12, 2.07e-10 | -1.42e-11, 2.67e-11 | -1.20e-11, 6.42e-12 | 6.23e-2, 6.48e-2 | -1.87e-2, -1.77e-2 |
+| Character overlap | 2.53e-12, 1.72e-10 | -2.34e-11, 3.68e-10 | -1.02e-10, -8.57e-11 | 4.93e-2, 2.39e-2 | 2.71e-2, 3.63e-2 |
+| Geographic distance | -4.28e-12, -1.29e-10 | 6.74e-13, 4.58e-11 | -7.91e-13, -5.54e-11 | -6.41e-2, -6.77e-2 | 6.45e-2, 6.20e-2 |
+| Syntactic distance | -7.48e-12, -2.36e-10 | 6.14e-12, 1.02e-10 | 1.53e-11, -2.23e-11 | -9.85e-2, -9.33e-2 | -6.25e-2, -6.48e-2 |
+| Phonological distance | -5.80e-12, -1.70e-10 | -3.28e-12, 1.49e-10 | 2.87e-11, -9.04e-12 | -5.81e-2, -5.71e-2 | 2.20e-2, 2.19e-2 |
+| Inventory distance | -8.39e-13, -2.12e-11 | -3.53e-12, 1.05e-10 | 1.64e-11, -3.90e-11 | -4.75e-2, -4.51e-2 | 1.49e-2, 1.23e-2 |
+
+Table 5: Loadings from the first five principal components for the language-pair-related features; each cell lists the LaBSE loading followed by the LASER loading. The top three loadings by magnitude in each component are colored red for LaBSE and green for LASER. Note that although LaBSE and LASER are trained using different neural architectures, the most significant features in each of the first five components are almost identical.
\ No newline at end of file
diff --git a/amassivelymultilingualanalysisofcrosslingualityinsharedembeddingspace/images.zip b/amassivelymultilingualanalysisofcrosslingualityinsharedembeddingspace/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..ade12c4ac6b52f3d0c4220d45b807b1beecb0bf0
--- /dev/null
+++ b/amassivelymultilingualanalysisofcrosslingualityinsharedembeddingspace/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2c3ce5eee3515f989c2d6a3af8af9d851449219a4d7682deb22f611a1fa2c11c
+size 463745
diff --git a/amassivelymultilingualanalysisofcrosslingualityinsharedembeddingspace/layout.json b/amassivelymultilingualanalysisofcrosslingualityinsharedembeddingspace/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..d2a879791f094bf44bcc8a8a71e77481d41f97f8
--- /dev/null
+++ b/amassivelymultilingualanalysisofcrosslingualityinsharedembeddingspace/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:06e713054e01f44ec8ca5fbeb96d0324d6db51152f534e7343b908791dee1bdf
+size 524060
diff --git a/analyzingthesurprisingvariabilityinwordembeddingstabilityacrosslanguages/6e0f0023-20b1-4210-a0e3-f63c11062ce2_content_list.json b/analyzingthesurprisingvariabilityinwordembeddingstabilityacrosslanguages/6e0f0023-20b1-4210-a0e3-f63c11062ce2_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..3bcaf10139f183f42509500b9346eede74128b65
--- /dev/null
+++ b/analyzingthesurprisingvariabilityinwordembeddingstabilityacrosslanguages/6e0f0023-20b1-4210-a0e3-f63c11062ce2_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b12ec9631f8f85e5582b8eb5853ede74571c214574197f243d5b72ad591094db
+size 78856
diff --git a/analyzingthesurprisingvariabilityinwordembeddingstabilityacrosslanguages/6e0f0023-20b1-4210-a0e3-f63c11062ce2_model.json b/analyzingthesurprisingvariabilityinwordembeddingstabilityacrosslanguages/6e0f0023-20b1-4210-a0e3-f63c11062ce2_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..4e0470cdad35715085d1d811c1ded1215fffd60e
--- /dev/null
+++ b/analyzingthesurprisingvariabilityinwordembeddingstabilityacrosslanguages/6e0f0023-20b1-4210-a0e3-f63c11062ce2_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d0378ec4b7068082cdd28483f02cfd688c71148432e494037dafd7fd685d811b
+size 95173
diff --git a/analyzingthesurprisingvariabilityinwordembeddingstabilityacrosslanguages/6e0f0023-20b1-4210-a0e3-f63c11062ce2_origin.pdf b/analyzingthesurprisingvariabilityinwordembeddingstabilityacrosslanguages/6e0f0023-20b1-4210-a0e3-f63c11062ce2_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..a3bbbe6f6a73c90577c0c49c6ada3876f8346aad
--- /dev/null
+++ b/analyzingthesurprisingvariabilityinwordembeddingstabilityacrosslanguages/6e0f0023-20b1-4210-a0e3-f63c11062ce2_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:377aa542c462a9001234e7e75c68c9433d13c61b7b5af92fb0dd36770ed902bc
+size 459393
diff --git a/analyzingthesurprisingvariabilityinwordembeddingstabilityacrosslanguages/full.md b/analyzingthesurprisingvariabilityinwordembeddingstabilityacrosslanguages/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..3e45c5c4cb1c28f2b3d35dcee288627df71ba147
--- /dev/null
+++ b/analyzingthesurprisingvariabilityinwordembeddingstabilityacrosslanguages/full.md
@@ -0,0 +1,320 @@
+# Analyzing the Surprising Variability in Word Embedding Stability Across Languages
+
+Laura Burdick, Jonathan K. Kummerfeld and Rada Mihalcea
+
+Computer Science & Engineering
+
+University of Michigan, Ann Arbor
+
+{lburdick, jkummerf, mihalcea}@umich.edu
+
+# Abstract
+
+Word embeddings are powerful representations that form the foundation of many natural language processing architectures, both in English and in other languages. To gain further insight into word embeddings, we explore their stability (e.g., overlap between the nearest neighbors of a word in different embedding spaces) in diverse languages. We discuss linguistic properties that are related to stability, drawing out insights about correlations with affixing, language gender systems, and other features. This has implications for embedding use, particularly in research that uses them to study language trends.
+
+# 1 Introduction
+
+Word embeddings have become an established part of natural language processing (NLP) (Collobert et al., 2011; Wang et al., 2020a). Stability, defined as the overlap between the nearest neighbors of a word in different embedding spaces, was introduced to measure variations in local embedding neighborhoods across changes in data, algorithms, and word properties (Antoniak and Mimno, 2018; Wendlandt et al., 2018). These studies found that many common English embedding spaces are surprisingly unstable, which has implications for work that uses embeddings as features in downstream tasks, and work that uses embeddings to study specific properties of language.
+
+However, research to date on word embedding stability has been exclusively done on English and so is not representative of all languages. In this work, we explore the stability of word embeddings in a wide range of languages. Better understanding the differences caused by diverse languages will provide a foundation for building embeddings and NLP tools in all languages.
+
+In English and other very high resource languages, it has become common practice to use contextualized word embeddings, such as BERT (Devlin et al., 2019) and XLNet (Yang et al., 2019). These algorithms require huge amounts of computational resources and data. For example, it takes 2.5 days to train XLNet with 512 TPU v3 chips. In addition to requiring heavy computational resources, most contextualized embedding algorithms need large amounts of data. BERT uses 3.3 billion words of training data. In contrast to these large corpora, many datasets from low-resource languages are fairly small (Maxwell and Hughes, 2006). To support scenarios where using huge amounts of data and computational resources is not feasible, it is important to continue developing our understanding of context-independent word embeddings, such as word2vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014). These algorithms continue to be used in a wide variety of situations, including the computational humanities (Abdulrahim, 2019; Hellrich et al., 2019) and languages where only small corpora are available (Joshi et al., 2019).
+
+In this work, we consider how stability varies for different languages, and how linguistic properties are related to stability—a previously understudied relationship. Using regression modeling, we capture relationships between linguistic properties and average stability of a language, and we draw out insights about how linguistic features relate to stability. For instance, we find that embeddings in languages with more affixing tend to be less stable. Our findings provide crucial context for research that uses word embeddings to study language properties and trends (e.g., Heyman and Heyman, 2019; Abdulrahim, 2019), which often rely on raw embeddings created by GloVe or word2vec. If these embeddings are unstable, then research using them needs to take this into account in terms of methodologies and error analysis.
+
+# 2 Related Work
+
+Word embeddings are low-dimensional vectors used to represent words, normally in downstream tasks, such as word sense disambiguation (Scarlini et al., 2020) and text summarization (Moradi et al., 2020). They have been shown to capture both syntactic and semantic properties of words, making them useful in a wide range of NLP tasks (Wang et al., 2020b). In this work, we explore word embeddings that generate one embedding per word, regardless of the word's context. We consider two widely used algorithms: word2vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014).
+
+Our work analyzes embeddings in multiple languages, which is important because embeddings are commonly used across many languages. In particular, there has been interest in embeddings for low-resource languages (Chimalamarri et al., 2020; Stringham and Izbicki, 2020).
+
+In this work, we use stability to measure the quality of word embeddings. Similar to the work we present here on stability, other research looks at how nearest neighbors vary as properties of the embedding spaces change. Pierrejean and Tanguy (2018) found that the lowest frequency and the highest frequency words have the highest variation among nearest neighbors. Additional research has explored how semantic and syntactic properties of words change with different embedding algorithm and parameter choices (Artetxe et al., 2018; Yaghoobzadeh and Schütze, 2016). Unlike our work, previous studies only considered English.
+
+Finally, while our work is not a form of embedding evaluation, it is related to the topic (Chiu et al., 2016; Rogers et al., 2018; Qiu et al., 2018). There has been extensive work on evaluating word embeddings, seen in the recent RepEval workshops (Rogers et al., 2019), and going back to work comparing them with counting based methods (Baroni et al., 2014). Our findings indicate that work on embedding evaluation should take into consideration stability, using multiple training runs to confirm results. Similarly, stability should be considered when studying the impact of embeddings on downstream tasks. Leszczynski et al. (2020) specifically looked at the downstream instability of word embeddings, and found that there is a stability-memory tradeoff, and higher stability can be achieved by increasing the embedding dimension.
+
+# 3 Data
+
+In order to explore the stability of word embeddings in different languages, we work with two datasets, Wikipedia and the Bible. While Wikipedia has more data, the Bible covers more languages. Wikipedia is a comparable corpus, whereas the Bible is a parallel corpus.
+
+Wikipedia Corpus. We use pre-processed Wikipedia dumps in 40 languages taken from Al-Rfou' et al. (2013). The size of these Wikipedia corpora varies from 329,136 sentences (Tagalog) to 75,241,648 sentences (English), with an average of 9,292,394 sentences. For all of our experiments, we downsample each corpus to work with comparably sized data (details in Section 4.2).
+
+Bible Corpus. We consider 97 languages from the pre-processed Bible corpus (McCarthy et al., 2020):$^{3}$ all languages for which at least $75\%$ of the Bible ($\geq 23,326$ verses) is present.$^{4}$ This excludes many languages for which there is only a partial Bible, e.g., just the New Testament, which would be insufficient for training word vectors. We consider two sets of languages with the Bible corpus: languages that overlap with the set of Wikipedia languages (26 languages), and all languages in the Bible corpus (97 languages).
+
+WALS. To gain linguistic properties of these languages, we use the World Atlas of Language Structures (WALS), a database of phonological, lexical, and grammatical properties for over 2,000 languages (Dryer and Haspelmath, 2013). This expert-curated resource contains 192 language features. For example, WALS records subject, object, and verb word order for various languages.
+
+# 4 Calculating Stability in Many Languages
+
+The first part of our work is a comparison of stability across languages. Before presenting our measurements, we define stability and analyze some important methodological decisions.
+
+Model 1: indie, punk, progressive, pop, roll, band, blues, brass, class, alternative
+
+Model 2: punk, indie, alternative, progressive, band, sedimentary, bands, psychedelic, climbing, pop
+
+Model 3: punk, pop, indie, alternative, band, roll, progressive, folk, climbing, metal
+
+Table 1: Ten nearest neighbors for the word rock in three GloVe models trained on different subsets of Large English Wikipedia. Words in all lists are in bold; words in only two lists are italicized. Models 1 and 2 have 6 words (60%) in common, models 1 and 3 have 7, and models 2 and 3 have 7. Therefore, this word has a stability of $66.7\%$ , the average word overlap between the three models.
+
+# 4.1 Defining Stability
+
+Stability is defined as the percent overlap between nearest neighbors in an embedding space. To calculate stability, given a word $W$ and two embedding spaces $A$ and $B$ , take the ten nearest neighbors (measured using cosine similarity) of $W$ in both $A$ and $B$ . The stability of $W$ is the percent overlap between these two lists of nearest neighbors. $100\%$ stability indicates perfect agreement between the two embedding spaces, while $0\%$ stability indicates complete disagreement. Table 1 shows a simple example. This definition of stability can be generalized to more than two embedding spaces by considering the average overlap between pairs of embedding spaces. Let $X$ and $Y$ be two sets of embedding spaces. Then, for every pair of embedding spaces $(x,y)$ , where $x \in X$ and $y \in Y$ , take the ten nearest neighbors of $W$ in both $x$ and $y$ and calculate percent overlap. Let the stability be the average percent overlap over every pair of embedding spaces $(x,y)$ .
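This stability computation can be sketched directly (brute-force cosine similarity over a toy vocabulary; the paper's experiments operate on full embedding spaces, and all names here are illustrative):

```python
import numpy as np

def ten_nn(word, vocab, vectors):
    """Set of the ten nearest neighbors of `word` by cosine similarity."""
    v = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    i = vocab.index(word)
    sims = v @ v[i]
    sims[i] = -np.inf                         # exclude the word itself
    return {vocab[j] for j in np.argsort(-sims)[:10]}

def stability(word, vocab, spaces_a, spaces_b):
    """Average 10-NN overlap of `word` over all pairs (x, y), x in A, y in B."""
    overlaps = [len(ten_nn(word, vocab, x) & ten_nn(word, vocab, y)) / 10.0
                for x in spaces_a for y in spaces_b]
    return float(np.mean(overlaps))
```

With two identical embedding spaces this returns 1.0 (perfect agreement); with unrelated spaces it approaches the chance overlap of two random 10-element lists.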
+
+Previous work has explored stability for English word embeddings. For instance, it was found that the presence of certain documents in the training corpus affects stability (Antoniak and Mimno, 2018), and that training and evaluating embeddings on separate domains is less stable than training and evaluating on the same domain (Wendlandt et al., 2018). In this work, we expand this analysis to a more diverse set of languages.
+
+# 4.1.1 The Effect of Downsampling on Stability
+
+Stability measures how changes to the input data or training algorithm affect the resulting embeddings. Sometimes we make changes with the goal of shifting the embeddings, such as increasing the context window size to try to get embeddings that capture semantics more than syntax. In other cases, we would hope a change would not substantially change embeddings, such as changing the random seed for the algorithm. For our experiments, we consider a previously unstudied source of instability: different data samples from the same distribution. This is a case where we hope embeddings remain stable, given a sufficiently large sample.
+
+We generate data samples by downsampling a corpus to create multiple smaller corpora; we then measure stability across these downsamples. The choice of sampling with or without replacement, and the size of the sample are subtle methodological choices. In this section, we consider whether stability across downsamples produces consistent results that we can compare across languages.
+
+First, we consider downsampling with replacement, shown in Figure 1a. We use data drawn from an English Wikipedia corpus of 5,269,686 sentences (denoted "Large English Wikipedia").$^{7}$ We randomly sample five sets of 500,000 sentences multiple times, controlling the amount of overlap between downsamples (from $10\%$ to $60\%$ shared across all five samples). For a specific overlap amount $X\%$, $X\%$ of the 500,000 sentences is randomly sampled once and included in all five downsamples. The remaining $(100 - X)\%$ of sentences are randomly sampled separately for each downsample.
+
+Stability is calculated using GloVe embeddings and the words that occur in every downsample for every overlap percentage. In Figure 1a, we group stability into buckets of size $5\%$ (i.e., $0 - 5\%$, $5 - 10\%$, etc.). This allows us to see patterns in stability that are not visible from a single statistic, such as the overall average. We see that while stability trends are similar for different overlap amounts, stability is consistently higher as the overlap amount increases. This means that if we use downsampling with replacement, we cannot reliably compare stability across multiple corpora of varying sizes (e.g., Wikipedia and the much smaller Bible corpus). The overlap amount would change depending on the size of the corpus, changing our stability measurement.
+
+| Experiment | Machine | Timing |
+| Training one w2v embedding on one Wikipedia corpus (Section 4) | Machine 1 | 13 sec. |
+| Training one GloVe embedding on one Wikipedia corpus (Section 4) | Machine 1 | 12 min. |
+| Calculating stability on one Wikipedia corpus (Section 4) | Machine 1 | 17 sec. |
+| Training one w2v embedding on one Bible corpus (Section 4) | Machine 1 | 5 sec. |
+| Calculating stability on one Bible corpus (Section 4) | Machine 1 | 12 sec. |
+| Training regression model (Section 5) | Machine 2 | <7 sec. |
+| Leave-one-out cross-validation (Section 6) | Machine 2 | <4 sec. |
+
+Table 2: Runtimes for different experimental portions of this work. Machine 1 has four Intel(R) Xeon(R) CPU E5-1603 v3 @ 2.80 GHz processors. Machine 2 is a 2.9 GHz Dual-Core Intel Core i5.
+
+(a) Sampling with replacement, varying percentage overlap between samples.
+
+(b) Sampling without replacement, varying sample size.
+
+Figure 1: Measuring the impact of data sampling parameters on stability measurements. Results when sampling with replacement consistently increase as overlap increases (a). This poses a problem, as results may reflect corpus size rather than intrinsic stability. Results when sampling without replacement do show a consistent pattern, even when the sample is only 50,000 sentences, a tenth of the largest sample size (b).
+
+Instead of downsampling with replacement, we consider downsampling without replacement, shown in Figure 1b for different downsample sizes. We see that varying the size of the downsample does not have a large effect on the patterns of stability. Particularly when looking at lower stability, the trends are remarkably consistent, even when the downsample size varies from 50,000 sentences to 500,000 sentences. The pattern grows less consistent when looking at higher stability, especially with smaller downsample sizes.
+
+This comparison (Figures 1a and 1b) shows that downsampling without replacement produces more consistent (and thus comparable) stability results than downsampling with replacement. Thus, we only consider downsampling without replacement.
+
+# 4.2 Stability for Wikipedia and the Bible
+
+Our first study, shown in Figure 2, considers stability across the 26 languages included in both Wikipedia and the Bible. These results show three settings for Wikipedia: (1) Stability of GloVe embeddings across five downsampled corpora, (2) Stability of word2vec (w2v) embeddings across five downsampled corpora, and (3) Stability of word2vec embeddings using five random seeds on one downsampled corpus. For the Bible, we only show the third case, since it is too small for downsampling.
+
+Each downsampled corpus is 100,000 sentences, and words that occur with a frequency less than five are ignored. Previous work (Pierrejean and Tanguy, 2018) has indicated that words that appear this infrequently will be very unstable. We use standard parameters for both embedding algorithms. For each embedding, we calculate the ten nearest neighbors of every word using FAISS (Johnson et al., 2019). Finally, for each language, we calculate the stability for every word in that language across all five embedding spaces. Experimental runtimes are listed in Table 2.
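
Concretely, the stability of a word is the average percent overlap of its ten-nearest-neighbor lists across pairs of embedding spaces (following Wendlandt et al., 2018). A minimal NumPy sketch, using brute-force cosine search in place of FAISS:

```python
import numpy as np
from itertools import combinations

def knn(emb, idx, k=10):
    """Indices of the k nearest neighbors of row `idx` by cosine similarity."""
    unit = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sims = unit @ unit[idx]
    sims[idx] = -np.inf  # a word is not its own neighbor
    return set(np.argsort(-sims)[:k])

def stability(spaces, idx, k=10):
    """Average percent overlap of the k-NN sets of word `idx` across all
    pairs of embedding spaces (brute force here; the paper uses FAISS)."""
    overlaps = [100.0 * len(knn(a, idx, k) & knn(b, idx, k)) / k
                for a, b in combinations(spaces, 2)]
    return float(np.mean(overlaps))
```

Identical embedding spaces give a stability of 100%; independently trained spaces typically give the much lower values reported in Figure 2.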
+
+Figure 2 shows bucketed stability for both Wikipedia and the Bible. Most languages have the same overall trend: a large number of relatively unstable word embeddings, then a fairly flat distribution between $25\%$ and $75\%$, and a sharp drop at high stability. This indicates that the conclusions from prior work on English apply to other languages as well. In particular, it means that any work that uses embeddings to study a language should train multiple embedding spaces to ensure robust findings.
+
+Figure 2: Percentage of words that occur in each stability bucket for four different methods, three on Wikipedia and one on the Bible. The 26 languages in common are shown here. The average stability for each method is shown on the individual graphs.
+
+Some languages have substantially more stable embeddings than others. Comparing GloVe downsamples on Wikipedia, Vietnamese has the most stable embeddings (avg. $2.46\%$), while Korean has the least stable embeddings (avg. $0.58\%$). The plot for Vietnamese has a different trend than many of the other plots in Figure 2. Vietnamese is the only Austro-Asiatic language in our dataset, so there could be multiple distinctive properties related to its exhibiting different patterns than the other languages.
+
+Finally, varying the training algorithm has a smaller impact than changing the dataset. Keeping the dataset fixed (Wikipedia) and varying the algorithm, we see similar trends. Keeping the algorithm fixed (w2v random seeds) and varying the dataset, we often see substantial shifts. This means that in order to compare languages, we need to carefully control for the content of the corpus (which the Bible data allows us to do). While the Bible is too small to support downsampling, these results on Wikipedia suggest that experiments varying the random seed produce variation similar to experiments varying the data sample.
+
+(a) German
+
+(b) French
+
+Figure 3: Percentage of words that occur in each stability bucket for different Bible translations.
+
+To confirm this finding, we consider two languages with multiple Bible translations: German and French. We average stability across five word2vec embeddings trained with five random seeds on one downsampled corpus of 100,000 randomly sampled sentences. Figure 3 shows the stability patterns for each. The results are very consistent, indicating that variations in translator behavior do not impact stability the way shifting from one corpus to another does. The largest shift is for the French Parole de Vie translation (top line in yellow in Figure 3b), which intentionally uses simpler, everyday language. For further experiments on languages with multiple Bible translations, we choose the translation with the highest average stability.
+
+It is difficult to infer more from these figures alone. In the next section, we use regression modeling to identify patterns in the results. Based on the observations above, we use results from GloVe across five downsampled corpora for Wikipedia, and results across five random seeds for the Bible.
+
+# 5 Regression Modeling
+
+We now explore linguistic factors that correlate with stability. To draw conclusions about specific linguistic features, we use a ridge regression model (Hoerl and Kennard, 1970) $^{10}$ to predict the average stability of all words in a language given features reflecting language properties. Regression models have previously been used to measure the impact of individual features (Singh et al., 2016). Ridge regression regularizes the magnitude of the model weights, producing a more interpretable model than non-regularized linear regression. We experiment with different regularization strengths and use the best-performing value $(\alpha = 10)$ . We choose to use a linear model here because of its interpretability. While more complicated models might yield additional insight, we show that there are interesting connections to be drawn from a linear model.
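
The ridge estimator itself reduces to a closed-form solve; the sketch below (our own code, without an intercept term for brevity) uses the same $\alpha = 10$ chosen above:

```python
import numpy as np

def ridge_weights(X, y, alpha=10.0):
    """Closed-form ridge regression: w = (X'X + alpha*I)^{-1} X'y.
    Larger alpha shrinks the weights toward zero, which is what makes
    the per-feature weights easier to compare and interpret."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)
```

In practice one would use scikit-learn's `Ridge` (the paper cites scikit-learn); the closed form shows why the weights are regularized: the `alpha * np.eye(d)` term penalizes weight magnitude.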
+
+# 5.1 Model Input and Output
+
+Our model takes linguistic features of a language as input and predicts stability as output. Since WALS properties are categorical, we turn each property into a set of binary features. If a particular language does not have a known value for a given property, then all of these features are marked zero.
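
For example (with hypothetical property values, not actual WALS data), the conversion of one categorical property into binary features, with all zeros for languages whose value is unknown, can be written as:

```python
def binarize_property(languages, values_by_lang):
    """One binary feature per observed value of a categorical WALS property.
    Languages with an unknown value (missing/None) get all zeros."""
    feature_values = sorted({v for v in values_by_lang.values() if v is not None})
    rows = {}
    for lang in languages:
        v = values_by_lang.get(lang)
        rows[lang] = [1 if v == fv else 0 for fv in feature_values]
    return feature_values, rows
```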
+
+In order to draw out important correlations between linguistic features and stability, we filter the languages and WALS properties that we consider. We only include languages that have at least $25\%$ of all WALS properties. Then, we only consider WALS properties that cover at least $25\%$ of the filtered languages. We remove all WALS properties that do not have at least two features that each include at least five languages. Note that because all of our input features are binary, all weights are easily comparable. After this filtering, we end up
+
+with 37 languages, $^{12}$ and 97 WALS properties.
+
+We also group highly correlated WALS features. We create the groupings by combining features with a Pearson correlation greater than 0.8. A feature is included in a particular grouping if it correlates highly with any of the features already in the group. Each grouped feature is marked as one if any of the included features are marked as one.
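
A greedy single-pass version of this grouping rule (our approximation of the procedure described, since the paper does not publish the exact traversal order) looks like this:

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def group_correlated(columns, threshold=0.8):
    """Assign each feature column to the first group containing any
    feature it correlates with above the threshold."""
    groups = []
    for i in range(len(columns)):
        for g in groups:
            if any(pearson(columns[i], columns[j]) > threshold for j in g):
                g.append(i)
                break
        else:
            groups.append([i])
    return groups

def merge_group(columns, group):
    """A grouped feature is 1 wherever any member feature is 1."""
    n = len(columns[group[0]])
    return [int(any(columns[j][r] == 1 for j in group)) for r in range(n)]
```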
+
+For each model, we bootstrap over the input features 1,000 times, allowing us to calculate standard error for the $R^2$ score and the model weights. Calculating significance for each feature allows us to discard highly variable weights and focus on features that consistently contribute to the regression model, giving us more confidence in the results.
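
The bootstrap step can be stated generically (a helper of our own, mirroring the 1,000 resamples described above):

```python
import random

def bootstrap_se(values, stat, n_boot=1000, seed=0):
    """Standard error of `stat` estimated by resampling `values`
    with replacement n_boot times."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_boot):
        resample = [values[rng.randrange(len(values))] for _ in values]
        estimates.append(stat(resample))
    mean = sum(estimates) / n_boot
    return (sum((e - mean) ** 2 for e in estimates) / (n_boot - 1)) ** 0.5
```

Applying the same resampling to the fitted model weights yields the per-weight standard errors used to discard highly variable features.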
+
+The output of our model is the average stability of a language, which is calculated by averaging together the stability of all of the words in a language. If a language is present in both corpora, we average the stabilities from the two corpora.
+
+# 5.2 Evaluation
+
+We evaluate our model in two ways. First, we measure goodness of fit using the coefficient of determination $R^2$ . This measures how much variance in the dependent variable $y$ (average stability) is captured by the independent variables $x$ (WALS properties). A model that always predicts the expected value of $y$ , regardless of the input features, will have an $R^2$ score of 0. The highest possible $R^2$ score is 1, and $R^2$ can be negative. Second, in addition to the $R^2$ score, we run leave-one-out cross-validation across all languages, and report absolute error on the left-out language. We compare this to a baseline of choosing the average stability over all training languages.
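
Both evaluation quantities are easy to state in code (generic helpers, not the paper's implementation):

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination. A constant predictor of the mean
    scores 0; a perfect predictor scores 1; worse-than-mean is negative."""
    m = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - m) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

def loo_baseline_error(y):
    """Leave-one-out absolute error of the baseline that predicts the
    average stability of the remaining (training) languages."""
    errors = []
    for i, held_out in enumerate(y):
        rest = y[:i] + y[i + 1:]
        errors.append(abs(held_out - sum(rest) / len(rest)))
    return sum(errors) / len(errors)
```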
+
+We use the individual feature weights to measure how much a particular feature contributes to the overall model. When reporting weights, we train the model using all 37 languages. Because we are primarily using regression modeling to learn associations between certain features and stability, no test data are necessary. The emphasis is on the model itself and the feature weights it learns, not on the model's performance on a task.
+
+(a) Position of Case Affixes
+
+(b) Prefixing vs. Suffixing in Inflectional Morphology
+
+(c) Position of Tense-Aspect Affixes
+
+Figure 4: Affixing properties compared using box-and-whisker plots.
+
+# 6 Results and Discussion
+
+Our regression model has a high $R^2$ score of $0.96 \pm 0.00$ , indicating that the model fits the data well. Significant weights with the highest magnitude are shown in Table 3. Running leave-one-out cross-validation across all languages, we get an average absolute error of $0.62 \pm 0.53$ . For comparison, using the average stability gives an average absolute error of $0.86 \pm 0.55$ . (A two-sample t-test comparison gives a p-value of 0.060.)
+
+Table 4 breaks down the regression results by broad WALS category, listing both the number of binary features per category and the average magnitude of weights for features in that category. The two most important groups of features are Nominal Categories and Verbal Categories. Both of these categories have a large number of features and a high average magnitude. While the Lexicon category has a high average magnitude, it contains very few features. To further explore these results, we highlight a few WALS properties in more detail.
+
+| Cat. | WALS Attribute | Weight |
+| VC, M | Suffixing Grouping: Prefixing vs. Suffixing in Inflectional Morphology: Strongly Suffixing; Position of Tense-Aspect Affixes: Tense-aspect suffixes | -0.14 ± 0.0 |
+| L | Hand and Arm: Different | -0.11 ± 0.0 |
+| CS | Relativization on Obliques: Gap | -0.10 ± 0.0 |
+| VC | Overlap between Situational & Epistemic Modal Marking: Overlap for both possibility & necessity | -0.09 ± 0.0 |
+| NC | Ordinal Numerals: First, second, three-th | -0.08 ± 0.0 |
+| NC | Comitatives and Instrumentals: Differentiation | -0.08 ± 0.0 |
+| P | Rhythm Types: Trochaic | -0.08 ± 0.0 |
+| WO | Order of Adjective and Noun: Adjective-Noun | -0.07 ± 0.0 |
+| WO | Order of Adposition and Noun Phrase: Postpositions | -0.07 ± 0.0 |
+| | No Gender Grouping: Systems of Gender Assignment: No gender; Sex-based and Non-sex-based Gender Systems: No gender; Gender Distinctions in Independent Personal Pronouns: No gender distinctions; Number of Genders: None | 0.05 ± 0.0 |
+| NC | Voicing and Gaps in Plosive Systems: Other | 0.06 ± 0.0 |
+| M | Prefixing vs. Suffixing in Inflectional Morphology: Little affixation | 0.06 ± 0.0 |
+| CS | 'Want' Complement Subjects: Subject is expressed overtly | 0.06 ± 0.0 |
+| VC | The Morphological Imperative: No second-person imperatives | 0.06 ± 0.0 |
+| CS | Purpose Clauses: Balanced | 0.06 ± 0.0 |
+| | Prepositions Grouping: Order of Adposition and Noun Phrase: Prepositions; Relationship between the Order of Object and Verb and the Order of Adposition and Noun Phrase: VO and Prepositions | 0.06 ± 0.0 |
+| WO | Order of Demonstrative and Noun: Noun-Demonstrative | 0.07 ± 0.0 |
+| NC | Position of Case Affixes: No case affixes or adpositional clitics | 0.11 ± 0.0 |
+
+Table 3: Weights with the highest magnitude in the regression model. Negative weights correspond with low stability, and positive weights correspond with high stability.
+
+| WALS Category | Num. Features | Avg. Magnitude |
+| Simple Clauses (SC) | 30 | 0.019 |
+| Nominal Syntax (NS) | 2 | 0.021 |
+| Other (O) | 2 | 0.023 |
+| Complex Sentences (CS) | 11 | 0.028 |
+| Morphology (M) | 18 | 0.031 |
+| Word Order (WO) | 32 | 0.031 |
+| Phonology (P) | 21 | 0.032 |
+| Nominal Categories (NC) | 40 | 0.036 |
+| Verbal Categories (VC) | 27 | 0.036 |
+| Lexicon (L) | 6 | 0.039 |
+
+Table 4: Number of binary features and average magnitude of weights in the regression model for different WALS categories. Grouped features are included in each category that they cover.
+
+Suffixes and prefixes. Table 3 shows that three of the top features are related to affixes (suffixes and prefixes). Specifically, three main properties deal with affixes: Position of Case Affixes (Dryer, 2013a), Prefixing vs. Suffixing in Inflectional Morphology (Dryer, 2013c), and Position of Tense-Aspect Affixes (Dryer, 2013b). Distributions of these features in the 37 languages used for the regression model are shown in Figure 4 (categories with fewer than five languages are not shown).
+
+For all three of these properties, more affixing is associated with lower stability. When considering word embeddings, this result makes intuitive sense. Affixes cause there to be many different word variations (e.g., walk, walked, walking, walker), which may not be handled consistently by the embedding algorithm, leading to lower average stability.
+
+Gendered Languages. Table 3 also highlights a grouping of WALS properties related to whether a language is gendered or not. Four WALS properties are relevant to this: Systems of Gender Assignment (Corbett, 2013c), Sex-based and Non-sex-based Gender Systems (Corbett, 2013b), Gender Distinctions in Independent Personal Pronouns (Siewierska, 2013), and Number of Genders (Corbett, 2013a). In general, a language is considered to have a gender system if different parts-of-speech are required to agree in gender (as opposed to simply having gendered nouns). Distributions of these features are shown in Figure 5.
+
+For all of these properties, languages with no gender system tend to have higher average stability. Again, this result makes sense in the context of word embeddings. Languages with gender systems will have more word forms (e.g., both male and female word forms), which may not be handled consistently by the embedding algorithm.
+
+(a) Systems of Gender Assignment; Sex-based and Non-sex-based Gender Systems (Gender Grouping: No gender; Sex-based)
+
+(b) Gender Distinctions in Independent Personal Pronouns
+
+(c) Number of Genders
+
+Figure 5: Gender properties compared using box-and-whisker plots. Note, the 12 languages with "No Gender Grouping" are not the same across the three plots.
+
+# 7 Conclusion
+
+In this paper, we considered how stability varies across different languages. This work is important because algorithms such as GloVe and word2vec continue to be effective in a wide variety of scenarios (Arora et al., 2020), particularly in the computational humanities and for languages where large corpora are not available. We studied the relationship between linguistic properties and stability, which has previously been understudied. We drew out several aspects of this relationship, including that languages with more affixing tend to have less stable embeddings, and languages with no gender systems tend to have more stable embeddings. These insights can inform the design of embeddings in many languages. For example, this work suggests that future embedding space designs need to take into account gendered words and morphologically rich words with affixes.
+
+# 8 Acknowledgements
+
+This material is based in part upon work supported by the National Science Foundation (grant #1815291) and by the John Templeton Foundation (grant #61156). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation or John Templeton Foundation.
+
+# References
+
+Abdul Z Abdulrahim. 2019. Ideological drifts in the US constitution: Detecting areas of contention with models of semantic change. In NeurIPS Joint Workshop on AI for Social Good, Vancouver, Canada.
+Rami Al-Rfou', Bryan Perozzi, and Steven Skiena. 2013. Polyglot: Distributed word representations for multilingual NLP. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning, pages 183-192, Sofia, Bulgaria. Association for Computational Linguistics.
+Maria Antoniak and David Mimno. 2018. Evaluating the stability of embedding-based word similarities. Transactions of the Association for Computational Linguistics, 6:107-119.
+Simran Arora, Avner May, Jian Zhang, and Christopher Ré. 2020. Contextual embeddings: When are they worth it? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2650-2663, Online. Association for Computational Linguistics.
+Mikel Artetxe, Gorka Labaka, Iñigo Lopez-Gazpio, and Eneko Agirre. 2018. Uncovering divergent linguistic information in word embeddings with lessons for intrinsic and extrinsic evaluation. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 282-291, Brussels, Belgium. Association for Computational Linguistics.
+Marco Baroni, Georgiana Dinu, and German Kruszewski. 2014. Don't count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 238-247, Baltimore, Maryland. Association for Computational Linguistics.
+Santwana Chimalamarri, Dinkar Sitaram, and Ashritha Jain. 2020. Morphological segmentation to improve crosslingual word embeddings for low resource languages. ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP), 19(5):1-15.
+
+Billy Chiu, Anna Korhonen, and Sampo Pyysalo. 2016. Intrinsic evaluation of word vectors fails to predict extrinsic performance. In Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP, pages 1-6, Berlin, Germany. Association for Computational Linguistics.
+Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12(Aug):2493-2537.
+Greville G. Corbett. 2013a. Number of genders. In Matthew S. Dryer and Martin Haspelmath, editors, The World Atlas of Language Structures Online. Max Planck Institute for Evolutionary Anthropology, Leipzig.
+Greville G. Corbett. 2013b. Sex-based and non-sex-based gender systems. In Matthew S. Dryer and Martin Haspelmath, editors, The World Atlas of Language Structures Online. Max Planck Institute for Evolutionary Anthropology, Leipzig.
+Greville G. Corbett. 2013c. Systems of gender assignment. In Matthew S. Dryer and Martin Haspelmath, editors, The World Atlas of Language Structures Online. Max Planck Institute for Evolutionary Anthropology, Leipzig.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
+Matthew S. Dryer. 2013a. Position of case affixes. In Matthew S. Dryer and Martin Haspelmath, editors, The World Atlas of Language Structures Online. Max Planck Institute for Evolutionary Anthropology, Leipzig.
+Matthew S. Dryer. 2013b. Position of tense-aspect affixes. In Matthew S. Dryer and Martin Haspelmath, editors, The World Atlas of Language Structures Online. Max Planck Institute for Evolutionary Anthropology, Leipzig.
+Matthew S. Dryer. 2013c. Prefixing vs. suffixing in inflectional morphology. In Matthew S. Dryer and Martin Haspelmath, editors, The World Atlas of Language Structures Online. Max Planck Institute for Evolutionary Anthropology, Leipzig.
+Matthew S. Dryer and Martin Haspelmath, editors. 2013. WALS Online. Max Planck Institute for Evolutionary Anthropology, Leipzig.
+Johannes Hellrich, Sven Buechel, and Udo Hahn. 2019. Modeling word emotion in historical language: Quantity beats supposed stability in seed
+
+word selection. In Proceedings of the 3rd Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature, pages 1-11, Minneapolis, USA. Association for Computational Linguistics.
+Tom Heyman and Geert Heyman. 2019. Can prediction-based distributional semantic models predict typicality? Quarterly Journal of Experimental Psychology, pages 2084-2109.
+Arthur E. Hoerl and Robert W. Kennard. 1970. Ridge regression: Biased estimation for nonorthogonal problems. Technometrics, 12(1):55-67.
+Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2019. Billion-scale similarity search with GPUs. IEEE Transactions on Big Data.
+Ishani Joshi, Purvi Koringa, and Suman Mitra. 2019. Word embeddings in low resource Gujarati language. In 2019 International Conference on Document Analysis and Recognition Workshops (IC-DARW), volume 5, pages 110-115. IEEE.
+Megan Leszczynski, Avner May, Jian Zhang, Sen Wu, Christopher R. Aberger, and Christopher Ré. 2020. Understanding the downstream instability of word embeddings. In Proceedings of Machine Learning and Systems 2020, MLSys 2020, Austin, TX, USA, March 2-4, 2020. mlsys.org.
+Mike Maxwell and Baden Hughes. 2006. Frontiers in linguistic annotation for lower-density languages. In Proceedings of the Workshop on Frontiers in Linguistically Annotated Corpora 2006, pages 29-37, Sydney, Australia. Association for Computational Linguistics.
+Arya D. McCarthy, Rachel Wicks, Dylan Lewis, Aaron Mueller, Winston Wu, Oliver Adams, Garrett Nicolai, Matt Post, and David Yarowsky. 2020. The Johns Hopkins University Bible corpus: 1600+ tongues for typological exploration. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 2884-2892, Marseille, France. European Language Resources Association.
+Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In 27th Annual Conference on Neural Information Processing Systems, pages 3111-3119, Lake Tahoe, Nevada.
+Milad Moradi, Maedeh Dashti, and Matthias Samwald. 2020. Summarization of biomedical articles using domain-specific word embeddings and graph ranking. Journal of Biomedical Informatics, 107:103452.
+Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, David Cournapeau, Matthieu Brucher,
+
+Matthieu Perrot, and Edouard Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830.
+Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.
+Bénédicte Pierrejean and Ludovic Tanguy. 2018. Towards qualitative word embeddings evaluation: Measuring neighbors variation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop, pages 32-39, New Orleans, Louisiana, USA. Association for Computational Linguistics.
+Yuanyuan Qiu, Hongzheng Li, Shen Li, Yingdi Jiang, Renfen Hu, and Lijiao Yang. 2018. Revisiting correlations between intrinsic and extrinsic evaluations of word embeddings. In Chinese Computational Linguistics and Natural Language Processing Based on Naturally Annotated Big Data, pages 209-221. Springer.
+Anna Rogers, Aleksandr Drozd, Anna Rumshisky, and Yoav Goldberg, editors. 2019. Proceedings of the 3rd Workshop on Evaluating Vector Space Representations for NLP. Association for Computational Linguistics, Minneapolis, USA.
+Anna Rogers, Shashwath Hosur Ananthakrishna, and Anna Rumshisky. 2018. What's in your embedding, and how it predicts task performance. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2690-2703, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
+Bianca Scarlini, Tommaso Pasini, and Roberto Navigli. 2020. With more contexts comes better performance: Contextualized sense embeddings for all-round word sense disambiguation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3528-3539, Online. Association for Computational Linguistics.
+Anna Siewierska. 2013. Gender distinctions in independent personal pronouns. In Matthew S. Dryer and Martin Haspelmath, editors, The World Atlas of Language Structures Online. Max Planck Institute for Evolutionary Anthropology, Leipzig.
+Abhinav Deep Singh, Poojan Mehta, Samar Husain, and Rajkumar Rajakrishnan. 2016. Quantifying sentence complexity based on eye-tracking measures. In Proceedings of the Workshop on Computational Linguistics for Linguistic Complexity (CL4LC), pages 202-212, Osaka, Japan. The COLING 2016 Organizing Committee.
+
+Nathan Stringham and Mike Izbicki. 2020. Evaluating word embeddings on low-resource languages. In Proceedings of the First Workshop on Evaluation and Comparison of NLP Systems, pages 176-186, Online. Association for Computational Linguistics.
+Yulia Tsvetkov, Manaal Faruqui, Wang Ling, Brian MacWhinney, and Chris Dyer. 2016. Learning the curriculum with Bayesian optimization for task-specific word representation learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 130-139, Berlin, Germany. Association for Computational Linguistics.
+Shirui Wang, Wenan Zhou, and Chao Jiang. 2020a. A survey of word embeddings based on deep learning. Computing, 102(3):717-740.
+Yuxuan Wang, Yutai Hou, Wanxiang Che, and Ting Liu. 2020b. From static to dynamic word representations: a survey. International Journal of Machine Learning and Cybernetics, pages 1-20.
+Laura Wendlandt, Jonathan K. Kummerfeld, and Rada Mihalcea. 2018. Factors influencing the surprising instability of word embeddings. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2092-2102, New Orleans, Louisiana. Association for Computational Linguistics.
+Yadollah Yaghoobzadeh and Hinrich Schütze. 2016. Intrinsic subspace evaluation of word embedding representations. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 236-246, Berlin, Germany. Association for Computational Linguistics.
+Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In Thirty-third Conference on Neural Information Processing Systems, pages 5754-5764, Vancouver, Canada.
\ No newline at end of file
diff --git a/analyzingthesurprisingvariabilityinwordembeddingstabilityacrosslanguages/images.zip b/analyzingthesurprisingvariabilityinwordembeddingstabilityacrosslanguages/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..42f6d63b53863f2b729b4deead80ce1b3b66e76a
--- /dev/null
+++ b/analyzingthesurprisingvariabilityinwordembeddingstabilityacrosslanguages/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:733099a697daeb20d2a38e19c5b846cc593f38cb28ff96951363e13d6aedeeab
+size 730944
diff --git a/analyzingthesurprisingvariabilityinwordembeddingstabilityacrosslanguages/layout.json b/analyzingthesurprisingvariabilityinwordembeddingstabilityacrosslanguages/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..94a9b1ddc9a9ab9f3f9c1968cf8916a174d7609a
--- /dev/null
+++ b/analyzingthesurprisingvariabilityinwordembeddingstabilityacrosslanguages/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7ff4a2a3d35e881b5129beecf696a71a3524141371d98d3bfcc49064c4399372
+size 378781
diff --git a/anempiricalinvestigationofwordalignmentsupervisionforzeroshotmultilingualneuralmachinetranslation/e8fe2dd1-fd15-473d-9790-3a8ccd6f54d4_content_list.json b/anempiricalinvestigationofwordalignmentsupervisionforzeroshotmultilingualneuralmachinetranslation/e8fe2dd1-fd15-473d-9790-3a8ccd6f54d4_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..2193d5c587fe4c435c85f1db16510159839b29f6
--- /dev/null
+++ b/anempiricalinvestigationofwordalignmentsupervisionforzeroshotmultilingualneuralmachinetranslation/e8fe2dd1-fd15-473d-9790-3a8ccd6f54d4_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3a1f8484cadacacb39a1d74adcb65d5c79aa5dffc2e0ccae0ddced6a8868a51d
+size 55180
diff --git a/anempiricalinvestigationofwordalignmentsupervisionforzeroshotmultilingualneuralmachinetranslation/e8fe2dd1-fd15-473d-9790-3a8ccd6f54d4_model.json b/anempiricalinvestigationofwordalignmentsupervisionforzeroshotmultilingualneuralmachinetranslation/e8fe2dd1-fd15-473d-9790-3a8ccd6f54d4_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..f897629bdb9e57f616104304fcbb5bbf52b420fb
--- /dev/null
+++ b/anempiricalinvestigationofwordalignmentsupervisionforzeroshotmultilingualneuralmachinetranslation/e8fe2dd1-fd15-473d-9790-3a8ccd6f54d4_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:476048b5c9c7cf5813bd179db9c92cb1c5054e34eac4f86f6d26a74de9544c6e
+size 68687
diff --git a/anempiricalinvestigationofwordalignmentsupervisionforzeroshotmultilingualneuralmachinetranslation/e8fe2dd1-fd15-473d-9790-3a8ccd6f54d4_origin.pdf b/anempiricalinvestigationofwordalignmentsupervisionforzeroshotmultilingualneuralmachinetranslation/e8fe2dd1-fd15-473d-9790-3a8ccd6f54d4_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..00e232d0ab8222537a2e085a2615cfb002df62f0
--- /dev/null
+++ b/anempiricalinvestigationofwordalignmentsupervisionforzeroshotmultilingualneuralmachinetranslation/e8fe2dd1-fd15-473d-9790-3a8ccd6f54d4_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cdbeaff334bdef416cbdd348563751cdae4516c255377a53368a93b38d5daca4
+size 395112
diff --git a/anempiricalinvestigationofwordalignmentsupervisionforzeroshotmultilingualneuralmachinetranslation/full.md b/anempiricalinvestigationofwordalignmentsupervisionforzeroshotmultilingualneuralmachinetranslation/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..3a9472389904e1300e4b601daff9af6ee6937dc5
--- /dev/null
+++ b/anempiricalinvestigationofwordalignmentsupervisionforzeroshotmultilingualneuralmachinetranslation/full.md
@@ -0,0 +1,204 @@
+# An Empirical Investigation of Word Alignment Supervision for Zero-Shot Multilingual Neural Machine Translation
+
+Alessandro Raganato, Raúl Vázquez, Mathias Creutz and Jörg Tiedemann
+
+University of Helsinki, {name.surname}@helsinki.fi
+
+# Abstract
+
+Zero-shot translation is a fascinating feature of Multilingual Neural Machine Translation (MNMT) systems. These MNMT models are usually trained on English-centric data, i.e. English either as the source or target language, and with a language label prepended to the input indicating the target language. However, recent work has highlighted several flaws of these models in zero-shot scenarios, where language labels are ignored and the wrong language is generated, or where different runs show highly unstable results. In this paper, we investigate the benefits of an explicit alignment to language labels in Transformer-based MNMT models in the zero-shot context, by jointly training one cross attention head with word alignment supervision to stress the focus on the target language label. We compare and evaluate several MNMT systems on three multilingual MT benchmarks of different sizes, showing that simply supervising one cross attention head to focus both on word alignments and language labels reduces the bias towards translating into the wrong language, improving the zero-shot performance overall. Moreover, as an additional advantage, we find that our alignment supervision leads to more stable results across different training runs.
+
+# 1 Introduction
+
+Multilingual Neural Machine Translation (MNMT) focuses on translation between multiple language pairs through a single optimized neural model, and has been explored from different angles, witnessing rapid progress in recent years (Arivazhagan et al., 2019b; Wang et al., 2020; Dabre et al., 2020; Lin et al., 2021). Besides the great flexibility MNMT models offer, they stand out for their so-called zero-shot translation capabilities, i.e., translating between all combinations of languages available in the training data, including those with no parallel data seen at training time (Ha et al., 2016; Firat et al., 2016; Johnson et al., 2017). Many studies have investigated this feature, focusing on the impact of both the model architecture design (Arivazhagan et al., 2019a; Pham et al., 2019) and data pre-processing (Lee et al., 2017; Wang et al., 2019; Rios et al., 2020; Wu et al., 2021). Broadly speaking, MNMT architectures are categorized according to their degree of parameter sharing, from fully shared (Johnson et al., 2017) to the use of language-specific components (Vázquez et al., 2020; Escolano et al., 2021; Zhang et al., 2021). The Johnson et al. (2017) MNMT model is widely used, due to its simplicity and good translation quality. It uses the fully shared parameters setting, and relies on appending an artificial language label to each input sentence to indicate the target language. While this method allows for zero-shot translation, several works have highlighted two major flaws: i) its failure to reliably generalize to unseen language pairs, ending up with the so-called off-target issue, where the language label is ignored and the wrong target language is produced as a result (Zhang et al., 2020), ii) its lack of stability in translation results between different training runs (Rios et al., 2020).
+
+In this work, we investigate the role of guided alignment in the Johnson et al. (2017) setting, by jointly training one cross attention head to explicitly focus on the target language label. We show that alignment supervision mitigates the off-target translation issue in the zero-shot case. Our method improves the zero-shot translation performance and yields more stable results across different training runs.
+
+# 2 Methodology
+
+Figure 1: English $\rightarrow$ German example sentence with different alignment methods. Alignments in (a) show word alignments between corresponding words in the two languages, (b) our introduced alignments between all target words and the input language label, and (c) the union of the two.
+
+Alignment Methods. Given a bitext $B_{src} = (s_1, \dots, s_j, \dots, s_N)$ and $B_{trg} = (t_1, \dots, t_i, \dots, t_M)$, where $B_{src}$ is a sentence in the source language and $B_{trg}$ is its translation in the target language, an alignment $A$ is a mapping of words between $B_{src}$ and $B_{trg}$ (Tiedemann, 2011), formally defined as a subset of the Cartesian product of the word positions (Och and Ney, 2003):
+
+$$
+A \subseteq \{(j, i): j = 1, \dots, N; i = 1, \dots, M\} \tag{1}
+$$
+
+We study three different settings: (a) standard word alignment between corresponding words, (b) alignments between all target words and the language label in the input string, and (c) the union between the former two. Figure 1 shows an example of those approaches. To produce word alignments between parallel sentences, i.e., Figure 1 (a), we use the awesome-align tool (Dou and Neubig, 2021), a recent work that leverages multilingual BERT (Devlin et al., 2019) to extract the links.
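To make the three settings concrete, the following sketch (with hypothetical helper names, not code from the paper) builds each variant as a set of (source position, target position) index pairs in the sense of Eq. (1), assuming the target-language label occupies source position 1:

```python
# Illustrative sketch: the three alignment-supervision variants of Figure 1
# as sets of 1-based (source_pos, target_pos) pairs. The language label is
# assumed to sit at source position 1; `word_alignments` would come from a
# word aligner such as awesome-align.

def alignment_variants(word_alignments, num_target_words):
    a = set(word_alignments)                              # (a) word alignments only
    b = {(1, i) for i in range(1, num_target_words + 1)}  # (b) label -> every target word
    c = a | b                                             # (c) union of the two
    return a, b, c

# Toy example: source "<2de> Thank you" -> target "Danke",
# with "Thank" (source position 2) aligned to "Danke" (target position 1).
a, b, c = alignment_variants({(2, 1)}, num_target_words=1)
```

A real pipeline would derive `word_alignments` per sentence pair and convert these sets into 0/1 matrices for the loss in Section 2.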
+
+Models. To train Many-to-Many MNMT models, we use a 6-layer Transformer architecture (Vaswani et al., 2017), prepending a language label to the input to indicate the target language (Johnson et al., 2017). Following Garg et al. (2019), given an alignment matrix $AM_{M,N}$ and an attention matrix $AH_{M,N}$ computed by a cross attention head, for each target word $i$, we use the following cross-entropy loss $\mathcal{L}_a$ to minimize the Kullback-Leibler divergence between $AH$ and $AM$:
+
+$$
+\mathcal{L}_a(AH, AM) = -\frac{1}{M} \sum_{i=1}^{M} \sum_{j=1}^{N} AM_{i,j} \log\left(AH_{i,j}\right) \tag{2}
+$$
+
+The overall loss $\mathcal{L}$ is:
+
+$$
+\mathcal{L} = \mathcal{L}_t + \gamma \mathcal{L}_a(AH, AM) \tag{3}
+$$
+
+where $\mathcal{L}_t$ is the standard NLL translation loss, and $\gamma$ is a hyperparameter. We use $\gamma = 0.05$, supervising only one cross attention head at the third last layer.$^2$ Given the sparse nature of the alignments, we replace the softmax operator in the cross attention head with the $\alpha$-entmax function (Peters et al., 2019; Correia et al., 2019). Entmax allows sparse attention weights for any $\alpha > 1$. Following Peters et al. (2019), we use $\alpha = 1.5$.
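The pieces above can be sketched in plain Python (an illustrative re-implementation under our own naming, not the authors' code): a bisection-based 1.5-entmax over attention scores, the alignment cross-entropy of Eq. (2), and the combined objective of Eq. (3):

```python
# Sketch of 1.5-entmax and the guided-alignment loss. `entmax15` solves for
# the threshold tau such that p_i = max(0, z_i/2 - tau)^2 sums to 1; the total
# mass is decreasing in tau, so bisection on [max(s) - 1, max(s)] converges.
import math

def entmax15(scores, iters=60):
    s = [z / 2.0 for z in scores]
    lo, hi = max(s) - 1.0, max(s)
    for _ in range(iters):
        tau = (lo + hi) / 2.0
        mass = sum(max(0.0, v - tau) ** 2 for v in s)
        lo, hi = (tau, hi) if mass > 1.0 else (lo, tau)
    return [max(0.0, v - lo) ** 2 for v in s]

def alignment_loss(AH, AM, eps=1e-9):
    """Eq. (2): -1/M * sum_{i,j} AM[i][j] * log(AH[i][j]), rows = target words."""
    M = len(AM)
    return -sum(AM[i][j] * math.log(AH[i][j] + eps)
                for i in range(M) for j in range(len(AM[i]))) / M

def total_loss(translation_nll, AH, AM, gamma=0.05):
    """Eq. (3): translation loss plus the weighted alignment term."""
    return translation_nll + gamma * alignment_loss(AH, AM)
```

Note how entmax can assign exactly zero weight to unaligned positions (e.g. `entmax15([10.0, 0.0])` is fully sparse), which is why it suits the sparse 0/1 alignment targets better than softmax.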
+
+# 3 Experimental Setup
+
+We use three highly multilingual MT benchmarks:
+
+- TED Talks (Qi et al., 2018). An English-centric parallel corpus with 10M training sentences across 116 translation directions. Following Aharoni et al. (2019), we evaluate on a total of 16 supervised language directions and on 4 zero-shot language pairs.
+- WMT-2018 (Bojar et al., 2018).$^3$ A parallel dataset provided by the WMT-2018 shared task on news translation. We use all 14 available language pairs, with up to 5M training sentences for each language pair. We evaluate the models on the test sets of the shared task, i.e. newstest2018. As there are no zero-shot test sets provided by the competition, we use the test portion of the Tatoeba-challenge (Tiedemann, 2020),$^4$ in all possible language pair combinations included in the challenge.
+- OPUS-100 (Zhang et al., 2020). An English-centric multi-domain benchmark, built upon the OPUS parallel text collection (Tiedemann, 2012). It covers a total of 198 language directions, with up to 1M training sentences per language pair. It provides supervised translation test data for 188 language pairs, and zero-shot evaluation data for 30 pairs.
+
+| ID | Model | #Param. | EN → X (16) | X → EN (16) | BLEUzero (4) | ACCzero (4) |
+|---|---|---|---|---|---|---|
+|  | Aharoni et al. (2019)-103 | 473M | 20.11 | 29.97 | 9.17 | - |
+|  | Aharoni et al. (2019) | 93M | 19.54 | 28.03 | - | - |
+| ① | Transformer | 93M | 18.93 ±0.15 | 27.56 ±0.25 | 6.81 ±0.86 | 72.38 ±7.18 |
+| ② | ① + 1.5-entmax | 93M | 18.90 ±0.25 | 27.21 ±0.38 | 10.02 ±1.50 | 87.81 ±8.80 |
+| ③ | ② + (a) | 93M | 18.99 ±0.07 | 27.58 ±0.12 | 8.38 ±5.37 | 73.12 ±41.14 |
+| ④ | ② + (b) | 93M | 18.98 ±0.08 | 27.48 ±0.13 | 6.35 ±0.87 | 65.01 ±6.10 |
+| ⑤ | ② + (c) | 93M | 19.06 ±0.11 | 27.37 ±0.19 | 11.94 ±0.86 | 97.25 ±2.66 |
+
+Table 1: Results on the Many-to-Many TED Talks benchmark. The baselines consist of $①$ our replication of the standard 6-layer Transformer model by Aharoni et al. (2019), and $②$ its variant with a 1.5-entmax function on the cross attention heads as in Correia et al. (2019). The labels (a), (b), (c) denote the use of different alignment supervision (see Section 2). "#Param.": number of trainable parameters. "EN → X (16)" and "X → EN (16)": average BLEU scores for English to non-English languages and for non-English languages to English on 16 language pairs, respectively. "BLEUzero (4)" and "ACCzero (4)": average BLEU scores and target language identification accuracy over 4 zero-shot language directions. We report average BLEU and accuracy scores, plus the standard deviation over 3 training runs with different random seeds.
+
+Following related work (Aharoni et al., 2019; Zhang et al., 2020), we apply joint Byte-Pair Encoding (BPE) segmentation (Sennrich et al., 2016; Kudo and Richardson, 2018), with a shared vocabulary of $32\mathrm{K}$ symbols for TED Talks and $64\mathrm{K}$ for WMT-2018 and OPUS-100. For evaluation, we use tokenized BLEU (Papineni et al., 2002) on the TED Talks benchmark, to be comparable with Aharoni et al. (2019), and SACREBLEU$^{5}$ (Post, 2018) for WMT-2018 and OPUS-100. As an additional evaluation, we report the target language identification accuracy for the zero-shot cases (Zhang et al., 2020), called $ACC_{zero}$: using fastText as a language identification tool (Joulin et al., 2017), we count how many times the language of the translation matches the reference target language.
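The $ACC_{zero}$ metric itself is a simple ratio. A minimal sketch, where `identify` stands in for any language-identification callable (a loaded fastText LID model could be wrapped to fit this signature; the exact wiring depends on the model file used and is an assumption here):

```python
# Sketch of ACC_zero: the percentage of system outputs whose detected
# language matches the reference target language. `identify` maps a sentence
# to a language code, e.g. a wrapper around a fastText LID model.

def acc_zero(hypotheses, target_lang, identify):
    hits = sum(1 for hyp in hypotheses if identify(hyp) == target_lang)
    return 100.0 * hits / len(hypotheses)
```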
+
+The Transformer models follow the base setting of Vaswani et al. (2017), and each is trained with three different random seeds. All of them are trained in the Many-to-Many English-centric scenario, i.e., on the concatenation of the training data having English either as the source or target language. Details about data and model settings are given in the Appendix.
+
+# 4 Results and Discussion
+
+Throughout this section we refer to our baseline MNMT models by the labels $①$ and $②$, while $③$, $④$, and $⑤$ mark the models trained with the auxiliary alignment supervision tasks (a), (b), and (c) from Figure 1, respectively (see Section 2).
+
+TED Talks. Table 1 shows the results on the TED Talks benchmark. Regarding translation quality on the language pairs seen during training (EN $\rightarrow$ X and $\mathrm{X}\rightarrow \mathrm{EN}$ columns), the average BLEU scores of all models end up in the same ballpark. In contrast, zero-shot results vary across the board, with $⑤$ attaining the best performance, almost 2 BLEU points above its baseline $②$. Moreover, $⑤$ considerably improves target language identification accuracy ($ACC_{zero}$), with more stable results, i.e. lower standard deviation, than its counterparts. Surprisingly, adding alignment supervision (a) or (b) as an auxiliary task has an overall detrimental effect on the zero-shot performance, even though model $④$ is more stable than $②$.
+
+WMT-2018. Table 2 reports the results on the WMT-2018 benchmark. As expected, in a high-resource scenario bilingual baselines are hard to beat. Among multilingual models, the overall performance follows a similar trend as before. Enriching the model with alignment supervision (c) results in the best system overall, with an improvement of more than 3 BLEU points on the zero-shot testbed compared to baseline $②$, and with stable results across the three training runs (standard deviations of 0.12 and 0.82).
+
+| ID | Model | #Param. | EN → X (7) | X → EN (7) | BLEUzero (24) | ACCzero (24) |
+|---|---|---|---|---|---|---|
+|  | Transformer, Bilingual | 127M | 18.28 | 19.25 | - | - |
+| ① | Transformer | 127M | 15.18 ±0.54 | 18.39 ±0.65 | 9.78 ±0.61 | 74.17 ±4.78 |
+| ② | ① + 1.5-entmax | 127M | 15.17 ±0.41 | 18.33 ±0.56 | 8.55 ±0.61 | 65.31 ±4.46 |
+| ③ | ② + (a) | 127M | 11.99 ±0.37 | 16.42 ±0.73 | 6.38 ±0.83 | 73.78 ±7.84 |
+| ④ | ② + (b) | 127M | 15.46 ±0.16 | 18.66 ±0.31 | 11.72 ±0.76 | 85.64 ±3.37 |
+| ⑤ | ② + (c) | 127M | 15.50 ±0.18 | 18.70 ±0.23 | 11.98 ±0.12 | 85.68 ±0.82 |
+
+Table 2: Results on the Many-to-Many WMT-2018 benchmark. Average BLEU, target language identification accuracy, and standard deviation over 3 training runs.
+
+| ID | Model | #Param. | EN → X (94) | X → EN (94) | EN → X (4) | X → EN (4) | BLEUzero (30) | ACCzero (30) |
+|---|---|---|---|---|---|---|---|---|
+|  | Transformer, Bilingual† | 110M | - | - | 20.28 | 21.23 | - | - |
+|  | Transformer+MATT† | 141M | 20.77 | 29.15 | 16.08 | 24.15 | 4.71 | 39.40 |
+|  | MATT+LALN+LALT† | 173M | 22.86 | 29.49 | 19.25 | 24.53 | 5.41 | 51.40 |
+| ① | Transformer | 142M | 18.50 ±0.08 | 26.85 ±0.13 | 18.37 ±0.39 | 25.70 ±0.05 | 4.59 ±0.21 | 30.91 ±2.05 |
+| ② | ① + 1.5-entmax | 142M | 18.47 ±0.15 | 26.83 ±0.14 | 18.42 ±0.38 | 25.67 ±0.10 | 4.39 ±0.86 | 30.51 ±5.62 |
+| ③ | ② + (a) | 142M | 17.80 ±0.23 | 26.21 ±0.40 | 17.53 ±0.34 | 25.18 ±0.39 | 3.96 ±0.43 | 28.95 ±2.61 |
+| ④ | ② + (b) | 142M | 18.56 ±0.04 | 26.91 ±0.18 | 18.32 ±0.36 | 25.47 ±0.10 | 4.63 ±0.48 | 31.05 ±5.93 |
+| ⑤ | ② + (c) | 142M | 18.63 ±0.07 | 26.69 ±0.09 | 18.51 ±0.18 | 25.39 ±0.01 | 4.73 ±0.16 | 32.00 ±0.96 |
+
+Table 3: Results on the Many-to-Many OPUS-100 benchmark. Results marked with † are taken from Zhang et al. (2020). MATT denotes the use of merged attention (Zhang et al., 2019). LALN and LALT indicate the use of language-aware components. Average BLEU, target language identification accuracy, and standard deviation over 3 training runs.
+
+OPUS-100. As one can see from Table 3, we confirm the positive effect of adding the alignment strategy (c), both in terms of translation quality and as a mechanism to produce stable results, even in a highly multilingual setup, i.e., training on 198 language directions. The average score over 30 zero-shot language pairs is low, but the individual results range from 0.3 to 17.5 BLEU, showing the potential of multilingual models in this challenging dataset as well.$^7$ Even though the results from our best model still lag behind models with language-specific components, i.e. $\mathrm{MATT} + \mathrm{LALN} + \mathrm{LALT}$ from Zhang et al. (2020), we note that our results demonstrate the positive effect of alignment supervision on zero-shot translation.$^8$
+
+Overall, our experiments show consistent results across different benchmarks, providing quantitative evidence of the utility of guided alignment in highly multilingual MT scenarios. Supervising a single cross attention head with the alignment method (c) substantially reduces the instability between training runs, mitigating the off-target translation issue in the zero-shot evaluation. Zero-shot improvements, i.e. in $BLEU_{zero}$ and $ACC_{zero}$, are large in two benchmarks out of three, i.e. TED Talks and WMT-2018, with a similar trend in OPUS-100. We also note that performance differences may be related to the different data sizes (see Appendix A). TED Talks is a rather small and imbalanced multilingual dataset with 116 language directions and a total of 10M training sentences, while WMT-2018 and OPUS-100 comprise 14 language pairs with a total of 47.8M training sentences, and 198 language pairs with 110M training sentences, respectively. We plan to investigate the impact of the training size and the resulting alignments on the zero-shot test sets further in future work.
+
+Limitations. Finally, we highlight that we have focused on a quantitative evaluation on English-centric MNMT benchmarks only; we therefore lack a comprehensive evaluation on complete MNMT benchmarks that include training data without English as source or target language (Freitag and Firat, 2020; Rios et al., 2020; Tiedemann, 2020; Goyal et al., 2021).
+
+# 5 Conclusions and Future Work
+
+In this work we present an empirical comparative evaluation of integrating different alignment methods into Transformer-based models for highly multilingual English-centric MT setups. Our extensive evaluation of three alignment variants shows that adding alignment supervision between corresponding words and the language label consistently improves the stability of the models across different runs and mitigates the off-target translation issue in the zero-shot scenario. We believe that our work will pave the way for designing new and better multilingual MT models with improved generalization in zero-shot setups.
+
+As future work, we intend to analyze the quality of the learned alignments and their effect on the other attention weights in both supervised and zero-shot evaluation data (Raganato and Tiedemann, 2018; Tang et al., 2018; Mareček and Rosa, 2019; Voita et al., 2019). Finally, we plan to explore other mechanisms to inject prior knowledge to better handle zero-shot translations (Deshpande and Narasimhan, 2020; Raganato et al., 2020; Song et al., 2020).
+
+# Acknowledgments
+
+
+
+This work is part of the FoTran project, funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 771113).
+
+The authors gratefully acknowledge the support of the CSC - IT Center for Science, Finland, for computational resources. Finally, we would also like to acknowledge NVIDIA for their GPU grant.
+
+# References
+
+Roee Aharoni, Melvin Johnson, and Orhan Firat. 2019. Massively multilingual neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3874-3884, Minneapolis, Minnesota. Association for Computational Linguistics.
+
+Naveen Arivazhagan, Ankur Bapna, Orhan Firat, Roee Aharoni, Melvin Johnson, and Wolfgang Macherey. 2019a. The missing ingredient in zero-shot neural machine translation. arXiv preprint arXiv:1903.07091.
+
+Naveen Arivazhagan, Ankur Bapna, Orhan Firat, Dmitry Lepikhin, Melvin Johnson, Maxim Krikun, Mia Xu Chen, Yuan Cao, George Foster, Colin Cherry, et al. 2019b. Massively multilingual neural machine translation in the wild: Findings and challenges. arXiv preprint arXiv:1907.05019.
+
+Ondrej Bojar, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Philipp Koehn, and Christof Monz. 2018. Findings of the 2018 conference on machine translation (WMT18). In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 272-303, Belgium, Brussels. Association for Computational Linguistics.
+
+Gonçalo M. Correia, Vlad Niculae, and André F. T. Martins. 2019. Adaptively sparse transformers. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2174-2184, Hong Kong, China. Association for Computational Linguistics.
+
+Raj Dabre, Chenhui Chu, and Anoop Kunchukuttan. 2020. A survey of multilingual neural machine translation. ACM Computing Surveys (CSUR), 53(5):1-38.
+
+Ameet Deshpande and Karthik Narasimhan. 2020. Guiding attention for self-supervised learning with transformers. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4676-4686, Online. Association for Computational Linguistics.
+
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
+
+Zi-Yi Dou and Graham Neubig. 2021. Word alignment by fine-tuning embeddings on parallel corpora. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2112-2128, Online. Association for Computational Linguistics.
+
+Carlos Escolano, Marta R. Costa-jussà, José A. R. Fonollosa, and Mikel Artetxe. 2021. Multilingual machine translation: Closing the gap between shared and language-specific encoder-decoders. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 944-948, Online. Association for Computational Linguistics.
+
+Orhan Firat, Baskaran Sankaran, Yaser Al-onaizan, Fatos T. Yarman Vural, and Kyunghyun Cho. 2016. Zero-resource translation with multi-lingual neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 268-277, Austin, Texas. Association for Computational Linguistics.
+Markus Freitag and Orhan Firat. 2020. Complete multilingual neural machine translation. In Proceedings of the Fifth Conference on Machine Translation, pages 550-560, Online. Association for Computational Linguistics.
+Sarthak Garg, Stephan Peitz, Udhyakumar Nallasamy, and Matthias Paulik. 2019. Jointly learning to align and translate with transformer models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4453-4462, Hong Kong, China. Association for Computational Linguistics.
+Naman Goyal, Cynthia Gao, Vishrav Chaudhary, Peng-Jen Chen, Guillaume Wenzek, Da Ju, Sanjana Krishnan, Marc'Aurelio Ranzato, Francisco Guzman, and Angela Fan. 2021. The flores-101 evaluation benchmark for low-resource and multilingual machine translation. arXiv preprint arXiv:2106.03193.
+Thanh-Le Ha, Jan Niehues, and Alexander Waibel. 2016. Toward multilingual neural machine translation with universal encoder and decoder. arXiv preprint arXiv:1611.04798.
+Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Viégas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google's multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339-351.
+Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efficient text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 427-431, Valencia, Spain. Association for Computational Linguistics.
+Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. International Conference on Learning Representations (ICLR).
+Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander Rush. 2017. OpenNMT: Open-source toolkit for neural machine translation. In Proceedings of ACL 2017, System Demonstrations, pages 67-72, Vancouver, Canada. Association for Computational Linguistics.
+Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium. Association for Computational Linguistics.
+Jason Lee, Kyunghyun Cho, and Thomas Hofmann. 2017. Fully character-level neural machine translation without explicit segmentation. Transactions of the Association for Computational Linguistics, 5:365-378.
+Zehui Lin, Liwei Wu, Mingxuan Wang, and Lei Li. 2021. Learning language specific sub-network for multilingual machine translation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 293-305, Online. Association for Computational Linguistics.
+David Mareček and Rudolf Rosa. 2019. From balustrades to pierre vinken: Looking for syntax in transformer self-attentions. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 263–275, Florence, Italy. Association for Computational Linguistics.
+Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19-51.
+Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318. Association for Computational Linguistics.
+Ben Peters, Vlad Niculae, and André F. T. Martins. 2019. Sparse sequence-to-sequence models. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1504-1519, Florence, Italy. Association for Computational Linguistics.
+Ngoc-Quan Pham, Jan Niehues, Thanh-Le Ha, and Alexander Waibel. 2019. Improving zero-shot translation with language-independent constraints. In Proceedings of the Fourth Conference on Machine Translation (Volume 1: Research Papers), pages 13-23, Florence, Italy. Association for Computational Linguistics.
+Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186-191, Brussels, Belgium. Association for Computational Linguistics.
+Ye Qi, Devendra Sachan, Matthieu Felix, Sarguna Padmanabhan, and Graham Neubig. 2018. When and why are pre-trained word embeddings useful for neural machine translation? In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 529-535, New Orleans, Louisiana. Association for Computational Linguistics.
+Alessandro Raganato, Yves Scherrer, and Jörg Tiedemann. 2020. Fixed encoder self-attention patterns in transformer-based machine translation. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 556-568, Online. Association for Computational Linguistics.
+Alessandro Raganato and Jörg Tiedemann. 2018. An analysis of encoder representations in transformer-based machine translation. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 287-297, Brussels, Belgium. Association for Computational Linguistics.
+Annette Rios, Mathias Müller, and Rico Sennrich. 2020. Subword segmentation and a single bridge language affect zero-shot neural machine translation. In Proceedings of the Fifth Conference on Machine Translation, pages 528-537, Online. Association for Computational Linguistics.
+Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Linguistics.
+Kai Song, Kun Wang, Heng Yu, Yue Zhang, Zhongqiang Huang, Weihua Luo, Xiangyu Duan, and Min Zhang. 2020. Alignment-enhanced transformer for constraining nmt with pre-specified translations. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05):8886-8893.
+Gongbo Tang, Rico Sennrich, and Joakim Nivre. 2018. An analysis of attention mechanisms: The case of word sense disambiguation in neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 26-35, Brussels, Belgium. Association for Computational Linguistics.
+Jörg Tiedemann. 2011. Bitext alignment. Synthesis Lectures on Human Language Technologies, 4(2):1-165.
+Jörg Tiedemann. 2012. Parallel data, tools and interfaces in OPUS. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), pages 2214-2218, Istanbul, Turkey. European Language Resources Association (ELRA).
+Jörg Tiedemann. 2020. The tatoeba translation challenge - realistic data sets for low resource and multilingual MT. In Proceedings of the Fifth Conference on Machine Translation, pages 1174-1182, Online. Association for Computational Linguistics.
+
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008.
+Raul Vázquez, Alessandro Raganato, Mathias Creutz, and Jörg Tiedemann. 2020. A systematic study of inner-attention-based sentence representations in multilingual neural machine translation. Computational Linguistics, 46(2):387-424.
+Elena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, and Ivan Titov. 2019. Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5797-5808, Florence, Italy. Association for Computational Linguistics.
+Xinyi Wang, Hieu Pham, Philip Arthur, and Graham Neubig. 2019. Multilingual neural machine translation with soft decoupled encoding. In International Conference on Learning Representations.
+Yiren Wang, ChengXiang Zhai, and Hany Hassan. 2020. Multi-task learning for multilingual neural machine translation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1022-1034, Online. Association for Computational Linguistics.
+Liwei Wu, Shanbo Cheng, Mingxuan Wang, and Lei Li. 2021. Language tags matter for zero-shot neural machine translation. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 3001-3007, Online. Association for Computational Linguistics.
+Biao Zhang, Ankur Bapna, Rico Sennrich, and Orhan Firat. 2021. Share or not? Learning to schedule language-specific capacity for multilingual translation. In International Conference on Learning Representations (ICLR) 2021.
+Biao Zhang, Ivan Titov, and Rico Sennrich. 2019. Improving deep transformer with depth-scaled initialization and merged attention. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 898-909, Hong Kong, China. Association for Computational Linguistics.
+Biao Zhang, Philip Williams, Ivan Titov, and Rico Sennrich. 2020. Improving massively multilingual neural machine translation and zero-shot translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1628-1639, Online. Association for Computational Linguistics.
+
+# A Data and Model details
+
+# A.1 Data
+
+TED Talks (Qi et al., 2018). This parallel corpus includes 59 language pairs from and to English. It is a highly imbalanced benchmark, ranging from less than $4\mathrm{K}$ up to $215\mathrm{K}$ training sentences per language pair. We use the same languages as Aharoni et al. (2019) for both supervised testing and zero-shot evaluation. As supervised test sets, we use {Azerbaijani, Belarusian, Galician, Slovak, Arabic, German, Hebrew, Italian} $\leftrightarrow$ English. As zero-shot test sets, we use Arabic $\leftrightarrow$ French and Ukrainian $\leftrightarrow$ Russian.
+
+WMT-2018 (Bojar et al., 2018). We use training and testing data as provided by the WMT 2018 news translation task organizers. The benchmark contains a total of 14 language pairs: {Chinese, Czech, Estonian, Finnish, German, Russian, Turkish} $\leftrightarrow$ English. For training, we use up to 5M parallel sentences per language pair, with Turkish $\leftrightarrow$ English, Estonian $\leftrightarrow$ English, and Finnish $\leftrightarrow$ English, having only 200K, 1M, and 2.7M training sentences, respectively. For zero-shot test sets, we use the test data from Tiedemann (2020), using the following 24 language directions:
+
+Czech $\leftrightarrow$ German, German $\leftrightarrow$ Russian, German $\leftrightarrow$ Chinese, Finnish $\leftrightarrow$ German, Finnish $\leftrightarrow$ Turkish, Russian $\leftrightarrow$ Finnish, Russian $\leftrightarrow$ Chinese, Turkish $\leftrightarrow$ Chinese, Russian $\leftrightarrow$ Turkish, Estonian $\leftrightarrow$ Russian, Russian $\leftrightarrow$ Turkish
+
+OPUS-100 (Zhang et al., 2020). OPUS-100 is a recent benchmark consisting of 55M English-centric sentence pairs covering 100 languages. The data is collected from movie subtitles, GNOME documentation, and the Bible. Out of 99 language pairs, 44 have 1M training sentences, 73 have at least 100K, and 95 have at least 10K. It also provides zero-shot test sets pairing the following languages: Arabic, Chinese, Dutch, French, German, and Russian.
+
+# A.2 Model hyperparameters
+
+We use the OpenNMT-py framework (Klein et al., 2017), and the Transformer base model setting (Vaswani et al., 2017). Specifically, we use 6 layers for the encoder and the decoder, 512 as model dimension, and 2048 as hidden dimension.
+
+| Benchmark | #Lang. pairs | #Train. sent. | #Zero-shot lang. pairs |
+| --- | --- | --- | --- |
+| TED Talks | 116 | 10M | 4 |
+| WMT-2018 | 14 | 47M | 24 |
+| OPUS-100 | 198 | 110M | 30 |
+
+Table 4: Benchmark statistics: number of language pairs used for training, total number of training sentences, and number of language pairs for zero-shot evaluation.
+
+We apply a dropout of 0.1 to both residual layers and attention weights, and use the Adam optimizer (Kingma and Ba, 2015) with $\beta_1 = 0.9$ and $\beta_2 = 0.998$, a learning rate of 3, and $40\mathrm{K}$ warmup steps, as in Aharoni et al. (2019). We train each model with three random seeds, for $200\mathrm{K}$ training steps on the TED Talks and WMT-2018 benchmarks, and for $500\mathrm{K}$ training steps on OPUS-100. To speed up training, we use half-precision (FP16).
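
The setup above roughly corresponds to the following OpenNMT-py training configuration. This is a sketch only: the option names follow recent OpenNMT-py YAML configs and may differ across versions, and the exact values and decay schedule are our reading of the text, not the authors' released configuration.

```yaml
# Hypothetical Transformer-base config for OpenNMT-py (option names
# may vary by OpenNMT-py version; paths are placeholders).
encoder_type: transformer
decoder_type: transformer
enc_layers: 6
dec_layers: 6
hidden_size: 512        # model dimension
word_vec_size: 512
transformer_ff: 2048    # feed-forward hidden dimension
heads: 8
dropout: 0.1            # residual dropout
attention_dropout: 0.1
position_encoding: true
optim: adam
adam_beta1: 0.9
adam_beta2: 0.998
decay_method: noam      # assumed, given the learning rate of 3 with warmup
learning_rate: 3
warmup_steps: 40000
train_steps: 200000     # 500000 for OPUS-100
model_dtype: "fp16"
```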
\ No newline at end of file
diff --git a/anempiricalinvestigationofwordalignmentsupervisionforzeroshotmultilingualneuralmachinetranslation/images.zip b/anempiricalinvestigationofwordalignmentsupervisionforzeroshotmultilingualneuralmachinetranslation/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..34857a008f588d491f811b169f0f01e5be4216f2
--- /dev/null
+++ b/anempiricalinvestigationofwordalignmentsupervisionforzeroshotmultilingualneuralmachinetranslation/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3659ff0c2ce304b30d7f2379aa80ffcb5b077995b785c034cbb0e8368ed14e11
+size 253427
diff --git a/anempiricalinvestigationofwordalignmentsupervisionforzeroshotmultilingualneuralmachinetranslation/layout.json b/anempiricalinvestigationofwordalignmentsupervisionforzeroshotmultilingualneuralmachinetranslation/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..ad60d5752434906ecc287823433d752c2f15457d
--- /dev/null
+++ b/anempiricalinvestigationofwordalignmentsupervisionforzeroshotmultilingualneuralmachinetranslation/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9a41fdcd64ffee86096fc9fda3706a95b9506c76c3205edbde9f579f7027b9dd
+size 274102
diff --git a/anempiricalstudyonleveragingpositionembeddingsfortargetorientedopinionwordsextraction/529cccc1-e833-4e1a-9612-1f415828ca90_content_list.json b/anempiricalstudyonleveragingpositionembeddingsfortargetorientedopinionwordsextraction/529cccc1-e833-4e1a-9612-1f415828ca90_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..387f6191436324a0f970d963d4b63fb01b7e1194
--- /dev/null
+++ b/anempiricalstudyonleveragingpositionembeddingsfortargetorientedopinionwordsextraction/529cccc1-e833-4e1a-9612-1f415828ca90_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b4e8e14cca5b6675e5defabb9d4879f389568bb45106b48754ddd802abe389a3
+size 52201
diff --git a/anempiricalstudyonleveragingpositionembeddingsfortargetorientedopinionwordsextraction/529cccc1-e833-4e1a-9612-1f415828ca90_model.json b/anempiricalstudyonleveragingpositionembeddingsfortargetorientedopinionwordsextraction/529cccc1-e833-4e1a-9612-1f415828ca90_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..25aadeb1cff75cf7321797fb0e561375bf4d3e62
--- /dev/null
+++ b/anempiricalstudyonleveragingpositionembeddingsfortargetorientedopinionwordsextraction/529cccc1-e833-4e1a-9612-1f415828ca90_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a92bc57e106716c503ea3c8bee14cda3db087e0964044cf86df4a6b007a8bbbb
+size 65053
diff --git a/anempiricalstudyonleveragingpositionembeddingsfortargetorientedopinionwordsextraction/529cccc1-e833-4e1a-9612-1f415828ca90_origin.pdf b/anempiricalstudyonleveragingpositionembeddingsfortargetorientedopinionwordsextraction/529cccc1-e833-4e1a-9612-1f415828ca90_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..664cdd96eaca3e10c184c60e1ffc92dad8810aea
--- /dev/null
+++ b/anempiricalstudyonleveragingpositionembeddingsfortargetorientedopinionwordsextraction/529cccc1-e833-4e1a-9612-1f415828ca90_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:66f11e3eacf2dab416945b908440d42d1ece946920a775f654257ff0624e6759
+size 264483
diff --git a/anempiricalstudyonleveragingpositionembeddingsfortargetorientedopinionwordsextraction/full.md b/anempiricalstudyonleveragingpositionembeddingsfortargetorientedopinionwordsextraction/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..a203eb418e134b3a826c501c482fe5bd3ba181e4
--- /dev/null
+++ b/anempiricalstudyonleveragingpositionembeddingsfortargetorientedopinionwordsextraction/full.md
@@ -0,0 +1,205 @@
+# An Empirical Study on Leveraging Position Embeddings for Target-oriented Opinion Words Extraction
+
+Samuel Mensah
+
+Computer Science Department University of Sheffield, UK
+
+s.mensah@sheffield.ac.uk
+
+Kai Sun
+
+BDBC and SKLSDE
+Beihang University, China
+
+sunkai@buaa.edu.cn
+
+Nikolaos Aletras
+
+Computer Science Department
+
+University of Sheffield, UK
+
+n.aletras@sheffield.ac.uk
+
+# Abstract
+
+Target-oriented opinion words extraction (TOWE) (Fan et al., 2019b) is a new subtask of target-oriented sentiment analysis that aims to extract opinion words for a given aspect in text. Current state-of-the-art methods leverage position embeddings to capture the relative position of a word to the target. However, the performance of these methods depends on the ability to incorporate this information into word representations. In this paper, we explore a variety of text encoders based on pretrained word embeddings or language models that leverage part-of-speech and position embeddings, aiming to examine the actual contribution of each component in TOWE. We also adapt a graph convolutional network (GCN) to enhance word representations by incorporating syntactic information. Our experimental results demonstrate that BiLSTM-based models can effectively encode position information into word representations while using a GCN only achieves marginal gains. Interestingly, our simple methods outperform several state-of-the-art complex neural structures.
+
+# 1 Introduction
+
+Target-oriented opinion words extraction (TOWE) (Fan et al., 2019b) is a fine-grained task of target-oriented sentiment analysis (Liu, 2012) aiming to extract opinion words with respect to an opinion target (or aspect) in text. Given the sentence "The food is good but the service is extremely slow", TOWE attempts to identify the opinion words "good" and "extremely slow" corresponding respectively to the targets "food" and "service". TOWE is usually treated as a sequence labeling problem using the BIO tagging scheme (Ramshaw and Marcus, 1999) to distinguish the Beginning, Inside and Outside of a span of opinion words. Table 1 shows an example of applying the BIO tagging scheme for TOWE.
+
+Sentence:
+
+The food is good but the service is extremely slow.
+
+True Labels for target 'food':
+
+The/O food/O is/O good/B but/O the/O service/O
+
+is/O extremely/O slow/O.
+
+True Labels for target 'service':
+
+The/O food/O is/O good/O but/O the/O service/O
+
+is/O extremely/B slow/I.
+
+TOWE Extraction Results:
+
+((food, good), (service, extremely slow))
+
+Table 1: Identifying target-oriented opinion words in a sentence. Underlined words are opinion targets. Spans tagged B and I are considered as opinion words.
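
The BIO labeling above can be sketched as a small helper (illustrative code, not the authors' implementation; opinion spans are given here as token-index ranges with exclusive ends):

```python
def bio_tags(tokens, opinion_spans):
    """Tag each token: B at an opinion-span start, I inside a span, O elsewhere."""
    tags = ["O"] * len(tokens)
    for start, end in opinion_spans:  # end is exclusive
        tags[start] = "B"
        for i in range(start + 1, end):
            tags[i] = "I"
    return tags

tokens = "The food is good but the service is extremely slow".split()
# For the target "service", the opinion span is "extremely slow" (tokens 8-9)
print(bio_tags(tokens, [(8, 10)]))
# ['O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I']
```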
+
+Learning effective word representations is a critical step towards tackling TOWE. Traditional work (Zhuang et al., 2006a; Hu and Liu, 2004a; Qiu et al., 2011) has used hand-crafted features to represent words, which often do not generalize well. More recent work (Liu et al., 2015; Fan et al., 2019b; Wu et al., 2020a; Veyseh et al., 2020) has explored neural networks to learn word representations automatically.
+
+Previous neural-based methods (Liu et al., 2015; Fan et al., 2019b) have used word embeddings (Collobert and Weston, 2008; Mikolov et al., 2013; Pennington et al., 2014) to represent the input. However, TOWE is a complex task that requires a model to know the relative position of each word to the aspect in text. Words that are relatively closer to the target usually express the sentiment towards that aspect (Zhou et al., 2020).
+
+Fan et al. (2019b) employ Long Short-Term Memory (LSTM) networks (Hochreiter and Schmidhuber, 1997) to encode the target position information in word embeddings. Wu et al. (2020a) transfer latent opinion knowledge into a Bidirectional LSTM (BiLSTM) network that leverages word and position embeddings (Zeng et al., 2014). Recently, Veyseh et al. (2020) have proposed ONG, a method that combines BERT (Bidirectional Encoder Representations from Transformers) (Devlin et al., 2018), position embeddings, Ordered Neurons LSTM (ON-LSTM) (Shen et al., 2018), and a graph convolutional network (GCN) (Kipf and Welling, 2016) to introduce syntactic information into word representations. While this model achieves state-of-the-art results, previous studies have shown that ON-LSTM does not actually perform much better than LSTMs in recovering latent tree structures (Dyer et al., 2019). Moreover, ON-LSTMs perform worse than LSTMs in capturing short-term dependencies (Shen et al., 2018). Since opinion words are usually close to targets in text, ON-LSTM risks missing the relationship between the aspect and any information (e.g., position) relating to the opinion words.
+
+In this paper, we empirically evaluate a battery of popular text encoders which apart from words, take positional and part-of-speech information into account. Surprisingly, we show that methods based on BiLSTMs can effectively leverage position embeddings to achieve competitive if not better results than more complex methods such as ONG on standard TOWE datasets. Interestingly, combining a BiLSTM encoder with a GCN to explicitly capture syntactic information achieves only minor gains. This empirically highlights that BiLSTM-based methods have an inductive bias appropriate for the TOWE task, making a GCN less important.
+
+# 2 Methodology
+
+Given sentence $s = \{w_1, \ldots, w_n\}$ with aspect $w_t \in s$ , our approach consists of a text encoder that takes as input a combination of words, part-of-speech and position information for TOWE. We further explore enhancing text encoding by incorporating information from a syntactic parse of the sentence through a GCN encoder.
+
+# 2.1 Input Representation
+
+Word Embeddings: We experiment with Glove word vectors (Pennington et al., 2014) as well as BERT-based representations, extracted from the last layer of a BERT base model (Devlin et al., 2018) fine-tuned on TOWE.
+
+Position Embeddings (POSN): We compute the relative distance $d_{i}$ from $w_{i}$ to the target $w_{t}$ (i.e., $d_{i} = i - t$), and look up its embedding in a randomly initialized position embedding table.
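
The POSN lookup can be sketched as follows. The clipping range and embedding dimension are illustrative assumptions (the paper does not specify them), and the table is randomly initialized as described:

```python
import numpy as np

MAX_DIST, DIM = 50, 16  # hypothetical clipping range and embedding size
rng = np.random.default_rng(0)
# Randomly initialized position embedding table for distances -MAX_DIST..MAX_DIST
posn_table = rng.normal(size=(2 * MAX_DIST + 1, DIM))

def position_embeddings(n, t):
    """Embed the relative distance d_i = i - t of every token to the target w_t."""
    d = np.arange(n) - t                 # signed relative distances
    d = np.clip(d, -MAX_DIST, MAX_DIST)  # clip very long distances (assumption)
    return posn_table[d + MAX_DIST]      # shift so table indices are non-negative

emb = position_embeddings(10, t=6)
print(emb.shape)  # (10, 16)
```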
+
+Part-of-Speech Tag Embeddings (POST): We assign a part-of-speech tag to each word token using the Stanford parser, and look up its embedding in a randomly initialized POST embedding table.
+
+Combined Input: We consider two types of input representations:
+
+1. Glove Input (G): Constructed by concatenating Glove word embeddings, POST and POSN embeddings for each token.
+2. BERT Input (B): Constructed by concatenating BERT vectors with POSN embeddings for each word token, following a similar approach to Veyseh et al. (2020). We ignore POST embeddings since BERT already captures such information effectively (Tenney et al., 2019).
+
+# 2.2 Text Encoders
+
+We experiment with the following neural encoders that take word vector representations as input:
+
+CNN: A single layer convolutional neural network (LeCun et al., 1990). Given a word $w_{i} \in s$ , the CNN takes a fixed window of words around it and applies a filter on their representation to extract a feature vector for $w_{i}$ . We concatenate the feature vectors corresponding to different filters for $w_{i}$ to compute word representations.
+
+Transformer: A Transformer encoder (Vaswani et al., 2017) that takes a linear transformation of the input words to learn contextualized representations.
+
+BiLSTM: A bi-directional LSTM that takes the input representation and models the context in a forward and backward direction.
+
+ON-LSTM: A variant of the LSTM network proposed by Shen et al. (2018), which has an inductive bias toward learning latent tree structures.
+
+# 2.3 GCN Encoder
+
+First, we interpret the syntactic parse tree as a binary adjacency matrix $A \in \{0,1\}^{n\times n}$ ($n$ is the sentence length) with entries $A_{ij} = 1$ if there is a connection between nodes $i$ and $j$, and $A_{ij} = 0$ otherwise. To apply a GCN on $A$, we consider the tree with self-loops at each node (i.e., $A_{ii} = 1$), ensuring
+
+nodes are informed by their corresponding representations at previous layers. Formally, let $H^{(k)}$ be the output at the $k$ -th GCN layer, $H^{(k)}$ is given by:
+
+$$
+H^{(k)} = \mathrm{ReLU}\left(A H^{(k-1)} W^{(k)}\right) + H^{(k-1)} \quad (1)
+$$
+
+where $k = 1,\dots,K$ and $W^{(k)}$ is a parameter matrix at layer $k$; $\mathrm{ReLU}$ is the activation function. $H^{(0)}$ corresponds to the set of word representations extracted by the text encoder. The second term in (1) is a residual connection that retains the contextual information of $H^{(0)}$ during the propagation process (Sun et al., 2020).
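
Equation (1), with the self-loops added to $A$ and the residual term, can be sketched in a few lines of NumPy (toy dimensions for illustration; not the authors' implementation):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer per Eq. (1): ReLU(A H W) + H, with self-loops on A."""
    A = A + np.eye(A.shape[0])     # self-loops: nodes keep their own features
    Z = A @ H @ W
    return np.maximum(Z, 0.0) + H  # ReLU plus residual connection

# Toy 3-token parse: token 1 is connected to tokens 0 and 2
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
rng = np.random.default_rng(0)
H0 = rng.normal(size=(3, 4))       # encoder outputs (d = 4 for illustration)
W1 = rng.normal(size=(4, 4))       # square so the residual dimensions match
H1 = gcn_layer(A, H0, W1)
print(H1.shape)  # (3, 4)
```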
+
+# 2.4 Classification and Optimization
+
+Our model takes the representation $H^{(l)}$ (where $l \geq 0$), applies a linear layer, and normalizes the output with a softmax function to produce a probability distribution over the tag set $\{\mathrm{B},\mathrm{I},\mathrm{O}\}$ for each word in the input. During training, we minimize the cross-entropy loss over all words in all training sentences.
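
The per-token classification and loss can be sketched as follows (an illustrative NumPy version with hypothetical dimensions, not the authors' code):

```python
import numpy as np

TAGS = ["B", "I", "O"]

def token_loss(H, W, b, gold):
    """Linear layer + softmax over {B, I, O} per token; mean cross-entropy."""
    logits = H @ W + b                           # (n, 3)
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    idx = [TAGS.index(t) for t in gold]
    return -np.mean(np.log(probs[np.arange(len(gold)), idx]))

rng = np.random.default_rng(1)
H = rng.normal(size=(5, 8))  # 5 tokens, hidden size 8 (illustrative)
W = rng.normal(size=(8, 3))
b = np.zeros(3)
loss = token_loss(H, W, b, ["O", "O", "B", "I", "O"])
print(loss > 0)  # True: cross-entropy of a proper distribution is positive
```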
+
+# 3 Experiments and Results
+
+# 3.1 Baselines
+
+We compare our methods with Distance-rule (Hu and Liu, 2004b); Dependency-rule (Zhuang et al., 2006b); $\mathrm{LSTM}_{\mathrm{word}}$ and $\mathrm{BiLSTM}_{\mathrm{word}}$ (Liu et al., 2015); Pipeline (Fan et al., 2019b); TC-BiLSTM (Fan et al., 2019b); IOG (Fan et al., 2019b); LOTN (Wu et al., 2020a); and ONG (Veyseh et al., 2020).
+
+# 3.2 Data
+
+Following (Wu et al., 2020b), we use four benchmark datasets including restaurant (Res14, Res15, Res16) and laptop (Lap14) reviews from Semeval (Pontiki et al., 2014, 2015, 2016). We use the preprocessed data provided by Fan et al. (2019a). Table 2 shows the dataset statistics.
+
+# 3.3 Implementation Details
+
+Hyper-parameters are tuned on $20\%$ of samples randomly selected from the train set since there is no development set. We use the Adam optimizer
+
+| Dataset | #Sent. | #ASL | #AT | #OT | #D.Dist. | #S.Dist. |
+| --- | --- | --- | --- | --- | --- | --- |
+| Lap14 (Train) | 1151 | 20.78 | 1632 | 1877 | 2.40 | 4.25 |
+| Lap14 (Test) | 343 | 17.33 | 482 | 567 | 2.03 | 4.00 |
+| Res14 (Train) | 1625 | 19.11 | 2636 | 3057 | 2.11 | 3.68 |
+| Res14 (Test) | 500 | 19.22 | 862 | 1028 | 2.01 | 3.97 |
+| Res15 (Train) | 754 | 16.50 | 1076 | 1277 | 1.97 | 3.62 |
+| Res15 (Test) | 325 | 17.47 | 436 | 493 | 2.13 | 3.53 |
+| Res16 (Train) | 1079 | 16.78 | 1512 | 1770 | 2.01 | 3.59 |
+| Res16 (Test) | 328 | 16.54 | 456 | 524 | 1.93 | 3.43 |
+
+Table 2: Dataset Statistics. No. of sentences (#Sent), Avg. sentence length (#ASL), No. of aspect terms (#AT), No. of opinion words (#OT), Avg. dependency distance (#D.Dist) and Avg. sequential distance (#S.Dist) between aspect and opinion.
+
+to train all models. Models that use Glove word vectors are optimized with learning rate $1e^{-3}$ and trained for 100 epochs with batch size 16. Models that use BERT hidden vectors are optimized with learning rate $1e^{-5}$ and trained with batch size 6. Our source code is publicly available.
+
+# 3.4 Performance Comparison
+
+Table 3 presents the results of all methods. Our models that use the Glove Input (or BERT Input) are appended with "G" (or "B") to distinguish them. We report precision (Prec), recall (Rec), F1 score, and average F1 score (Avg.F1) across all datasets.
+
+Comparison of Text Encoders: We first observe that $\mathrm{CNN}(\mathrm{G})$ is adept at exploiting the information in simpler word representations (Glove), outperforming Transformer(G) by $+4.52$ Avg.F1. We believe this is because TOWE is a short-sequence task (see #ASL in Table 2). This assumption aligns well with previous observations by Yin et al. (2021), who found that CNNs often perform better than Transformers on short-sequence tasks. However, Transformer(B) improves performance and even outperforms $\mathrm{CNN}(\mathrm{B})$ by $+0.71$ Avg.F1 when using BERT.
+
+In addition, we find that ON-LSTM(G) and ON-LSTM(B) lag behind BiLSTM(G) and BiLSTM(B) by 4.33 and 0.54 Avg.F1, respectively. ON-LSTM performs worse than LSTMs on tasks that require tracking short-term dependencies (Shen et al., 2018). Since opinion words are usually close to the target in the sequence (see #ASL vs. #S.Dist. in Table 2), tracking short-term dependency information is important in TOWE. This explains why
+
+| Model | Lap14 Prec | Lap14 Rec | Lap14 F1 | Res14 Prec | Res14 Rec | Res14 F1 | Res15 Prec | Res15 Rec | Res15 F1 | Res16 Prec | Res16 Rec | Res16 F1 | Avg.F1 |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Distance-rule | 50.13 | 33.86 | 40.42 | 58.39 | 43.59 | 49.92 | 54.12 | 39.96 | 45.97 | 61.90 | 44.57 | 51.83 | 47.04 |
+| Dependency-rule | 45.09 | 31.57 | 37.14 | 64.57 | 52.72 | 58.04 | 65.49 | 48.88 | 55.98 | 76.03 | 56.19 | 64.62 | 53.95 |
+| $\mathrm{LSTM}_{\mathrm{word}}$ | 55.71 | 57.53 | 56.52 | 52.64 | 65.47 | 58.34 | 57.27 | 60.69 | 58.93 | 62.46 | 68.72 | 65.33 | 59.78 |
+| $\mathrm{BiLSTM}_{\mathrm{word}}$ | 64.52 | 61.45 | 62.71 | 58.34 | 61.73 | 59.95 | 60.46 | 63.65 | 62.00 | 68.68 | 70.51 | 69.57 | 63.56 |
+| Pipeline | 72.58 | 56.97 | 63.83 | 77.72 | 62.33 | 69.18 | 74.75 | 60.65 | 66.97 | 81.46 | 67.81 | 74.01 | 68.50 |
+| TC-BiLSTM | 62.45 | 60.14 | 61.21 | 67.65 | 67.67 | 67.61 | 66.06 | 60.16 | 62.94 | 73.46 | 72.88 | 73.10 | 66.22 |
+| IOG | 73.24 | 69.63 | 71.35 | 82.85 | 77.38 | 80.02 | 76.06 | 70.71 | 73.25 | 82.25 | 78.51 | 81.69 | 76.58 |
+| LOTN | 77.08 | 67.62 | 72.02 | 84.00 | 80.52 | 82.21 | 76.61 | 70.29 | 73.29 | 86.57 | 80.89 | 83.62 | 77.79 |
+| ONG | 73.87 | 77.78 | 75.77 | 83.23 | 81.46 | 82.33 | 76.63 | 81.14 | 78.81 | 87.72 | 84.38 | 86.01 | 80.73 |
+| Glove Input | | | | | | | | | | | | | |
+| Transformer(G) | 68.33 | 61.91 | 64.91 | 71.77 | 70.29 | 70.98 | 78.90 | 59.07 | 67.41 | 83.59 | 70.57 | 76.49 | 69.94 |
+| CNN(G) | 64.81 | 73.83 | 69.00 | 75.86 | 78.83 | 77.29 | 68.21 | 73.87 | 70.91 | 76.93 | 84.77 | 80.64 | 74.46 |
+| ON-LSTM(G) | 69.27 | 69.70 | 69.47 | 83.01 | 76.98 | 79.87 | 76.19 | 74.24 | 75.20 | 84.17 | 82.90 | 83.52 | 77.02 |
+| BiLSTM(G) | 76.49 | 70.94 | 73.59 | 86.22 | 83.44 | 84.80 | 81.49 | 77.93 | 79.66 | 88.96 | 84.05 | 87.36 | 81.35 |
+| Transformer+GCN(G) | 66.32 | 70.83 | 68.45 | 82.98 | 75.14 | 78.82 | 76.80 | 69.45 | 72.88 | 84.71 | 79.92 | 82.25 | 75.60 |
+| CNN+GCN(G) | 66.88 | 74.88 | 70.65 | 82.45 | 80.12 | 81.24 | 75.32 | 73.75 | 74.51 | 82.17 | 84.89 | 83.48 | 77.47 |
+| ON-LSTM+GCN(G) | 71.63 | 74.04 | 72.75 | 87.06 | 80.97 | 83.90 | 80.18 | 77.53 | 78.83 | 89.89 | 83.97 | 86.82 | 80.58 |
+| BiLSTM+GCN(G) | 76.49 | 74.46 | 75.46 | 87.60 | 83.66 | 85.57 | 82.32 | 78.82 | 80.52 | 91.63 | 85.65 | 88.52 | 82.52 |
+| BERT Input | | | | | | | | | | | | | |
+| Transformer(B) | 78.88 | 78.03 | 78.13 | 83.97 | 84.40 | 84.18 | 82.37 | 78.21 | 80.22 | 88.22 | 84.05 | 86.06 | 82.14 |
+| CNN(B) | 77.94 | 75.91 | 76.87 | 86.35 | 82.16 | 84.20 | 80.01 | 78.62 | 79.30 | 88.50 | 82.41 | 85.33 | 81.43 |
+| ON-LSTM(B) | 77.96 | 77.53 | 77.71 | 85.58 | 83.25 | 84.39 | 82.57 | 78.34 | 80.38 | 87.76 | 83.55 | 86.54 | 82.26 |
+| BiLSTM(B) | 78.38 | 78.27 | 78.25 | 86.38 | 84.82 | 85.60 | 82.17 | 78.78 | 80.41 | 89.94 | 84.16 | 86.94 | 82.80 |
+| Transformer+GCN(B) | 79.38 | 77.04 | 78.19 | 85.43 | 84.18 | 84.79 | 82.21 | 79.55 | 80.84 | 89.34 | 84.16 | 86.66 | 82.62 |
+| CNN+GCN(B) | 79.19 | 76.19 | 77.62 | 84.96 | 84.08 | 84.50 | 82.39 | 77.36 | 79.77 | 88.16 | 84.09 | 86.06 | 81.98 |
+| ON-LSTM+GCN(B) | 80.33 | 76.01 | 77.96 | 85.68 | 84.03 | 84.83 | 82.14 | 78.18 | 80.07 | 89.35 | 83.93 | 86.54 | 82.35 |
+| BiLSTM+GCN(B) | 79.72 | 78.06 | 78.82 | 86.45 | 85.06 | 85.74 | 83.37 | 77.93 | 80.54 | 88.98 | 85.80 | 87.35 | 83.11 |
+
+Table 3: Results of experiments across baseline methods (across 5 runs). Results of compared models are retrieved from (Veyseh et al., 2020). The best F1 performance is bold-typed.
+
+| Model | Lap14 Prec | Lap14 Rec | Lap14 F1 | Res14 Prec | Res14 Rec | Res14 F1 | Res15 Prec | Res15 Rec | Res15 F1 | Res16 Prec | Res16 Rec | Res16 F1 |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| BiLSTM+GCN(G) | 76.49 | 74.46 | 75.46 | 87.60 | 83.66 | 85.57 | 82.32 | 78.82 | 80.52 | 91.63 | 85.65 | 88.52 |
+| — GCN | 76.49 | 70.94 | 73.59 | 86.22 | 83.44 | 84.80 | 81.49 | 77.93 | 79.66 | 88.96 | 84.05 | 87.36 |
+| — GCN, POST | 75.38 | 70.12 | 72.63 | 86.83 | 82.94 | 84.83 | 82.45 | 75.58 | 78.85 | 88.71 | 84.01 | 86.29 |
+| — GCN, POST, POSN | 61.65 | 62.08 | 61.80 | 63.17 | 56.63 | 59.66 | 62.16 | 61.54 | 61.76 | 70.11 | 70.23 | 70.08 |
+| BiLSTM+GCN(B) | 79.72 | 78.06 | 78.82 | 86.45 | 85.06 | 85.74 | 83.37 | 77.93 | 80.54 | 88.98 | 85.80 | 87.35 |
+| — GCN | 78.38 | 78.27 | 78.25 | 86.38 | 84.82 | 85.60 | 82.17 | 78.78 | 80.41 | 89.94 | 84.16 | 86.94 |
+| — GCN, POSN | 62.92 | 72.17 | 67.21 | 60.84 | 64.42 | 62.54 | 63.88 | 64.42 | 63.97 | 69.59 | 71.45 | 70.39 |
+
+Table 4: Precision, Recall and F1 scores of ablated models on the benchmark datasets (across 5 runs).
+
+BiLSTM(G) (or BiLSTM(B)) achieves better performance than ON-LSTM(G) (or ON-LSTM(B)).
+
+The improvement of BiLSTM(G) over $\mathrm{BiLSTM}_{\mathrm{word}}$ suggests that the substantial boost in performance comes from either part-of-speech or position embeddings. We later perform an ablation experiment to examine which information is more useful. Interestingly, BiLSTM(G) outperforms the current state-of-the-art ONG by $+0.62$ Avg.F1 despite its simple architecture, demonstrating the importance of first experimenting with simpler methods before designing more complex structures.
+
+Comparison of Text+GCN Encoders: Adding a GCN on top of any text encoder generally improves performance, as the GCN provides additional syntactic information that is helpful for representation learning. We find that BiLSTM+GCN(G) achieves only small gains over BiLSTM(G), while other encoders, including Transformer+GCN(G) and CNN+GCN(G), achieve relatively larger gains over their counterparts. This
+
+suggests that BiLSTM(G) has an inductive bias appropriate for the TOWE task, so the performance mostly depends on the quality of the input representation. We observe that when using BERT embeddings, there is minimal performance difference between using a GCN or not. We attribute this to the expressiveness of BERT embeddings and their ability to capture syntactic dependencies (Jawahar et al., 2019). Overall, the results suggest that our methods consistently outperform the state of the art across datasets.
+
+# 3.5 Ablation Study
+
+We perform ablation experiments on the two best performing models, BiLSTM+GCN(G) and BiLSTM+GCN(B), to study the contribution of their different components. The results are shown in Table 4. On BiLSTM+GCN(G), as we consecutively remove the GCN and the POST embeddings from the input representation, we observe only a slight drop in performance. This indicates that the POST embeddings and the GCN are not critical components for BiLSTM+GCN(G), so they can be dropped to reduce model complexity. However, removing the position embeddings from the input representation causes a substantial drop in performance, yielding an F1 score close to that of $\mathrm{BiLSTM}_{\mathrm{word}}$ across datasets. Similarly, removing the position embeddings from BiLSTM+GCN(B) causes a substantial drop. These results suggest that leveraging position embeddings is crucial for TOWE performance.
+
+# 4 Conclusion
+
+Through extensive experiments, we showed that a simple BiLSTM architecture using input representations from pre-trained word embeddings or language models, POST embeddings, and position embeddings can obtain competitive, if not better, results than the more complex current state-of-the-art methods (Veyseh et al., 2020). The BiLSTM succeeds in exploiting position embeddings to improve performance. By adapting a GCN to incorporate syntactic information from the sentence, we achieve further, albeit modest, gains. In future work, we will explore how to improve existing TOWE models by effectively leveraging position embeddings.
+
+# Acknowledgements
+
+Samuel Mensah and Nikolaos Aletras are supported by a Leverhulme Trust Research Project Grant.
+
+# References
+
+Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th international conference on Machine learning, pages 160-167.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.
+Chris Dyer, Gábor Melis, and Phil Blunsom. 2019. A critical analysis of biased parsers in unsupervised parsing. arXiv preprint arXiv:1909.09428.
+Zhifang Fan, Zhen Wu, Xin-Yu Dai, Shujian Huang, and Jiajun Chen. 2019a. Target-oriented opinion
+
+words extraction with target-fused neural sequence labeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 2509-2518. Association for Computational Linguistics.
+Zhifang Fan, Zhen Wu, Xinyu Dai, Shujian Huang, and Jiajun Chen. 2019b. Target-oriented opinion words extraction with target-fused neural sequence labeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2509-2518.
+Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.
+Minqing Hu and Bing Liu. 2004a. Mining and summarizing customer reviews. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 168-177.
+Minqing Hu and Bing Liu. 2004b. Mining and summarizing customer reviews. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Seattle, Washington, USA, August 22-25, 2004, pages 168-177. ACM.
+Ganesh Jawahar, Benoît Sagot, and Djamé Seddah. 2019. What does BERT learn about the structure of language? In ACL 2019 - 57th Annual Meeting of the Association for Computational Linguistics.
+Thomas N. Kipf and Max Welling. 2016. Semi-supervised classification with graph convolutional networks. In *ICLR (Poster)*.
+Yann LeCun, Bernhard E Boser, John S Denker, Donnie Henderson, Richard E Howard, Wayne E Hubbard, and Lawrence D Jackel. 1990. Handwritten digit recognition with a back-propagation network. In Advances in neural information processing systems, pages 396-404.
+Bing Liu. 2012. Sentiment analysis and opinion mining. Synthesis lectures on human language technologies, 5(1):1-167.
+Pengfei Liu, Shafiq R. Joty, and Helen M. Meng. 2015. Fine-grained opinion mining with recurrent neural networks and word embeddings. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, pages 1433-1443. The Association for Computational Linguistics.
+Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.
+
+Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems 26, volume 26, pages 3111-3119.
+Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532-1543.
+Maria Pontiki, Dimitrios Galanis, Haris Papageorgiou, Ion Androutsopoulos, Suresh Manandhar, Mohammad Al-Smadi, Mahmoud Al-Ayyoub, Yanyan Zhao, Bing Qin, Orphée De Clercq, et al. 2016. Semeval-2016 task 5: Aspect based sentiment analysis. In International workshop on semantic evaluation, pages 19-30.
+Maria Pontiki, Dimitrios Galanis, Harris Papageorgiou, Suresh Manandhar, and Ion Androutsopoulos. 2015. Semeval-2015 task 12: Aspect based sentiment analysis. In Proceedings of the 9th international workshop on semantic evaluation (SemEval 2015), pages 486-495.
+Maria Pontiki, Dimitrios Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014. Semeval-2014 task 4: Aspect based sentiment analysis. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), page 27-35.
+Guang Qiu, Bing Liu, Jiajun Bu, and Chun Chen. 2011. Opinion word expansion and target extraction through double propagation. Computational linguistics, 37(1):9-27.
+Lance A Ramshaw and Mitchell P Marcus. 1999. Text chunking using transformation-based learning. In Natural language processing using very large corpora, pages 157-176. Springer.
+Yikang Shen, Shawn Tan, Alessandro Sordoni, and Aaron C. Courville. 2018. Ordered neurons: Integrating tree structures into recurrent neural networks. In International Conference on Learning Representations.
+Kai Sun, Richong Zhang, Yongyi Mao, Samuel Mensah, and Xudong Liu. 2020. Relation extraction with convolutional network over learnable syntax-transport graph. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 8928-8935.
+Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R. Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R. Bowman, Dipanjan Das, and Ellie Pavlick. 2019. What do you learn from context? Probing for sentence structure in contextualized word representations. In 7th International Conference on Learning Representations, ICLR 2019.
+
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, volume 30, pages 5998-6008.
+Amir Pouran Ben Veyseh, Nasim Nouri, Franck Dernoncourt, Dejing Dou, and Thien Huu Nguyen. 2020. Introducing syntactic structures into target opinion word extraction with deep learning. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 8947-8956. Association for Computational Linguistics.
+Zhen Wu, Fei Zhao, Xin-Yu Dai, Shujian Huang, and Jiajun Chen. 2020a. Latent opinions transfer network for target-oriented opinion words extraction. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 9298-9305. AAAI Press.
+Zhen Wu, Fei Zhao, Xin-Yu Dai, Shujian Huang, and Jiajun Chen. 2020b. Latent opinions transfer network for target-oriented opinion words extraction. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 9298-9305.
+Xiaoyu Yin, Dagmar Gromann, and Sebastian Rudolph. 2021. Neural machine translating from natural language to SPARQL. Future Generation Computer Systems, 117:510-519.
+Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, and Jun Zhao. 2014. Relation classification via convolutional deep neural network. In Proceedings of COLING 2014, the 25th international conference on computational linguistics: technical papers, pages 2335-2344.
+Jie Zhou, Jimmy Xiangji Huang, Qinmin Vivian Hu, and Liang He. 2020. Is position important? deep multi-task learning for aspect-based sentiment analysis. Applied Intelligence, 50:3367-3378.
+Li Zhuang, Feng Jing, and Xiao-Yan Zhu. 2006a. Movie review mining and summarization. In Proceedings of the 15th ACM international conference on Information and knowledge management, pages 43-50.
+Li Zhuang, Feng Jing, and Xiaoyan Zhu. 2006b. Movie review mining and summarization. In Proceedings of the 2006 ACM CIKM International Conference on Information and Knowledge Management, Arlington, Virginia, USA, November 6-11, 2006, pages 43-50. ACM.
\ No newline at end of file
diff --git a/anempiricalstudyonleveragingpositionembeddingsfortargetorientedopinionwordsextraction/images.zip b/anempiricalstudyonleveragingpositionembeddingsfortargetorientedopinionwordsextraction/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..4f8f35c32e28330cee369a68101779eafc7ecad8
--- /dev/null
+++ b/anempiricalstudyonleveragingpositionembeddingsfortargetorientedopinionwordsextraction/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1dec7a5e011a6609ceca19e3e152b2ead8b68df4cff9bfc3ab54ac191989b16f
+size 280817
diff --git a/anempiricalstudyonleveragingpositionembeddingsfortargetorientedopinionwordsextraction/layout.json b/anempiricalstudyonleveragingpositionembeddingsfortargetorientedopinionwordsextraction/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..7cfe3da80fa1fc94d4aeb5db6348a56d43251a7f
--- /dev/null
+++ b/anempiricalstudyonleveragingpositionembeddingsfortargetorientedopinionwordsextraction/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1c6a4af7ec6e7bc8073c046dc5534a30d68fd35723ec52a8499179bb204e837b
+size 233840
diff --git a/anempiricalstudyonmultipleinformationsourcesforzeroshotfinegrainedentitytyping/7172498c-054a-4ef7-93cd-b0b4193bfbbf_content_list.json b/anempiricalstudyonmultipleinformationsourcesforzeroshotfinegrainedentitytyping/7172498c-054a-4ef7-93cd-b0b4193bfbbf_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..df248bd03de7606f122646c868b0ba5f81294a40
--- /dev/null
+++ b/anempiricalstudyonmultipleinformationsourcesforzeroshotfinegrainedentitytyping/7172498c-054a-4ef7-93cd-b0b4193bfbbf_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6a474b15271e778c49941f55bec498154f7e2a7a7bf3f0f952a64050250559a5
+size 77707
diff --git a/anempiricalstudyonmultipleinformationsourcesforzeroshotfinegrainedentitytyping/7172498c-054a-4ef7-93cd-b0b4193bfbbf_model.json b/anempiricalstudyonmultipleinformationsourcesforzeroshotfinegrainedentitytyping/7172498c-054a-4ef7-93cd-b0b4193bfbbf_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..168a2d67a1324b58820ea29a2f8b81a317a910e0
--- /dev/null
+++ b/anempiricalstudyonmultipleinformationsourcesforzeroshotfinegrainedentitytyping/7172498c-054a-4ef7-93cd-b0b4193bfbbf_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:37493d7538b432316f9b07bf46bbf1f3aa81b85d0d5c91e56518c0f1926fd0f6
+size 92207
diff --git a/anempiricalstudyonmultipleinformationsourcesforzeroshotfinegrainedentitytyping/7172498c-054a-4ef7-93cd-b0b4193bfbbf_origin.pdf b/anempiricalstudyonmultipleinformationsourcesforzeroshotfinegrainedentitytyping/7172498c-054a-4ef7-93cd-b0b4193bfbbf_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..4eb189ffdb026f3ba7f3125187ccfa4a8c8507b2
--- /dev/null
+++ b/anempiricalstudyonmultipleinformationsourcesforzeroshotfinegrainedentitytyping/7172498c-054a-4ef7-93cd-b0b4193bfbbf_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9260d8445fcc587edfbb3e713a8b33e9a24a5dab5c3e07eeac544cddcffb3027
+size 739098
diff --git a/anempiricalstudyonmultipleinformationsourcesforzeroshotfinegrainedentitytyping/full.md b/anempiricalstudyonmultipleinformationsourcesforzeroshotfinegrainedentitytyping/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..33624a0331cafae40046ca60d43f30a7aa89b474
--- /dev/null
+++ b/anempiricalstudyonmultipleinformationsourcesforzeroshotfinegrainedentitytyping/full.md
@@ -0,0 +1,363 @@
+# An Empirical Study on Multiple Information Sources for Zero-Shot Fine-Grained Entity Typing
+
+Yi Chen $^{1}$ , Haiyun Jiang $^{1}$ , Lemao Liu $^{1}$ , Shuming Shi $^{1}$ , Chuang Fan, Min Yang $^{2}$ , Ruifeng Xu $^{3*}$
+
+$^{1}$ Tencent AI Lab, Shenzhen, China
+
+$^{2}$ Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences
+
+$^{3}$ Peng Cheng Laboratory, Shenzhen, China
+
+yichenlp@gmail.com, haiyunjiang@tencent.com
+
+lemaoliu@gmail.com, shumingshi@tencent.com
+
+fanchuanghit@gmail.com, min.yang@siat.ac.cn
+
+xuruifenghitsz@gmail.com
+
+# Abstract
+
+Auxiliary information from multiple sources has been demonstrated to be effective in zero-shot fine-grained entity typing (ZFET). However, a comprehensive understanding of how to make better use of the existing information sources, and of how they affect the performance of ZFET, is still lacking. In this paper, we empirically study three kinds of auxiliary information: context consistency, type hierarchy and background knowledge (e.g., prototypes and descriptions) of types, and propose a multi-source fusion model (MSF) targeting these sources. The model obtains up to $11.42\%$ and $22.84\%$ absolute gains in macro F1 over state-of-the-art baselines on BBN and Wiki respectively. More importantly, we further discuss the characteristics, merits and demerits of each information source and provide an intuitive understanding of the complementarity among them.
+
+# 1 Introduction
+
+Fine-grained entity typing (FET) aims to detect the types of an entity mention given its context (Abhishek et al., 2017; Xu and Barbosa, 2018; Jin et al., 2019). The results of FET benefit many downstream tasks (Chen et al., 2020; Hu et al., 2019; Zhang et al., 2020a; Liu et al., 2021; Chu et al., 2020). In many scenarios, the type hierarchy is continuously evolving, which requires newly emerging types to be incorporated into FET systems. As a result, zero-shot FET (ZFET) is introduced to handle new types which are unseen during the training stage (Ma et al., 2016; Ren et al., 2020; Zhang et al., 2020b).
+
+The major challenge of ZFET is to build the semantic connections between the seen types (during training) and the unseen ones (during inference).
+
+
+Figure 1: Illustration of the proposed multi-source fusion model (MSF).
+
+Auxiliary information has been proved to be essential in this regard (Xian et al., 2019), with a variety of approaches focused on scattered information (Ma et al., 2016; Zhou et al., 2018; Obeidat et al., 2019; Ren et al., 2020; Zhang et al., 2020b). However, the power of auxiliary information has not been sufficiently exploited in existing solutions. Besides, the effects of each information source also remain to be clearly understood.
+
+In this paper, we propose a Multi-Source Fusion model (MSF) integrating three kinds of popular auxiliary information for ZFET, i.e., context consistency, type hierarchy, and background knowledge, as illustrated in Figure 1. (i) Context consistency means that a correct type should be semantically consistent with the context if we replace the mention with the type name in the context. A type name is the surface form of a type, which is a word or a phrase; e.g., the type name of /organization/corporation is corporation. (ii) Type hierarchy is the ontology structure connecting seen and unseen types. (iii) Background knowledge provides external prior information that depicts types in detail, e.g., prototypes (Ma et al., 2016) and descriptions (Obeidat et al., 2019).
+
+MSF is composed of three modules, with each targeting a specific information source. (i) In the CA (Context-Consistency Aware) module, we measure the context consistency by large-scale pretrained language models, e.g., BERT (Devlin et al., 2019). By masking mentions and predicting the names of ground truth types through finetuning on the data of seen types, CA is expected to measure the context consistency of unseen types more precisely. (ii) In the HA (Type-Hierarchy Aware) module, we use Transformer encoder (Vaswani et al., 2017) to model the hierarchical dependency among types. There have been substantial works exploring type hierarchy in the supervised typing task (Shimaoka et al., 2017; Xu and Barbosa, 2018; Xiong et al., 2019), but only some preliminary research in ZFET (Ma et al., 2016; Zhang et al., 2020b). (iii) In the KA (Background-Knowledge Aware) module, we introduce prototypes (Ma et al., 2016) and WordNet descriptions (Miller, 1995) as background knowledge of types. KA is embodied as natural language inference with a translation-based solution to better incorporate knowledge.
+
+Extensive experiments are carried out to verify the effectiveness of the proposed fusion model. We also conduct a deep analysis on the characteristics, merits and demerits of each information source. We find that, similar to type hierarchy, background knowledge also implies some hierarchical information through the shared prototypes and the descriptions semantically similar with their parent types. Besides, the context consistency is an essential clue in handling long-tail unseen types and longer contexts. Moreover, we further discuss the complementarity among different information sources and their contributions to the proposed fusion model.
+
+In summary, our contributions are as follows:
+
+- We propose a multi-source fusion model integrating multiple information sources for ZFET, which achieves new state-of-the-art results on BBN and Wiki.
+
+- We are the first work to conduct a comprehensive study on the strengths and weaknesses of three auxiliary information sources for ZFET. Besides, we also make a deep analysis about how different information sources complement each other and how they contribute to the proposed fusion model.
+
+# 2 A Multi-Source Fusion Model
+
+# 2.1 Overview
+
+Zero-shot Fine-grained Entity Typing (ZFET) is defined on a type set $\mathcal{T} = \mathcal{T}_{train} \cup \mathcal{T}_{test}$ , which forms a hierarchy. During inference, ZFET aims to identify the correct types for a mention $m$ based on its context $c$ , where the target types are unseen during the training stage, i.e., $\mathcal{T}_{train} \cap \mathcal{T}_{test} = \emptyset$ .
+
+As shown in Figure 1, we propose a Multi-Source Fusion model (MSF) that captures information from these sources and integrates them to make a better prediction under the zero-shot scenario. In the following, we first describe the details of each module (Sec 2.2, 2.3 and 2.4), and then present the joint loss function and inference details (Sec 2.5).
+
+# 2.2 Context-Consistency-Aware (CA) Module
+
+We base the CA module upon the pre-trained BERT (Devlin et al., 2019) and fine-tune it for assessment of context consistency.
+
+# 2.2.1 Fine-tuning by Masking Mentions
+
+Vanilla BERT randomly masks some input tokens and then predicts them. In the fine-tuning stage for ZFET, by contrast, CA masks only the entity mentions and predicts their type names instead. For instance, given the context in Figure 1, we replace the entity mention Northwest with a [MASK] token and let the CA module predict the name corporation of the target type /organization/corporation with a higher score. In more general cases, a type name may be longer than one token, so the number of [MASK] tokens for replacement depends on the length of the type name (e.g., for the type name living thing, we replace the corresponding mention with [MASK] [MASK]).
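
The masking scheme above can be sketched as follows (a minimal reconstruction for illustration; `mask_mention` and the token-level interface are our own names, not the authors' code):

```python
def mask_mention(context_tokens, mention_span, type_name):
    """Replace the mention span with one [MASK] per type-name token."""
    start, end = mention_span           # [start, end) token indices of the mention
    n_masks = len(type_name.split())    # e.g. "living thing" -> 2 masks
    return context_tokens[:start] + ["[MASK]"] * n_masks + context_tokens[end:]

tokens = "Northwest and Midway are two of the five airlines".split()
masked = mask_mention(tokens, (0, 1), "corporation")
# -> ['[MASK]', 'and', 'Midway', 'are', 'two', 'of', 'the', 'five', 'airlines']
```

The masked sequence is then fed to BERT, which scores each type by how well its name fits the [MASK] positions.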
+
+# 2.2.2 Loss Function for CA Module
+
+For each mention $m$ in the training set, we denote its ground-truth types as $\mathcal{T}_{pos}$ . For each type $t$ in $\mathcal{T}_{pos}$ , we replace $m$ with $l$ [MASK] tokens in the context of $m$ , where $l$ is the length of $t$ 's type name. We define the score $s_t$ and loss $\ell_t$ for type $t$ as
+
+$$
+s _ {t} = \frac {1}{| n _ {t} |} \sum_ {k = 1} ^ {| n _ {t} |} p _ {n _ {t, k}}, \quad \ell_ {t} = - \frac {1}{| n _ {t} |} \sum_ {k = 1} ^ {| n _ {t} |} \log p _ {n _ {t, k}} \tag {1}
+$$
+
+where $p_{n_{t,k}}$ is the probability for the $k$ -th token of type name $n_t$ predicted by BERT. Considering all the types in $\mathcal{T}_{pos}$ , the overall loss for mention $m$ is:
+
+$$
+\mathcal {L} _ {m, C A} = \sum_ {t \in \mathcal {T} _ {p o s}} \ell_ {t} \tag {2}
+$$
+
+Note that since the vocabulary of BERT contains all the constituent tokens of all the type names in $\mathcal{T}_{train}$ and $p_{n_{t,k}}$ is the output of the Softmax function over the vocabulary, minimizing the loss above will also penalize the scores of negative types in $\mathcal{T}_{train}$ .
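
Equations (1)-(2) can be sketched in plain Python (our illustration; in the actual module the token probabilities come from BERT's Softmax over the vocabulary):

```python
import math

def type_score_and_loss(token_probs):
    """Eq. (1): s_t averages the predicted probabilities of the |n_t| type-name
    tokens; ell_t averages their negative log-probabilities."""
    n = len(token_probs)
    s_t = sum(token_probs) / n
    ell_t = -sum(math.log(p) for p in token_probs) / n
    return s_t, ell_t

def ca_loss(probs_per_positive_type):
    """Eq. (2): sum ell_t over the mention's ground-truth types."""
    return sum(type_score_and_loss(p)[1] for p in probs_per_positive_type)

# A two-token type name whose tokens get probabilities 0.5 and 0.25:
s, l = type_score_and_loss([0.5, 0.25])   # s = 0.375
```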
+
+# 2.3 Type Hierarchy-Aware (HA) Module
+
+In the HA module, we use a Transformer encoder (Vaswani et al., 2017) with mask-self-attention to capture the hierarchical information for better type representations. Besides, we take the encoder from Lin and Ji (2019) to learn the features of mentions and contexts. Then a similarity function is defined to compute the matching score between a mention and a candidate type based on the context.
+
+# 2.3.1 Mention-Context Encoder
+
+In the mention-context encoder, an entity mention and its context are represented as the weighted sum of their ELMo word representations. Then the mention representation $\mathbf{r}_m$ and context representation $\mathbf{r}_c$ are concatenated as the final representation: $\mathbf{r}_{mc} = \mathbf{r}_m \oplus \mathbf{r}_c$ , where $\mathbf{r}_m, \mathbf{r}_c \in \mathbb{R}^{d_m}$ , $\mathbf{r}_{mc} \in \mathbb{R}^{2d_m}$ , $\oplus$ denotes concatenation.
+
+# 2.3.2 Hierarchy-Aware Type Encoder
+
+Given a type set $\mathcal{T} = \mathcal{T}_{train} \cup \mathcal{T}_{test}$ and its hierarchy structure $\Psi$ , we denote the initialized type embeddings as $\pmb{E} = [e_{t_1}, e_{t_2}, \dots, e_{t_N}]$ , which are the inputs of the Transformer encoder, where $e_{t_i}$ is the embedding for the $i$ -th type $t_i$ and $N$ is the size of $\mathcal{T}$ . Note that the positional embeddings are removed since the input type sequence is unordered. To inject the hierarchical information, we perform the mask-self-attention operation on types. Specifically, when computing self-attention in the Transformer encoder, a type only attends to its parent type in the hierarchy and itself, while the attention to the remaining types is masked. We omit other details and denote the final representation for each type $t \in \mathcal{T}$ as $\boldsymbol{r}_t \in \mathbb{R}^{d_t}$ .
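
The parent-and-self attention constraint can be sketched as a boolean mask (our reconstruction; a real implementation would translate `False` entries into $-\infty$ attention logits before the Softmax):

```python
def hierarchy_attention_mask(parent):
    """Build an N x N mask where entry [i][j] is True iff type i may attend
    to type j, i.e. j is i itself or i's parent. parent[i] is the index of
    type i's parent, or None for a root (Level-1) type."""
    n = len(parent)
    mask = [[False] * n for _ in range(n)]
    for i, p in enumerate(parent):
        mask[i][i] = True            # every type attends to itself
        if p is not None:
            mask[i][p] = True        # ... and to its parent in the hierarchy
    return mask

# Tiny hierarchy: 0 = /organization (root), 1 = /organization/corporation
m = hierarchy_attention_mask([None, 0])
# row 1 may attend to itself and its parent 0; row 0 only to itself
```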
+
+# 2.3.3 Loss Function for HA Module
+
+Given a mention $m$ and a candidate type $t \in \mathcal{T}_{train}$ , we first map the mention representation $\boldsymbol{r}_{mc}$ and type representation $\boldsymbol{r}_t$ into a shared space by
+
+$$
+\phi \left(\boldsymbol {r} _ {m c}, \boldsymbol {A}\right): \boldsymbol {r} _ {m c} \rightarrow \boldsymbol {A r} _ {m c} \tag {3}
+$$
+
+$$
+\theta \left(\boldsymbol {r} _ {t}, \boldsymbol {B}\right): \boldsymbol {r} _ {t} \rightarrow \boldsymbol {B r} _ {t},
+$$
+
+where $A \in \mathbb{R}^{d_s \times 2d_m}$ and $B \in \mathbb{R}^{d_s \times d_t}$ are learnable matrices. The matching score is defined as
+
+$$
+y _ {t} = \phi (\boldsymbol {r} _ {m c}, \boldsymbol {A}) \cdot \theta (\boldsymbol {r} _ {t}, \boldsymbol {B}) = (\boldsymbol {A} \boldsymbol {r} _ {m c}) ^ {\top} \boldsymbol {B} \boldsymbol {r} _ {t} \tag {4}
+$$
+
+During training, we match mention $m$ with all the types in $\mathcal{T}_{train}$ , so the loss function for $m$ is:
+
+$$
+\mathcal {L} _ {m, H A} = \text {C r o s s E n t r o p y} (\boldsymbol {y}, \hat {\boldsymbol {y}}) , \tag {5}
+$$
+
+where $\hat{\pmb{y}}\in \mathbb{R}^{|\mathcal{T}_{train}|}$ denotes the binary vector for the ground-truth types of $m$ with 1 for positive and 0 for negative. $|\mathcal{T}_{train}|$ denotes the size of $\mathcal{T}_{train}$ . $\pmb {y}\in \mathbb{R}^{|\mathcal{T}_{train}|}$ denotes the predicted score vector.
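
A toy numeric sketch of the bilinear matching score in Equations (3)-(4) (the matrices and dimensions below are illustrative values, not the trained parameters):

```python
def matvec(M, v):
    """Multiply matrix M (list of rows) by vector v."""
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matching_score(A, B, r_mc, r_t):
    """Eq. (4): y_t = (A r_mc)^T (B r_t), a dot product in the shared space."""
    return dot(matvec(A, r_mc), matvec(B, r_t))

# Toy example with d_s = 2: identity projection on the mention-context side,
# a scaled projection on the type side.
A = [[1.0, 0.0], [0.0, 1.0]]
B = [[2.0, 0.0], [0.0, 2.0]]
score = matching_score(A, B, [1.0, 2.0], [3.0, 4.0])   # -> 22.0
```

In training, this score is computed against every type in $\mathcal{T}_{train}$ and the resulting vector is fed to the cross-entropy loss of Eq. (5).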
+
+Although the HA module does not directly learn any knowledge from instances of $\mathcal{T}_{test}$ , by encoding the type hierarchy $\Psi$ using mask-self-attention, Transformer encoder will capture the semantic correlation between types in $\mathcal{T}_{train}$ and $\mathcal{T}_{test}$ , thus producing reliable representations for types in $\mathcal{T}_{test}$ .
+
+# 2.4 Background Knowledge-Aware (KA) Module
+
+We introduce prototypes and descriptions as two kinds of knowledge in the KA module.
+
+Prototypes refer to the carefully selected mentions for a type based on Normalized Point-wise Mutual Information (NPMI), which provide a mention-level summary for types (Ma et al., 2016).
+
+Descriptions are queried from WordNet glosses (Miller, 1995) by type names, which provide a brief high-level summary for each type.
+
+# 2.4.1 Inference from Background Knowledge
+
+We hope to infer whether a mention $m$ matches a candidate type $t$ , given the prototypes, the type description and the context. In this work, we embody the KA module as natural language inference (NLI) from multiple premises (Lai et al., 2017). An example is presented in Figure 2, with the same input as Figure 1. We construct three premises corresponding to the context, prototypes and description respectively. The target hypothesis encodes that "the type is correct for the mention". Both the premises and the hypothesis are organized into natural language sentences.
+
+> **Multiple Premises**
+> - Context-based premise: Northwest and Midway are two of the five airlines with which Budget has agreements.
+> - Prototypes-based premise: /organization/corporation has the following prototypes: western_union, ...
+> - Description-based premise: /organization/corporation denotes a collection of business firms whose articles of incorporation have been approved in some state.
+>
+> **Hypothesis**
+> - /organization/corporation is a correct type for the mention Northwest.
+
+Figure 2: An example to illustrate the multiple premises and the hypothesis for KA.
+
+We reuse the Mention-Context Encoder in Sec 2.3.1 to obtain representations for the context-based premise, i.e., $\boldsymbol{r}_{mc} = \boldsymbol{r}_m \oplus \boldsymbol{r}_c$ , where $\boldsymbol{r}_m$ and $\boldsymbol{r}_c$ represent the mention and context respectively. To encode the prototypes-based and description-based premises, we also use the same encoder, where the type is aligned with the mention while the rest of the sentence is aligned with the context of the mention. We denote the premises based on prototypes and description as $\boldsymbol{r}_{tp} = \boldsymbol{r}_t \oplus \boldsymbol{r}_p$ and $\boldsymbol{r}_{td} = \boldsymbol{r}_t \oplus \boldsymbol{r}_d$ , where $\boldsymbol{r}_t, \boldsymbol{r}_p, \boldsymbol{r}_d \in \mathbb{R}^{d_m}$ are considered as the representations for the type, prototypes-based and description-based sentences respectively. Since the hypotheses for the same mention targeting different types have the same word sequences except for the type spans, we simplify the representation of hypothesis as $\boldsymbol{r}_h = \boldsymbol{r}_t \oplus \boldsymbol{r}_m \in \mathbb{R}^{2d_m}$ , where $\boldsymbol{r}_t$ and $\boldsymbol{r}_m$ are the type and mention representations directly taken from $\boldsymbol{r}_{mc}$ and $\boldsymbol{r}_{tp}$ . In the KA module, the encoders for all the premises and hypothesis share the parameters in ELMo.
+
+# 2.4.2 Loss Function for KA Module
+
+Motivated by TransE (Bordes et al., 2013) and TransR (Lin et al., 2015), we propose a simple translation-based solution for NLI by extending the translation operations over triples to quadruples, i.e., (context-based premise, prototypes-based premise, description-based premise, hypothesis).
+
+Given a mention $m$ and a candidate type $t$ , we first use the matrix $\mathbf{W}$ to project all the representations to a new space for inference:
+
+$$
+\{\tilde {\boldsymbol {r}} _ {m c}, \tilde {\boldsymbol {r}} _ {t p}, \tilde {\boldsymbol {r}} _ {t d}, \tilde {\boldsymbol {r}} _ {h} \} = \boldsymbol {W} \{\boldsymbol {r} _ {m c}, \boldsymbol {r} _ {t p}, \boldsymbol {r} _ {t d}, \boldsymbol {r} _ {h} \} \tag {6}
+$$
+
+where $\pmb{W} \in \mathbb{R}^{d_w \times 2d_m}$ . We hope that $\tilde{\pmb{r}}_{mc} + \tilde{\pmb{r}}_{tp} + \tilde{\pmb{r}}_{td} \approx \tilde{\pmb{r}}_h$ when the hypothesis can be inferred from the premises, i.e., the type $t$ is correct for the mention $m$ under the context $c$ . Thus, we try to minimize their squared Euclidean distance
+
+$$
+\mathcal {D} _ {t} = \left\| \tilde {\boldsymbol {r}} _ {m c} + \tilde {\boldsymbol {r}} _ {t p} + \tilde {\boldsymbol {r}} _ {t d} - \tilde {\boldsymbol {r}} _ {h} \right\| _ {2} ^ {2}, \tag {7}
+$$
+
+with norm constraints, i.e., $\| \tilde{r}_{mc}\| _2^2 = \| \tilde{r}_{tp}\| _2^2 =$ $\| \tilde{r}_{td}\| _2^2 = \| \tilde{r}_h\| _2^2 = 1$ . Then the score for type $t$ is defined as $p_t = -\mathcal{D}_t$ : the smaller the distance, the higher the score. Finally, the loss function for mention $m$ is:
+
+$$
+\mathcal {L} _ {m, K A} = \sum_ {t \in \mathcal {T} _ {\text {p o s}}, t ^ {\prime} \in \mathcal {T} _ {\text {n e g}}} \frac {\max \left(0 , 1 - \left(p _ {t} - p _ {t ^ {\prime}}\right)\right)}{\left| \mathcal {T} _ {\text {p o s}} \right|}, \tag {8}
+$$
+
+where $\mathcal{T}_{pos}$ are the ground-truth types of $\mathcal{T}_{train}$ for $m$ with size $|\mathcal{T}_{pos}|$ , while $\mathcal{T}_{neg}$ are the negative types in $\mathcal{T}_{train}$ , i.e., $\mathcal{T}_{neg} = \mathcal{T}_{train} \setminus \mathcal{T}_{pos}$ .
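
Equations (7)-(8) can be sketched as follows (our reconstruction with plain Python vectors; `ka_score` and `ka_loss` are hypothetical names, not the authors' code):

```python
import math

def normalize(v):
    """Enforce the unit-norm constraint ||v||_2 = 1."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def ka_score(r_mc, r_tp, r_td, r_h):
    """Eq. (7): p_t = -||r_mc + r_tp + r_td - r_h||_2^2 on normalized vectors."""
    vs = [normalize(r) for r in (r_mc, r_tp, r_td)]
    h = normalize(r_h)
    d = [a + b + c - x for a, b, c, x in zip(*vs, h)]
    return -sum(x * x for x in d)

def ka_loss(pos_scores, neg_scores):
    """Eq. (8): pairwise hinge loss with margin 1, averaged over positive types."""
    return sum(max(0.0, 1.0 - (p - n))
               for p in pos_scores for n in neg_scores) / len(pos_scores)
```

With one positive score well above a negative one, e.g. `ka_loss([2.0], [0.5])`, the hinge term vanishes and the loss is zero.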
+
+# 2.5 Training and Inference
+
+Overall Loss Given a training mention $m$ , we derive the losses from the aforementioned modules. Finally, the overall loss to train the fusion model is:
+
+$$
+\mathcal {L} = \sum_ {m \in \mathcal {M}} \mathcal {L} _ {m, C A} + \mathcal {L} _ {m, H A} + \mathcal {L} _ {m, K A}, \tag {9}
+$$
+
+where $\mathcal{M}$ denotes the training mention set.
+
+Inference Given a test mention $m$ and a candidate type $t$ in $\mathcal{T}_{test}$ , we first compute the scores from each module: $s_t$ (by CA module), $y_t$ (by HA module) and $p_t$ (by KA module). Then we normalize them according to
+
+$$
+x ^ {\prime} = \operatorname {s i g m o i d} \left(\frac {x - \mu_ {\boldsymbol {x}}}{\sigma_ {\boldsymbol {x}}}\right), x \in \left\{s _ {t}, y _ {t}, p _ {t} \right\}, \tag {10}
+$$
+
+where $\pmb{x}$ is the score vector from a module for mention $m$ towards all types $t \in \mathcal{T}_{test}$ with $x$ as a component. $\mu_{\pmb{x}}$ and $\sigma_{\pmb{x}}$ denote the mean and standard deviation of the vector $\pmb{x}$ . The final decision score by our fusion model for type $t$ is:
+
+$$
+\operatorname {s c o r e} _ {t} = \lambda_ {1} s _ {t} ^ {\prime} + \lambda_ {2} y _ {t} ^ {\prime} + \lambda_ {3} p _ {t} ^ {\prime}, \tag {11}
+$$
+
+where $\lambda_1, \lambda_2, \lambda_3 \geq 0$ are hyper-parameters and $\lambda_1 + \lambda_2 + \lambda_3 = 1$ .
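
The inference-time fusion in Equations (10)-(11) can be sketched as below (the module scores and $\lambda$ weights are made-up values for illustration, not the tuned settings from Appendix A):

```python
import math

def normalize_scores(x):
    """Eq. (10): z-score a module's score vector, then squash with a sigmoid."""
    mu = sum(x) / len(x)
    sigma = math.sqrt(sum((v - mu) ** 2 for v in x) / len(x))
    return [1.0 / (1.0 + math.exp(-(v - mu) / sigma)) for v in x]

def fuse(s, y, p, lambdas):
    """Eq. (11): score_t = l1*s'_t + l2*y'_t + l3*p'_t per candidate type."""
    s, y, p = normalize_scores(s), normalize_scores(y), normalize_scores(p)
    l1, l2, l3 = lambdas
    return [l1 * a + l2 * b + l3 * c for a, b, c in zip(s, y, p)]

# Hypothetical raw scores from CA, HA and KA for three candidate unseen types:
scores = fuse([0.1, 0.7, 0.2], [1.5, 3.0, 0.5], [-2.0, -1.0, -3.0],
              lambdas=(0.3, 0.3, 0.4))
best = scores.index(max(scores))   # index of the top-scoring candidate type
```

Normalizing per module before mixing keeps one module's score scale from dominating the weighted sum.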
+
+# 3 Experimental Setup
+
+# 3.1 Datasets and Evaluation Metrics
+
+We evaluate our model on two widely-used datasets: BBN (Weischedel and Brunstein, 2005) and Wiki (Ling and Weld, 2012). The version processed by Ren et al. (2016) is adopted for our experiments. Detailed statistics of the two datasets are listed in Table 1. We do not use OntoNotes (Gillick et al., 2014) since it is hard to define the name, description and hierarchy for its special type /other. Types of both BBN and Wiki are organized into a 2-level hierarchy, with 47 types in BBN and 113 types in Wiki. Following Ma et al. (2016) and Zhang et al. (2020b), we use the coarse-grained (Level-1) types such as /organization for training (denoted as seen types), while the fine-grained (Level-2) types such as /organization/corporation are reserved for testing (denoted as unseen types).
+
+| Dataset | BBN (train) | BBN (test) | Wiki (train) | Wiki (test) |
+| --- | --- | --- | --- | --- |
+| # sentences | 32.7K | 6.3K | 1.5M | 276 |
+| # mentions | 86.1K | 12.3K | 2.7M | 563 |
+
+Table 1: Statistics of training and test datasets.
+
+Following prior works (Ling and Weld, 2012; Ma et al., 2016), we report all the popular metrics in our main results for a better comparison, i.e., strict accuracy (Acc), macro-averaged F1 (Ma-F1), micro-averaged F1 (Mi-F1) and micro-averaged precision (Mi-P).
+
+# 3.2 Comparison Models
+
+We abbreviate our Multi-Source Fusion model as MSF, and compare it with the following baselines: (1) Proto-HLE (Ma et al., 2016) which introduces prototype-driven hierarchical label embedding for ZFET; (2) ZOE (Zhou et al., 2018) which infers the types of a given mention according to its type-compatible Wikipedia entries; (3) DZET (Obeidat et al., 2019) which derives type representations from Wikipedia pages and leverages a context-description matching approach for type inference; (4) NZFET* (Ren et al., 2020) which employs entity type attention to make the model focus on information relevant to the entity type; (5) MZET* (Zhang et al., 2020b) which adopts a memory network to connect the seen and unseen types.
+
+Specifically, we compare MSF with its single-source modules: the Context-Consistency-Aware module (CA), the Type-Hierarchy-Aware module (HA) and the Background-Knowledge-Aware module (KA), as well as the variation $\mathbf{MSF}_{avg}$ which simply averages scores from single-source modules (i.e., $\lambda_1, \lambda_2, \lambda_3 = 1/3$ in Equation 11).
+
+All the results are from our implementations except the ones indicated by $*$ . The implementation details and hyperparameter settings (e.g., $\lambda_1, \lambda_2, \lambda_3$ for MSF) are presented in Appendix A.
+
+# 4 Experimental Results
+
+# 4.1 Main Results
+
+Table 2 and Table 3 present the results on BBN and Wiki, evaluated on both the unseen fine-grained types and the seen coarse-grained types.
+
+Zero-shot Performance From Table 2, we see that our model significantly outperforms the baselines across the metrics. MSF gains up to $11.42\%$ over DZET on BBN and $22.84\%$ over ZOE on Wiki according to Ma-F1. Compared with $\mathrm{MSF}_{avg}$ , which treats each information source as equally important, MSF considers the importance of each source and achieves better performance on both datasets. Besides, the single-source modules of MSF (i.e., CA, HA and KA) also produce relatively promising results, among which KA yields the best scores. Nevertheless, MSF still surpasses these modules by a large margin, which verifies the necessity of information fusion for the ZFET task.
+
+Supervised Performance Table 3 demonstrates the advantage of MSF in predicting the seen types, with Ma-F1 increased by $2.01\%$ over Proto-HLE on BBN and $2.25\%$ over DZET on Wiki. Besides, CA, HA and KA still maintain highly competitive performance in this regard. Combined with Table 2, we find that the proposed MSF is particularly superior on the unseen types, since the auxiliary information from multiple sources tends to be more helpful when annotated training samples are scarce.
+
+# 4.2 Ablation Studies
+
+We conduct ablation studies on the single-source modules of MSF. The results are shown in Table 4.
+
+Ablations of CA We observe that the vanilla CA (i.e., the BERT-based CA module without fine-tuning, denoted as "CA w/o finetuning") has reached a certain level of performance. This indicates the potential of BERT for context consistency assessment thanks to its large-scale unsupervised pre-training technique. After fine-tuning with our modified mask mechanism, CA surpasses its vanilla version by $23.13\%$ and $10.28\%$ on BBN and Wiki respectively.
+
+Ablations of HA We show that Transformer-based type encoder greatly contributes to the HA module. To validate it, we replace Transformer
+
+| Model | BBN Acc (%) | BBN Ma-F1 (%) | BBN Mi-F1 (%) | BBN Mi-P (%) | Wiki Acc (%) | Wiki Ma-F1 (%) | Wiki Mi-F1 (%) | Wiki Mi-P (%) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Proto-HLE | 49.65 | 49.65 | 49.65 | 49.65 | 23.76 | 23.76 | 23.36 | 23.76 |
+| ZOE | 58.00 | 58.95 | 62.16 | 65.33 | 33.67 | 34.82 | 34.50 | 35.03 |
+| DZET | 62.60 | 62.60 | 62.60 | 62.60 | 32.67 | 32.67 | 32.12 | 32.67 |
+| NZFET* | - | - | - | 45.91 | - | - | - | 24.25 |
+| MZET* | 28.80 | 30.10 | 31.60 | - | - | - | - | - |
+| CA | 50.36 | 50.36 | 50.36 | 50.36 | 36.63 | 37.37 | 36.98 | 37.62 |
+| HA | 62.49 | 62.49 | 62.49 | 62.49 | 35.15 | 36.99 | 36.98 | 37.62 |
+| KA | 66.32 | 66.32 | 66.32 | 66.32 | 41.58 | 43.80 | 43.80 | 44.55 |
+| MSF$_{avg}$ | 70.90 | 70.90 | 70.90 | 70.90 | 50.00 | 52.58 | 52.55 | 53.47 |
+| MSF (ours) | 74.02 | 74.02 | 74.02 | 74.02 | 55.45 | 57.66 | 57.42 | 58.42 |
+
+Table 2: Performance on the unseen types. The best scores of baselines and all the models are underlined and bold-faced respectively. Since all the test mentions from BBN correspond to only one ground truth seen/unseen type, our implementations simply predict the candidate type with the highest score for BBN. This makes some results of different metrics the same on BBN.
+
+| Model | BBN Acc (%) | BBN Ma-F1 (%) | BBN Mi-F1 (%) | BBN Mi-P (%) | Wiki Acc (%) | Wiki Ma-F1 (%) | Wiki Mi-F1 (%) | Wiki Mi-P (%) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Proto-HLE | 87.25 | 87.25 | 87.25 | 87.25 | 68.17 | 72.37 | 70.62 | 73.92 |
+| ZOE | 58.86 | 63.06 | 59.82 | 66.28 | 68.30 | 68.62 | 71.13 | 70.24 |
+| DZET | 86.02 | 86.02 | 86.02 | 86.02 | 82.73 | 87.21 | 84.88 | 88.85 |
+| NZFET* | - | - | - | - | - | - | - | - |
+| MZET* | 70.70 | 71.00 | 71.00 | - | - | - | - | - |
+| CA | 80.98 | 80.98 | 80.98 | 80.98 | 75.90 | 79.59 | 77.32 | 80.94 |
+| HA | 84.57 | 84.57 | 84.57 | 84.57 | 83.99 | 88.46 | 86.08 | 90.11 |
+| KA | 86.05 | 86.05 | 86.05 | 86.05 | 82.55 | 87.43 | 85.22 | 89.21 |
+| MSF$_{avg}$ | 88.65 | 88.65 | 88.65 | 88.65 | 84.17 | 88.91 | 86.60 | 90.65 |
+| MSF (ours) | 89.26 | 89.26 | 89.26 | 89.26 | 84.71 | 89.46 | 87.11 | 91.19 |
+
+Table 3: Performance on the seen types.
+
+| Source | Model | BBN | Wiki |
+| --- | --- | --- | --- |
+| context consistency | CA | 50.36 | 37.37 |
+| context consistency | CA w/o finetuning | 27.23 | 27.09 |
+| type hierarchy | Proto-HLE | 49.65 | 23.76 |
+| type hierarchy | MZET* | 30.10 | - |
+| type hierarchy | HA | 62.49 | 36.99 |
+| type hierarchy | HA-Glove | 52.48 | 18.32 |
+| type hierarchy | HA-HierMatrix | 58.96 | 20.67 |
+| background knowledge | Proto-HLE | 49.65 | 23.76 |
+| background knowledge | DZET | 62.60 | 32.67 |
+| background knowledge | KA | 66.32 | 43.80 |
+| background knowledge | KA w/o Description | 63.28 | 32.67 |
+| background knowledge | KA w/o Prototypes | 59.54 | 26.10 |
+
+Table 4: Ablation results of CA, HA and KA, evaluated on the unseen types by Ma-F1 $(\%)$ .
+
+encoder with averaged GloVe word embeddings to obtain type representations and denote this variant as "HA-Glove". Besides, we also implement a variation of HA that removes the Transformer encoder and simply multiplies the type embeddings by a binary hierarchical matrix as in Ma et al. (2016) to model the type hierarchy (denoted as "HA-HierMatrix"). We see that HA greatly advances its counterparts that do not use the Transformer encoder. Also notice that HA-HierMatrix performs better than HA-Glove, indicating that the hierarchical constraint enforced by HierMatrix is also important for type representation learning. In addition, HA also shows a strong advantage over Proto-HLE and $\mathsf{MZET^{*}}$ , which also take the relationships among types into account.
+
+Ablations of KA We remove either descriptions or prototypes from KA and denote the variants as "KA w/o Description" and "KA w/o Prototypes". The results reveal that both descriptions and prototypes consistently contribute to KA, with prototypes playing the more important role on both datasets. In fact, the prototypes used in KA were carefully selected by Ma et al. (2016), while the descriptions from WordNet only contain brief high-level summaries of the types. Additionally, two baselines (i.e., Proto-HLE and DZET) which also leverage background knowledge are included for a more comprehensive comparison. We notice that KA w/o Prototypes is slightly inferior to DZET, which also uses type descriptions via a type-description matching approach. However, when prototypes and descriptions are combined, the superiority of KA with the NLI framework is obvious.
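To make the NLI framing concrete, below is a minimal, hypothetical sketch of casting zero-shot typing as premise-hypothesis pairs, assuming hypotheses are assembled from each type's prototypes and WordNet description; the paper's actual templates may differ, and the example knowledge dictionary is illustrative.

```python
# Hypothetical sketch: build one NLI (premise, hypothesis) pair per candidate
# type, combining that type's prototypes and description into the hypothesis.
def build_nli_pairs(context, mention, type_knowledge):
    pairs = []
    for type_name, info in type_knowledge.items():
        prototypes = ", ".join(info["prototypes"])
        hypothesis = (f"{mention} is a {type_name}, "
                      f"like {prototypes}; {info['description']}")
        pairs.append((type_name, context, hypothesis))
    return pairs

# Illustrative knowledge for a single type (not from the paper's data).
knowledge = {"musician": {"prototypes": ["singer", "pianist"],
                          "description": "a person who plays or composes music"}}
pairs = build_nli_pairs("Bob toured with his band.", "Bob", knowledge)
```

An NLI model would then score each pair, and the highest-entailment type would be predicted.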
+
+# 4.3 Characteristics, Merits and Demerits of Each Information Source
+
+In this section, we focus on the impact of long-tail types and context length for ZFET. Based on the observations, we discuss the characteristics, merits and demerits of the different modules targeting each information source (i.e., CA, HA and KA).
+
+# 4.3.1 Impact of Long-tail Types
+
+We examine the performance of each module on the test subset of long-tail unseen types (those with fewer than 200 test cases). We compute the precision, recall and F1 value for each type and report the average values over all these types in Table 5. The results show that CA obtains the best $\mathrm{F1}_{avg}$ score on the long-tail types. In fact, CA is based on pre-trained BERT, which contains much implicit information about the unseen long-tail types. Moreover, CA masks the mentions and depends completely on the contexts for prediction. This reduces the risk of CA memorizing the mentions and improves its generalization capability.
+
+KA produces a better $\mathrm{P}_{avg}$ score than HA, which verifies that background knowledge is helpful in distinguishing among easily confused types. However, KA often makes mistakes on unseen types that share little knowledge with the seen types, which makes KA perform poorly in terms of $\mathrm{R}_{avg}$ .
+
+We also notice that the combination of different information sources brings a significant improvement to the $\mathrm{P}_{avg}$ of MSF, but, on the contrary, a drop in $\mathrm{R}_{avg}$ . This inspires us, in future work, to take greater advantage of CA while minimizing the disturbance from KA and HA, so as to promote the model's generalization capacity on long-tail types.
+
+| Model | Pavg (%) | Ravg (%) | F1avg (%) |
+| --- | --- | --- | --- |
+| CA | 28.03 | 35.07 | 25.25 |
+| HA | 6.99 | 14.24 | 5.92 |
+| KA | 12.77 | 8.11 | 7.06 |
+| MSF | 43.72 | 19.34 | 21.16 |
+
+Table 5: The results on long-tail unseen types in BBN.
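The macro-averaged metrics in Table 5 can be sketched as follows: precision, recall and F1 are computed per type and then averaged over the long-tail unseen types (a standard macro average; the paper does not spell out the formula, so this is our reading of "average values over all these types").

```python
# Macro-averaged P/R/F1: compute per-type scores, then take the unweighted
# mean over the given set of types.
def macro_prf(gold, pred, types):
    per_type = []
    for t in types:
        tp = sum(g == t and p == t for g, p in zip(gold, pred))
        fp = sum(g != t and p == t for g, p in zip(gold, pred))
        fn = sum(g == t and p != t for g, p in zip(gold, pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        per_type.append((prec, rec, f1))
    n = len(per_type)
    return tuple(sum(row[i] for row in per_type) / n for i in range(3))
```

For example, `macro_prf(["a", "a", "b"], ["a", "b", "b"], ["a", "b"])` averages each type's scores rather than pooling counts, which is what makes rare types count equally.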
+
+# 4.3.2 Impact of Context Length
+
+We separate the test samples into three groups by context length and compare the Ma-F1 scores in each group, as shown in Figure 3. We see that CA, HA, KA and MSF all perform better on mentions with longer contexts, since longer contexts tend to be more informative than shorter ones. MSF outperforms the single-source modules CA, HA and KA in both the short- and medium-context settings. Nevertheless, MSF performs worse than CA in the long-context scenario. This indicates that the information from context consistency carries higher confidence when handling longer contexts, whereas introducing the HA and KA modules may limit the performance gains compared with using the CA module alone in this case. Conversely, a distinct drop appears when CA is evaluated on mentions with short contexts.
+
+
+Figure 3: Performance on the unseen types of BBN relative to the context length.
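The grouping behind Figure 3 can be sketched as below. The paper does not state its exact length cut-offs, so the token thresholds here are purely illustrative.

```python
# Split test samples into short/medium/long groups by context token count.
# Thresholds are illustrative assumptions, not the paper's actual cut-offs.
def bucket_by_context_length(samples, short_max=20, medium_max=50):
    groups = {"short": [], "medium": [], "long": []}
    for sample in samples:
        n_tokens = len(sample["context"].split())
        if n_tokens <= short_max:
            groups["short"].append(sample)
        elif n_tokens <= medium_max:
            groups["medium"].append(sample)
        else:
            groups["long"].append(sample)
    return groups

groups = bucket_by_context_length([{"context": "a b c"},
                                   {"context": " ".join(["w"] * 60)}])
```

Ma-F1 would then be computed separately within each group.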
+
+# 4.4 Complementarity among Different Information Sources
+
+We present the overlaps and disjoint parts of the true cases predicted by the single-source modules in Figure 4. About $31.33\%$ of the test mentions are successfully categorized by all three modules, while the rest are misidentified by at least one module. We notice that HA and KA share the most true cases (up to $61.04\%$ , i.e., $31.33\% + 29.71\%$ ) among the pairwise intersections. A possible reason is that HA and KA use the same ELMo-based mention-context encoder. Another reason is that the premises and hypotheses constructed by KA implicitly encode some hierarchical information, as HA does. For example, some of the prototypes are shared between parent and child types.
+
+
+Figure 4: Venn diagram of the true test cases of unseen types correctly predicted by CA, HA and KA on BBN. The annotated percentages (Acc) are proportional to the entire test set.
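The overlap analysis behind Figure 4 reduces to set operations over the ids of correctly predicted test cases; a minimal sketch (with illustrative toy sets, not the paper's data):

```python
# Given the sets of test-case ids each module predicts correctly, report the
# Venn-diagram regions as shares of the whole test set.
def overlap_shares(ca, ha, ka, total):
    return {
        "all_three": len(ca & ha & ka) / total,
        "ha_and_ka": len(ha & ka) / total,
        "ca_only": len(ca - ha - ka) / total,
    }

shares = overlap_shares({1, 2, 3}, {2, 3, 4}, {3, 4, 5}, total=10)
```

The reported percentages (e.g., $31.33\%$) correspond to such shares over the full test set.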
+
+Figure 5: The intersections and differences between the true case sets of unseen types predicted by MSF and CA (a), HA (b), KA (c) or CA $\cup$ HA $\cup$ KA (d) on BBN. CA $\cup$ HA $\cup$ KA denotes the union of true cases correctly predicted by CA, HA or KA.
+
+KA demonstrates greater capacity than HA, with $5.27\%$ (i.e., $2.17\% + 3.1\%$ ) additional true cases that HA fails to recognize, since background knowledge helps to distinguish among confusing sibling types that share the same parent type. However, there still exist $1.44\%$ (i.e., $0.65\% + 0.79\%$ ) of cases where HA does better than KA. This is because the hierarchy-wise information incorporated into KA is less explicit than that inside HA. Meanwhile, KA also suffers from low recall on long-tail types, as discussed in Section 4.3.1.
+
+Another noticeable observation is that a sizable proportion of cases $(16.2\%)$ are difficult for HA and KA to recognize, but easy for CA. This indicates that the consistency between type names and contexts is a non-negligible clue for improving performance in ZFET.
+
+# 4.5 Contributions of Multiple Information Sources to MSF
+
+We also look into the intersections and differences between the true case sets of MSF and CA/HA/KA, as well as their union, in Figure 5. We see that MSF benefits most from HA and KA, with $57.73\%$ and $61.5\%$ overlaps, respectively. Although CA provides much auxiliary information for MSF, there still exist $6.84\%$ true cases of CA that MSF predicts wrongly after fusion. Besides, the $4.76\%$ of true cases missed from HA and the $4.82\%$ missed from KA also remain to be more fully exploited. Thus, it is worth exploring deeply how to make the best of each information source during model fusion. In addition, Figure 5(d) shows that $2.07\%$ of complex examples are correctly predicted by MSF but mistaken by all three single-source modules, while $12.01\%$ of samples are correctly identified by at least one of the modules but mistaken by MSF. Besides, there are $13.96\%$ hard examples misidentified by both the single-source modules (i.e., CA $\cup$ HA $\cup$ KA) and the multi-source fusion model (i.e., MSF).
+
+# 5 Related Work
+
+As a zero-shot paradigm of FET, ZFET suffers from a huge information gap between the seen and unseen types due to the lack of annotated data. Beyond simply computing type representations by averaging the embeddings of the words comprising type names (Yuan and Downey, 2018), a variety of auxiliary information has been explored to fill this gap. Huang et al. (2016) propose a hierarchical clustering model with a domain-specific knowledge base for unsupervised entity typing. Ma et al. (2016) first introduce prototypical information to learn type embeddings and encode the type hierarchy by multiplying the type embeddings with a binary hierarchical matrix. Zhou et al. (2018) match the entity mention with a set of Wikipedia entries and classify the mention based on the Freebase types of its type-compatible entries. Obeidat et al. (2019) leverage Wikipedia descriptions of types and design a context-description matching model. Ren et al. (2020) employ entity type attention to make the model focus on context semantically relevant to the type. Zhang et al. (2020b) transfer knowledge from seen types to unseen ones through a memory network. As for context consistency, Xin et al. (2018) first take language models as a constraint in supervised typing tasks. Recently, Qian et al. (2021) study unsupervised entity typing without a knowledge base, where pseudo data with fine-grained labels are automatically created from a large unlabeled dataset.
+
+# 6 Conclusion
+
+In this paper, we explored multiple information sources for ZFET. We proposed a multi-source fusion model to better integrate these sources, which has achieved state-of-the-art performance in ZFET.
+
+Besides, we conducted an in-depth analysis of the characteristics, merits and demerits of each information source, and discussed the complementarity among the different sources. In particular, the context consistency information from the pre-trained language model is relatively useful in complex scenarios with long-tail types or long contexts. Along this line, we will conduct more in-depth research to take full advantage of context consistency. We will also explore more effective methods for information fusion in ZFET.
+
+# References
+
+Abhishek, Ashish Anand, and Amit Awekar. 2017. Fine-grained entity type classification by jointly learning representations and label embeddings. In EACL.
+Antoine Bordes, Nicolas Usunier, Alberto García-Durán, J. Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In NIPS.
+Shuang Chen, Jinpeng Wang, Feng Jiang, and ChinYew Lin. 2020. Improving entity linking by modeling latent entity type information. In AAAI.
+Zhendong Chu, Haiyun Jiang, Yanghua Xiao, and Wei Wang. 2020. Insrl: A multi-view learning framework fusing multiple information sources for distantly-supervised relation extraction. arXiv preprint arXiv:2012.09370.
+J. Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In *NAACL-HLT*.
+Dan Gillick, Nevena Lazic, Kuzman Ganchev, Jesse Kirchner, and David Huynh. 2014. Context-dependent fine-grained entity type tagging. arXiv preprint arXiv:1412.1820.
+Linmei Hu, T. Yang, C. Shi, Houye Ji, and Xiaoli Li. 2019. Heterogeneous graph attention networks for semi-supervised short text classification. In EMNLP/IJCNLP.
+Lifu Huang, Jonathan May, Xiaoman Pan, and Heng Ji. 2016. Building a fine-grained entity typing system overnight for a new x (x = language, domain, genre). ArXiv, abs/1603.03112.
+Hailong Jin, Lei Hou, Juan-Zi Li, and T. Dong. 2019. Fine-grained entity typing via hierarchical multi graph convolutional networks. In EMNLP/IJCNLP.
+A. Lai, Yonatan Bisk, and J. Hockenmaier. 2017. Natural language inference from multiple premises. In IJCNLP.
+
+Yankai Lin, Zhiyuan Liu, M. Sun, Yang Liu, and Xuan Zhu. 2015. Learning entity and relation embeddings for knowledge graph completion. In AAAI.
+Ying Lin and Heng Ji. 2019. An attentive fine-grained entity typing model with latent type representation. In EMNLP/IJCNLP.
+Xiao Ling and Daniel S. Weld. 2012. Fine-grained entity recognition. In AAAI.
+Lemao Liu, Haisong Zhang, Haiyun Jiang, Yangming Li, Enbo Zhao, Kun Xu, Linfeng Song, Suncong Zheng, Botong Zhou, Jianchen Zhu, et al. 2021. Texsmart: A system for enhanced natural language understanding.
+Yukun Ma, E. Cambria, and Sa Gao. 2016. Label embedding for zero-shot fine-grained named entity typing. In COLING.
+G. Miller. 1995. Wordnet: a lexical database for english. Commun. ACM, 38:39-41.
+Rasha Obeidat, Xiaoli Z. Fern, Hamed Shahbazi, and P. Tadepalli. 2019. Description-based zero-shot fine-grained entity typing. In NAACL-HLT.
+Jing Qian, Yibin Liu, Lemao Liu, Yangming Li, Haiyun Jiang, Haisong Zhang, and Shuming Shi. 2021. Fine-grained entity typing without knowledge base. In EMNLP.
+Xiang Ren, W. He, Meng Qu, Lifu Huang, Heng Ji, and Jiawei Han. 2016. Afet: Automatic fine-grained entity typing by hierarchical partial-label embedding. In EMNLP.
+Yankun Ren, J. Lin, and Jun Zhou. 2020. Neural zero-shot fine-grained entity typing. Companion Proceedings of WWW Conference 2020.
+Sonse Shimaoka, Pontus Stenetorp, Kentaro Inui, and S. Riedel. 2017. Neural architectures for fine-grained entity type classification. ArXiv, abs/1606.01341.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS, pages 5998-6008.
+Ralph Weischedel and Ada Brunstein. 2005. BBN pronoun coreference and entity type corpus. Linguistic Data Consortium, Philadelphia, 112.
+Yongqin Xian, Christoph H. Lampert, B. Schiele, and Zeynep Akata. 2019. Zero-shot learning—a comprehensive evaluation of the good, the bad and the ugly. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41:2251-2265.
+J. Xin, Hao Zhu, Xu Han, Zhiyuan Liu, and M. Sun. 2018. Put it back: Entity typing with language model enhancement. In EMNLP.
+
+Wenhan Xiong, Jiawei Wu, Deren Lei, Mo Yu, S. Chang, Xiaoxiao Guo, and William Yang Wang. 2019. Imposing label-relational inductive bias for extremely fine-grained entity typing. ArXiv, abs/1903.02591.
+Peng Xu and Denilson Barbosa. 2018. Neural fine-grained entity type classification with hierarchy-aware loss. In *NAACL-HLT*, pages 16-25.
+Zheng Yuan and Doug Downey. 2018. Otyper: A neural architecture for open named entity typing. In AAAI.
+Haisong Zhang, Lemao Liu, Haiyun Jiang, Yangming Li, Enbo Zhao, Kun Xu, Linfeng Song, Suncong Zheng, Botong Zhou, Jianchen Zhu, et al. 2020a. Texsmart: A text understanding system for fine-grained ner and enhanced semantic analysis. arXiv preprint arXiv:2012.15639.
+T. Zhang, Congying Xia, Chun-Ta Lu, and Philip S. Yu. 2020b. Mzet: Memory augmented zero-shot fine-grained named entity typing. COLING.
+Ben Zhou, Daniel Khashabi, Chen-Tse Tsai, and D. Roth. 2018. Zero-shot open entity typing as type-compatible grounding. In EMNLP.
+
+# A Implementation Details
+
+For Proto-HLE and DZET, we employ their type representation methods but reuse our ELMo-based mention-context encoder for the representations of mentions and contexts. For ZOE, we remove the test mentions of the target dataset from the Wikipedia entry source and report the performance under our zero-shot setting.
+
+For CA, our implementation is based on the pre-trained BERT (BERT-base, uncased) available in the HuggingFace library. For HA, we adopt 200-dimensional GloVe word embeddings for the initialization of the type embeddings, which are frozen during training. The Transformer encoder is trained from scratch with 4 heads and 2 layers, with a hidden dimension of 2048. For KA, the numbers of prototypes used for BBN and Wiki are 5 and 30, respectively. For the fusion of CA, HA and KA, $\lambda_1$ , $\lambda_2$ , $\lambda_3$ are tuned by Macro F1 on the development set, and their values are as follows.
+
+| Dataset | λ1 | λ2 | λ3 |
+| --- | --- | --- | --- |
+| BBN | 0.393 | 0.041 | 0.566 |
+| Wiki | 0.348 | 0.424 | 0.228 |
+
+Table 6: Values of ${\lambda }_{1},{\lambda }_{2},{\lambda }_{3}$ for MSF on BBN and Wiki.
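As a concrete reading of the fusion step, the sketch below assumes the fused score of each candidate type is a convex combination of the three module scores, which is consistent with the lambda values in Table 6 summing to 1 on both datasets; the exact fusion formula is our assumption, and the module scores below are illustrative.

```python
# Hypothetical sketch of MSF score fusion: a convex combination of the
# per-type scores from CA, HA and KA, weighted by lambda_1..lambda_3.
def fuse_scores(s_ca, s_ha, s_ka, lambdas):
    l1, l2, l3 = lambdas
    return [l1 * a + l2 * b + l3 * c for a, b, c in zip(s_ca, s_ha, s_ka)]

# Illustrative module scores for two candidate types; BBN weights from Table 6.
scores = fuse_scores([0.9, 0.1], [0.6, 0.4], [0.2, 0.8], (0.393, 0.041, 0.566))
predicted_type = max(range(len(scores)), key=scores.__getitem__)
```

On BBN the large weight on KA means the fused prediction can follow KA even when CA prefers another type, as in this toy example.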
\ No newline at end of file
diff --git a/anempiricalstudyonmultipleinformationsourcesforzeroshotfinegrainedentitytyping/images.zip b/anempiricalstudyonmultipleinformationsourcesforzeroshotfinegrainedentitytyping/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..ca9ed0d9c951face93abe86463293d9c5613ee04
--- /dev/null
+++ b/anempiricalstudyonmultipleinformationsourcesforzeroshotfinegrainedentitytyping/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4f2f495d1d6e9484b1e84ea38aa5c771c76c7c0aa1096cf6473d430e74a2b959
+size 431630
diff --git a/anempiricalstudyonmultipleinformationsourcesforzeroshotfinegrainedentitytyping/layout.json b/anempiricalstudyonmultipleinformationsourcesforzeroshotfinegrainedentitytyping/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..1b998fe61c60bd3f7e25ba2c30715793700106ff
--- /dev/null
+++ b/anempiricalstudyonmultipleinformationsourcesforzeroshotfinegrainedentitytyping/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a3c342ca437766d3a60fdc045bb340fd7036d09cfd527be1aad0901958fb96d6
+size 443328
diff --git a/anevaluationdatasetandstrategyforbuildingrobustmultiturnresponseselectionmodel/e1443b59-3ba9-4916-85dd-6088e2140ae9_content_list.json b/anevaluationdatasetandstrategyforbuildingrobustmultiturnresponseselectionmodel/e1443b59-3ba9-4916-85dd-6088e2140ae9_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..195131701aa65104c8ea918311a533285be9b25f
--- /dev/null
+++ b/anevaluationdatasetandstrategyforbuildingrobustmultiturnresponseselectionmodel/e1443b59-3ba9-4916-85dd-6088e2140ae9_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d17b8ddc67c5862a3de22f17f81a03bd4c5c537161fc4b2906331f0b385dae0d
+size 46988
diff --git a/anevaluationdatasetandstrategyforbuildingrobustmultiturnresponseselectionmodel/e1443b59-3ba9-4916-85dd-6088e2140ae9_model.json b/anevaluationdatasetandstrategyforbuildingrobustmultiturnresponseselectionmodel/e1443b59-3ba9-4916-85dd-6088e2140ae9_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..5094532467a23376fbed0047541909fea3adefdb
--- /dev/null
+++ b/anevaluationdatasetandstrategyforbuildingrobustmultiturnresponseselectionmodel/e1443b59-3ba9-4916-85dd-6088e2140ae9_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e0af69726cae135ac633ef14ec8a206bf5fe4ffbff4f06617b0a0bf34434e93f
+size 57076
diff --git a/anevaluationdatasetandstrategyforbuildingrobustmultiturnresponseselectionmodel/e1443b59-3ba9-4916-85dd-6088e2140ae9_origin.pdf b/anevaluationdatasetandstrategyforbuildingrobustmultiturnresponseselectionmodel/e1443b59-3ba9-4916-85dd-6088e2140ae9_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..142e36c2021fd1d46effff36945345b1767cf6a9
--- /dev/null
+++ b/anevaluationdatasetandstrategyforbuildingrobustmultiturnresponseselectionmodel/e1443b59-3ba9-4916-85dd-6088e2140ae9_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e0beaa5f602b8c3661a8d351b3c2590118e630fc786f4566c3cc89473935897d
+size 530405
diff --git a/anevaluationdatasetandstrategyforbuildingrobustmultiturnresponseselectionmodel/full.md b/anevaluationdatasetandstrategyforbuildingrobustmultiturnresponseselectionmodel/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..6f9cbf5e7c43349c4b62b51fa0c1c136ce3244e9
--- /dev/null
+++ b/anevaluationdatasetandstrategyforbuildingrobustmultiturnresponseselectionmodel/full.md
@@ -0,0 +1,199 @@
+# An Evaluation Dataset and Strategy for Building Robust Multi-turn Response Selection Model
+
+Kijong Han $^{1*}$ , Seojin Lee $^{2*†}$ , Woin Lee $^{1}$ , Joosung Lee $^{1}$ , and Dong-hun Lee $^{1‡}$
+
+$^{1}$ Kakao Enterprise, South Korea
+
+$^{2}$ SK Telecom, South Korea
+
+{mat.h,dan.kes,rung.joo,hubert.std}@kakaoenterprise.com
+
+seojin.lee@sktair.com
+
+# Abstract
+
+Multi-turn response selection models have recently shown performance comparable to humans on several benchmark datasets. However, in real environments, these models often have weaknesses, such as making incorrect predictions based heavily on superficial patterns without a comprehensive understanding of the context. For example, these models often give a high score to a wrong response candidate that contains several keywords related to the context but uses an inconsistent tense. In this study, we analyze the weaknesses of open-domain Korean multi-turn response selection models and publish an adversarial dataset to evaluate these weaknesses. We also suggest a strategy for building a robust model in this adversarial environment.
+
+# 1 Introduction
+
+Multi-turn response selection is the task of selecting the best response among given candidates for a given dialogue context. Response selection models have recently shown performance comparable to humans (Cui et al., 2020) on several in-domain/held-out benchmarks (Lowe et al., 2015; Zhang et al., 2018a; Dinan et al., 2020). However, in actual service environments, these models are often found to have weaknesses. For example, a model gives the highest score to a wrong response that has high word overlap with the context (Yuan et al., 2019) or is semantically similar to it (Whang et al., 2021).
+
+Held-out evaluation often overestimates the real-world performance of the model (Ribeiro et al., 2020), so adversarial datasets for evaluating weaknesses have been constructed for each task, such as NLI (Naik et al., 2018; McCoy et al., 2019), and MRC (Jia and Liang, 2017; Rajpurkar et al., 2018).
+
+A framework for comprehensively evaluating the general linguistic abilities of the model was also studied (Ribeiro et al., 2020).
+
+Several works have evaluated adversarial cases for the response selection task (Yuan et al., 2019; Whang et al., 2021). However, they only generate adversarial responses automatically by copying words from the context. In this study, we analyze the weaknesses of various aspects of open-domain Korean multi-turn response selection models and construct an adversarial dataset manually. A total of 2,220 test cases are constructed, and each test case is classified by type.
+
+Neural networks do not generalize well to such an adversarial setting because they tend to rely overly on superficial patterns and spurious correlations in the dataset, which makes models biased (Clark et al., 2019; Nam et al., 2020). Thus, various debiasing methods have been studied to alleviate this phenomenon (He et al., 2019; Utama et al., 2020). In this study, we show that a debiasing method is also effective in adversarial evaluation for the multi-turn response selection task.
+
+In retrieval-based chatbot systems where response selection is used, response candidates are composed as follows: all utterances in the database are used as response candidates (Humeau et al., 2020), or a subset filtered through search engines is used (Zhou et al., 2020). To filter the candidates, machine-learning-based embeddings or word-level similarity algorithms (e.g., BM25), which also have weaknesses in an adversarial setting, are used (Zhou et al., 2020). Therefore, almost every time the actual system selects a response, adversarial cases are included among the candidates. Thus, robustness to adversarial cases is especially important for the response selection task. We also construct a real-environment test set and show experimentally that a model robust to adversarial cases also performs well in the real environment.
+
+| Type | Context | Adversarial Response | # cases |
+| --- | --- | --- | --- |
+| Repetition | [A] I'm hungry / [B] What do you want to eat? | I'm hungry | 400 |
+| Negation | [A] Wrap up before you go outside / [B] Why? / [A] It's freezing cold. | Yes, indeed. It's not that cold today. | 454 |
+| Tense | [A] I can't wait to watch "Joker" / [B] I watched the movie. It was really impressive. / [A] Wow! I should watch it. | You really enjoyed it. | 158 |
+| Subject-Object | [A] I'm in love with BTS / [B] Why do you like them so much? / [A] φ(their) Songs are great | Thanks (for complimenting me) | 374 |
+| Lexical Contradiction | [A] It's freezing cold today. | Yes, indeed. It's way too hot out today. | 254 |
+| Interrogative Word | [A] I saw Jennie today / [B] What does she look like? / [A] φ(she) Looks so pretty | Who's so pretty? | 236 |
+| Topic | [A] Isn't the weather nice today? / [B] Oh, is it? / [A] Yeah, it's sunny and warm. | Bring your umbrella with you. | 344 |
+
+Table 1: Examples of adversarial data for each type. $\phi$ denotes a zero anaphora in Korean.
+
+# 2 Adversarial Test Dataset
+
+We analyze the incorrect responses in the internal service log and categorize the types of frequent errors. There are a total of seven types, and details of each type are as follows.
+
+Repetition An incorrect response repeating one of the utterances in the context.
+
+Negation A negation is either added to or omitted from a correct response, generating an erroneous response with a reversed affirmative or negative meaning. The test set for negation errors intentionally creates such responses by adding or removing a Korean short-form negation adverb such as 'an', or a long-form negation ending such as '-zi'.
+
+Tense A morpheme or expression marking tense is added to or removed from a correct response, generating a response whose tense is inconsistent with the given context. The test set for tense errors adds or replaces Korean morphemes or expressions marking the future or past tense to test whether the model fully understands the context disconnection triggered by such a tense change.
+
+Subject-Object A test set for subject-object errors generates a response that is inconsistent with the context due to confusion between the subject and object of a certain action. In particular, since zero anaphora occurs frequently in Korean sentences, incorrect responses often arise from a failure to identify the hidden subject of the previous context. This test set uses a subject or an object different from the ones used in a correct response to examine whether the model fully understands the context disconnection caused by such errors.
+
+Lexical Contradiction A key lexicon of a correct response is replaced with one that holds a conflicting or opposite meaning, generating an incorrect response. The test set for lexical contradiction errors replaces a key lexicon in a sentence with an antonym (e.g., hot vs. cold) or a word that cannot be used in its place (e.g., rain vs. snow) to check whether the model understands the precise meaning of the lexicon.
+
+Interrogative Word A test set for interrogative word errors generates a response in a form of 5W1H questions to ask for information that has already been explicitly or implicitly shared in previous dialogues.
+
+Topic A key sentence or vocabulary is replaced with another sentence or term that does not fit in the previous context even though they frequently appear together in the given topic. While this error is similar to the lexical contradiction error to a certain extent, the replacement words used in this test do not hold conflicting or opposite meanings but instead have less semantic relevance to the context of the previous dialogue (e.g. sunny vs umbrella). The test set assesses whether a model fully understands the fact that while the replacement vocabulary is the one that is frequently used in the same given topic, the response does not correctly reflect the context of the previous dialogue.
+
+Five annotators generate a total of 200 dialogue sessions. For each session $i$ , annotators create two correct responses and an arbitrary number $(M_i)$ of incorrect responses based on the instructions described above. All sessions and responses are reviewed and filtered by experts. We set up each test case to consist of a context, one correct response, and one incorrect response. Therefore, $2 * M_i$ test cases are extracted for each session, and a total of 2,220 test cases are constructed. The evaluation checks whether the model gives the correct response a higher score than the incorrect one for a given context. Statistics and examples are described in Table 1. We release this dataset at https://github.com/kakaoenterprise/KorAdvMRSTestData.
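The pairing scheme above can be sketched directly: each of the two correct responses is paired with every incorrect response, yielding $2 * M_i$ test cases per session. The example responses below are placeholders.

```python
# Build (context, correct, incorrect) test cases: 2 correct responses paired
# with each of the M_i incorrect ones gives 2 * M_i cases per session.
def make_test_cases(context, correct_responses, incorrect_responses):
    return [(context, pos, neg)
            for pos in correct_responses
            for neg in incorrect_responses]

cases = make_test_cases("ctx", ["ok1", "ok2"], ["bad1", "bad2", "bad3"])
```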
+
+# 3 Method
+
+Suppose that the dataset is $D = \{(c_i,r_i,y_i)\}_{i = 1}^N$ , where $c_{i}$ denotes a dialogue context, $r_i$ is a response utterance, and $y_{i}\in \{0,1\}$ is a label. The context $c_{i} = \{u_{i,1},u_{i,2},\dots,u_{i,k_{i}}\}$ consists of a sequence of $k_{i}$ utterances. The label $y_{i} = 1$ means that $r_i$ is a sensible response for context $c_{i}$ .
+
+# 3.1 Baseline: Fine-tuning BERT
+
+We adopt fine-tuning BERT (Devlin et al., 2019) as a baseline. In this work, similar to previous works that fine-tuned BERT for the multi-turn response selection task (Gu et al., 2020; Whang et al., 2020; Han et al., 2021), the input token sequence $x_{i}$ of BERT is composed as follows.
+
+$$
+x_i = [\mathrm{CLS}]\, u_{i,1}\, [\mathrm{EOU}] \dots u_{i,k_i}\, [\mathrm{EOU}]\, [\mathrm{SEP}]\, r_i\, [\mathrm{EOU}]\, [\mathrm{SEP}] \tag{1}
+$$
+
+The [EOU] is a special token indicating the end of an utterance. The final output hidden vector of the [CLS] token in BERT is fed into a fully connected layer with softmax activation. Then, BERT is fine-tuned to minimize the cross-entropy loss between the target label and the output of this layer.
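The input construction of Equation 1 can be sketched as follows, using string tokens for readability; a real implementation would map these to ids with the BERT tokenizer and register [EOU] as an additional special token.

```python
# Build the Eq. (1) token sequence:
# [CLS] u_1 [EOU] ... u_k [EOU] [SEP] r [EOU] [SEP]
def build_input(utterances, response):
    tokens = ["[CLS]"]
    for utterance in utterances:
        tokens += utterance.split() + ["[EOU]"]
    tokens += ["[SEP]"] + response.split() + ["[EOU]", "[SEP]"]
    return tokens

x = build_input(["hi there", "how are you"], "fine thanks")
```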
+
+# 3.2 Debiasing Strategy
+
+In general, a correct dialogue response utilizes keywords or topics from the context. Neural networks tend to rely overly on such superficial patterns (e.g., keywords, topics), which makes models biased (Clark et al., 2019; Nam et al., 2020). We see this bias as the main cause of the response selection model's vulnerability to an adversarial environment. Thus, we experimented with applying various debiasing techniques to the response selection task, and DRiFt (He et al., 2019) was the most effective. The main idea of the debiasing strategy we use is to train a debiased model to fit the residual of a biased model, focusing on examples that cannot be predicted well from biased features alone (He et al., 2019). Details of the method using DRiFt are as follows.
+
+
+Figure 1: Overall architecture for training debiased model utilizing biased model. The grey line represents that gradient is backpropagated only to the debiased model.
+
+First, we train an auxiliary biased model using only biased features. The biased model is a single fully connected layer with softmax activation and trained with cross-entropy loss. The biased feature vector used as an input $\phi_{i}$ is as follows.
+
+$$
+\phi_i = \left[\, JS_{\mathrm{morph}}(c_i, r_i),\ JS_{\mathrm{morph}}(u_{i,k_i}, r_i),\ JS_{\mathrm{wordpiece}}(c_i, r_i),\ JS_{\mathrm{wordpiece}}(u_{i,k_i}, r_i)\,\right] \tag{2}
+$$
+
+We use the Jaccard similarity $(JS)$ between the whole context $(c_{i})$ and the response $(r_{i})$ as input features. We also use the $JS$ between the last utterance $(u_{i,k_i})$ and $r_i$ , because the last utterance is the most important one (Zhang et al., 2018b; Ma et al., 2019). We use two tokenizers: WordPiece (Wu et al., 2016) and a morpheme analyzer. We assume that these word-overlap features can capture keyword and topic bias.
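A minimal sketch of the biased feature vector in Equation 2 follows. Here `tok_morph` and `tok_wordpiece` are placeholders for the morpheme analyzer and WordPiece tokenizer; whitespace splitting is used below purely for illustration.

```python
# Jaccard similarity over token sets.
def jaccard(a, b):
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

# Eq. (2): four word-overlap features between (context, response) and
# (last utterance, response), under two tokenizations.
def biased_features(context_utts, response, tok_morph, tok_wordpiece):
    resp_m, resp_w = tok_morph(response), tok_wordpiece(response)
    ctx_m = [t for u in context_utts for t in tok_morph(u)]
    ctx_w = [t for u in context_utts for t in tok_wordpiece(u)]
    return [jaccard(ctx_m, resp_m),
            jaccard(tok_morph(context_utts[-1]), resp_m),
            jaccard(ctx_w, resp_w),
            jaccard(tok_wordpiece(context_utts[-1]), resp_w)]

feats = biased_features(["a b", "b c"], "b d", str.split, str.split)
```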
+
+Second, we train the debiased model utilizing the biased model, as shown in Figure 1. The overall structure of the debiased model is the same as the baseline; only the learning scheme is different. Let $\mathbf{b}$ be the output hidden vector of the biased model, $\mathbf{d}$ the output hidden vector of the debiased model, $p_b = \text{softmax}(\mathbf{b})$ , and $p_d = \text{softmax}(\mathbf{d})$ . The DRiFt method minimizes the cross-entropy loss between $p_a = \text{softmax}(\mathbf{b} + \mathbf{d})$ and the target labels. Thus, the loss function is defined as follows.
+
+$$
+\mathrm{Loss} = -\log p_a(y_i) = -\log p_b(y_i) - \log p_d(y_i) + \log \sum_{l = 0}^{L - 1} p_b(l)\, p_d(l) \tag{3}
+$$
+
+$L$ is the number of classification classes(2 for this task). The gradient is backpropagated only to the
+
+| | Train | Val | In-domain Test | Adv Test | Real Test |
+| --- | --- | --- | --- | --- | --- |
+| # pairs | 500K | 10K | 10K | 4,440 | 5,490 |
+| # cands | 2 | 10 | 10 | 2 | 10 |
+| pos:neg | 1:1 | 1:9 | 1:9 | 1:1 | 4.4:5.6 |
+| # turns | 4.6 | 4.5 | 4.6 | 2.9 | 3.3 |
+
+Table 2: Statistics for each dataset.
+
+| Method | In-domain | Adversarial | Real Env. |
+| --- | --- | --- | --- |
+| baseline | 86.4±0.5 | 39.4±1.7 | 36.2±0.9 |
+| +deb | 85.4±0.7 | 43.5±2.3 | 36.5±0.6 |
+| +UMS | 87.5±0.7 | 42.9±2.5 | 38.8±0.6 |
+| +UMS+deb | 87.0±0.7 | 47.2±2.1 | 40.4±0.7 |
+
+Table 3: Overall performance of each method.
+
+| Type | Repetition | Negation | Tense | Subject-Object | Lexical Contradiction | Interrogative Word | Topic |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| baseline | 12.9±2.5 | 36.1±2.3 | 41.8±2.3 | 55.0±1.9 | 41.1±1.8 | 46.1±2.2 | 50.7±1.8 |
+| +deb | 26.2±5.1 | 40.6±2.8 | 43.0±2.6 | 56.2±1.9 | 43.9±1.6 | 45.5±3.2 | 51.8±1.9 |
+| +UMS | 26.2±6.2 | 34.5±1.8 | 45.3±2.0 | 61.2±2.9 | 41.6±3.1 | 46.4±2.0 | 51.1±1.6 |
+| +UMS+deb | 40.0±5.4 | 38.5±1.1 | 46.9±2.3 | 63.9±3.0 | 43.3±1.7 | 45.9±2.6 | 52.8±1.7 |
+
+Table 4: Performance for each adversarial type.
+
+debiased model. The last term encourages the output of the debiased model, $p_d$, to have minimal projection on the output of the biased model, $p_b$ (He et al., 2019). The derivation of Equation 3 is given in Appendix A. At test time, only the debiased model is used.
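As a minimal numerical sketch of this loss (pure Python for clarity; in an actual implementation $\mathbf{b}$ would be detached from the computation graph so that gradients reach only the debiased model), the ensemble form and the three-term decomposition of Equation 3 agree:

```python
import math

def log_softmax(logits):
    # Numerically stable log-softmax over a list of logits.
    m = max(logits)
    lse = m + math.log(sum(math.exp(x - m) for x in logits))
    return [x - lse for x in logits]

def drift_loss(b, d, y):
    # Ensemble form: -log softmax(b + d)[y].
    return -log_softmax([bi + di for bi, di in zip(b, d)])[y]

def drift_loss_decomposed(b, d, y):
    # Equation 3: -log p_b(y) - log p_d(y) + log sum_l p_b(l) p_d(l).
    p_b = [math.exp(v) for v in log_softmax(b)]
    p_d = [math.exp(v) for v in log_softmax(d)]
    overlap = sum(pb * pd for pb, pd in zip(p_b, p_d))
    return -math.log(p_b[y]) - math.log(p_d[y]) + math.log(overlap)
```

The two forms are equal for any logits, which is the content of the derivation in Appendix A.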
+
+# 3.3 Combination with Multi-task Learning
+
+Recently, self-supervised learning approaches have shown state-of-the-art performance on the response selection task (Whang et al., 2021; Xu et al., 2021). These works devise auxiliary tasks to better understand the dialogue and train the model in a multi-task manner. The final loss function in these methods is the weighted sum of the losses of the auxiliary tasks and the main task (i.e., determining whether a given response is sensible for the context). Thus, the debiasing strategy can easily be combined with these methods by replacing the loss function of the main task with Equation 3. We also experiment with the self-supervised learning approach UMS (Whang et al., 2021) and show that the combination is effective not only in-domain but also in adversarial and real environments.
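A sketch of this combination (the auxiliary losses and their weights are placeholders; the actual UMS objectives are defined in Whang et al., 2021):

```python
def combined_loss(debiased_main_loss, aux_losses, aux_weights):
    # Final loss = debiased main-task loss (Equation 3 replaces the original
    # main-task cross-entropy) plus the weighted sum of the self-supervised
    # auxiliary task losses.
    return debiased_main_loss + sum(w * l for w, l in zip(aux_weights, aux_losses))
```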
+
+# 4 Experiments and Results
+
+# 4.1 Experiment Setup
+
+We construct an experimental dataset using a corpus that we produced in-house and a public Korean dialogue corpus. We split these corpora into three parts, for training, validation, and test. Statistics of each dataset are described in Table 2. #pairs denotes the number of context-response pairs, #cands the number of candidates per context, pos:neg the ratio of positive to negative responses among the candidates, and #turns the average number of turns per context. Details of the construction are as follows.
+
+Train, valid, and in-domain test The last utterance of each dialogue session is used as the positive response and the rest as the context. Negative responses are randomly chosen from other dialogues.
+
+Adversarial test This is described in Section 2.
+
+Real environment test In a real environment, response candidates are not sampled randomly but are retrieved by a search system (Zhou et al., 2020), or all utterances are used as candidates without sampling (Humeau et al., 2020). There are many adversarial negatives in this situation, as described in Section 1. We build a dataset by simulating this situation in a way similar to previous works (Wu et al., 2017; Zhang et al., 2018b).
+
+We take dialogue sessions from the test corpus and an internal service log as contexts. We train a bi-encoder-based context and response embedding model (Humeau et al., 2020) and index the embeddings of all utterances in the corpus. Then, we retrieve the top 10 utterances as response candidates based on the similarity score with the context embedding. For each response, three annotators labeled whether it is sensible for the context. The responses that a majority of the annotators judged sensible were selected as positive.
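A sketch of this retrieval step (brute-force cosine search over a toy in-memory index; a production system would presumably use an approximate nearest-neighbor index):

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def retrieve_candidates(context_emb, utterance_index, k=10):
    # utterance_index: list of (utterance, embedding) pairs for the corpus.
    # Returns the top-k utterances by similarity to the context embedding.
    ranked = sorted(utterance_index,
                    key=lambda pair: cosine(context_emb, pair[1]),
                    reverse=True)
    return [utt for utt, _ in ranked[:k]]
```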
+
+# 4.2 Results
+
+We measure the performance ten times for each model and report the mean and standard deviation in Table 3. See Appendix B for training details. The baseline is the fine-tuned BERT described in Section 3.1. "deb" denotes the debiasing strategy described in Section 3.2. UMS denotes the self-supervised multi-task learning method described in Section 3.3. Precision@1 is used as the evaluation metric for all test sets.
+
+The debiasing strategy significantly improves adversarial test performance for both the baseline and the UMS model, with absolute improvements of $4.1\%$ and $4.3\%$, respectively. A slight decline is observed on the in-domain test ($-1.0\%$ and $-0.5\%$), consistent with the DRiFt debiasing method (He et al., 2019), which also shows a small in-domain degradation. However, the strategy improves performance on the comprehensive real environment test ($+0.3\%$ and $+1.6\%$). This supports our argument that robustness to adversarial cases is important in the response selection task. Additionally, +UMS+deb outperforms +deb on all test sets, which shows that the debiasing strategy and UMS have a synergistic effect.
+
+The performance for each adversarial type is reported in Table 4. Since we used word-level Jaccard similarity as the biased feature, the debiasing strategy shows a large performance improvement on the Repetition type, which simply reuses a word sequence from the context as a negative response. There is no improvement on the Interrogative Word type. We assume this type is difficult because it requires understanding all of the 5W1H from the context.
+
+# 5 Conclusion
+
+We analyze the weaknesses of open-domain Korean multi-turn response selection models and publish an adversarial dataset to evaluate these weaknesses. Based on the experimental results, we suggest a strategy for building a model that is robust in adversarial and real environments. We expect that this work and the dataset will help improve response selection models.
+
+# 6 Ethical Considerations
+
+The adversarial dataset we publish is generated manually. All sessions and responses in the dataset are reviewed and filtered by experts, and we also considered ethical issues in this process. Thus, there are no hate speech or privacy issues in our dataset.
+
+# References
+
+Christopher Clark, Mark Yatskar, and Luke Zettlemoyer. 2019. Don't take the easy way out: Ensemble based methods for avoiding known dataset biases. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4060-4073.
+Leyang Cui, Yu Wu, Shujie Liu, Yue Zhang, and Ming Zhou. 2020. Mutual: A dataset for multi-turn dialogue reasoning. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1406-1416.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.
+Emily Dinan, Varvara Logacheva, Valentin Malykh, Alexander Miller, Kurt Shuster, Jack Urbanek, Douwe Kiela, Arthur Szlam, Iulian Serban, Ryan Lowe, et al. 2020. The second conversational intelligence challenge (convai2). In *The NeurIPS'18 Competition*, pages 187-208. Springer.
+Jia-Chen Gu, Tianda Li, Quan Liu, Zhen-Hua Ling, Zhiming Su, Si Wei, and Xiaodan Zhu. 2020. Speaker-aware bert for multi-turn response selection in retrieval-based chatbots. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, pages 2041-2044.
+Janghoon Han, Taesuk Hong, Byoungjae Kim, Youngjoong Ko, and Jungyun Seo. 2021. Fine-grained post-training for improving retrieval-based dialogue systems. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1549-1558.
+He He, Sheng Zha, and Haohan Wang. 2019. Unlearn dataset bias in natural language inference by fitting the residual. EMNLP-IJCNLP 2019, page 132.
+Samuel Humeau, Kurt Shuster, Marie-Anne Lachaux, and Jason Weston. 2020. Poly-encoders: Architectures and pre-training strategies for fast and accurate multi-sentence scoring. In International Conference on Learning Representations.
+Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2021-2031.
+Ryan Lowe, Nissan Pow, Iulian Vlad Serban, and Joelle Pineau. 2015. The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. In Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 285-294.
+
+Wentao Ma, Yiming Cui, Nan Shao, Su He, Weinan Zhang, Ting Liu, Shijin Wang, and Guoping Hu. 2019. Triplenet: Triple attention network for multi-turn response selection in retrieval-based chatbots. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 737-746.
+Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3428-3448.
+Aakanksha Naik, Abhilasha Ravichander, Norman Sadeh, Carolyn Rose, and Graham Neubig. 2018. Stress test evaluation for natural language inference. arXiv preprint arXiv:1806.00692.
+Junhyun Nam, Hyuntak Cha, Sungsoo Ahn, Jaeho Lee, and Jinwoo Shin. 2020. Learning from failure: Training debiased classifier from biased classifier. NeurIPS.
+Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable questions for squad. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784-789.
+Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, and Sameer Singh. 2020. Beyond accuracy: Behavioral testing of nlp models with checklist. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4902-4912.
+Prasetya Ajie Utama, Nafise Sadat Moosavi, and Iryna Gurevych. 2020. Mind the trade-off: Debiasing nlu models without degrading the in-distribution performance. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8717-8729.
+Taesun Whang, Dongyub Lee, Chanhee Lee, Kisu Yang, Dongsuk Oh, and Heuseok Lim. 2020. An effective domain adaptive post-training method for bert in response selection. Proc. Interspeech 2020, pages 1585-1589.
+Taesun Whang, Dongyub Lee, Dongsuk Oh, Chanhee Lee, Kijong Han, Dong-hun Lee, and Saebyeok Lee. 2021. Do response selection models really know what's next? utterance manipulation strategies for multi-turn response selection. AAAI.
+Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.
+Yu Wu, Wei Wu, Chen Xing, Ming Zhou, and Zhoujun Li. 2017. Sequential matching network: A new architecture for multi-turn response selection in retrieval-based chatbots. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 496-505.
+Ruijian Xu, Chongyang Tao, Daxin Jiang, Xueliang Zhao, Dongyan Zhao, and Rui Yan. 2021. Learning an effective context-response matching model with self-supervised tasks for retrieval-based dialogues. AAAI.
+Chunyuan Yuan, Wei Zhou, Mingming Li, Shangwen Lv, Fuqing Zhu, Jizhong Han, and Songlin Hu. 2019. Multi-hop selector network for multi-turn response selection in retrieval-based chatbots. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 111-120.
+Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018a. Personalizing dialogue agents: I have a dog, do you have pets too? arXiv preprint arXiv:1801.07243.
+Zhuosheng Zhang, Jiangtong Li, Pengfei Zhu, Hai Zhao, and Gongshen Liu. 2018b. Modeling multi-turn conversation with deep utterance aggregation. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3740-3752.
+Li Zhou, Jianfeng Gao, Di Li, and Heung-Yeung Shum. 2020. The design and implementation of xiaoice, an empathetic social chatbot. Computational Linguistics, 46(1):53-93.
+
+# A Derivation of Loss Function
+
+Let $b$ be the output hidden vector of the biased model, $d$ the output hidden vector of the debiased model, $y_{i} \in \{0,1\}$ the label, $p_b = \text{softmax}(b)$, $p_d = \text{softmax}(d)$, and $p_a = \text{softmax}(b + d)$.
+
+$$
+\begin{aligned}
+Loss &= -\log p_{a}(y) \\
+&= -\log e^{b_{y} + d_{y}} + \log \sum_{l} e^{b_{l} + d_{l}} \\
+&= -\log e^{b_{y}} - \log e^{d_{y}} + \log \sum_{l} e^{b_{l}} e^{d_{l}} \\
+&\quad + \log \sum_{l} e^{b_{l}} - \log \sum_{l} e^{b_{l}} + \log \sum_{l} e^{d_{l}} - \log \sum_{l} e^{d_{l}} \\
+&= -\Big(\log e^{b_{y}} - \log \sum_{l} e^{b_{l}}\Big) - \Big(\log e^{d_{y}} - \log \sum_{l} e^{d_{l}}\Big) \\
+&\quad + \Big(\log \sum_{l} e^{b_{l}} e^{d_{l}} - \log \sum_{l} e^{b_{l}} \sum_{l} e^{d_{l}}\Big) \\
+&= -\log \frac{e^{b_{y}}}{\sum_{l} e^{b_{l}}} - \log \frac{e^{d_{y}}}{\sum_{l} e^{d_{l}}} + \log \sum_{l} \frac{e^{b_{l}} e^{d_{l}}}{\sum_{m} e^{b_{m}} \sum_{m} e^{d_{m}}} \\
+&= -\log p_{b}(y) - \log p_{d}(y) + \log \sum_{l} p_{b}(l)\, p_{d}(l)
+\end{aligned}
+$$
+
+# B Training Details
+
+The biased model, which consists of a single fully connected layer, is trained with the AdamW optimizer at a learning rate of 5e-4 for 3 epochs. The BERT-based models, including the baseline, UMS, and debiased models, are trained with the AdamW optimizer at a learning rate of 2.5e-5 for 3 epochs on 4 Nvidia V100 GPUs. The batch size is 128 for every model. We train and evaluate each model 10 times and report the mean and standard deviation. For each model, the checkpoint that performs best in the real environment is selected for performance measurement.
\ No newline at end of file
diff --git a/anevaluationdatasetandstrategyforbuildingrobustmultiturnresponseselectionmodel/images.zip b/anevaluationdatasetandstrategyforbuildingrobustmultiturnresponseselectionmodel/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..49f4f42dcbc531c0af198cb02da54225adbba7c7
--- /dev/null
+++ b/anevaluationdatasetandstrategyforbuildingrobustmultiturnresponseselectionmodel/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c986ac7dd06caa7ad3a5bc5f6b823348baaf63e06741beafb787fbc05f72cadf
+size 269158
diff --git a/anevaluationdatasetandstrategyforbuildingrobustmultiturnresponseselectionmodel/layout.json b/anevaluationdatasetandstrategyforbuildingrobustmultiturnresponseselectionmodel/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..4f24658ee304ee44ffbd66f44bcc6077c1dec040
--- /dev/null
+++ b/anevaluationdatasetandstrategyforbuildingrobustmultiturnresponseselectionmodel/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:20cbc5039518dcc14a501579b1d6ac07be4198f97e8d1259e2aaf99ab6c70b35
+size 217800
diff --git a/anewrepresentationforspanbasedccgparsing/f6315a89-ceda-4223-a7bd-a5277fd04bea_content_list.json b/anewrepresentationforspanbasedccgparsing/f6315a89-ceda-4223-a7bd-a5277fd04bea_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..b28641887b71dba5290f171bb0c1f3d566b7acd6
--- /dev/null
+++ b/anewrepresentationforspanbasedccgparsing/f6315a89-ceda-4223-a7bd-a5277fd04bea_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1caddbe4bf00cc21f626bf01c84af9503491b1de3647155a596c1c1861f95cc2
+size 40311
diff --git a/anewrepresentationforspanbasedccgparsing/f6315a89-ceda-4223-a7bd-a5277fd04bea_model.json b/anewrepresentationforspanbasedccgparsing/f6315a89-ceda-4223-a7bd-a5277fd04bea_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..6711f2165159ece478ba1b52768e17e83ce47ae5
--- /dev/null
+++ b/anewrepresentationforspanbasedccgparsing/f6315a89-ceda-4223-a7bd-a5277fd04bea_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0eec5ee7a81061e77e8c72880d189fd59be90f885d06d06b919f714454e510a0
+size 50970
diff --git a/anewrepresentationforspanbasedccgparsing/f6315a89-ceda-4223-a7bd-a5277fd04bea_origin.pdf b/anewrepresentationforspanbasedccgparsing/f6315a89-ceda-4223-a7bd-a5277fd04bea_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..9d85ecb4322e82b26423b25e0cdef58580f92a07
--- /dev/null
+++ b/anewrepresentationforspanbasedccgparsing/f6315a89-ceda-4223-a7bd-a5277fd04bea_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7d3754b0b437f49cb5393628e44d71494acb2a9bb3749df1375b9d3560b8abd0
+size 324183
diff --git a/anewrepresentationforspanbasedccgparsing/full.md b/anewrepresentationforspanbasedccgparsing/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..33d592d5e1a1ba5c0e17daed00b9c6b978fa2dbb
--- /dev/null
+++ b/anewrepresentationforspanbasedccgparsing/full.md
@@ -0,0 +1,229 @@
+# A New Representation for Span-based CCG Parsing
+
+Yoshihide Kato and Shigeki Matsubara
+Information & Communications, Nagoya University
+Furo-cho, Chikusa-ku, Nagoya, 464-8601 Japan
+yoshihide@icts.nagoya-u.ac.jp
+
+# Abstract
+
+This paper proposes a new representation for CCG derivations. CCG derivations are represented as trees whose nodes are labeled with categories strictly restricted by CCG rule schemata. This characteristic is not suitable for span-based parsing models because they predict node labels independently. In other words, span-based models may generate invalid CCG derivations that violate the rule schemata. Our proposed representation decomposes CCG derivations into several independent pieces and prevents the span-based parsing models from violating the schemata. Our experimental result shows that an off-the-shelf span-based parser with our representation is comparable with previous CCG parsers.
+
+# 1 Introduction
+
+Combinatory Categorial Grammar (CCG) (Steedman, 2000) is a mildly context-sensitive grammar formalism. Several neural CCG parsing methods have been proposed so far (Lewis and Steedman, 2014; Xu et al., 2015; Lewis et al., 2016; Vaswani et al., 2016; Lee et al., 2016; Xu, 2016; Yoshikawa et al., 2017; Stanojevic and Steedman, 2019, 2020; Bhargava and Penn, 2020; Tian et al., 2020; Prange et al., 2021; Liu et al., 2021). Currently, neural span-based models (Cross and Huang, 2016; Stern et al., 2017; Gaddy et al., 2018; Kitaev and Klein, 2018) have been successful in the field of constituency parsing. However, we cannot directly apply this technique to CCG parsing. Span-based models assume that each node label in a parse tree can be predicted independently, while, in CCG, each node label (category) is strictly restricted by the CCG rule schemata. The independence assumption of span-based models implies that the models are not guaranteed to generate valid CCG derivations.
+
+To solve this problem, we propose a method of representing CCG derivations in a way suitable for span-based parsing models.
+
+Figure 1: CCG rule schemata.
+
+Our proposed representation decomposes CCG derivations into several independent pieces and can prevent the span-based parsing models from violating the CCG rule schemata. Furthermore, as a by-product of our representation, the parsing models can assign out-of-vocabulary (OOV) categories, which have not appeared in the training data. This characteristic has been attracting attention in CCG parsing research (Bhargava and Penn, 2020; Prange et al., 2021; Liu et al., 2021). Our experimental result shows that an off-the-shelf span-based parser with our representation is comparable with previous CCG parsers and can generate correct OOV categories.
+
+# 2 CCG and Span-based Parsing
+
+This section gives an overview of Combinatory Categorial Grammar (CCG) (Steedman, 2000) and explains why we cannot directly apply the span-based approach to CCG parsing.
+
+# 2.1 Combinatory Categorial Grammar
+
+CCG represents syntactic information by basic categories (e.g., S, NP) and complex categories. Complex categories are in the form of $X / Y$ or $X \backslash Y$ , where $X$ and $Y$ are categories. Intuitively, each category $X / Y$ means that it receives a category $Y$ from its right and returns a category $X$ . In the case of $X \backslash Y$ , the direction is from its left. Formally, categories are combined using CCG rule schemata. Figure 1 shows CCG rule schemata. Here, $X, Y$ and $Z_{i} (1 \leq i \leq d)$ are categories, and $|_{i} \in \{/, \backslash\}$ . $|_{1}Z_{1} \cdots |_{d}Z_{d}$ is called an argument stack (Kuhlmann and Satta, 2014), and we use a Greek letter to represent an argument stack. For example, we use the following notation for the first
+
+rule schema:
+
+$$
+X / Y \quad Y \alpha \Rightarrow X \alpha . \tag {1}
+$$
+
+We define $|\alpha| = d$, and the arity of a category $Y = X\alpha$, where $X$ is a basic category, is defined as follows:
+
+$$
+\operatorname{arity}(Y) = |\alpha| \tag{2}
+$$
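A sketch of this arity computation, representing a category either as a basic string or as a nested triple (slash, result, argument) (a data-structure choice of ours, not fixed by the paper):

```python
# S/NP is ('/', 'S', 'NP'); (S\NP)/NP is ('/', ('\\', 'S', 'NP'), 'NP').

def arity(cat):
    # arity(Y) = |alpha| for Y = X alpha with X basic: the number of
    # argument layers peeled off before reaching a basic category.
    n = 0
    while not isinstance(cat, str):
        _slash, cat, _arg = cat  # descend into the result part
        n += 1
    return n
```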
+
+# 2.2 Span-based Parsing
+
+A span-based parsing model (Stern et al., 2017; Gaddy et al., 2018; Kitaev and Klein, 2018) has a single scoring function $s(i,j,l)$ that scores each label $l$ for each span $(i,j)$ . The score of a tree $T$ is defined as follows:
+
+$$
+s (T) = \sum_ {(i, j, l) \in T} s (i, j, l). \tag {3}
+$$
+
+The parsing problem is formulated as finding the tree $T^{*}$ with the highest score:
+
+$$
+T ^ {*} = \underset {T} {\arg \max } s (T) \tag {4}
+$$
+
+and can be solved using an efficient CKY-like parsing algorithm because of the following characteristic:
+
+- The model can determine each label $l$ for a span $(i,j)$ independently of the other spans.
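A toy version of this CKY-like search (the labels and scoring function below are arbitrary stand-ins, not an actual trained model), exploiting the fact that the best label for each span can be chosen locally:

```python
def best_parse(n, labels, score):
    # chart[(i, j)] = (best score, best tree) for the span (i, j).
    # Because s(i, j, l) is independent of other spans, each span's label
    # is chosen locally and combined with a CKY-style split search.
    chart = {}
    for length in range(1, n + 1):
        for i in range(n - length + 1):
            j = i + length
            lab, lab_s = max(((l, score(i, j, l)) for l in labels),
                             key=lambda x: x[1])
            if length == 1:
                chart[(i, j)] = (lab_s, (lab, i, j))
            else:
                split_s, kids = max(
                    ((chart[(i, k)][0] + chart[(k, j)][0],
                      (chart[(i, k)][1], chart[(k, j)][1]))
                     for k in range(i + 1, j)),
                    key=lambda x: x[0])
                chart[(i, j)] = (lab_s + split_s, (lab, kids))
    return chart[(0, n)]
```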
+
+Unfortunately, CCG parsing cannot take this approach because each label (category) is strictly restricted by the CCG rule schemata. If we forcibly apply the span-based approach to CCG parsing, the following problem occurs:
+
+- The parsing model may generate invalid CCG derivations that violate the CCG rule schemata.
+
+# 3 Span-based representation
+
+To overcome the problem described in the previous section, we propose a new representation for CCG derivations. We call this representation a span-based representation (SBR for short); it decomposes a CCG derivation into several independent pieces to prevent the span-based parsing model from violating the CCG rule schemata. Figure 2 shows an example of a CCG derivation and its SBR version.
+
+We realize span-based CCG parsing as follows:
+
+1. Convert CCG derivations into SBRs (Section 3.2).
+2. Train a span-based parsing model using SBRs and parse sentences to generate SBRs.
+3. Convert the output SBRs into CCG derivations (Section 3.3).
+
+The basic idea behind our method is that each node label in an SBR represents a constraint on the categories of nodes in a CCG derivation. Our method recovers a CCG derivation from its SBR version by satisfying these constraints. Because the constraints encoded in SBR labels are independent, a span-based model using SBRs does not risk violating the CCG rule schemata.
+
+# 3.1 SBR's label
+
+An SBR's label consists of the following information:
+
+- a CCG rule schema
+- a mapping from variables that occur only in the left-hand side of the rule to categories
+
+For each node $n$ (except leaf nodes) in a CCG derivation, its SBR version has a corresponding node. The SBR's label means that the category of $n$ is created by the specified rule schema, and the categories of $n$ 's children satisfy the constraint represented by the mapping. For example, the label $(>^0, Y := NP)$ means that the left and right children's categories are in the form of $X / NP$ and $NP$ and $X$ is inherited from its parent's category.
+
+# 3.1.1 Additional information
+
+SBR's label cannot encode root categories of CCG derivations and unary rules. To encode this information, we introduce three types of additional information:
+
+- RT: $X$ means that the category of the node $n$ is $X$ , if $n$ is the root node.
+- UL: $X$ means that the left child $l$ is unary branching and the category of $l$ 's child is $X$ .
+- UR: $X$ means that the right child $r$ is unary branching and the category of $r$ 's child is $X$ .
+
+We call these pieces of information tags.
+
+
+Figure 2: A CCG derivation (left) and its SBR version (right).
+
+
+
+| left child L | right child R | parent P | SBR's label S |
+| --- | --- | --- | --- |
+| X/Y | Yα | Xα | (>^{|α|}, Y := Y) |
+| Yα | X\Y | Xα | (<^{|α|}, Y := Y) |
+| X/(Xβ) | (Xβ)α | Xα | (>^{|α|}, β := β) |
+| (Xβ)α | X\(Xβ) | Xα | (<^{|α|}, β := β) |
+
+Table 1: Conversion from CCG categories to SBR's labels.
+
+# 3.2 Converting CCG derivations into SBRs
+
+Algorithm 1 obtains an SBR from a CCG derivation. Table 1 summarizes the conversion from categories to SBR labels. Algorithm 1 uses this table in the function SBRlabel, which returns an SBR label. Here, we introduce two additional patterns for adjuncts and type-raised categories (shown in the last two rows). Introducing these patterns reduces the number of SBR labels.
+
+# 3.3 Converting SBRs into CCG derivations
+
+Algorithm 2 recovers a CCG derivation from an SBR. The recovery process proceeds in a top-down fashion. First, the root category is recovered from the additional tag RT. That is, we call recover $(n, \mathrm{RT}(\mathrm{label}(n)))$ for an SBR $n$. Then, the categories of the children are recovered using Table 1 in reverse (the function recoverCat $(S, P)$ returns the categories). This process is repeated recursively until the leaf nodes are reached. When the SBR's label is in the form of $(>^d, \dots)$ or $(<^d, \dots)$ and arity $(P) < d$, $L$ and $R$ cannot be defined. In this case, recoverCat $(S, P)$ replaces $d$ with
+
+Algorithm 1 convert(n)
+1: $n$ is a CCG derivation node.
+2: label(n) is the label of $n$ .
+3: par(n) is the parent of $n$ .
+4: $\text{chi}_L(n)$, $\text{chi}_R(n)$ and $\text{chi}_u(n)$ are the left, right and unary child of $n$.
+5: node(l,C) makes a node with a label $l$ and children $C$ .
+6:
+7: if $n$ is a preterminal node then
+8: $n' \gets \text{chi}_u(n)$
+9: else if $n$ is binary branching then
+10: $l, r \gets \text{chi}_L(n), \text{chi}_R(n)$
+11: $L, R, P \gets \text{label}(l), \text{label}(r), \text{label}(n)$
+12: $S \gets \text{SBRlabel}(L, R, P)$
+13: if $n$ is a root node then
+14: add a tag RT:P to S
+15: end if
+16: if $l$ is unary branching then
+17: $l \gets \text{chi}_u(l)$
+18: add a tag UL:label(l) to S
+19: end if
+20: if $r$ is unary branching then
+21: $r \gets \text{chi}_u(r)$
+22: add a tag UR:label(r) to S
+23: end if
+24: $n' \gets \text{node}(S, \langle \text{convert}(l), \text{convert}(r) \rangle)$
+25: end if
+26: return $n'$
+
+$\operatorname{arity}(P)$ .
+
+# 4 Generating OOV categories
+
+In our proposed representation, lexical categories are not directly assigned to words. Lexical categories are decomposed into several node labels. This means that lexical categories are not defined by a finite set and that the span-based parsing model learned from SBRs may generate OOV lexical categories that do not appear in the training data.
+
+Algorithm 2 recover $(n,P)$
+1: $n$ is a node in an SBR.
+2: $n^{\prime}$ is a node in a CCG derivation.
+3: $P,L$ and $R$ are categories.
+4: UL(S) is a UL tag if exists.
+5: UR(S) is a UR tag if exists.
+6:
+7: if $n$ is a terminal (word) node then
+8: $n^{\prime} = \mathrm{node}(P,\langle n\rangle)$
+9: else
+10: $S\gets \mathrm{label}(n)$
+11: $L,R\gets \mathrm{recoverCat}(S,P)$
+12: if UL(S) $\neq$ null then
+13: $l\gets \mathrm{node}(L,\langle \mathrm{recover}(\mathrm{chi}_L(n),\mathrm{UL}(S))\rangle)$
+14: else
+15: $l\gets \mathrm{recover}(\mathrm{chi}_L(n),L)$
+16: end if
+17: if UR(S) $\neq$ null then
+18: $r\gets \mathrm{node}(R,\langle \mathrm{recover}(\mathrm{chi}_R(n),\mathrm{UR}(S))\rangle)$
+19: else
+20: $r\gets \mathrm{recover}(\mathrm{chi}_R(n),R)$
+21: end if
+22: $n^{\prime}\gets \mathrm{node}(P,\langle l,r\rangle)$
+23: end if
+24: return $n^{\prime}$
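The category-recovery step of recoverCat for the first schema in Table 1 ($X/Y\ \ Y\alpha \Rightarrow X\alpha$) can be sketched as follows, with categories encoded as nested (slash, result, argument) triples (our own encoding, not fixed by the paper):

```python
def peel(cat, d):
    # Split P = X alpha into X and the d outermost argument layers (alpha).
    layers = []
    for _ in range(d):
        slash, result, arg = cat
        layers.append((slash, arg))
        cat = result
    return cat, layers

def rebuild(core, layers):
    # Re-apply the argument layers around a category.
    for slash, arg in reversed(layers):
        core = (slash, core, arg)
    return core

def recover_cat_forward(parent, d, y):
    # Label (>^d, Y := y) with parent P = X alpha and |alpha| = d.
    # Children: L = X/Y and R = Y alpha (schema X/Y  Y alpha => X alpha).
    x, layers = peel(parent, d)
    return ("/", x, y), rebuild(y, layers)
```

For example, a parent S\NP with $d = 1$ and Y := NP recovers the children S/NP and NP\NP, which combine by the first schema back into S\NP.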
+
+# 5 Experiment
+
+We conducted an experiment using CCGBank (Hockenmaier and Steedman, 2007) to evaluate the performance of our method. We used the Berkeley Neural Parser (Kitaev and Klein, 2018) with BERT (Devlin et al., 2019) as the span-based parser. We converted the training (Sections 02-21) and development (Section 00) data into SBRs and trained the model on them. The number of SBR labels in the training data was 486. The hyperparameters for training were identical to those of Kitaev et al. (2019). We evaluated the parsing performance by labeled $\mathrm{F}_1$ on the test data (Section 23). We obtained labeled dependencies using the C&C parser's generate program (Clark and Curran, 2007). As a baseline, we trained a model directly on the CCG derivations.
+
+Table 2 shows parsing performances on the test data. Our proposed and the baseline methods have high precision (92.8% and 94.0%) but low recall (82.2% and 76.3%). One of the reasons for the low
+
+| Method | Pre. | Rec. | F1 |
+| --- | --- | --- | --- |
+| Lewis and Steedman (2014) | - | - | 86.1 |
+| Xu et al. (2015) | 87.7 | 86.4 | 87.0 |
+| Lewis et al. (2016) | 88.6 | 87.5 | 88.1 |
+| Vaswani et al. (2016) | - | - | 88.3 |
+| Lee et al. (2016) | - | - | 88.7 |
+| Xu (2016) | 89.8 | 85.8 | 87.8 |
+| Yoshikawa et al. (2017) | - | - | 88.8 |
+| Stanojevic and Steedman (2019) | - | - | 90.5 |
+| Bhargava and Penn (2020) | - | - | 90.9 |
+| Tian et al. (2020) | - | - | 90.7 |
+| Prange et al. (2021) | - | - | 90.8 |
+| Liu et al. (2021) | - | - | 90.9 |
+| Baseline | 94.0 | 76.3 | 84.2 |
+| Baseline + markedup | 93.9 | 76.8 | 84.5 |
+| Ours | 92.8 | 82.2 | 87.2 |
+| Ours + markedup | 91.7 | 87.6 | 89.6 |
+
+Table 2: Labeled ${\mathrm{F}}_{1}$ on the test data.
+
+| Method | Rec. (%) |
+| --- | --- |
+| Bhargava and Penn (2020) | 22 |
+| Prange et al. (2021) | 3 |
+| Ours | 18 |
+
+Table 3: Recall for OOV lexical categories on the test data.
+
+recall was that the C&C parser's generate program failed to obtain dependencies from the output CCG derivations. Our proposed and the baseline methods failed to obtain dependencies for 206 and 371 of the 2407 test sentences, respectively. The generate program cannot work when a CCG derivation is invalid or contains a lexical category that is not listed in its markedup file. To mitigate this problem, we added such lexical categories to the markedup file. Adding lexical categories increased the recall of our method significantly (to 87.6%). On the other hand, the recall of the baseline method was still low (76.8%) due to invalid CCG derivations. This result shows that span-based parsing directly on CCG derivations does not work well and that our proposed method improves the parsing performance. The final result of our method is comparable with previous CCG parsers.
+
+# 5.1 OOV categories
+
+Another interesting aspect of our method is the possibility of generating OOV categories. Table 3 shows the recall for OOV lexical categories. We obtained a result similar to previous research. Our method correctly assigned OOV categories to 4 words.
+
+These results indicate that our proposed approach can handle OOV categories.
+
+# 6 Conclusion
+
+This paper proposed a new representation for CCG derivations. Our proposed representation realizes a span-based CCG parser that follows the CCG binary rule schemata. Furthermore, the parser can generate OOV categories. One remaining problem is the treatment of unary rule schemata in CCG. Our method encodes unary rules using the additional information described in Section 3.1.1, but this approach may violate the unary rule schemata. In the future, we will extend the method to handle CCG unary rules validly.
+
+# Acknowledgements
+
+We thank anonymous reviewers for their helpful comments.
+
+# References
+
+Aditya Bhargava and Gerald Penn. 2020. Supertagging with CCG primitives. In Proceedings of the 5th Workshop on Representation Learning for NLP, pages 194-204, Online. Association for Computational Linguistics.
+Stephen Clark and James R. Curran. 2007. Wide-coverage efficient statistical parsing with CCG and log-linear models. Computational Linguistics, 33(4):493-552.
+James Cross and Liang Huang. 2016. Span-based constituency parsing with a structure-label system and provably optimal dynamic oracles. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1-11, Austin, Texas. Association for Computational Linguistics.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
+David Gaddy, Mitchell Stern, and Dan Klein. 2018. What's going on in neural constituency parsers? an analysis. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 999-1010, New Orleans, Louisiana. Association for Computational Linguistics.
+
+Julia Hockenmaier and Mark Steedman. 2007. CCGbank: A corpus of CCG derivations and dependency structures extracted from the Penn Treebank. Computational Linguistics, 33(3):355-396.
+Nikita Kitaev, Steven Cao, and Dan Klein. 2019. Multilingual constituency parsing with self-attention and pre-training. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3499-3505, Florence, Italy. Association for Computational Linguistics.
+Nikita Kitaev and Dan Klein. 2018. Constituency parsing with a self-attentive encoder. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2676-2686, Melbourne, Australia. Association for Computational Linguistics.
+Marco Kuhlmann and Giorgio Satta. 2014. A new parsing algorithm for Combinatory Categorial Grammar. Transactions of the Association for Computational Linguistics, 2:405-418.
+Kenton Lee, Mike Lewis, and Luke Zettlemoyer. 2016. Global neural CCG parsing with optimality guarantees. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2366-2376, Austin, Texas. Association for Computational Linguistics.
+Mike Lewis, Kenton Lee, and Luke Zettlemoyer. 2016. LSTM CCG parsing. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 221-231, San Diego, California. Association for Computational Linguistics.
+Mike Lewis and Mark Steedman. 2014. Improved CCG parsing with semi-supervised supertagging. Transactions of the Association for Computational Linguistics, 2:327-338.
+Yufang Liu, Tao Ji, Yuanbin Wu, and Man Lan. 2021. Generating CCG categories. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 13443-13451.
+Jakob Prange, Nathan Schneider, and Vivek Srikumar. 2021. Supertagging the long tail with tree-structured decoding of complex categories. Transactions of the Association for Computational Linguistics, 9:243-260.
+Miloš Stanojevic and Mark Steedman. 2019. CCG parsing algorithm with incremental tree rotation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 228-239, Minneapolis, Minnesota. Association for Computational Linguistics.
+
+Miloš Stanojevic and Mark Steedman. 2020. Max-margin incremental CCG parsing. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4111-4122, Online. Association for Computational Linguistics.
+Mark Steedman. 2000. The Syntactic Process. The MIT press.
+Mitchell Stern, Jacob Andreas, and Dan Klein. 2017. A minimal span-based neural constituency parser. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 818-827, Vancouver, Canada. Association for Computational Linguistics.
+Yuanhe Tian, Yan Song, and Fei Xia. 2020. Supertagging Combinatory Categorial Grammar with attentive graph convolutional networks. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6037-6044, Online. Association for Computational Linguistics.
+Ashish Vaswani, Yonatan Bisk, Kenji Sagae, and Ryan Musa. 2016. Supertagging with LSTMs. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 232-237, San Diego, California. Association for Computational Linguistics.
+Wenduan Xu. 2016. LSTM shift-reduce CCG parsing. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1754-1764, Austin, Texas. Association for Computational Linguistics.
+Wenduan Xu, Michael Auli, and Stephen Clark. 2015. CCG supertagging with a recurrent neural network. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 250-255, Beijing, China. Association for Computational Linguistics.
+Masashi Yoshikawa, Hiroshi Noji, and Yuji Matsumoto. 2017. A* CCG parsing with a supertag and dependency factored model. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 277-287, Vancouver, Canada. Association for Computational Linguistics.
\ No newline at end of file
diff --git a/anewrepresentationforspanbasedccgparsing/images.zip b/anewrepresentationforspanbasedccgparsing/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..4fd1fdbc38c3f7400edde1558d640fcfec417378
--- /dev/null
+++ b/anewrepresentationforspanbasedccgparsing/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b2c22c1a5c556c428e3448713bf2208c53c79c73bfa7f877ab3d5085db744f43
+size 142016
diff --git a/anewrepresentationforspanbasedccgparsing/layout.json b/anewrepresentationforspanbasedccgparsing/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..b1b34d549a2e194a325abfacf259180960c497c2
--- /dev/null
+++ b/anewrepresentationforspanbasedccgparsing/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d59dc17c3fbfd2871b50576a53fbb8dcc86a2b1dac54ad3b074e9fe146621090
+size 262794
diff --git a/aninformationtheoreticcharacterizationofmorphologicalfusion/6ff7f928-ce4f-4058-be3f-f2b17cf1094e_content_list.json b/aninformationtheoreticcharacterizationofmorphologicalfusion/6ff7f928-ce4f-4058-be3f-f2b17cf1094e_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..5ac350fb12d6a3a0036b88f14b4b4237cf16355d
--- /dev/null
+++ b/aninformationtheoreticcharacterizationofmorphologicalfusion/6ff7f928-ce4f-4058-be3f-f2b17cf1094e_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f8379a43c3580f72f48de700fa93bc3aafeb0c6c3f2308cb1497da394dbb5a89
+size 42883
diff --git a/aninformationtheoreticcharacterizationofmorphologicalfusion/6ff7f928-ce4f-4058-be3f-f2b17cf1094e_model.json b/aninformationtheoreticcharacterizationofmorphologicalfusion/6ff7f928-ce4f-4058-be3f-f2b17cf1094e_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..76a51ada0f2687516c835dd002b53ac75dc43f5d
--- /dev/null
+++ b/aninformationtheoreticcharacterizationofmorphologicalfusion/6ff7f928-ce4f-4058-be3f-f2b17cf1094e_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d777558d8c84fb28026116483abec0addf9ffb1dc07884e9a85b253c56e86415
+size 51767
diff --git a/aninformationtheoreticcharacterizationofmorphologicalfusion/6ff7f928-ce4f-4058-be3f-f2b17cf1094e_origin.pdf b/aninformationtheoreticcharacterizationofmorphologicalfusion/6ff7f928-ce4f-4058-be3f-f2b17cf1094e_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..3a5d18432f925439f908d64475d2035feb75533d
--- /dev/null
+++ b/aninformationtheoreticcharacterizationofmorphologicalfusion/6ff7f928-ce4f-4058-be3f-f2b17cf1094e_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8fb9e46533fdd2931f775880ba8e239c92d3a725ea0a5264700b1ce4eb33969d
+size 247733
diff --git a/aninformationtheoreticcharacterizationofmorphologicalfusion/full.md b/aninformationtheoreticcharacterizationofmorphologicalfusion/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..f5fadd2a1dd4e6741b68bd9c6d797735c736be4e
--- /dev/null
+++ b/aninformationtheoreticcharacterizationofmorphologicalfusion/full.md
@@ -0,0 +1,183 @@
+# An Information-Theoretic Characterization of Morphological Fusion
+
+Neil Rathi
+
+Palo Alto High School
+
+neilrathi@gmail.com
+
+Michael Hahn*
+
+Stanford University
+
+SFB 1102, Saarland University
+
+mhahn2@stanford.edu
+
+Richard Futrell*
+
+University of California, Irvine
+
+rfutrell@uci.edu
+
+# Abstract
+
+Linguistic typology generally divides synthetic languages into groups based on their morphological fusion (von Humboldt, 1825). However, this measure has long been thought to be best considered a matter of degree (e.g. Greenberg, 1960). We present an information-theoretic measure, called informational fusion, to quantify the degree of fusion of a given set of morphological features in a surface form, which naturally provides such a graded scale. Informational fusion is able to encapsulate not only concatenative, but also nonconcatenative morphological systems (e.g. Arabic), abstracting away from any notions of morpheme segmentation. We then show, on a sample of twenty-one languages, that our measure recapitulates the usual linguistic classifications for concatenative systems, and provides new measures for nonconcatenative ones. We also evaluate the long-standing hypotheses that more frequent forms are more fusional, and that paradigm size anticorrelates with degree of fusion. We do not find evidence for the idea that languages have characteristic levels of fusion; rather, the degree of fusion varies across part-of-speech within languages.
+
+# 1 Introduction
+
+Traditional morphological typology divides synthetic languages into two distinct groups, agglutinative and fusional (von Humboldt, 1825). Agglutinative languages have morphemes which can be separated into identifiable parts corresponding to single features. For example, the Hungarian form embereknek can be separated into a root and two suffixes, each of which expresses a single morphological feature: *ember-ek-nek* (person-PL-DAT). On the other hand, fusional languages express multiple features in a single morpheme, such as Latin *servīs* (servant-DAT.PL), where the suffix $-\bar{\iota}s$ indicates the dative and plural simultaneously and
+
+cannot be analyzed into parts that individually correspond to the dative or plural features (Brown, 2010; Plank, 1999).
+
+Linguistic typologists have long recognized that this distinction is more of a spectrum than a categorical distinction, with Greenberg (1960) defining an 'index of agglutination' metric to determine the degree to which a language is agglutinative across its morphological paradigms. Interestingly, the notion appears to be graded even within a language. For example, the Latin adjectival feminine genitive plural suffix is $-\bar{a}rum$, where the thematic vowel $\bar{a}$ corresponds weakly to the feminine.
+
+Here, we provide an information-theoretic characterization of the degree of fusion of any given form in a language, naturally providing a graded measure. Our core intuition is that a form which expresses a given set of features can be classified as fusional if it cannot be predicted given the forms for other sets of morphological features (i.e. the "rest of the paradigm"). For example, the Latin ending $-\bar{\iota}s$ in Table 1 is almost entirely unpredictable from the rest of the paradigm: it does not decompose into parts whose meaning can be determined based on other forms. Therefore, we would say that the degree of fusion of $serv\bar{\iota}s$ is high. On the other hand, the Hungarian $-eknek$ in Table 2 is fully predictable based on the deduction that $-ek$ corresponds to the plural and $-nek$ to the dative, so we would say that $embereknek$ has a low degree of fusion.
+
+Our measure of fusion abstracts away from issues of morpheme segmentation. 'Agglutination' and 'fusion' traditionally refer to the extent to which individual features correspond to individual concatenated morphemes: for example, the Hungarian example is considered agglutinative because the suffix -nek for the feature DATIVE is concatenated to the morpheme -ek for the feature PLURAL. In contrast, our measure of fusion indicates the extent to which a form may be explained as a result of
+
| | SG | PL |
| NOM | servus | servī |
| GEN | servī | servōrum |
| DAT | servō | servīs |
| ACC | servum | servōs |
| ABL | servō | servīs |
| VOC | serve | servī |
+
+individual morphological processes corresponding to features, including nonconcatenative processes such as infixation, vowel alternations, reduplication, etc. Effectively, we measure the extent to which a form cannot be predicted or explained in terms of any strict subset of its morphological features. Because our measure abstracts away from the form of the morphological processes involved, we name it informational fusion.
+
+Previous work has argued that the idea of 'fusion' conflates (at least) three distinct ideas: phonological fusion (the extent to which morphemes are phonologically merged or interleaved with the root), flexivity (the degree of allomorphy with the root), and exponence or cumulativity (the number of distinct features expressed by an unanalyzable morpheme) (Haspelmath, 2009; Bickel and Nichols, 2013). Informational fusion aligns most closely with the idea of exponence, measuring the extent to which multiple features are expressed by an unanalyzable morphological process.
+
+In the remainder of the paper, we formally state our fusion measure and describe its implementation and estimation from data (Section 2), and then evaluate our measure's ability to capture linguistic intuitions and use it to test linguistic hypotheses (Section 3). Section 4 concludes.
+
+# 2 Definition of Informational Fusion
+
+# 2.1 Preliminaries
+
+Adopting the framework of Wu et al. (2019), we consider a word to be a triple of a lexeme $\ell$ , a feature combination or slot $\sigma$ , and a surface form $w$ . The lexeme is a string that captures an abstract notion, which is then split into slots $\sigma$ containing information about the inflection. For example, a slot $\sigma$ may consist of $\langle \mathrm{GEN}, \mathrm{PL} \rangle$ for a genitive plural form. A paradigm is a mapping from lexemes and slots to surface forms. For example, Table 1
+
+Table 1: Forms of the second declension Latin noun serv "servant". Colors represent syncretic forms.
+
| | SG | PL |
| NOM | ember | emberek |
| ACC | embert | embereket |
| DAT | embernek | embereknek |
| ALL | emberhez | emberekhez |
| ABL | embertől | emberektől |
| ... | ... | ... |
+
+Table 2: A subset of forms of the Hungarian noun ember "person". Morphemes are color-coded by meaning.
+
+provides the paradigm for the Latin lexeme serv. The form servōrum would be defined as a triple $(\ell = \text{serv}, \sigma = \langle \text{GEN}, \text{PL} \rangle, w = \text{servōrum})$ , such that $(\ell, \sigma)$ is mapped to $w$ according to the Latin nominal paradigm.
+
+# 2.2 Informational fusion
+
+We define the informational fusion $\phi$ of a given surface form $w$ with feature combination $\sigma$ and lexeme $\ell$ by taking the surprisal of the surface form given the "rest of the paradigm":
+
+$$
\phi(w) = -\log p(w \mid \mathcal{L}_{-\sigma}, \sigma, \ell), \tag{1}
+$$
+
+where $\mathcal{L}_{-\sigma}$ indicates the language $\mathcal{L}$ without any forms with feature combination $\sigma$ , and the predictive model $p(\cdot \mid \mathcal{L}_{-\sigma}, \sigma, \ell)$ is a conditional probability distribution on forms $w$ given features $\sigma$ and lexemes $\ell$ , which is based only on data from $\mathcal{L}_{-\sigma}$ .
+
+Informational fusion is analogous to Wu et al. (2019)'s definition of the irregularity of $w$ as $-\log p(w \mid \mathcal{L}_{-\ell}, \sigma)$. However, here we remove the feature combination $\sigma$ from the data used to train the predictive model, instead of the lemma $\ell$. For example, the informational fusion of servōrum would be its negative log probability given every other surface form $w$ in the language outside of those that share $\sigma = \langle \mathrm{GEN}, \mathrm{PL} \rangle$.
+
+If a surface form $w$ is entirely predictable from the paradigm, then it will have an informational fusion of 0, while if it is entirely unpredictable, its informational fusion will be high. A form like servōrum is highly unpredictable from the Latin paradigm, so it should have high fusion, while embereknek would have low fusion in Hungarian.
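+
As a toy numerical illustration of Eq. 1: fusion is the surprisal the predictive model assigns to the form given the rest of the paradigm. The helper name and the probabilities below are invented for this sketch (here in bits; the paper does not fix a logarithm base), not the paper's implementation:

```python
import math

def informational_fusion(p_form: float) -> float:
    """Eq. 1 as surprisal in bits: phi(w) = -log2 p(w | L_-sigma, sigma, ell).
    `p_form` is the probability the predictive model assigns to the form."""
    return -math.log2(p_form)

# A form fully predictable from the rest of the paradigm has zero fusion;
# an unpredictable one has high fusion (probabilities here are made up).
print(informational_fusion(1.0))      # embereknek-like: 0.0
print(informational_fusion(2 ** -10)) # servīs-like: 10.0
```

The graded scale falls out directly: any probability between 0 and 1 maps to a degree of fusion between infinity and 0.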
+
+To handle syncretism, as in Wu et al. (2019) we "collapse" identical forms into one slot, such that during training of the predictive model, the model does not have access to any syncretic forms. Therefore, with serv.ABL.SG in the table above, the
+
| | lat | hun | tur | que | fra | fro | por | rus | spa | ita | ara | deu | xcl |
| Overall | 9.84 | 8.19 | 2.22 | 0.67 | 9.32 | 6.94 | 6.26 | 21.41 | 7.50 | 10.26 | 8.27 | 4.35 | 11.62 |
| Nouns | 13.36 | 4.73 | 2.22 | 0.67 | | | | 14.88 | | | 6.15 | 4.16 | 9.46 |
| Verbs | 6.37 | 10.36 | | | 9.32 | 6.94 | 6.26 | 25.32 | 7.50 | 10.26 | 2.17 | 4.41 | 8.57 |
| Adjectives | 20.25 | | | | | | | | | | 23.97 | | 14.67 |
+
| | hye | klr | ell | ces | pol | fin | mkd | hbs |
| Overall | 2.63 | 1.33 | 10.79 | 14.79 | 18.63 | 7.02 | 4.73 | 3.67 |
| Nouns | 1.88 | | 29.61 | 9.93 | 15.80 | 6.78 | 6.26 | 12.66 |
| Verbs | 3.50 | 1.33 | 27.90 | | 2.72 | 2.47 | 4.58 | 9.02 |
| Adjectives | 1.88 | | 6.58 | 16.73 | 23.51 | 8.53 | | 2.31 |
+
+Table 3: Average informational fusion across forms in each language, indicated here and elsewhere by three-letter codes (ISO 639-3:2007). Empty cells represent parts of speech with a lack of training data.
+
+model would not have access to serv.DAT.SG while training. Without this step, the measured fusion of languages such as Latin would be extremely low, because many forms can be predicted from their identical syncretic forms.
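+
The collapsing step can be sketched as follows. The function name and the toy paradigm are illustrative assumptions of this sketch, not the paper's code:

```python
from collections import defaultdict

def collapse_syncretism(paradigm):
    """Merge slots that share a surface form into a single entry, so the
    predictive model never trains on a form's syncretic twin.
    `paradigm` maps slot tuples to surface forms."""
    by_form = defaultdict(list)
    for slot, form in paradigm.items():
        by_form[form].append(slot)
    # keep one merged slot (sorted tuple of slots) per distinct surface form
    return {tuple(sorted(slots)): form for form, slots in by_form.items()}

# Toy subset of the Latin paradigm from Table 1
latin_serv = {
    ("DAT", "SG"): "servō", ("ABL", "SG"): "servō",
    ("DAT", "PL"): "servīs", ("ABL", "PL"): "servīs",
    ("GEN", "PL"): "servōrum",
}
collapsed = collapse_syncretism(latin_serv)
# servō and servīs each survive as one merged slot: 3 entries remain
```

After collapsing, predicting servō for the merged DAT/ABL.SG slot cannot be achieved by copying its syncretic twin out of the training data.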
+
+# 2.3 Implementation
+
+We estimate $\phi$ from paradigm data for 21 languages drawn from UniMorph (Sylak-Glassman, 2016). For Arabic data, we used a transliteration with the ALA-LC standard. All other languages used had separable characters, and thus did not require romanization.
+
+For the predictive model, we use an LSTM seq2seq model with attention (Sutskever et al., 2014; Kann and Schütze, 2016; Bahdanau et al., 2016). The LSTM takes the feature combination $\sigma$, POS tag, and lemma $\ell$ (in characters) as input, producing the form $w$ in characters as output. The input is represented as a string: for example, for a noun with $\sigma = \langle \mathrm{GEN}, \mathrm{PL} \rangle$ and $\ell = \mathrm{serv}$, the input string is serv N GEN PL, and the target output string is s e r v ō r u m. We then estimate the surprisal of the form as:
+
+$$
-\log p(w \mid \ell, \sigma) = -\sum_{t} \log p_{\theta}\left(w_{t} \mid w_{<t}, \ell, \sigma\right),
+$$
+
+where $\theta$ represents the LSTM parameters, summing over the characters in the form $w$ . For each language and part-of-speech, for each $\sigma \in \mathcal{L}$ , we train a separate LSTM on $\mathcal{L}_{-\sigma}$ .
+
+Models were not used if the average cross-entropy loss on the final epoch exceeded 0.1. We found a highly bimodal distribution in final loss, such that nearly all models had either very low $(\sim 0.05)$ or very high $(>0.4)$ loss, with high loss corresponding to feature combinations with little training data. We did not observe a systematic relationship between data size and estimates of $\phi$.
+
+# 3 Results and Discussion
+
+Here we study whether our fusion measure recapitulates the familiar classifications for selected languages, and study whether it covaries systematically with paradigm size and form frequency, testing linguistic hypotheses.
+
+# 3.1 Basic results
+
+Average fusion scores for paradigms from 21 languages are shown in Table 3 and Figure 1. The scores are largely consistent with typological classifications. Overall, the languages with the lowest average fusion were Turkish and Quechua, whose paradigms are usually classified as agglutinative or monoexponential, while the most fused languages were Greek, Russian, Polish, and Czech, again consistent with typical classifications (Bickel and Nichols, 2013). We also observe clustering based on language family: the Slavic languages as a whole appear to have roughly equal fusion levels, and the same is true for the Romance languages. While these were the only families with more than two languages, the results are suggestive of our measure's value as an indicator of typological relationships.
+
+We find that fusion differs substantially by part of speech even within languages. For example, Latin and Arabic verbs have much lower fusion than their nominal and adjectival counterparts. This result is in line with Haspelmath (2009)'s arguments against the 'Agglutination Hypothesis.'
+
+Some of the more surprising results shed light
+
+
+Figure 1: Boxplots of mean informational fusion values by part-of-speech and language. Middle line indicates median fusion; dot indicates mean fusion; colors indicate language family. $\mathrm{N} =$ nouns, $\mathrm{V} =$ verbs, $\mathrm{A} =$ adjectives.
+
+on the nature of informational fusion. For example, the low level of fusion for Latin verbs contrasts with the typical classification of Latin as fusional, but the result is intuitive upon inspection. For instance, the verb form impugnābāmur can be segmented into impugnā-bā-mu-r, where bā represents the feature IMPERFECT, mu represents 1.PL, and r represents PASSIVE (Bennett, 1994). These parts combine predictably, yielding a correspondingly low fusion of 0.35 for this form.
+
+Another interesting result is the low level of fusion for Arabic verbs. This result is sensible: although Arabic morphology is highly nonconcatenative, the morphological processes that convey individual features (person, aspect, voice, etc.) are quite regular and compose with each other transparently (Ryding, 2005). This result illustrates how informational fusion abstracts away from the form of the morphological processes.
+
+Other, less anticipated results can be explained as cases of phonological fusion. For example, Hungarian, while typically classified as agglutinative, undergoes many regular sound changes across its paradigms, including vowel harmony and vowel coalescence. The latter can be seen in forms such as $(\ell = gubó, \sigma = \langle \mathrm{AT}+\mathrm{ESS}, \mathrm{PL} \rangle, w = gubóknál)$. The plural suffix is $-ok$, which, when attached to a stem ending in $ó$, coalesces with the stem: $gubó-ok-nál \rightarrow gubóknál$ (Szita and Görbe, 2010). Because our LSTM learns this phonological process only imperfectly, it fails to correctly predict $gubóknál$, inflating the measured fusion of this form.
+
+# 3.2 Covariance with Paradigm Size
+
+Plank (1986) proposed that fusion (in the sense of exponence) limits the number of forms that can exist in a paradigm (i.e. e-complexity: see Ackerman and Malouf, 2013; Cotterell et al., 2019). This hypothesis can be justified cognitively in terms of informational fusion, which indicates the minimum number of bits of information required to store and learn a form. If there is a limit on paradigm complexity in this sense, then paradigms can be either large or highly fusional, but not both.
+
+Figure 2 shows the relationship between average fusion and paradigm size, calculated as the maximum number of forms per lemma in UniMorph. Although there does appear to be a weak negative correlation, it is not robust: we find Spearman's $\rho = -0.30$ , $p = 0.08$ . Thus, we do not find support for Plank's hypothesis.
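+
For reference, the rank correlation used here can be computed from scratch as follows (a plain-Python sketch of Spearman's $\rho$ with average ranks for ties; in practice a statistics library would be used):

```python
def average_ranks(xs):
    """1-based ranks, averaging over ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        # extend j over a run of tied values
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        mean_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = mean_rank
        i = j + 1
    return ranks

def spearman_rho(xs, ys):
    """Spearman's rho: Pearson correlation of the rank variables."""
    rx, ry = average_ranks(xs), average_ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

A perfectly monotone decreasing relationship yields $\rho = -1$; the observed $\rho = -0.30$ indicates only a weak negative association.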
+
+However, we do not take this as strong evidence against the hypothesis, because there is a degree of arbitrariness to measuring paradigm size from datasets such as UniMorph in terms of what
+
+
+Figure 2: Correlation of log-transformed paradigm size and average informational fusion per paradigm. Text indicates part of speech and language, and datapoints are colored by language.
+
+
+Figure 3: Correlation and tradeoff between frequency and fusion per feature and language. On the $x$ -axis, log normalized frequency of all forms matching a given feature in a given language. On the $y$ -axis, the average informational fusion for those forms. Text indicates feature and language; step curve indicates Pareto curve.
+
+counts as an entry in a paradigm. For example, the Quechua UniMorph dataset includes possessive forms of nouns, while the Hungarian dataset does not, although both languages express possession using suffixes. Differences in measured paradigm size may reflect the choice of what was included in the corpus rather than real linguistic differences.
+
+# 3.3 Covariance with Form Frequency
+
+We might expect that highly fused forms are also highly frequent in usage. An infrequent but fused form would be unstable, in the sense that language users might forget it in production (defaulting to a more predictable form), or might fail to acquire it in learning. Therefore, here we evaluate the hypothesis that a high degree of informational fusion implies high form frequency; or alternatively, that there is a tradeoff between informational fusion and form frequency.
+
+We test the hypothesis at the level of individual features. We quantify the average fusion of a feature as the average fusion of all forms with that feature, and the frequency of a feature as the total frequency of all tokens expressing that feature in a corpus. Figure 3 shows the relationship between average fusion per feature per language and log feature frequency, estimated from Wikipedia dumps and normalized by the total number of tokens per Wikipedia corpus. Syncretic forms were removed for this analysis. Average fusion is significantly correlated with frequency (Spearman's
+
+$\rho = 0.39, p < 0.001$ by permutation test).
+
+We find an unoccupied quadrant in the data: we do not find features that are both infrequent and expressed fusionally. For significance testing, we use a nonparametric permutation test with the area under the Pareto frontier (similarly to Cotterell et al., 2019). The $p$-value is the probability that a stochastically constructed curve, in which the $y$-values of the data are randomly permuted, has an "emptier" upper-left quadrant, i.e. that the area under the null-hypothesis curve is less than or equal to the area under the empirical curve. This was estimated by permuting the data 10,000 times. We find that the upper-left quadrant is significantly empty ($p < 0.002$), indicating a significant tradeoff between fusion and frequency. This is consistent with the cognitive explanation provided above.
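+
The permutation test can be sketched as follows. The function names, the running-max proxy for the Pareto frontier area, and the toy data are assumptions of this sketch, not the authors' code:

```python
import random

def frontier_area(points):
    """Area under the running-max step curve over (frequency, fusion)
    points: a small area means an empty upper-left quadrant (no
    infrequent-but-highly-fused features)."""
    pts = sorted(points)
    area, cur_max = 0.0, 0.0
    for (x0, y0), (x1, _) in zip(pts, pts[1:]):
        cur_max = max(cur_max, y0)
        area += (x1 - x0) * cur_max
    return area

def permutation_p_value(points, n_perm=2000, seed=0):
    """Fraction of random frequency/fusion pairings whose upper-left
    quadrant is at least as empty as the observed one."""
    rng = random.Random(seed)
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    observed = frontier_area(points)
    hits = sum(
        frontier_area(list(zip(xs, rng.sample(ys, len(ys))))) <= observed
        for _ in range(n_perm)
    )
    return hits / n_perm
```

When fusion rises monotonically with frequency (the observed pattern), the running-max area is minimal, so random pairings almost never match it and the $p$-value is small.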
+
+# 4 Conclusion
+
+We introduced an information-theoretic measure of the fusion of a form within a morphological paradigm, called informational fusion. We have shown that informational fusion recapitulates linguists' intuitions and allows for quantitative tests of linguistic hypotheses, including a tradeoff between fusion and frequency. Our work joins a growing body of recent research that aims to operationalize basic linguistic concepts in terms of information theory (Ackerman and Malouf, 2013; Cotterell et al., 2019; Pimentel et al., 2019; Futrell et al., 2019; Mansfield, 2021).
+
+Informational fusion is the extent to which a form cannot be predicted based on any strict subset of its morphological features. As such, it aligns closely with the linguistic notion of the exponence of a form. It can be adapted to provide fusion measures for specific morphemes and features by carefully choosing which features are held out during the training of the predictive model.
+
+# Acknowledgments
+
+This work benefited from discussion in the SIG-TYP 2021 Workshop. It was supported by NSF Grant #1947307 and an NVIDIA GPU Grant to R.F. All code and data are available at https://github.com/neilrathi/morphological-fusion.
+
+# References
+
+Farrell Ackerman and Robert Malouf. 2013. Morphological organization: The low conditional entropy conjecture. Language, 89(3):429-464.
+Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2016. Neural machine translation by jointly learning to align and translate.
+Charles E. Bennett. 1994. New Latin grammar. Bolchazy-Carducci Publishers, Wauconda, Ill.
+Balthasar Bickel and Johanna Nichols. 2013. Exponence of selected inflectional formatives. In Matthew S. Dryer and Martin Haspelmath, editors, The World Atlas of Language Structures Online. Max Planck Institute for Evolutionary Anthropology, Leipzig.
+Dunstan Brown. 2010. Morphological Typology. In Jae Jung Song, editor, The Oxford Handbook of Linguistic Typology, Oxford Handbooks in Linguistics. Oxford University Press.
+Ryan Cotterell, Christo Kirov, Mans Hulden, and Jason Eisner. 2019. On the complexity and typology of inflectional morphological systems. Transactions of the Association for Computational Linguistics, 7:327-342.
+Richard Futrell, Peng Qian, Edward Gibson, Evelina Fedorenko, and Idan Blank. 2019. Syntactic dependencies correspond to word pairs with high mutual information. In Proceedings of the Fifth International Conference on Dependency Linguistics (Depling, SyntaxFest 2019), pages 3-13, Paris, France. Association for Computational Linguistics.
+Joseph H. Greenberg. 1960. A quantitative approach to the morphological typology of language. International Journal of American Linguistics, 26(3):178-194.
+Martin Haspelmath. 2009. An empirical test of the Agglutination Hypothesis. In Studies in Natural Language and Linguistic Theory, pages 13-29. Springer Netherlands.
+ISO 639-3:2007. 2007. Codes for the representation of names of languages—Part 3: Alpha-3 code for comprehensive coverage of languages. Standard, International Organization for Standardization, Geneva, CH.
+Katharina Kann and Hinrich Schütze. 2016. Single-model encoder-decoder with explicit morphological representation for reinflection. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 555-560, Berlin, Germany. Association for Computational Linguistics.
+John Mansfield. 2021. The word as a unit of internal predictability. Linguistics.
+
+Tiago Pimentel, Arya D. McCarthy, Damian Blasi, Brian Roark, and Ryan Cotterell. 2019. Meaning to form: Measuring systematicity as information. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1751-1764, Florence, Italy. Association for Computational Linguistics.
+Frans Plank. 1986. Paradigm size, morphological typology, and universal economy. Folia Linguistica, 20:29-48.
+Frans Plank. 1999. Split morphology: how agglutination and flexion mix. Linguistic Typology, 3:279-340.
+Karin C. Ryding. 2005. A reference grammar of Modern Standard Arabic. Cambridge University Press, New York.
+Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104-3112.
+John Sylak-Glassman. 2016. The composition and use of the Universal Morphological feature schema (UniMorph schema). Technical report, Johns Hopkins University.
+Szilvia Szita and Tamás Görbe. 2010. A practical Hungarian grammar. Akademiai Kiado, Budapest. OCLC: 935138375.
+Wilhelm von Humboldt. 1825. Über das Entstehen der grammatischen Formen und ihren Einfluss auf die Ideenentwicklung. In Abhandlungen der Königlichen Akademie der Wissenschaften zu Berlin: Aus den Jahren 1822 und 1823, pages 401-430. Drückerei der Königlichen Akademie der Wissenschaften, Berlin.
+Shijie Wu, Ryan Cotterell, and Timothy J. O'Donnell. 2019. Morphological irregularity correlates with frequency. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy. Association for Computational Linguistics.
\ No newline at end of file
diff --git a/aninformationtheoreticcharacterizationofmorphologicalfusion/images.zip b/aninformationtheoreticcharacterizationofmorphologicalfusion/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..9c656b3b10c7fd52bdeb31537b6ab8d20131dfac
--- /dev/null
+++ b/aninformationtheoreticcharacterizationofmorphologicalfusion/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1399580b7b7bdb4b4cf2d77592d52a29846b998c8ab0855fc77cd5e75064bee9
+size 204532
diff --git a/aninformationtheoreticcharacterizationofmorphologicalfusion/layout.json b/aninformationtheoreticcharacterizationofmorphologicalfusion/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..27c18408ef86c9cd5f8e65cda2ae0d267e7857ab
--- /dev/null
+++ b/aninformationtheoreticcharacterizationofmorphologicalfusion/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c0ed309e547649fd9b25ab3dbbc75ad83ae4fbe7014d0ca686b5bded72562a64
+size 225215
diff --git a/anovelglobalfeatureorientedrelationaltripleextractionmodelbasedontablefilling/7eb2561b-0f11-495a-8bc0-9dc09756d1d7_content_list.json b/anovelglobalfeatureorientedrelationaltripleextractionmodelbasedontablefilling/7eb2561b-0f11-495a-8bc0-9dc09756d1d7_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..b406b741443f0dea6e28aa610bc7d47d3d24b044
--- /dev/null
+++ b/anovelglobalfeatureorientedrelationaltripleextractionmodelbasedontablefilling/7eb2561b-0f11-495a-8bc0-9dc09756d1d7_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0afddb5a0c3a55e7163ab32df461bfe203b9cf3b61d972e1df29db5db1a91efe
+size 81807
diff --git a/anovelglobalfeatureorientedrelationaltripleextractionmodelbasedontablefilling/7eb2561b-0f11-495a-8bc0-9dc09756d1d7_model.json b/anovelglobalfeatureorientedrelationaltripleextractionmodelbasedontablefilling/7eb2561b-0f11-495a-8bc0-9dc09756d1d7_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..707da7a4a0f54552388a2aff453d1fe44e659fd9
--- /dev/null
+++ b/anovelglobalfeatureorientedrelationaltripleextractionmodelbasedontablefilling/7eb2561b-0f11-495a-8bc0-9dc09756d1d7_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7dc91f786bbe828e41c73a2fbdfb437df2ced24c526b6ae55b98d64e5ef1e50c
+size 96788
diff --git a/anovelglobalfeatureorientedrelationaltripleextractionmodelbasedontablefilling/7eb2561b-0f11-495a-8bc0-9dc09756d1d7_origin.pdf b/anovelglobalfeatureorientedrelationaltripleextractionmodelbasedontablefilling/7eb2561b-0f11-495a-8bc0-9dc09756d1d7_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..ac1bf482a93d2cec587d962947bb6a053dd4f208
--- /dev/null
+++ b/anovelglobalfeatureorientedrelationaltripleextractionmodelbasedontablefilling/7eb2561b-0f11-495a-8bc0-9dc09756d1d7_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4efc9dbefe61a9fad510126db08068dfd95e752155b803a2255d0926f1d05379
+size 541119
diff --git a/anovelglobalfeatureorientedrelationaltripleextractionmodelbasedontablefilling/full.md b/anovelglobalfeatureorientedrelationaltripleextractionmodelbasedontablefilling/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..f10a253f91d982770d30399635edbec4b1021f6f
--- /dev/null
+++ b/anovelglobalfeatureorientedrelationaltripleextractionmodelbasedontablefilling/full.md
@@ -0,0 +1,336 @@
+# A Novel Global Feature-Oriented Relational Triple Extraction Model based on Table Filling
+
+Feiliang Ren$^{\dagger,*}$, Longhui Zhang$^{\dagger}$, Shujuan Yin, Xiaofeng Zhao, Shilei Liu, Bochao Li, Yaduo Liu
+
+School of Computer Science and Engineering, Key Laboratory of Medical Image Computing of Ministry of Education, Northeastern University, Shenyang, 110169, China. renfeiliang@cse.neu.edu.cn
+
+# Abstract
+
+Table filling based relational triple extraction methods are attracting growing research interest due to their promising performance and their ability to extract triples from complex sentences. However, this kind of method is far from its full potential, because most existing methods use only local features and ignore the global associations of relations and of token pairs, which increases the possibility of overlooking important information during triple extraction. To overcome this deficiency, we propose a global feature-oriented triple extraction model that makes full use of these two kinds of global associations. Specifically, we first generate a table feature for each relation. Then the two kinds of global associations are mined from the generated table features. Next, the mined global associations are integrated into the table feature of each relation. This "generate-mine-integrate" process is performed multiple times so that the table feature of each relation is refined step by step. Finally, each relation's table is filled based on its refined table feature, and all triples linked to this relation are extracted from its filled table. We evaluate the proposed model on three benchmark datasets. Experimental results show that our model is effective and achieves state-of-the-art results on all of these datasets. The source code of our work is available at: https://github.com/neukg/GRTE.
+
+# 1 Introduction
+
+Relational triple extraction (RTE) aims to extract triples from unstructured text (often sentences), and is a fundamental task in information extraction. These triples have the form of (subject, relation, object), where both subject and object are entities and they are semantically linked by relation. RTE is important for many downstream applications.
+
+Nowadays, the dominant methods for RTE are joint extraction methods that extract entities and relations simultaneously in an end-to-end way. Several recent joint extraction methods (Yu et al., 2019; Yuan et al., 2020; Zeng et al., 2020; Wei et al., 2020; Wang et al., 2020; Sun et al., 2021) have shown strong extraction ability on diverse benchmark datasets, especially for complex sentences that contain overlapping or multiple triples.
+
+Among these existing joint extraction methods, table filling based methods (Wang et al., 2020; Zhang et al., 2017; Miwa and Bansal, 2016; Gupta et al., 2016) are attracting growing research attention. These methods usually maintain a table for each relation, and each item in such a table indicates whether a token pair possesses the corresponding relation. Thus the key to these methods is to fill the relation tables accurately; the triples can then be extracted from the filled tables. However, existing methods fill relation tables mainly based on local features extracted from either a single token pair (Wang et al., 2020) or the filled history of some limited token pairs (Zhang et al., 2017), and ignore the following two kinds of valuable global features: the global associations of token pairs and of relations.
+
+These two kinds of global features reveal the differences and connections among relations and among token pairs. Thus they help both precision, by verifying the extracted triples from multiple perspectives, and recall, by deducing new triples. For example, given the sentence "Edward Thomas and John are from New York City, USA.", when looking at it from a global view, we can easily find the following two useful facts. First, the triple (Edward Thomas, live_in, New York) is helpful for extracting the triple (John, live_in, USA), and vice versa. This is because the properties of their (subject, object) pairs are highly similar: (i) the types of both subjects are the same (both are persons); (ii) the types of both objects are the same too (both are locations). Thus these two entity pairs are highly likely to possess the same kind of relation. Second, the mentioned two triples are helpful for deducing a new triple (New York, located_in, USA). This is because: (i) located_in requires both its subjects and objects to be locations; (ii) located_in is semantically related to live_in; (iii) live_in indicates that its objects are locations. Thus there is a clear inference path from these two known triples to the new triple. Obviously, such global features cannot be captured by local features.
+
+Inspired by the above analyses, we propose a global feature-oriented table filling based RTE model that fills relation tables mainly based on the above two kinds of global associations. In our model, we first generate a table feature for each relation. Then all relations' table features are integrated into a subject-related global feature and an object-related global feature, from which the two kinds of global associations are mined with a Transformer-based method. Next, these mined global associations are used to refine the table features. These steps are performed multiple times so that the table features are refined gradually. Finally, each table is filled based on its refined feature, and all triples are extracted from the filled tables.
+
+We evaluate the proposed model on three benchmark datasets: NYT29, NYT24, and WebNLG. Extensive experiments show that it consistently outperforms the existing best models and achieves the state-of-the-art results on all of these datasets.
+
+# 2 Related Work
+
+Early studies (Zelenko et al., 2003; Zhou et al., 2005; Chan and Roth, 2011) often take a pipeline approach to RTE: first recognize all entities in the input text, then predict a relation for each entity pair. However, these methods have two fatal shortcomings. First, they ignore the correlations between entity recognition and relation prediction. Second, they tend to suffer from the error propagation issue.
+
+To overcome these shortcomings, researchers begin to explore the joint extraction methods that extract entities and relations simultaneously. According to the research lines taken, we roughly classify existing joint methods into three main kinds.
+
+Tagging based methods. This kind of method (Zheng et al., 2017; Yu et al., 2019; Wei et al., 2020) often first extracts the entities with a tagging based method, then predicts relations. In these models, binary tagging sequences are often used to determine the start and end positions of entities, and sometimes also the relations between two entities.
+
+Figure 1: Examples of the table filling and decoding strategy. Arrows with different colors correspond to different search routes defined in Algorithm 1.
+
+
+Seq2Seq based methods. This kind of method (Zeng et al., 2018, 2019, 2020; Nayak and Ng, 2020) often views a triple as a token sequence and converts the extraction task into a generation task that generates a triple in some order, e.g., first the relation, then the entities.
+
+Table filling based methods. This kind of method (Miwa and Bansal, 2016; Gupta et al., 2016; Zhang et al., 2017; Wang et al., 2020) maintains a table for each relation, and the items in this table usually denote the start and end positions of two entities (or even the types of these entities) that possess this specific relation. Accordingly, the RTE task is converted into the task of filling these tables accurately and effectively.
+
+Besides, researchers have also explored other kinds of methods. For example, Bekoulis et al. (2018) formulate the RTE task as a multi-head selection problem. Li et al. (2019) cast the RTE task as a multi-turn question answering problem. Fu et al. (2019) use a graph convolutional network based method, and Eberts and Ulges (2019) use a span extraction based method. Sun et al. (2021) propose a multi-task learning based RTE model.
+
+# 3 Methodology
+
+# 3.1 Table Filling Strategy
+
+Given a sentence $S = w_{1}w_{2}\ldots w_{n}$, we maintain a table $table_{r}$ of size $n\times n$ for each relation $r$ ($r\in R$, where $R$ is the relation set). The core of our model is to assign a proper label to each table item (corresponding to a token pair). Here we define the label set as $L = \{\text{"N/A"}, \text{"MMH"}, \text{"MMT"}, \text{"MSH"}, \text{"MST"}, \text{"SMH"}, \text{"SMT"}, \text{"SS"}\}$.
+
+Figure 2: Model Architecture. The dotted arrows to $TFG$ mean that $H_{s}^{(1)}$ and $H_{o}^{(1)}$ are inputted to $TFG$ only at the first iteration. The dotted arrow to $TG$ means that $TF^{(N)}$ is inputted into $TG$ only at the last iteration.
+
+For a token pair indexed by the $i$-th row and the $j$-th column, we denote it as $(w_{i}, w_{j})$ and denote its label as $l$. If $l \in \{\text{"MMH"}, \text{"MMT"}, \text{"MSH"}, \text{"MST"}, \text{"SMH"}, \text{"SMT"}\}$, then $(w_{i}, w_{j})$ corresponds to a (subject, object) pair. In this case, the first character of the label indicates whether the subject is a multi-token entity ("M") or a single-token entity ("S"); the second character indicates the same for the object; and the third character indicates whether $w_{i}$ and $w_{j}$ are both the head tokens ("H") or both the tail tokens ("T") of the subject and object. For example, $l = \text{"MMH"}$ means $w_{i}$ is the head token of a multi-token subject and $w_{j}$ is the head token of a multi-token object. As for the other cases, $l = \text{"SS"}$ means $(w_{i}, w_{j})$ is itself a (single-token subject, single-token object) pair, and $l = \text{"N/A"}$ means $w_{i}$ and $w_{j}$ fit none of the above cases.
+
+Figure 1 demonstrates partially filled results for the live_in relation given the sentence "Edward Thomas and John are from New York City, USA.", where the (subject, object) pairs are (Edward Thomas, New York City), (Edward Thomas, New York), (Edward Thomas, USA), (John, New York City), (John, New York) and (John, USA).
+
+A main merit of our filling strategy is that each label reveals not only the position of a token within a subject or an object, but also whether the subject (or object) is a single-token or a multi-token entity. Thus the total number of items to be filled under our filling strategy is generally small, since the information carried by each label increases. For example, given a sentence $S = w_{1}w_{2}\dots w_{n}$ and a relation set $R$, the number of items to be filled under our filling strategy is $n^2|R|$, while this number is $(2|R| + 1)\frac{n^2+n}{2}$ under the filling strategy used in TPLinker (Wang et al., 2020) (this number is copied directly from the original TPLinker paper). One can easily verify that $(2|R| + 1)\frac{n^2+n}{2} > n^2|R|$.
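The two counts above can be checked numerically. The sketch below is illustrative (the function names are ours, not from the released code); it confirms that the GRTE count $n^2|R|$ is always below the TPLinker count $(2|R|+1)\frac{n^2+n}{2}$ for a few representative sentence lengths and relation-set sizes.

```python
# Compare the number of table items to fill under the two strategies.

def grte_items(n: int, num_relations: int) -> int:
    # One n x n table per relation; every cell receives one label from L.
    return n * n * num_relations

def tplinker_items(n: int, num_relations: int) -> int:
    # Count reported in the TPLinker paper: (2|R| + 1) upper-triangular matrices.
    return (2 * num_relations + 1) * (n * n + n) // 2

# (n, |R|) pairs loosely matching NYT24, NYT29, and WebNLG relation counts.
for n, r in [(10, 24), (50, 29), (100, 216)]:
    assert grte_items(n, r) < tplinker_items(n, r)
```

The inequality holds for any $n \geq 1$, since $(2|R|+1)\frac{n^2+n}{2} = |R|n^2 + |R|n + \frac{n^2+n}{2}$.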
+
+# 3.2 Model Details
+
+The architecture of our model is shown in Figure 2. It consists of four main modules: an Encoder module, a Table Feature Generation (TFG) module, a Global Feature Mining (GFM) module, and a Triple Generation (TG) module. TFG and GFM are performed multiple times in an iterative way so that the table features are refined step by step. Finally, TG fills each table based on its refined table feature and generates all triples from the filled tables.
+
+Encoder Module Here a pre-trained BERT-Base (Cased) model (Devlin et al., 2018) is used as the Encoder. Given a sentence, this module first encodes it into a token representation sequence (denoted as $H \in \mathbb{R}^{n \times d_h}$).
+
+Then $H$ is fed into two separate Feed-Forward Networks (FFN) to generate the initial subject and object features (denoted as $H_{s}^{(1)}$ and $H_{o}^{(1)}$ respectively), as written in Eq. (1).
+
+$$
+H_{s}^{(1)} = W_{1} H + b_{1}, \qquad H_{o}^{(1)} = W_{2} H + b_{2} \tag{1}
+$$
+
+where $W_{1/2} \in \mathbb{R}^{d_h \times d_h}$ are trainable weights and $b_{1/2} \in \mathbb{R}^{d_h}$ are trainable biases.
+
+TFG Module We denote the subject and object features at the $t$-th iteration as $H_{s}^{(t)}$ and $H_{o}^{(t)}$ respectively. Taking them as input, this module generates a table feature for each relation.
+
+Here the table feature of relation $r$ at the $t$-th iteration is denoted as $TF_{r}^{(t)}$; it has the same size as $table_r$. Each item of $TF_{r}^{(t)}$ represents the label feature of a token pair. Specifically, for a pair $(w_i, w_j)$, we denote its label feature as $TF_{r}^{(t)}(i, j)$, which is computed with Eq. (2).
+
+$$
+TF_{r}^{(t)}(i, j) = W_{r}\,\mathrm{ReLU}\left(H_{s,i}^{(t)} \circ H_{o,j}^{(t)}\right) + b_{r} \tag{2}
+$$
+
+where $\circ$ denotes the Hadamard Product operation, ReLU is the activation function, $H_{s,i}^{(t)}$ and $H_{o,j}^{(t)}$ are the feature representations of tokens $w_{i}$ and $w_{j}$ at the t-th iteration respectively.
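Eq. (2) can be computed for all $n \times n$ token pairs at once by broadcasting. The numpy sketch below mirrors the data flow; the dimensions and variable names are illustrative, not taken from the released code.

```python
# Minimal sketch of Eq. (2): label feature for every token pair (w_i, w_j)
# under one relation r, via Hadamard product + ReLU + per-relation linear map.
import numpy as np

n, d_h, num_labels = 6, 8, 8          # sentence length, hidden dim, |L|
rng = np.random.default_rng(0)
H_s = rng.normal(size=(n, d_h))       # subject features H_s^(t)
H_o = rng.normal(size=(n, d_h))       # object features  H_o^(t)
W_r = rng.normal(size=(num_labels, d_h))   # per-relation weight W_r
b_r = np.zeros(num_labels)                 # per-relation bias b_r

# Broadcast the Hadamard product over all n x n token pairs at once.
pairwise = H_s[:, None, :] * H_o[None, :, :]        # (n, n, d_h)
TF_r = np.maximum(pairwise, 0.0) @ W_r.T + b_r      # ReLU then linear -> (n, n, |L|)
assert TF_r.shape == (n, n, num_labels)
```

Each cell `TF_r[i, j]` is exactly `W_r @ ReLU(H_s[i] * H_o[j]) + b_r`, i.e. one unnormalized score per label in $L$.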
+
+GFM Module This module mines the expected two kinds of global features, based on which new subject and object features are generated. These two newly generated features are then fed back to TFG for the next iteration. Specifically, this module consists of the following three steps.
+
+Step 1: combine table features. Suppose the current iteration is $t$. We first concatenate the table features of all relations to generate a unified table feature (denoted as $TF^{(t)}$), which contains the information of both token pairs and relations. Then we apply a max pooling operation and an FFN to $TF^{(t)}$ to generate a subject-related table feature ($TF_{s}^{(t)}$) and an object-related table feature ($TF_{o}^{(t)}$) respectively, as shown in Eq. (3).
+
+$$
+TF_{s}^{(t)} = W_{s}\,\mathrm{maxpool}_{s}\left(TF^{(t)}\right) + b_{s}
+$$
+
+$$
+TF_{o}^{(t)} = W_{o}\,\mathrm{maxpool}_{o}\left(TF^{(t)}\right) + b_{o} \tag{3}
+$$
+
+where $W_{s / o}\in \mathbb{R}^{(|L|\times |R|)\times d_h}$ are trainable weights, and $b_{s / o}\in \mathbb{R}^{d_h}$ are trainable biases.
+
+Here max pooling is used to highlight, from a global perspective, the important features that are helpful for subject extraction and object extraction respectively.
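A sketch of Eq. (3), under our reading that $\mathrm{maxpool}_s$ pools the unified table feature over the object (column) axis and $\mathrm{maxpool}_o$ over the subject (row) axis, so each token keeps its strongest evidence of acting as a subject or an object under any relation. The pooling axes and dimensions are assumptions for illustration, not verified against the released code.

```python
# Eq. (3) sketch: pool the unified table feature TF^(t) of shape
# (n, n, |L| * |R|) down to per-token subject/object features of shape (n, d_h).
import numpy as np

n, d_h, num_labels, num_rels = 6, 8, 8, 3
rng = np.random.default_rng(1)
TF = rng.normal(size=(n, n, num_labels * num_rels))   # unified TF^(t)
W_s = rng.normal(size=(d_h, num_labels * num_rels)); b_s = np.zeros(d_h)
W_o = rng.normal(size=(d_h, num_labels * num_rels)); b_o = np.zeros(d_h)

TF_s = TF.max(axis=1) @ W_s.T + b_s    # pool over objects  -> (n, d_h)
TF_o = TF.max(axis=0) @ W_o.T + b_o    # pool over subjects -> (n, d_h)
assert TF_s.shape == TF_o.shape == (n, d_h)
```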
+
+Step 2: mine the expected two kinds of global features. Here we mainly use a Transformer-based model (Vaswani et al., 2017) to mine the global associations of relations and of token pairs.
+
+First, we apply a Multi-Head Self-Attention method to $TF_{s/o}^{(t)}$ to mine the global associations of relations. The self-attention mechanism can reveal the importance of an item from the perspective of the other items, so it is well suited to mining the expected relation associations.
+
+Then we mine the global associations of token pairs with a Multi-Head Attention method, taking the sentence representation $H$ as part of the input. Since the input sentence is encoded as a whole, $H$ may contain some global semantic information for each token, so it is helpful for mining the global associations of token pairs from a whole-sentence perspective.
+
+Next, we generate new subject and object features with an FFN model.
+
+In summary, the whole global association mining process can be written as Eq. (4).
+
+$$
+\hat{TF}_{s/o}^{(t)} = \mathrm{MultiHeadSelfAtt}\left(TF_{s/o}^{(t)}\right)
+$$
+
+$$
+\hat{H}_{s/o}^{(t+1)} = \mathrm{MultiHeadAtt}\left(\hat{TF}_{s/o}^{(t)}, H, H\right) \tag{4}
+$$
+
+$$
+H_{s/o}^{(t+1)} = \mathrm{ReLU}\left(\hat{H}_{s/o}^{(t+1)} W + b\right)
+$$
+
+Step 3: further tune the subject and object features generated in the previous step.
+
+Note that if we unroll the iterative $TFG$ and $GFM$ modules, our model is equivalent to a very deep network and may therefore suffer from the vanishing gradient issue. To avoid this, we use a residual connection to generate the final subject and object features, as written in Eq. (5).
+
+$$
+H_{s/o}^{(t+1)} = \mathrm{LayerNorm}\left(H_{s/o}^{(t)} + H_{s/o}^{(t+1)}\right) \tag{5}
+$$
+
+Finally, these subject and object features are fed back to the TFG module for the next iteration. Note that the parameters of TFG and GFM are shared across iterations.
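The data flow of Eqs. (4)-(5) can be caricatured with a single-head, numpy-only attention: self-attention over the subject-related table feature, cross-attention against the sentence encoding $H$, an FFN, and a residual LayerNorm. The real model uses multi-head Transformer layers with learned projections; this sketch, with made-up shapes, only mirrors the sequence of operations.

```python
# Single-head sketch of the GFM data flow (Eqs. 4-5), subject side only.
import numpy as np

def attention(Q, K, V):
    # Scaled dot-product attention with a numerically stable softmax.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def layer_norm(x, eps=1e-6):
    mu, var = x.mean(-1, keepdims=True), x.var(-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

n, d_h = 6, 8
rng = np.random.default_rng(2)
H = rng.normal(size=(n, d_h))           # sentence encoding
TF_s = rng.normal(size=(n, d_h))        # subject-related table feature TF_s^(t)
H_s_prev = rng.normal(size=(n, d_h))    # H_s^(t) from the previous iteration
W, b = rng.normal(size=(d_h, d_h)), np.zeros(d_h)

TF_s_hat = attention(TF_s, TF_s, TF_s)            # relation associations
H_s_hat = attention(TF_s_hat, H, H)               # token-pair associations vs. H
H_s_new = np.maximum(H_s_hat @ W + b, 0.0)        # FFN with ReLU
H_s_next = layer_norm(H_s_prev + H_s_new)         # Eq. (5) residual + LayerNorm
assert H_s_next.shape == (n, d_h)
```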
+
+TG Module Taking the table features at the last iteration ($TF^{(N)}$) as input, this module outputs all the triples. Specifically, for each relation, its table is first filled with the method shown in Eq. (6).
+
+$$
+\hat{table}_{r}(i, j) = \mathrm{softmax}\left(TF_{r}^{(N)}(i, j)\right)
+$$
+
+$$
+table_{r}(i, j) = \operatorname*{argmax}_{l \in L} \hat{table}_{r}(i, j)[l] \tag{6}
+$$
+
+where $\hat{table}_r(i,j) \in \mathbb{R}^{|L|}$ , and $table_r(i,j)$ is the labeled result for the token pair $(w_i, w_j)$ in the table of relation $r$ .
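The filling step of Eq. (6) is a softmax over the $|L|$ labels followed by a per-cell argmax. A minimal numpy sketch (label-set order and shapes are ours):

```python
# Eq. (6) sketch: turn final label features TF_r^(N) into a filled table.
import numpy as np

LABELS = ["N/A", "MMH", "MMT", "MSH", "MST", "SMH", "SMT", "SS"]

def fill_table(TF_r):                            # TF_r: (n, n, |L|) logits
    logits = TF_r - TF_r.max(axis=-1, keepdims=True)
    probs = np.exp(logits)
    probs /= probs.sum(axis=-1, keepdims=True)   # hat{table}_r(i, j)
    return probs.argmax(axis=-1)                 # table_r(i, j): label indices

rng = np.random.default_rng(3)
table = fill_table(rng.normal(size=(4, 4, len(LABELS))))
assert table.shape == (4, 4)
```

Since softmax is monotone, the argmax over probabilities equals the argmax over the raw logits; the probabilities are still needed for the loss in Eq. (7).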
+
+Then, $TG$ decodes the filled tables and deduces all triples with Algorithm 1. The main idea of the algorithm is to generate an entity pair set for each relation according to its filled table. Each entity pair in this set corresponds to a minimal continuous token span in the filled table, and each entity pair forms a triple with the relation of the considered table. Specifically, our decoding algorithm uses three parallel search routes to extract the entity pairs of each relation. The first one (forward search, red arrows in Figure 1) generates entity pairs in order from head tokens to tail tokens. The second one (reverse search, green arrows in Figure 1) generates entity pairs in order from tail tokens to head tokens, and is designed mainly to handle nested entities. The third one (blue arrows in Figure 1) generates entity pairs that are single-token pairs.
+
+Algorithm 1 Table Decoding Strategy
+Input: The relation set $R$, the sentence $S = \{w_1, w_2, \dots, w_n\}$ and a $table_r \in \mathbb{R}^{n \times n}$ for each relation $r \in R$.
+Output: The predicted triple set, $RT$.
+1 Define two temporary triple sets $H$ and $T$, and initialize $H, T, RT \gets \emptyset, \emptyset, \emptyset$.
+2 for each $r \in R$ do
+3 Define three temporary sets $W_{P_r}^H, W_{P_r}^T$, and $W_{P_r}^S$, which consist of the token pairs whose labels in $table_r$ end with "H", "T" and "S" respectively.
+4 for each $(w_i, w_j) \in W_{P_r}^H$ do // forward search
+5 1) Find a token pair $(w_k, w_m)$ from $W_{P_r}^T$ that satisfies: $i \leq k$, $j \leq m$, $table_r[(w_i, w_j)]$ and $table_r[(w_k, w_m)]$ match, $(w_i, w_j)$ and $(w_k, w_m)$ are closest in the table, and the numbers of tokens contained in the subject $w_{i \dots k}$ and the object $w_{j \dots m}$ are consistent with the corresponding labels.
+6 2) Add $(w_{i \dots k}, r, w_{j \dots m})$ to $H$.
+7 end for
+8 for each $(w_k, w_m) \in W_{P_r}^T$ do // reverse search
+9 1) Find a token pair $(w_i, w_j)$ from $W_{P_r}^H$ with a process similar to the forward search.
+10 2) Add $(w_{i \dots k}, r, w_{j \dots m})$ to $T$.
+11 end for
+12 for each $(w_i, w_j) \in W_{P_r}^S$ do
+13 Add $(w_i, r, w_j)$ to $RT$
+14 end for
+15 end for
+16 $RT \gets RT \cup H \cup T$
+17 return $RT$
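The forward-search and single-token routes can be sketched in a few lines of pure Python. The label table below is hand-made for illustration (it is not the paper's Figure 1 example), the reverse search is omitted for brevity, and "closest" is approximated by the first match in row-major order.

```python
# Simplified forward search + single-token route of Algorithm 1.

def decode(tokens, table, relation):
    """table: dict mapping (i, j) -> label for one relation's filled table."""
    triples = set()
    heads = [(i, j) for (i, j), l in table.items() if l.endswith("H")]
    tails = sorted((i, j) for (i, j), l in table.items() if l.endswith("T"))
    for i, j in heads:
        # Find a matching tail pair below and to the right of the head pair.
        for k, m in tails:
            if k >= i and m >= j and table[(k, m)][:2] == table[(i, j)][:2]:
                # "M" requires a multi-token span, "S" a single-token span.
                if (table[(i, j)][0] == "M") == (k > i) and \
                   (table[(i, j)][1] == "M") == (m > j):
                    subj = " ".join(tokens[i:k + 1])
                    obj = " ".join(tokens[j:m + 1])
                    triples.add((subj, relation, obj))
                    break
    for (i, j), l in table.items():       # single-token route
        if l == "SS":
            triples.add((tokens[i], relation, tokens[j]))
    return triples

tokens = ["Edward", "Thomas", "and", "John", "are", "from", "USA"]
table = {(0, 6): "MSH", (1, 6): "MST", (3, 6): "SS"}
assert decode(tokens, table, "live_in") == {
    ("Edward Thomas", "live_in", "USA"),
    ("John", "live_in", "USA"),
}
```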
+
+Here we take the sentence shown in Figure 1 as a concrete example to further explain our decoding algorithm. In the demonstrated table, the token pair (Edward, New) has an "MMH" label, so the algorithm searches forward over adjacent token pairs until a token pair labeled "MMT" is found, so as to form the complete (subject, object) pair. The forward search stops when it meets the token pair (Thomas, York), which has the label "MMT". However, the formed entity pair (Edward Thomas, New York) is wrong in the demonstrated example, since the expected pair is (Edward Thomas, New York City). Such errors are caused by the nested entities in the input sentence, like "New York" and "New York City": these nested entities make the forward search stop too early. In such cases, the designed reverse search plays an important supplementary role. In the discussed example, the reverse search first finds the token pair (Thomas, City), which has an "MMT" label, and then searches for a token pair with an "MMH" label. Thus it precisely finds the expected entity pair (Edward Thomas, New York City).
+
+| Category | NYT29 Train | NYT29 Test | NYT24 Train | NYT24 Test | WebNLG Train | WebNLG Test |
+| --- | --- | --- | --- | --- | --- | --- |
+| Normal | 53444 | 2963 | 37013 | 3266 | 1596 | 246 |
+| EPO | 8379 | 898 | 9782 | 978 | 227 | 26 |
+| SEO | 9862 | 1043 | 14735 | 1297 | 3406 | 457 |
+| ALL | 63306 | 4006 | 56195 | 5000 | 5019 | 703 |
+| Relation | 29 | | 24 | | 216 / 171* | |
+
+Table 1: Statistics of the datasets. EPO and SEO refer to entity pair overlapping and single entity overlapping respectively (Zeng et al., 2018). Note that a sentence can belong to both EPO and SEO. And $216/171^{*}$ means that there are 216/171 relations in WebNLG and WebNLG* respectively.
+
+Of course, if a dataset contains few nested entities, the reverse search can be removed, which would reduce the running time. We keep it so that our model generalizes better and can be applied to diverse datasets.
+
+# 3.3 Loss Function
+
+We define the model loss as follows.
+
+$$
+\mathcal{L} = \sum_{i=1}^{n} \sum_{j=1}^{n} \sum_{r=1}^{|R|} -\log p\left(y_{r,(i,j)} = table_{r}(i, j)\right) = \sum_{i=1}^{n} \sum_{j=1}^{n} \sum_{r=1}^{|R|} -\log \hat{table}_{r}(i, j)\left[y_{r,(i,j)}\right] \tag{7}
+$$
+
+where $y_{r,(i,j)}\in [1,|L|]$ is the index of the ground truth label of $(w_{i},w_{j})$ for the relation $r$ .
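Eq. (7) is a summed cross-entropy over every cell of every relation table. A numpy sketch (tensor layout and names are illustrative):

```python
# Eq. (7) sketch: cross-entropy summed over relations and all token pairs.
import numpy as np

def table_loss(TF, gold):
    """TF: (|R|, n, n, |L|) label logits; gold: (|R|, n, n) label indices."""
    logits = TF - TF.max(axis=-1, keepdims=True)
    probs = np.exp(logits)
    probs /= probs.sum(axis=-1, keepdims=True)            # hat{table}_r(i, j)
    gold_probs = np.take_along_axis(probs, gold[..., None], axis=-1)[..., 0]
    return float(-np.log(gold_probs).sum())

rng = np.random.default_rng(4)
TF = rng.normal(size=(2, 3, 3, 8))                        # 2 relations, n = 3
gold = rng.integers(0, 8, size=(2, 3, 3))
assert table_loss(TF, gold) > 0.0
```

With all-zero logits every label gets probability $1/|L|$, so the loss reduces to $|R| \cdot n^2 \cdot \log|L|$, a handy sanity check.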
+
+# 4 Experiments
+
+# 4.1 Experimental Settings
+
+Datasets We evaluate our model on three benchmark datasets: NYT29 (Takanobu et al., 2019), NYT24 (Zeng et al., 2018) and WebNLG (Gardent et al., 2017). Both NYT24 and WebNLG have two versions, corresponding to two annotation standards: 1) annotating only the last token of each entity, and 2) annotating the whole entity span. Different works choose different versions of these datasets. To evaluate our model comprehensively, we use both. For convenience, we denote the datasets based on the first annotation standard as NYT24* and WebNLG*, and those based on the second as NYT24 and WebNLG. Some statistics of these datasets are shown in Table 1.
+
+Evaluation Metrics The standard micro precision, recall, and $F1$ score are used to evaluate the results.
+
+Note that there are two match standards for the RTE task. Under Partial Match, an extracted triple is regarded as correct if the predicted relation and the head tokens of both the subject and object entities are correct. Under Exact Match, a triple is considered correct only when its entities and relation completely match a gold triple. To compare fairly with existing models, we follow previous work (Wang et al., 2020; Wei et al., 2020; Sun et al., 2021) and use Partial Match on NYT24* and WebNLG*, and Exact Match on NYT24, NYT29, and WebNLG.
+
+In fact, since only one token of each entity in NYT24* and WebNLG* is annotated, the results of Partial Match and Exact Match on these two datasets are actually the same.
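The metrics above are straightforward to compute over triple sets. The sketch below uses Exact Match semantics (a predicted triple counts only if subject, relation, and object all match); under Partial Match one would compare head tokens instead. The example triples are made up for illustration.

```python
# Micro precision / recall / F1 over predicted vs. gold triple sets.

def micro_prf(pred_triples, gold_triples):
    pred, gold = set(pred_triples), set(gold_triples)
    correct = len(pred & gold)
    p = correct / len(pred) if pred else 0.0
    r = correct / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

gold = {("Edward Thomas", "live_in", "USA"), ("New York", "located_in", "USA")}
pred = {("Edward Thomas", "live_in", "USA"), ("John", "live_in", "NYC")}
p, r, f1 = micro_prf(pred, gold)
assert (p, r, f1) == (0.5, 0.5, 0.5)
```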
+
+Baselines We compare our model with following strong state-of-the-art RTE models: CopyRE (Zeng et al., 2018), GraphRel (Fu et al., 2019), CopyMTL (Zeng et al., 2020), OrderCopyRE (Zeng et al., 2019), ETL-Span (Yu et al., 2019), WDec (Nayak and Ng, 2020), RSAN (Yuan et al., 2020), RIN (Sun et al., 2020), CasRel (Wei et al., 2020), TPLinker (Wang et al., 2020), SPN (Sui et al., 2020), and PMEI (Sun et al., 2021).
+
+Most of the experimental results of these baselines are copied directly from their original papers. Some baselines did not report results on some of the used datasets; in such cases, we report the best results we obtained with the provided source code (when available). For simplicity, we denote our model as GRTE, the abbreviation of Global feature-oriented RTE model.
+
+Implementation Details Adam (Kingma and Ba, 2015) is used to optimize GRTE. The learning rate, number of epochs, and batch size are set to $3 \times 10^{-5}$, 50, and 6 respectively. The iteration number (the hyperparameter $N$) on NYT29, NYT24*, NYT24, WebNLG*, and WebNLG is set to 3, 2, 3, 2, and 4 respectively. Following previous work (Wei et al., 2020; Sun et al., 2021; Wang et al., 2020), we also implement a BiLSTM-encoder version of GRTE, where 300-dimensional GloVe embeddings (Pennington et al., 2014) and a 2-layer stacked BiLSTM are used. In this version, the hidden dimensions of the two layers are set to 300 and 600 respectively. All hyperparameters reported in this work are determined based on the results on the development sets. Other parameters are randomly initialized. Following CasRel and TPLinker, the max length of input sentences is set to 100.
+
+# 4.2 Main Experimental Results
+
+The main results are in the top two parts of Table 2 and show that GRTE is very effective. On all datasets, it achieves almost all the best results in terms of $F1$ compared with the models that use the same kind of encoder (either the BiLSTM based or the BERT based encoder). The only exception is NYT24*, where the $F1$ of GRTE$_{LSTM}$ is about $1\%$ lower than that of PMEI$_{LSTM}$. However, on the same dataset, the $F1$ score of GRTE$_{BERT}$ is about $2.9\%$ higher than that of PMEI$_{BERT}$.
+
+The results also show that GRTE achieves much better results on NYT29, NYT24, and WebNLG: its $F1$ scores improve by about $1.9\%$, $1.1\%$, and $3.3\%$ over the previous best models on these three datasets respectively. In contrast, its $F1$ scores improve by about $0.5\%$ over the previous best models on both NYT24* and WebNLG*. This is mainly because GRTE cannot realize its full potential on NYT24* and WebNLG*, where only one token of each entity is annotated. Under this annotation standard every entity is a single token, so except "N/A" and "SS" all the other defined labels in GRTE are redundant. It should be noted, however, that the annotation standard of NYT24* and WebNLG* simplifies the RTE task, and no such simplification is available when a model is actually deployed. Thus the annotation standard of NYT29, NYT24, and WebNLG better reveals the true performance of a model, and GRTE's better performance on them is more meaningful.
+
+We can further see that, compared with the previous best models, GRTE achieves a larger performance improvement on WebNLG than on the other datasets. For example, $GRTE_{LSTM}$ even outperforms all other compared baselines on WebNLG, including the models that use BERT. We think this is mainly because the number of relations in WebNLG is far larger than in NYT29 and
+
+| Model | NYT29 | | | NYT24* | | | NYT24 | | | WebNLG* | | | WebNLG | | |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| | Prec. | Rec. | F1 | Prec. | Rec. | F1 | Prec. | Rec. | F1 | Prec. | Rec. | F1 | Prec. | Rec. | F1 |
+| CopyRE | - | - | - | 61.0 | 56.6 | 58.7 | - | - | - | 37.7 | 36.4 | 37.1 | - | - | - |
+| GraphRel | - | - | - | 63.9 | 60.0 | 61.9 | - | - | - | 44.7 | 41.1 | 42.9 | - | - | - |
+| OrderCopyRE | - | - | - | 77.9 | 67.2 | 72.1 | - | - | - | 63.3 | 59.9 | 61.6 | - | - | - |
+| ETL-Span | 74.5* | 57.9* | 65.2* | 84.9 | 72.3 | 78.1 | 85.5 | 71.7 | 78.0 | 84.0 | 91.5 | 87.6 | 84.3 | 82.0 | 83.1 |
+| WDec | 77.7 | 60.8 | 68.2 | - | - | - | 88.1 | 76.1 | 81.7 | - | - | - | - | - | - |
+| RSAN | - | - | - | - | - | - | 85.7 | 83.6 | 84.6 | - | - | - | 80.5 | 83.8 | 82.1 |
+| RIN | - | - | - | 87.2 | 87.3 | 87.3 | 83.9 | 85.5 | 84.7 | 87.6 | 87.0 | 87.3 | 77.3 | 76.8 | 77.0 |
+| CasRel (LSTM) | - | - | - | 84.2 | 83.0 | 83.6 | - | - | - | 86.9 | 80.6 | 83.7 | - | - | - |
+| PMEI (LSTM) | - | - | - | 88.7 | 86.8 | 87.8 | 84.5 | 84.0 | 84.2 | 88.7 | 87.6 | 88.1 | 78.8 | 77.7 | 78.2 |
+| TPLinker (LSTM) | - | - | - | 83.8 | 83.4 | 83.6 | 86.0 | 82.0 | 84.0 | 90.8 | 90.3 | 90.5 | 91.9 | 81.6 | 86.4 |
+| CasRel (BERT) | 77.0* | 68.0* | 72.2* | 89.7 | 89.5 | 89.6 | 89.8* | 88.2* | 89.0* | 93.4 | 90.1 | 91.8 | 88.3* | 84.6* | 86.4* |
+| PMEI (BERT) | - | - | - | 90.5 | 89.8 | 90.1 | 88.4 | 88.9 | 88.7 | 91.0 | 92.9 | 92.0 | 80.8 | 82.8 | 81.8 |
+| TPLinker (BERT) | 78.0* | 68.1* | 72.7* | 91.3 | 92.5 | 91.9 | 91.4 | 92.6 | 92.0 | 91.8 | 92.0 | 91.9 | 88.9 | 84.5 | 86.7 |
+| SPN (BERT) | 76.0* | 71.0* | 73.4* | 93.3 | 91.7 | 92.5 | 92.5 | 92.2 | 92.3 | 93.1 | 93.6 | 93.4 | 85.7* | 82.9* | 84.3* |
+| GRTE (LSTM) | 74.3 | 67.9 | 71.0 | 87.5 | 86.1 | 86.8 | 86.2 | 87.1 | 86.6 | 90.1 | 91.6 | 90.8 | 88.0 | 86.3 | 87.1 |
+| GRTE (BERT) | 80.1 | 71.0 | 75.3 | 92.9 | 93.1 | 93.0 | 93.4 | 93.5 | 93.4 | 93.7 | 94.2 | 93.9 | 92.3 | 87.9 | 90.0 |
+| GRTE (w/o GFM) | 77.9 | 68.9 | 73.1 | 90.6 | 92.5 | 91.5 | 91.8 | 92.6 | 92.2 | 92.4 | 91.1 | 91.7 | 88.4 | 86.7 | 87.5 |
+| GRTE (GRU GFM) | 78.2 | 71.7 | 74.8 | 92.5 | 92.9 | 92.7 | 93.4 | 92.2 | 92.8 | 93.4 | 92.6 | 93.0 | 90.1 | 88.0 | 89.0 |
+| GRTE (w/o m-h) | 77.8 | 70.9 | 74.2 | 91.9 | 92.9 | 92.4 | 93.2 | 92.9 | 93.0 | 92.9 | 92.1 | 92.5 | 90.5 | 87.6 | 89.0 |
+| GRTE (w/o shared) | 79.5 | 71.5 | 75.3 | 92.7 | 93.0 | 92.8 | 93.6 | 92.7 | 93.1 | 93.4 | 94.0 | 93.7 | 91.5 | 87.4 | 89.4 |
+
+Table 2: Main results. A model marked with LSTM denotes a variant in which the BERT-based encoder is replaced with the BiLSTM-based encoder. $\star$ means the results are produced by us with the available source code.
+
+NYT24 (see Table 1), which means more global associations of relations can be mined. Generally, the more relations and entities a dataset contains, the more global correlations there are among triples. Accordingly, our model performs better on such datasets than other methods based on local features. For example, the number of relations in WebNLG is almost 7 times that in NYT, and GRTE achieves a much larger performance improvement over the compared baselines on WebNLG than on NYT.
+
+# 4.3 Detailed Results
+
+In this section, we conduct detailed experiments to demonstrate the effectiveness of our model from the following two aspects.
+
+First, we conduct ablation experiments to evaluate the contributions of the main components in GRTE. To this end, we implement the following model variants.
+
+(i) $\mathrm{GRTE}_{w / o\text{GFM}}$ , a variant that removes the GFM module completely from GRTE, which is to evaluate the contribution of GFM. Like previous table filling based methods, $\mathrm{GRTE}_{w / o\text{GFM}}$ extracts triples only based on local features.
+(ii) $\mathrm{GRTE}_{\mathrm{GRU}\,\mathrm{GFM}}$ , a variant that uses a GRU (taking $H$ and $TF_{s / o}^{(t)}$ as input) instead of a Transformer to generate the results in Eq. (4), which is to evaluate the contribution of the Transformer.
+
+(iii) $\mathrm{GRTE}_{w / o m - h}$ , a variant that replaces the multi-head attention method in GFM with a single-head attention method, which is to evaluate the contribution of the multi-head attention.
+(iv) $\mathrm{GRTE}_{w / o\text{shared}}$ , a variant that uses different parameters for the modules of $TFG$ and $GFM$ at different iterations, which is to evaluate the contribution of the parameter share mechanism.
+
+All these variants use the BERT-based encoder, and their results are shown in the bottom part of Table 2, from which we can make the following observations.
+
+(1) The performance of $\mathrm{GRTE}_{w / o\text{ GFM}}$ drops greatly compared with GRTE, which confirms the importance of using the two kinds of global features for table filling. We can further notice that on NYT29, NYT24, and WebNLG, the $F1$ scores of $\mathrm{GRTE}_{w / o\text{ GFM}}$ increase by $0.4\%$ , $0.4\%$ , and $0.8\%$ respectively over TPLinker. Both TPLinker and $\mathrm{GRTE}_{w / o\text{ GFM}}$ extract triples based on local features, and the main difference between them is the table filling strategy, so these results prove the effectiveness of our table filling strategy. The $F1$ scores of $\mathrm{GRTE}_{w / o\text{ GFM}}$ on NYT24* and WebNLG* are slightly lower than those of TPLinker; as explained above, this is because only one token is annotated for each entity in NYT24* and WebNLG*, so $\mathrm{GRTE}_{w / o\text{ GFM}}$ cannot realize its full potential.
+
+| Model | NYT24* | | | | | | | | WebNLG* | | | | | | | |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| | Normal | SEO | EPO | T=1 | T=2 | T=3 | T=4 | T≥5 | Normal | SEO | EPO | T=1 | T=2 | T=3 | T=4 | T≥5 |
+| CasRel (BERT) | 87.3 | 91.4 | 92.0 | 88.2 | 90.3 | 91.9 | 94.2 | 83.7 | 89.4 | 92.2 | 94.7 | 89.3 | 90.8 | 94.2 | 92.4 | 90.9 |
+| TPLinker (BERT) | 90.1 | 93.4 | 94.0 | 90.0 | 92.8 | 93.1 | 96.1 | 90.0 | 87.9 | 92.5 | 95.3 | 88.0 | 90.1 | 94.6 | 93.3 | 91.6 |
+| SPN (BERT) | 90.8 | 94.0 | 94.1 | 90.9 | 93.4 | 94.2 | 95.5 | 90.6 | 89.5* | 94.1* | 90.8* | 89.5 | 91.3 | 96.4 | 94.7 | 93.8 |
+| GRTE (BERT) | 91.1 | 94.4 | 95.0 | 90.8 | 93.7 | 94.4 | 96.2 | 93.4 | 90.6 | 94.5 | 96.0 | 90.6 | 92.5 | 96.5 | 95.5 | 94.4 |
+
+Table 3: F1 scores on sentences with different overlapping patterns and different numbers of triples. The results of CasRel are copied directly from TPLinker. "T" is the number of triples contained in a sentence. * means the results are produced by us with the provided source code.
+
+
+Figure 3: F1 results under different $N$ .
+
+(2) The performance of $\mathrm{GRTE}_{\mathrm{GRU} \, \mathrm{GFM}}$ drops compared with GRTE, which indicates that the Transformer is more suitable than the GRU for global feature mining. Even so, we can see that on all datasets, $\mathrm{GRTE}_{\mathrm{GRU} \, \mathrm{GFM}}$ outperforms almost all previous best models and $\mathrm{GRTE}_{w / o \, \mathrm{GFM}}$ in terms of $F1$ , which further indicates the effectiveness of using global features.
+
+(3) The results of $\mathrm{GRTE}_{w / o\,m\text{-}h}$ are lower than those of GRTE, which shows that the multi-head attention mechanism plays an important role in global feature mining. In fact, different features differ in importance, and the multi-head attention mechanism performs the feature mining process from multiple aspects, which helps highlight the more important features.
+(4) The results of $\mathrm{GRTE}_{w / o\text{ shared}}$ are slightly lower than those of GRTE, which shows that the parameter sharing mechanism is effective. In fact, using distinct parameters usually works well only when the training samples are sufficient, and this condition is not well satisfied in RTE since the training samples of a dataset are insufficient to train so many parameters.
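The benefit of sharing the TFG and GFM parameters across iterations can be seen with a back-of-the-envelope count (the module size below is purely illustrative, not a figure from the paper):

```python
def total_params(module_params, iterations, shared=True):
    # With sharing, one TFG/GFM instance is reused at every iteration;
    # without it, each iteration gets its own copy of the parameters.
    return module_params if shared else module_params * iterations

# Illustrative: a 10M-parameter module unrolled over 3 iterations.
print(total_params(10_000_000, 3, shared=True))   # 10000000
print(total_params(10_000_000, 3, shared=False))  # 30000000
```

With sharing, the parameter count stays constant no matter how many refinement iterations are run, which also means fewer parameters must be fit from the limited RTE training data.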
+
+Second, we evaluate the influence of the iteration number $N$ . The results are shown in Figure 3, from which the following observations can be made.
+
+(1) On NYT24* and WebNLG*, the annotation standard is relatively simple, so GRTE achieves the best results with two iterations. But on NYT29, NYT24, and WebNLG, more iterations are usually required: GRTE achieves the best results when $N$ is 3, 3, and 4 respectively on these datasets.
+
+(2) On all datasets, GRTE obtains an obvious performance improvement (even the maximum improvement on some datasets) at $N = 2$ , where $GFM$ begins to play its role, which again indicates that using global features can significantly improve model performance.
+(3) GRTE usually achieves the best results within a small number of iterations on all datasets, including WebNLG and WebNLG*, which contain many relations. In fact, GRTE outperforms all the previous best models even when $N = 2$ . This is an important merit because it indicates that even on datasets with very large numbers of relations, efficiency would not be a burden for GRTE, which matters when GRTE is deployed in real scenarios.
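Since $N$ is a hyperparameter chosen on the development sets (Section 4.1), selecting it amounts to picking the iteration count with the best dev F1. A trivial sketch, with made-up scores that mimic the plateau behavior in Figure 3:

```python
def best_iteration_number(dev_f1):
    # dev_f1 maps iteration count N -> development-set F1 score
    return max(dev_f1, key=dev_f1.get)

# Illustrative (not the paper's) numbers: F1 plateaus after a few iterations.
print(best_iteration_number({1: 91.5, 2: 92.8, 3: 93.0, 4: 92.9}))  # 3
```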
+
+# 4.4 Analyses on Different Sentence Types
+
+Here we evaluate GRTE's ability to extract triples from sentences that contain overlapping triples and multiple triples. For a fair comparison with the previous best models (CasRel, TPLinker, and SPN), we follow their settings: (i) classifying sentences according to the degree of overlap and the number of triples contained in a sentence, and (ii) conducting experiments on different subsets of NYT24* and WebNLG*.
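For reference, the standard overlap categories used in this line of work can be computed from a sentence's gold triples roughly as follows (a simplified sketch with illustrative entities; a sentence exhibiting both kinds of overlap is conventionally counted as EPO):

```python
def overlap_type(triples):
    # triples: list of (subject, relation, object) tuples for one sentence.
    # EPO: two triples share the same (subject, object) entity pair;
    # SEO: two triples share an entity but not the whole pair;
    # Normal: no entity is shared between triples.
    pairs = [(s, o) for s, r, o in triples]
    if len(set(pairs)) < len(pairs):
        return "EPO"
    entities = [e for s, r, o in triples for e in (s, o)]
    if len(set(entities)) < len(entities):
        return "SEO"
    return "Normal"

print(overlap_type([("A", "born_in", "B"), ("A", "mayor_of", "B")]))  # EPO
print(overlap_type([("A", "born_in", "B"), ("A", "ceo_of", "C")]))    # SEO
print(overlap_type([("A", "born_in", "B"), ("C", "ceo_of", "D")]))    # Normal
```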
+
+The results are shown in Table 3. We can see that: (i) GRTE achieves the best results on all three kinds of overlapping sentences on both datasets, and (ii) GRTE achieves the best results on almost all kinds of sentences that contain multiple triples. The only exception is on NYT24*, where the F1 score of GRTE is slightly lower than that of SPN when $T$ is 1. The main reason is that there are fewer associations among token pairs when $T$ is 1, which slightly degrades the performance of GRTE.
+
+| Model | NYT24* | | | WebNLG* | | |
+| --- | --- | --- | --- | --- | --- | --- |
+| | Params (all) | Prop. (encoder) | Inference Time | Params (all) | Prop. (encoder) | Inference Time |
+| CasRel (BERT) | 107,719,680 | 99.96% | 53.9 | 107,984,216 | 99.76% | 77.5 |
+| TPLinker (BERT) | 109,602,962 | 98.82% | 18.1 / 83.5† | 110,281,220 | 98.21% | 26.9 / 120.4† |
+| SPN (BERT) | 141,428,765 | 76.58% | 26.4 / 107.9† | 150,989,744 | 71.73% | 22.6 / 105.7† |
+| GRTE (BERT) | 119,387,328 | 90.72% | 21.3 / 109.6† | 122,098,008 | 88.70% | 28.7 / 124.1† |
+
+Table 4: Computational efficiency. Params (all) is the number of parameters of the entire model. Prop. (encoder) is the proportion of encoder parameters in the total model parameters. Inference Time is the average time (in milliseconds) the model takes to process a sample. † marks the inference time when the batch size is set to 1.
+
+In fact, GRTE maintains a table for each relation, and the $TG$ module extracts triples for each relation independently. Thus it naturally handles the above two kinds of complex sentences.
+
+# 4.5 Analyses on Computational Efficiency
+
+Table 4 shows a comparison of computational efficiency between GRTE and some previous best models. To be fair, we follow the settings in TPLinker and analyze the parameter scale and the inference time on NYT24* and WebNLG*. All the results are obtained by running the compared models on a TITAN Xp, and the batch size is set to 6 for all models that can run in batch mode.
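The measurement protocol can be sketched as below. Function and variable names are illustrative; the real harness would time actual models on the GPU rather than the dummy stand-in used here:

```python
import time

def avg_inference_ms(predict, samples, batch_size=6):
    # Average wall-clock milliseconds per sample over batched inference;
    # batch_size=1 corresponds to the dagger-marked numbers in Table 4.
    start = time.perf_counter()
    for i in range(0, len(samples), batch_size):
        predict(samples[i:i + batch_size])
    return 1000.0 * (time.perf_counter() - start) / len(samples)

# Dummy stand-in for a real extraction model.
ms = avg_inference_ms(lambda batch: [s.upper() for s in batch], ["text"] * 60)
print(ms >= 0.0)  # True
```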
+
+The parameter count of GRTE is slightly larger than that of TPLinker, mainly due to the use of a Transformer-based module. But compared with SPN, which also uses a Transformer, GRTE has fewer parameters thanks to its parameter sharing mechanism.
+
+We can also see that GRTE achieves a very competitive inference speed, mainly for the following three reasons. First, GRTE is a one-stage extraction model and can process samples in batch mode (CasRel can only process samples one by one). Second, as analyzed previously, it has an efficient table filling strategy that needs to fill fewer table items. Third, as also analyzed previously, GRTE often achieves the best results within a small number of iterations, so the iteration operations do not have much impact on its inference speed.
+
+In fact, as TPLinker pointed out, for all the models that use BERT (or another pre-trained language model) as their basic encoder, BERT is usually the most time-consuming part and accounts for most of the model parameters, so the time cost of the other components is not significant.
+
+Besides, there is another important merit of our model: it needs less training time than existing state-of-the-art models like CasRel, TPLinker, and SPN. As pointed out previously, our model is trained for 50 epochs on all datasets, while all the mentioned models are trained for 100 epochs on the same datasets. From Table 4 we can see that all these models have similar inference speeds, and for each model the per-epoch training speed is close to its inference speed (training adds some extra cost for operations like back propagation). Thus our model needs less time for training since it is trained for far fewer epochs.
+
+# 5 Conclusions
+
+In this study, we propose a novel table filling based RTE model that extracts triples based on two kinds of global features. The main contributions of our work are as follows. First, we make use of the global associations of relations and of token pairs; experiments show these two kinds of global features are very helpful for performance. Second, our model works well on extracting triples from complex sentences containing overlapping triples or multiple triples. Third, our model is evaluated on three benchmark datasets, and extensive experiments show that it consistently outperforms all the compared strong baselines and achieves state-of-the-art results. Besides, our model has a competitive inference speed and a moderate parameter size.
+
+# Acknowledgments
+
+This work is supported by the National Natural Science Foundation of China (No.61572120 and No.U1708261), the Fundamental Research Funds for the Central Universities (No.N181602013 and No.N2016006), Shenyang Medical Imaging Processing Engineering Technology Research Center (17-134-8-00), Ten Thousand Talent Program (No.ZX20200035), and Liaoning Distinguished Professor (No.XLYC1902057).
+
+# References
+
+Giannis Bekoulis, Johannes Deleu, Thomas Demeester, and Chris Develder. 2018. Joint entity recognition and relation extraction as a multi-head selection problem. Expert Systems With Applications, 114:34-45.
+Yee Seng Chan and Dan Roth. 2011. Exploiting syntactico-semantic structures for relation extraction. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 551-560.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.
+Markus Eberts and Adrian Ulges. 2019. Span-based joint entity and relation extraction with transformer pre-training. In ECAI, pages 2006-2013.
+Tsu-Jui Fu, Peng-Hsuan Li, and Wei-Yun Ma. 2019. Graphrel: Modeling text as relational graphs for joint entity and relation extraction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1409-1418.
+Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017. Creating training corpora for nlg micro-planners. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 179-188.
+Pankaj Gupta, Hinrich Schütze, and Bernt Andrassy. 2016. Table filling multi-task recurrent neural network for joint entity and relation extraction. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 2537-2547, Osaka, Japan. The COLING 2016 Organizing Committee.
+Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
+Xiaoya Li, Fan Yin, Zijun Sun, Xiayu Li, Arianna Yuan, Duo Chai, Mingxin Zhou, and Jiwei Li. 2019. Entity-relation extraction as multi-turn question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1340-1350.
+Makoto Miwa and Mohit Bansal. 2016. End-to-end relation extraction using LSTMs on sequences and tree structures. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1105-1116, Berlin,
+
+Germany. Association for Computational Linguistics.
+Tapas Nayak and Hwee Tou Ng. 2020. Effective modeling of encoder-decoder architecture for joint entity and relation extraction. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 8528-8535. AAAI Press.
+Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.
+Dianbo Sui, Yubo Chen, Kang Liu, Jun Zhao, Xiangrong Zeng, and Shengping Liu. 2020. Joint entity and relation extraction with set prediction networks. CoRR, abs/2011.01675.
+Kai Sun, Richong Zhang, Samuel Mensah, Yongyi Mao, and Xudong Liu. 2020. Recurrent interaction network for jointly extracting entities and classifying relations. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 3722-3732. Association for Computational Linguistics.
+Kai Sun, Richong Zhang, Samuel Mensah, Yongyi Mao, and Xudong Liu. 2021. Progressive multitask learning with controlled information flow for joint entity and relation extraction. In Association for the Advancement of Artificial Intelligence (AAAI).
+Ryuichi Takanobu, Tianyang Zhang, Jiexi Liu, and Minlie Huang. 2019. A hierarchical framework for relation extraction with reinforcement learning. Proceedings of the AAAI Conference on Artificial Intelligence, 33(1):7072-7079.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998-6008.
+Yucheng Wang, Bowen Yu, Yueyang Zhang, Tingwen Liu, Hongsong Zhu, and Limin Sun. 2020. TPLinker: Single-stage joint extraction of entities and relations through token pair linking. In Proceedings of the 28th International Conference on Computational Linguistics, pages 1572-1582, Barcelona, Spain (Online).
+Zhepei Wei, Jianlin Su, Yue Wang, Yuan Tian, and Yi Chang. 2020. A novel cascade binary tagging
+
+framework for relational triple extraction. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1476-1488, Online. Association for Computational Linguistics.
+Bowen Yu, Zhenyu Zhang, Xiaobo Shu, Tingwen Liu, Yubin Wang, Bin Wang, and Sujian Li. 2019. Joint extraction of entities and relations based on a novel decomposition strategy. In ECAI, pages 2282-2289.
+Yue Yuan, Xiaofei Zhou, Shirui Pan, Qiannan Zhu, Zeliang Song, and Li Guo. 2020. A relation-specific attention network for joint entity and relation extraction. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, volume 4, pages 4054-4060.
+Dmitry Zelenko, Chinatsu Aone, and Anthony Richardella. 2003. Kernel methods for relation extraction. Journal of Machine Learning Research, 3(6):1083-1106.
+Daojian Zeng, Haoran Zhang, and Qianying Liu. 2020. Copymtl: Copy mechanism for joint extraction of entities and relations with multi-task learning. Proceedings of the AAAI Conference on Artificial Intelligence, 34(5):9507-9514.
+Xiangrong Zeng, Shizhu He, Daojian Zeng, Kang Liu, Shengping Liu, and Jun Zhao. 2019. Learning the extraction order of multiple relational facts in a sentence with reinforcement learning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 367-377.
+Xiangrong Zeng, Daojian Zeng, Shizhu He, Kang Liu, and Jun Zhao. 2018. Extracting relational facts by an end-to-end neural model with copy mechanism. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 506-514.
+Meishan Zhang, Yue Zhang, and Guohong Fu. 2017. End-to-end neural relation extraction with global optimization. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1730-1740, Copenhagen, Denmark. Association for Computational Linguistics.
+Suncong Zheng, Feng Wang, Hongyun Bao, Yuexing Hao, Peng Zhou, and Bo Xu. 2017. Joint extraction of entities and relations based on a novel tagging scheme. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1227-1236.
+GuoDong Zhou, Jian Su, Jie Zhang, and Min Zhang. 2005. Exploring various knowledge in relation extraction. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05), pages 427-434.
\ No newline at end of file
diff --git a/anovelglobalfeatureorientedrelationaltripleextractionmodelbasedontablefilling/images.zip b/anovelglobalfeatureorientedrelationaltripleextractionmodelbasedontablefilling/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..f6b9fca9cf8d24c578564f56f7a1efe78b0d6e7b
--- /dev/null
+++ b/anovelglobalfeatureorientedrelationaltripleextractionmodelbasedontablefilling/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:60401abf5e7874fffa0b4a99b4577b1bec1e59aff94048f103dc815653c8592a
+size 456484
diff --git a/anovelglobalfeatureorientedrelationaltripleextractionmodelbasedontablefilling/layout.json b/anovelglobalfeatureorientedrelationaltripleextractionmodelbasedontablefilling/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..08333361bfac553e4965b24cf8a00af9c6f4265e
--- /dev/null
+++ b/anovelglobalfeatureorientedrelationaltripleextractionmodelbasedontablefilling/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:51cd55abdadf72313aaf1e5209a2e69c32852572415695340b9b9df3b72ffb96
+size 436689
diff --git a/answeringopendomainquestionsofvaryingreasoningstepsfromtext/663b0f58-e90a-450f-98ca-771636bd21a7_content_list.json b/answeringopendomainquestionsofvaryingreasoningstepsfromtext/663b0f58-e90a-450f-98ca-771636bd21a7_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..103f5e189d54e434b0d1f562ae9fcc51902855cf
--- /dev/null
+++ b/answeringopendomainquestionsofvaryingreasoningstepsfromtext/663b0f58-e90a-450f-98ca-771636bd21a7_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7b47f7fb7e7437f4ad050d4556523c75005cf7ec7a964b0cc96222effa1ad03f
+size 113032
diff --git a/answeringopendomainquestionsofvaryingreasoningstepsfromtext/663b0f58-e90a-450f-98ca-771636bd21a7_model.json b/answeringopendomainquestionsofvaryingreasoningstepsfromtext/663b0f58-e90a-450f-98ca-771636bd21a7_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..f4810bc9c7997075f24dd5943d67effeb7ced673
--- /dev/null
+++ b/answeringopendomainquestionsofvaryingreasoningstepsfromtext/663b0f58-e90a-450f-98ca-771636bd21a7_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:988b227f521121f91b6b68a966dfb94eaf3d3cc2e58db96d61de5cfaa770a56d
+size 132387
diff --git a/answeringopendomainquestionsofvaryingreasoningstepsfromtext/663b0f58-e90a-450f-98ca-771636bd21a7_origin.pdf b/answeringopendomainquestionsofvaryingreasoningstepsfromtext/663b0f58-e90a-450f-98ca-771636bd21a7_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..95e08efd40e3db9dadf114db75bc944be1d0ef96
--- /dev/null
+++ b/answeringopendomainquestionsofvaryingreasoningstepsfromtext/663b0f58-e90a-450f-98ca-771636bd21a7_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5f78ed67e99adb6a430f0b1ac3f02de8f082ff7a9065707a86ebc03a034f1a72
+size 739375
diff --git a/answeringopendomainquestionsofvaryingreasoningstepsfromtext/full.md b/answeringopendomainquestionsofvaryingreasoningstepsfromtext/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..2de43961dc04be6d8ead65332b7c4e61e7ae44d7
--- /dev/null
+++ b/answeringopendomainquestionsofvaryingreasoningstepsfromtext/full.md
@@ -0,0 +1,396 @@
+# Answering Open-Domain Questions of Varying Reasoning Steps from Text
+
+Peng Qi\*
+
+Haejun Lee\*
+
+Oghenetegiri "TG" Sido\*
+
+Christopher D. Manning\*
+
+Computer Science Department, Stanford University
+
+JD AI Research
+
+Samsung Research
+
+{pengqi, osido, manning}@cs.stanford.edu, haejun82.lee@samsung.com
+
+# Abstract
+
+We develop a unified system to answer directly from text open-domain questions that may require a varying number of retrieval steps. We employ a single multi-task transformer model to perform all the necessary subtasks—retrieving supporting facts, reranking them, and predicting the answer from all retrieved documents—in an iterative fashion. We avoid crucial assumptions of previous work that do not transfer well to real-world settings, including exploiting knowledge of the fixed number of retrieval steps required to answer each question or using structured metadata like knowledge bases or web links that have limited availability. Instead, we design a system that can answer open-domain questions on any text collection without prior knowledge of reasoning complexity. To emulate this setting, we construct a new benchmark, called BEERQA, by combining existing one- and two-step datasets with a new collection of 530 questions that require three Wikipedia pages to answer, unifying Wikipedia corpora versions in the process. We show that our model demonstrates competitive performance on both existing benchmarks and this new benchmark. We make the new benchmark available at https://beerqa.github.io/.
+
+# 1 Introduction
+
+Using knowledge to solve problems is a hallmark of intelligence. Since human knowledge is often contained in large text collections, open-domain question answering (QA) is an important means for intelligent systems to make use of the knowledge in large text collections. With the help of large-scale datasets based on Wikipedia (Rajpurkar et al., 2016, 2018) and other large corpora (Trischler et al., 2016; Dunn et al., 2017; Talmor and Berant, 2018), the research community has made substantial progress on tackling this problem in recent years, including
+
+in the direction of complex reasoning over multiple pieces of evidence, or multi-hop reasoning (Yang et al., 2018; Welbl et al., 2018; Chen et al., 2020).
+
+Despite this success, most previous systems are developed with, and evaluated on, datasets that contain exclusively single-hop questions (ones that require a single document or paragraph to answer) or two-hop ones. As a result, their design is often tailored exclusively to single-hop (e.g., Chen et al., 2017; Wang et al., 2018b) or multi-hop questions (e.g., Nie et al., 2019; Min et al., 2019; Feldman and El-Yaniv, 2019; Zhao et al., 2020a; Xiong et al., 2021). Even when the model is designed to work with both, it is often trained and evaluated on exclusively single-hop or multi-hop settings (e.g., Asai et al., 2020). In practice, not only can we not expect open-domain QA systems to receive exclusively single- or multi-hop questions from users, but it is also non-trivial to judge reliably whether a question requires one or multiple pieces of evidence to answer a priori. For instance, "In which U.S. state was Facebook founded?" appears to be single-hop, but its answer cannot be found in the main text of a single English Wikipedia page.
+
+Besides the impractical assumption about reasoning hops, previous work often also assumes access to non-textual metadata such as knowledge bases, entity linking, and Wikipedia hyperlinks when retrieving supporting facts, especially in answering complex questions (Nie et al., 2019; Feldman and El-Yaniv, 2019; Zhao et al., 2019; Asai et al., 2020; Dhingra et al., 2020; Zhao et al., 2020a). While this information is helpful, it is not always available in text collections we might be interested in getting answers from, such as news or academic research articles, besides being labor-intensive and time-consuming to collect and maintain. It is therefore desirable to design a system that is capable of extracting knowledge from text without using such metadata, to maximally emphasize using knowledge available to us in the form of text.
+
+
+Figure 1: The IRRR question answering pipeline answers a complex question in the HotpotQA dataset by iteratively retrieving, reading, and reranking paragraphs from Wikipedia. In this example, the question is answered in five steps: 1. the retriever model selects the words "Ingerophrynus gollum" from the question as an initial search query; 2. the question answering model attempts to answer the question by combining the question with each of the retrieved paragraphs and fails to find an answer; 3. the reranker picks the paragraph about Ingerophrynus gollum to extend the reasoning path; 4. the retriever generates an updated query "Lord of the Rings" to retrieve new paragraphs; 5. the reader correctly predicts the answer "150 million copies" by combining the reasoning path (question + "Ingerophrynus gollum") with the newly retrieved paragraph about "The Lord of the Rings".
+
+To address these limitations, we propose Iterative Retriever, Reader, and Reranker (IRRR), which features a single neural network model that performs all of the subtasks required to answer questions from a large collection of text (see Figure 1). IRRR is designed to leverage off-the-shelf information retrieval systems by generating natural language search queries, which allows it to easily adapt to arbitrary collections of text without requiring well-tuned neural retrieval systems or extra metadata. This further allows users to understand and control IRRR, if necessary, to facilitate trust. Moreover, IRRR iteratively retrieves more context to answer the question, which allows it to easily accommodate questions requiring different numbers of reasoning steps.
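The iterative loop in Figure 1 can be sketched as follows. This is a hedged toy reconstruction, not the authors' code: the dictionary-backed `search`, the `make_query`/`read`/`rerank` stand-ins, and the two-paragraph corpus are all illustrative, standing in for a real IR system (e.g. BM25) and the multi-task transformer:

```python
def irrr_answer(question, make_query, search, read, rerank, max_steps=4):
    # Retrieve with a generated query, try to answer from the reasoning path
    # plus the retrieved paragraphs, and otherwise extend the path with the
    # top-ranked paragraph and retrieve again.
    path = [question]
    for _ in range(max_steps):
        paragraphs = search(make_query(path))   # off-the-shelf IR, e.g. BM25
        answer = read(path, paragraphs)
        if answer is not None:
            return answer
        if not paragraphs:
            break
        path.append(rerank(path, paragraphs))   # best paragraph extends the path
    return None

# Toy two-hop run mirroring the Figure 1 example.
corpus = {
    "Ingerophrynus gollum": "a toad named after Gollum of The Lord of the Rings",
    "Lord of the Rings": "The Lord of the Rings has sold 150 million copies",
}
make_query = lambda path: "Ingerophrynus gollum" if len(path) == 1 else "Lord of the Rings"
search = lambda q: [corpus[q]] if q in corpus else []
read = lambda path, paras: next(("150 million copies" for p in paras if "150 million" in p), None)
rerank = lambda path, paras: paras[0]

print(irrr_answer("How many copies has the novel that Ingerophrynus gollum is named after sold?",
                  make_query, search, read, rerank))  # 150 million copies
```

The first iteration fails to answer and extends the path with the toad paragraph; the second query then retrieves the novel's paragraph, from which the answer is read off.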
+
+To evaluate the performance of open-domain QA systems in a more realistic setting, we construct a new benchmark called $\mathrm{BEERQA}^1$ by combining the questions from the single-hop SQuAD Open (Rajpurkar et al., 2016; Chen et al., 2017) and the two-hop HotpotQA (Yang et al., 2018) with a new collection of 530 human-annotated questions that require information from at least three Wikipedia pages to answer. We map all questions to a unified version of the English Wikipedia to reduce stylistic differences that might provide statistical shortcuts to models. As a result, BEERQA provides a more realistic evaluation of open-ended question answering systems' ability to answer questions without knowing the number of reasoning steps required ahead of time. We show that IRRR not only achieves competitive performance with state-of-the-art models on the original SQuAD Open and HotpotQA datasets, but also establishes a strong baseline for this new dataset.
+
+To recap, our contributions in this paper are: (1) a new open-domain QA benchmark, BEERQA, that features questions requiring a variable number of reasoning steps to answer over a unified Wikipedia corpus; and (2) a single unified neural network model that performs all essential subtasks in open-domain QA purely from text (retrieval, reranking, and reading comprehension), which not only achieves strong results on SQuAD and HotpotQA, but also establishes a strong baseline on this new benchmark.
+
+# 2 Open-Domain Question Answering
+
+The task of open-domain question answering is concerned with finding the answer $a$ to a question $q$ from a large text collection $\mathcal{D}$. Successful solutions to this task usually involve two crucial components: an information retrieval system that finds a small set of relevant documents $\mathcal{D}_r$ from $\mathcal{D}$, and a reading comprehension system that extracts the answer from it. Chen et al. (2017) presented the first neural network-based approach to this problem, which was later extended by Wang et al. (2018a) with a reranking system that further reduces the amount of context the reading comprehension component has to consider, improving answer accuracy.
+
+More recently, Yang et al. (2018) showed that this single-step retrieve-and-read approach to open-domain question answering is inadequate for more complex questions that require multiple pieces of evidence to answer (e.g., "What is the population of Mark Twain's hometown?"). While later work addresses such questions by extending supporting fact retrieval beyond one step, most of it assumes that all questions are exclusively single-hop or exclusively multi-hop during training and evaluation. We propose IRRR, a system that performs variable-hop retrieval for open-domain QA to address these issues, and present a new benchmark, BEERQA, to evaluate systems in a more realistic setting.
+
+# 3 IRRR: Iterative Retriever, Reader, and Reranker
+
+In this section, we present Iterative Retriever, Reader, and Reranker (IRRR), a unified model that performs all of the subtasks necessary for open-domain question answering in an iterative manner, accommodating questions that require a varying number of reasoning steps. IRRR aims to build a reasoning path $p$ from the question $q$, through all the necessary supporting documents or paragraphs $d \in \mathcal{D}_{\mathrm{gold}}$, to the answer $a$ (where $\mathcal{D}_{\mathrm{gold}}$ is the set of gold supporting facts). As shown in Figure 1, IRRR operates in a loop of retrieval, reading, and reranking to expand the reasoning path $p$ with new documents $d \in \mathcal{D}$.
+
+Specifically, given a question $q$, we initialize the reasoning path with the question itself, i.e., $p_0 = [q]$, and generate from it a search query with IRRR's retriever. Once a set of relevant documents $\mathcal{D}_1 \subset \mathcal{D}$ is retrieved, they might either help answer the question, or reveal clues about the next piece of evidence needed to answer $q$. The reader model then attempts to answer the question by reading each of the documents in $\mathcal{D}_1$ combined with the current reasoning path $p$. If an answer can be found from more than one of these candidate reasoning paths, we predict the answer with the highest answerability score, which we detail in Section 3.2. If no answer can be found, IRRR's reranker scores each retrieved paragraph against the current reasoning path, and appends the top-ranked paragraph to the current reasoning path,
+
+
+Figure 2: The overall architecture of our IRRR model, which uses a shared Transformer encoder to perform all subtasks of open-domain question answering.
+
+i.e., $p_{i+1} = p_i + \left[ \arg\max_{d \in \mathcal{D}_1} \operatorname{reranker}(p_i, d) \right]$, before the updated reasoning path is presented to the retriever to generate new search queries. This iterative process is repeated until an answer is predicted from one of the reasoning paths, or until the reasoning path has reached a cap of $K$ documents.
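The loop described above can be sketched as follows. This is a minimal illustration with hypothetical interfaces for the three model heads and the search engine, not the authors' implementation:

```python
def irrr_answer(question, generate_query, search, read, rerank, max_docs=5):
    """Sketch of IRRR's retrieve-read-rerank loop (hypothetical interfaces).

    generate_query(path) -> query string extracted from the reasoning path
    search(query)        -> candidate paragraphs from an off-the-shelf engine
    read(path, doc)      -> (answer or None, answerability score)
    rerank(path, doc)    -> scalar score for extending the path with doc
    """
    path = [question]                               # p_0 = [q]
    while len(path) - 1 < max_docs:                 # cap of K documents
        docs = search(generate_query(path))
        if not docs:
            break
        # Try to answer from each candidate extension of the reasoning path.
        found = [(a, s) for a, s in (read(path, d) for d in docs)
                 if a is not None]
        if found:
            # Predict the answer with the highest answerability score.
            return max(found, key=lambda x: x[1])[0]
        # Otherwise extend the path with the top-reranked paragraph.
        path = path + [max(docs, key=lambda d: rerank(path, d))]
    return None                                     # no answer within budget
```

Because the same loop body is reused at every step, the same parameters handle single-hop and multi-hop questions alike.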
+
+To reduce computational cost and to let reasoning-path representations benefit from shared statistical learning, IRRR is implemented as a multi-task model built on a pretrained Transformer that performs all three subtasks. At a high level, it consists of a Transformer encoder (Vaswani et al., 2017), which takes the reasoning path $p$ (the question and all paragraphs retrieved so far) as input, and one set of task-specific parameters for each of retrieval, reranking, and reading comprehension (see Figure 2). The retriever generates natural language search queries by selecting words from the reasoning path, the reader extracts answers from the reasoning path and abstains when its confidence is not high enough, and the reranker assigns a scalar score to each retrieved paragraph as a potential continuation of the current reasoning path.
+
+The input to our Transformer encoder is formatted similarly to that of the BERT model (Devlin et al., 2019). For a reasoning path $p$ that consists of the question and $t$ retrieved paragraphs, the input is formatted as "[CLS] question [SEP] title_1 [CONT] para_1 [SEP] ... title_t [CONT] para_t [SEP]", where [CLS], [SEP], and [CONT] are special tokens to separate different components of the input. The [CONT] embedding is randomly initialized with a truncated normal distribution with a standard deviation of 0.02, and finetuned with other model parameters during training.
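As a concrete illustration, the input layout described above can be assembled as follows (a sketch; actual tokenization and special-token handling depend on the tokenizer used):

```python
def format_reasoning_path(question, paragraphs):
    """Assemble the BERT-style input for a reasoning path, where
    `paragraphs` is a list of (title, text) pairs and [CONT] separates
    each title from its paragraph text."""
    parts = ["[CLS]", question, "[SEP]"]
    for title, text in paragraphs:
        parts += [title, "[CONT]", text, "[SEP]"]
    return " ".join(parts)
```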
+
+We will detail each of the task-specific components in the following subsections.
+
+# 3.1 Retriever
+
+The goal of the retriever is to generate natural language queries to retrieve relevant documents from an off-the-shelf text-based retrieval engine. This allows IRRR to perform open-domain QA in an explainable and controllable manner, where a user can easily understand the model's behavior and intervene if necessary. We extract search queries from the current reasoning path, i.e., the original question and all of the paragraphs that we have already retrieved, similar to GOLDEN Retriever's approach (Qi et al., 2019). This is based on the observation that there is usually strong semantic overlap between the reasoning path and the next paragraph to retrieve, which helps reduce the search space of potential queries. We note, though, that IRRR differs from GOLDEN Retriever in two important ways: (1) we allow search queries to be any subsequence of the reasoning path rather than limiting them to contiguous substrings, permitting more flexible combinations of search phrases; (2) more importantly, we employ the same retriever model across reasoning steps instead of training a separate one for each step, which is crucial for IRRR to generalize to an arbitrary number of reasoning steps.
+
+To predict these search queries from the reasoning path, we apply a token-wise binary classifier on top of the shared Transformer encoder to decide whether each token is included in the final query. At training time, we derive a supervision signal to train these classifiers with a binary cross entropy loss (detailed in Section 3.4.1); at test time, we select a cutoff threshold above which query words from the reasoning path are included. In practice, we find that encouraging the model to predict more query terms increases the recall of the target paragraphs in retrieval.
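At test time, query construction then amounts to thresholding the per-token probabilities; a sketch (the threshold value here is illustrative, not from the paper):

```python
def select_query_tokens(tokens, probs, threshold=0.3):
    """Keep every reasoning-path token whose binary-classifier probability
    clears the threshold; the kept tokens form the (not necessarily
    contiguous) search query. Lowering the threshold yields longer queries
    and higher retrieval recall, as noted above."""
    return [tok for tok, p in zip(tokens, probs) if p >= threshold]
```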
+
+# 3.2 Reader
+
+The reader model attempts to find the answer given a reasoning path comprised of the question and retrieved paragraphs. To support unanswerable questions and the special non-extractive answers yes and no from HotpotQA, we train a classifier conditioned on the Transformer encoder representation of the [CLS] token to predict one of the four classes SPAN/YES/NO/NOANSWER. The classifier thus simultaneously assigns an answerability score to each reasoning path, assessing the likelihood that the answer to the original question can be found on it. Span answers are predicted from the context using a span start classifier and a span end classifier, following Devlin et al. (2019).
+
+We define answerability as the log likelihood ratio between the most likely positive answer and the NOANSWER prediction, and use it to pick the best answer from all the candidate reasoning paths to stop IRRR's iterative process, if found. We find that this likelihood ratio formulation is less affected by sequence length compared to prediction probability, thus making it easier to assign a global threshold across reasoning paths of different lengths to stop further retrieval. We include further details about answerability calculation in Appendix C.
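One plausible reading of this definition, assuming the four-way classifier produces one logit per class (the paper's exact formulation is in its Appendix C): under a softmax, the log likelihood ratio between two classes reduces to a difference of logits, so the sequence-length-dependent normalizer cancels.

```python
def answerability(logits):
    """Log likelihood ratio between the most likely positive prediction
    (SPAN/YES/NO) and NOANSWER. Since log softmax(x)_i - log softmax(x)_j
    = x_i - x_j, the score is just a difference of raw logits and is
    positive when some answer is more likely than abstaining."""
    best_positive = max(logits[c] for c in ("SPAN", "YES", "NO"))
    return best_positive - logits["NOANSWER"]
```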
+
+# 3.3 Reranker
+
+When the reader fails to find an answer from the reasoning path, the reranker selects one of the retrieved paragraphs to expand it, so that the retriever can generate new search queries to retrieve new context to answer the question. To achieve this, we assign each potential extended reasoning path a score by linearly transforming the hidden representation of the [CLS] token, and picking the extension that has the highest score. At training time, we normalize the reranker scores across top retrieved paragraphs with softmax, and maximize the log likelihood of selecting gold supporting paragraphs from retrieved ones, which is a noise contrastive estimation (NCE; Mnih and Kavukcuoglu, 2013; Jean et al., 2015) of the reranker likelihood over all retrieved paragraphs.
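The reranker's training objective can be sketched as a softmax-normalized negative log likelihood over the retrieved candidates (a minimal illustration of the NCE formulation described above, not the authors' code):

```python
import math

def reranker_nce_loss(scores, gold_index):
    """Negative log likelihood of selecting the gold paragraph after
    softmax-normalizing the reranker's scalar scores over all retrieved
    paragraphs (a noise contrastive estimate over the retrieved set)."""
    m = max(scores)                         # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    return -math.log(exps[gold_index] / sum(exps))
```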
+
+# 3.4 Training IRRR
+
+# 3.4.1 Dynamic Oracle for Query Generation
+
+Since existing open-domain QA datasets do not include human-annotated search queries, we derive a supervision signal to train the retriever with a dynamic oracle. Similar to GOLDEN Retriever, we derive search queries from overlapping terms between the reasoning path and the target paragraph, with the goal of maximizing retrieval performance.
+
+To reduce computational cost, we limit our attention to overlapping spans of text between the reasoning path and the target document when generating oracle queries. For instance, when "David" is part of the overlapping span "David Dunn", the entire span is either included or excluded from the oracle query to reduce the search space. Once $N$
+
+
+Figure 3: Recall of the two gold supporting documents by the oracle queries of GOLDEN Retriever and IRRR on the HotpotQA dataset, where each question corresponds to two supporting documents.
+
+overlapping spans are found, we score each span with the following importance metric, which avoids enumerating all $2^{N}$ span combinations when generating the oracle query:
+
+$$
+\mathrm{Imp}(s_i) = \mathrm{Rank}\left(t, \{s_j\}_{j=1, j \neq i}^{N}\right) - \mathrm{Rank}(t, \{s_i\}),
+$$
+
+where $s_j$ are overlapping spans, and $\operatorname{Rank}(t, S)$ is the rank of target document $t$ in the search result when the spans in $S$ are used as the search query (the smaller, the closer $t$ is to the top). Intuitively, the second term captures the importance of the span when used alone, and the first captures its importance when combined with all other overlapping spans, which helps us capture query terms that are only effective in combination. After estimating the importance of each overlapping span, we determine the final oracle query by sorting all spans by descending importance, then including each in the oracle query until the search rank of $t$ stops improving. The resulting time complexity for generating these oracle queries is thus $O(N)$, i.e., linear in the number of overlapping spans between the reasoning path and the target paragraph.
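Under an assumed `rank(target, spans)` interface to the search engine (smaller is better; this interface is a stand-in, not part of the paper's released artifacts), the procedure can be sketched as:

```python
def build_oracle_query(target, spans, rank):
    """Sketch of the dynamic oracle. Each overlapping span s_i is scored by
    Imp(s_i) = Rank(t, all spans except s_i) - Rank(t, {s_i}); spans are
    then added to the query in descending importance while the rank of the
    target document keeps improving."""
    imp = {s: rank(target, [o for o in spans if o != s]) - rank(target, [s])
           for s in spans}
    query, best_rank = [], float("inf")
    for s in sorted(spans, key=lambda s: imp[s], reverse=True):
        r = rank(target, query + [s])
        if r < best_rank:                 # stop once rank stops improving
            query.append(s)
            best_rank = r
        else:
            break
    return query
```

Each span triggers a constant number of `rank` calls, giving the $O(N)$ cost noted above.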
+
+Figure 3 shows that the added flexibility of non-span queries in IRRR significantly improves retrieval performance compared to that of GOLDEN Retriever, which can only extract contiguous spans from the reasoning path as queries.
+
+# 3.4.2 Reducing Exposure Bias with Data Augmentation
+
+With the dynamic oracle, we are able to generate target queries to train the retriever model, retrieve documents to train the reranker model, and expand reasoning paths in the training set by always choosing a gold paragraph, following Qi et al. (2019). However, this might prevent the model from generalizing to cases where model behavior deviates from the oracle. To address this, we augment the training data by occasionally selecting non-gold
+
+Question: How many counties are on the island that is home to the fictional setting of the novel in which Daisy Buchanan is a supporting character?
+
+Wikipedia Page 1: Daisy Buchanan
+Daisy Fay Buchanan is a fictional character in F. Scott Fitzgerald's magnum opus "The Great Gatsby" (1925)...
+
+Wikipedia Page 2: The Great Gatsby
+The Great Gatsby is a 1925 novel ... that follows a cast of characters living in the fictional town of West Egg on prosperous Long Island ...
+
+Wikipedia Page 3: Long Island The Long Island ... comprises four counties in the U.S. state of New York: Kings and Queens ... to the west; and Nassau and Suffolk to the east...
+
+Answer: four
+
+Figure 4: An example of the newly collected challenge questions. This particular question requires three pieces of evidence to answer.
+
+paragraphs to expand reasoning paths, and use the dynamic oracle to generate queries for the model to "recover" from these synthesized retrieval mistakes. We found that this data augmentation significantly improves the performance of IRRR in preliminary experiments, and thus report main results with augmented training data.
+
+# 4 Experiments
+
+Standard Benchmarks. We test IRRR on two standard benchmarks, SQuAD Open and HotpotQA. SQuAD Open (Chen et al., 2017) designates the development set of the original SQuAD dataset as its test set, which features more than 10,000 questions, each based on a single paragraph in a Wikipedia article. For this dataset, we follow previous work and use the 2016 English Wikipedia as the corpus for evaluation. Since the authors did not present a standard development set, we further split part of the training set to construct a development set roughly as large as the test set. HotpotQA (Yang et al., 2018) features more than 100,000 questions that require the introductory paragraphs of two Wikipedia articles to answer, and we focus on its open-domain "fullwiki" setting in this work. For HotpotQA, we use the introductory paragraphs provided by the authors for training and evaluation, which is based on a 2017 Wikipedia dump.
+
+New Benchmark. To evaluate the performance of IRRR as well as future QA systems in a more realistic open-domain setting without a pre-specified number of reasoning steps for each question, we further combine SQuAD Open and HotpotQA with 530 newly collected challenge questions (see Figure 4 for an example, and Appendix E for more details)
+
+| | SQuAD Open | HotpotQA | 3+ Hop | Total |
+| --- | --- | --- | --- | --- |
+| Train | 59,285 | 74,758 | 0 | 134,043 |
+| Dev | 8,132 | 5,989 | 0 | 14,121 |
+| Test | 8,424 | 5,978 | 530 | 14,932 |
+| Total | 75,841 | 86,725 | 530 | 163,096 |
+
+Table 1: Counts of QA examples in the new unified benchmark, BEERQA.
+
+to construct a new benchmark. Note that naively combining the datasets by merging the questions and the underlying corpora is problematic: the corpora not only contain repeated and sometimes contradictory information, but also present it in two distinct forms (full Wikipedia pages in one, and just the introductory paragraphs in the other). This could result in models taking corpus style as a shortcut to determine question complexity, or even in plausible false answers due to corpus inconsistency.
+
+To construct a high-quality unified benchmark, we begin by mapping the paragraphs each question is based on to a more recent version of Wikipedia. We discarded examples where the Wikipedia pages have either been removed or significantly edited such that the answer can no longer be found from paragraphs that are similar enough to the original contexts the questions are based on. As a result, we filtered out 22,328 examples from SQuAD Open, and 18,649 examples from HotpotQA's fullwiki setting. We add newly annotated challenge questions to the test set of the new benchmark, which require at least three steps of reasoning to answer. This allows us to test the generalization capabilities of QA models to this unseen scenario. The statistics of the final dataset, which we name BEERQA, can be found in Table 1. For all benchmark datasets, we report standard answer exact match (EM) and unigram $\mathrm{F}_1$ metrics.
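For reference, the unigram F1 metric can be computed as follows (a sketch; the official evaluation scripts additionally lowercase and strip punctuation and articles before comparing):

```python
from collections import Counter

def unigram_f1(prediction, gold):
    """Token-level F1 between a predicted and a gold answer string."""
    pred_toks, gold_toks = prediction.split(), gold.split()
    overlap = sum((Counter(pred_toks) & Counter(gold_toks)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)
```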
+
+Training details. We use ELECTRA-Large (Clark et al., 2020) as the pre-trained initialization for our Transformer encoder. We train the model on a combined dataset of SQuAD Open and HotpotQA questions, optimizing the joint loss of the retriever, reader, and reranker components simultaneously in a multi-task learning fashion. Training data for the retriever and reranker components is derived from the dynamic oracle on the training sets of these datasets, where reasoning paths are expanded with oracle queries and gold paragraphs are picked as they are retrieved for the reader component. We augment the training data with the technique in Section 3.4.2 and expand reasoning paths up to 3 reasoning steps on HotpotQA and 2 on SQuAD Open, which results in a more robust model. After an initial model is finetuned on this expanded training set, we apply iterative training to further reduce exposure bias by generating more data with the trained model and the dynamic oracle.
+
+# 5 Results
+
+In this section, we present the performance of IRRR when evaluated against previous systems on standard benchmarks, and demonstrate its efficacy on our new, unified benchmark, especially with the help of iterative training.
+
+# 5.1 Performance on Standard Benchmarks
+
+We first compare IRRR against previous systems on SQuAD Open and the fullwiki setting of HotpotQA. On each dataset, we compare IRRR against the best previously published systems, as well as unpublished ones on public leaderboards. For a fairer comparison to previous work, we use their respective Wikipedia corpora, and limit the retriever to 150 paragraphs of text from Wikipedia at each step of reasoning. We also compare IRRR against the Graph Recurrent Retriever (GRR; Asai et al., 2020) on our newly collected 3+ hop challenge test set, using the authors' released code and models trained on HotpotQA. In these experiments, we report IRRR performance both when trained on the dataset it is evaluated on, and when trained on the combined data derived from both SQuAD Open and HotpotQA.
+
+As can be seen in Tables 2 and 3, IRRR achieves competitive performance with previous work, and further outperforms previously published work on SQuAD Open by a large margin when trained on combined data. It also outperforms systems that were submitted after IRRR was initially submitted to the HotpotQA leaderboard. On the $3+$ hop challenge set, we similarly notice a large performance margin between IRRR and GRR, although neither is trained with questions requiring three or more hops, demonstrating that IRRR generalizes well to
+
+| System | EM | F1 |
+| --- | --- | --- |
+| DrQA (Chen et al., 2017) | 27.1 | — |
+| DensePR (Karpukhin et al., 2020) | 38.1 | — |
+| BERTserini (Yang et al., 2019) | 38.6 | 46.1 |
+| MUPPET (Feldman and El-Yaniv, 2019) | 39.3 | 46.2 |
+| RE3 (Hu et al., 2019) | 41.9 | 50.2 |
+| Knowledge-aided (Zhou et al., 2020) | 43.6 | 53.4 |
+| Multi-passage BERT (Wang et al., 2019) | 53.0 | 60.9 |
+| GRR (Asai et al., 2020) | 56.5 | 63.8 |
+| FiD (Izacard and Grave, 2020) | 56.7 | — |
+| SPARTA (Zhao et al., 2020b) | 59.3 | 66.5 |
+| IRRR (SQuAD) | 56.8 | 63.2 |
+| IRRR (SQuAD+HotpotQA) | 61.8 | 68.9 |
+
+Table 2: End-to-end question answering performance on SQuAD Open, evaluated on the same set of documents as Chen et al. (2017).
+
+| System | HotpotQA EM | HotpotQA F1 | 3+ hop EM | 3+ hop F1 |
+| --- | --- | --- | --- | --- |
+| GRR (Asai et al., 2020) | 60.0 | 73.0 | 27.2† | 31.9† |
+| Step-by-step⊗ | 63.0 | 75.4 | — | — |
+| DDRQA (Zhang et al., 2021) | 62.3 | 75.3 | — | — |
+| MDR (Xiong et al., 2021) | 62.3 | 75.3 | — | — |
+| EBS-SH⊗ | 65.5 | 78.6 | — | — |
+| TPRR⊗ | 67.0 | 79.5 | — | — |
+| HopRetriever (Li et al., 2020) | 67.1 | 79.9 | — | — |
+| IRRR (HotpotQA) | 65.2 | 78.0 | 29.2 | 34.2 |
+| IRRR (SQuAD + HotpotQA) | 65.7 | 78.2 | 32.5 | 36.7 |
+
+Table 3: End-to-end question answering performance on HotpotQA and the new $3+$ hop challenge questions, evaluated on the official HotpotQA Wikipedia paragraphs. $\otimes$ denotes anonymous submissions whose preprints were unavailable at the time of writing. $\dagger$ indicates results we obtained using the publicly available code and pretrained models.
+
+questions that require more retrieval steps than the ones seen during training. We note that the systems that outperform IRRR on these datasets typically make use of trainable neural retrieval components, which IRRR can potentially benefit from adopting as well. Specifically, SPARTA (Zhao et al., 2020b) introduces a neural sparse retrieval system that potentially works well with IRRR's oracle query generation procedure to further improve retrieval performance, thanks to its use of natural language queries. HopRetriever (Li et al., 2020) introduces a novel representation of documents for retrieval that is particularly suitable for discovering documents connected by the same entity to answer multi-hop questions, which IRRR could benefit from as well. We leave exploration of these directions to future work.
+
+To better understand the behavior of IRRR on
+
+Figure 5: The retrieval behavior of IRRR and its relation to the performance of end-to-end question answering. Top: The distribution of reasoning path lengths as determined by IRRR. Bottom: Total number of paragraphs retrieved by IRRR vs. the end-to-end question answering performance as measured by answer $\mathrm{F}_1$ .
+
+these benchmarks, we analyze the number of paragraphs retrieved by the model when varying the number of paragraphs retrieved at each reasoning step among $\{50, 100, 150\}$. As can be seen in Figure 5, IRRR stops its iterative process as soon as all necessary paragraphs to answer the question have been retrieved, effectively reducing the total number of paragraphs retrieved and read compared to always retrieving a fixed number of paragraphs for each question. Further, we note that the optimal cap on the number of reasoning steps is larger than the number of gold paragraphs necessary to answer the question on each benchmark, which we find is due to IRRR's ability to recover from retrieving and selecting non-gold paragraphs (see the example in Figure 6). Finally, we note that increasing the number of paragraphs retrieved at each reasoning step remains an effective, if computationally expensive, strategy to improve the end-to-end performance of IRRR. However, IRRR's tradeoff between retrieval budget and model performance is more favorable than that of previous work (e.g., GRR), and the queries it generates are human-readable, which makes its behavior easy to understand and control.
+
+# 5.2 Performance on the Unified Benchmark
+
+To demonstrate the performance of IRRR in a more realistic setting of open-domain QA, we evaluate it on the new, unified benchmark. As is shown in Table 4, IRRR's performance remains competitive on all questions from different origins in the unified benchmark, despite the difference in reasoning complexity when answering these questions.
+
+| | Dev EM | Dev F1 | Test EM | Test F1 |
+| --- | --- | --- | --- | --- |
+| SQuAD Open | 50.65 | 60.99 | 60.59 | 67.51 |
+| HotpotQA | 59.01 | 70.33 | 58.61 | 69.86 |
+| 3+ hop | — | — | 33.02 | 39.59 |
+| Micro-averaged | 54.20 | 64.95 | 58.82 | 67.46 |
+| Macro-averaged | 54.83 | 65.66 | 50.74 | 58.99 |
+
+Table 4: End-to-end question answering performance of IRRR on the unified benchmark, evaluated on the 2020 copy of Wikipedia. These results are not directly comparable with those in Tables 2 and 3 because the set of questions and Wikipedia documents differ.
+
+| System | SQuAD | HotpotQA |
+| --- | --- | --- |
+| Ours (joint dataset) | 58.69 | 68.74 |
+| vs. fixed retrieval steps (K=3) | 31.70 | 66.60 |
+| vs. remove HotpotQA / SQuAD data | 54.35 | 66.91 |
+| replace ELECTRA w/ BERT-Large-WWM | 57.19 | 63.86 |
+
+The model also generalizes to the 3+ hop questions despite having never been trained on them. We note that the large performance gap between the development and test settings for SQuAD Open questions is due to the fact that test set questions (the original SQuAD dev set) are annotated with multiple human answers, while the dev set ones (originally from the SQuAD training set) are not.
+
+To better understand the contribution of the various components and techniques we proposed for IRRR, we performed ablation studies on the model iterating up to 3 reasoning steps with 50 paragraphs for each step, and present the results in Table 5. First of all, we find it is important to allow IRRR to dynamically stop retrieving paragraphs to answer the question. Compared to its fixed-step retrieval counterpart, dynamically stopping IRRR improves $\mathrm{F_1}$ on both SQuAD and HotpotQA questions by 27.0 and 2.1 points respectively (we include further analyses for dynamic stopping in Appendix D). We also find combining SQuAD and HotpotQA datasets beneficial for both datasets in an open-domain setting, and that ELECTRA is an effective alternative to BERT for this task.
+
+# 6 Related Work
+
+The availability of large-scale question answering (QA) datasets has greatly contributed to the research progress on open-domain QA. SQuAD (Rajpurkar
+
+Table 5: Ablation study of different design choices in IRRR, as evaluated by Answer $\mathrm{F}_{1}$ on the dev set of the unified benchmark. Results differ from those in Table 4 because fewer reasoning steps are used (3 vs. 5) and fewer paragraphs retrieved at each step (50 vs. 150).
+
+| Question | The Ingerophrynus gollum is named after a character in a book that sold how many copies? |
+| --- | --- |
+| Step 1 (Non-Gold) | Ingerophrynus is a genus of true toads with 12 species. ... In 2007 a new species, "Ingerophrynus gollum", was added to this genus. This species is named after the character Gollum created by J. R. R. Tolkien. |
+| Query | Ingerophrynus gollum book sold copies J. R. R. Tolkien |
+| Step 2 (Gold) | Ingerophrynus gollum (Gollum's toad) is a species of true toad. ... It is called "gollum" with reference of the eponymous character of The Lord of the Rings by J. R. R. Tolkien. |
+| Query | Ingerophrynus gollum character book sold copies J. R. R. Tolkien true Lord of the Rings |
+| Step 3 (Gold) | The Lord of the Rings is an epic high fantasy novel written by English author and scholar J. R. R. Tolkien. ... is one of the best-selling novels ever written, with 150 million copies sold. |
+| Answer/GT | 150 million copies |
+
+Figure 6: An example of IRRR answering a question from HotpotQA by generating natural language queries to retrieve paragraphs, then reranking them to compose reasoning paths and reading them to predict the answer. Here, IRRR recovers from an initial retrieval/reranking mistake by retrieving more paragraphs, before arriving at the gold supporting facts and the correct answer.
+
+et al., 2016, 2018) is among the first question answering datasets adopted for this purpose, used by Chen et al. (2017) to build QA systems over Wikipedia articles. Similarly, TriviaQA (Joshi et al., 2017) and Natural Questions (Kwiatkowski et al., 2019) feature Wikipedia-based questions written by trivia enthusiasts and extracted from Google search queries, respectively. More recently, Petroni et al. (2021) presented KILT, a new benchmark in which many knowledge-intensive tasks, including open-domain question answering, entity linking, and dialogue, are evaluated against a unified version of Wikipedia. Unlike BEERQA, however, single-hop and multi-hop QA are held completely separate during evaluation in KILT, which makes the evaluation of open-domain QA less realistic. Aside from Wikipedia, researchers have also used news articles (Trischler et al., 2016) and search results from the web (Dunn et al., 2017; Talmor and Berant, 2018) as the corpus for open-domain QA.
+
+Inspired by the TREC QA challenge, $^{8}$ Chen et al. (2017) were the first to combine information retrieval systems with accurate neural network-based reading comprehension models for open-domain QA. Recent work has improved open-domain QA performance by enhancing various components in this retrieve-and-read approach. While much research focused on improving the reading comprehension model (Seo et al., 2017; Clark and Gardner, 2018), especially with pretrained language models like BERT (Devlin et al., 2019), researchers have
+
+also demonstrated that neural network-based information retrieval systems achieve competitive, if not better, performance compared to traditional IR engines (Lee et al., 2019; Khattab et al., 2020; Guu et al., 2020; Xiong et al., 2021). Aside from the reading comprehension and retrieval components, researchers have also found value from reranking search results (Wang et al., 2018a) or answer candidates (Wang et al., 2018b; Hu et al., 2019).
+
+While most work focuses on questions that require only a local context of supporting facts to answer, Yang et al. (2018) presented HotpotQA, which tests whether open-domain QA systems can generalize to more complex questions that require evidence from multiple documents to answer. Researchers have explored various techniques to extend retrieve-and-read systems to this problem, including making use of hyperlinks between Wikipedia articles (Nie et al., 2019; Feldman and El-Yaniv, 2019; Zhao et al., 2019; Asai et al., 2020; Dhingra et al., 2020) and iterative retrieval (Talmor and Berant, 2018; Das et al., 2019; Qi et al., 2019). While most previous work on iterative retrieval makes use of neural retrieval systems that directly accept real vectors as input, our work is similar to that of Qi et al. (2019) in using natural language search queries. A crucial distinction between our work and previous work on multi-hop open-domain QA, however, is that we do not train models to exclusively answer single-hop or multi-hop questions, but demonstrate that a single set of parameters performs well on both tasks.
+
+# 7 Conclusion
+
+In this paper, we presented Iterative Retriever, Reader, and Reranker (IRRR), a system that uses a single model to perform subtasks to answer open-domain questions of arbitrary reasoning steps. IRRR achieves competitive results on standard open-domain QA benchmarks, and establishes a strong baseline on BEERQA, the new unified benchmark we present, which features questions with mixed levels of complexity.
+
+# Acknowledgments
+
+The authors would like to thank the anonymous reviewers for discussions and comments on earlier versions of this paper. This research is funded in part by Samsung Electronics Co., Ltd. and in part by the SAIL-JD Research Initiative.
+
+# References
+
+Akari Asai, Kazuma Hashimoto, Hannaneh Hajishirzi, Richard Socher, and Caiming Xiong. 2020. Learning to retrieve reasoning paths over Wikipedia graph for question answering. In International Conference on Learning Representations.
+Giuseppe Attardi. 2015. WikiExtractor. https://github.com/attardi/wikiextractor.
+Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open-domain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics.
+Wenhu Chen, Hanwen Zha, Zhiyu Chen, Wenhan Xiong, Hong Wang, and William Yang Wang. 2020. HybridQA: A dataset of multi-hop question answering over tabular and textual data. In *Findings of the Association for Computational Linguistics: EMNLP* 2020.
+Christopher Clark and Matt Gardner. 2018. Simple and effective multi-paragraph reading comprehension. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).
+Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: Pretraining text encoders as discriminators rather than generators. In International Conference on Learning Representations.
+Rajarshi Das, Shehzaad Dhuliawala, Manzil Zaheer, and Andrew McCallum. 2019. Multi-step retriever-reader interaction for scalable open-domain question answering. In International Conference on Learning Representations.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers).
+Bhuwan Dhingra, Manzil Zaheer, Vidhisha Balachandran, Graham Neubig, Ruslan Salakhutdinov, and William W. Cohen. 2020. Differentiable reasoning over a virtual knowledge base. In International Conference on Learning Representations.
+Matthew Dunn, Levent Sagun, Mike Higgins, V Ugur Guney, Volkan Cirik, and Kyunghyun Cho. 2017. SearchQA: A new Q&A dataset augmented with context from a search engine. arXiv preprint arXiv:1704.05179.
+Yair Feldman and Ran El-Yaniv. 2019. Multi-hop paragraph retrieval for open-domain question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics.
+
+Clinton Gormley and Zachary Tong. 2015. Elasticsearch: The definitive guide: A distributed real-time search and analytics engine. O'Reilly Media, Inc.
+Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. REALM: Retrieval-augmented language model pre-training. arXiv preprint arXiv:2002.08909.
+Minghao Hu, Yuxing Peng, Zhen Huang, and Dongsheng Li. 2019. Retrieve, read, rerank: Towards end-to-end multi-document reading comprehension. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics.
+Gautier Izacard and Edouard Grave. 2020. Leveraging passage retrieval with generative models for open domain question answering. arXiv preprint arXiv:2007.01282.
+Sebastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On using very large target vocabulary for neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers).
+Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).
+Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. arXiv preprint arXiv:2004.04906.
+Omar Khattab, Christopher Potts, and Matei Zaharia. 2020. Relevance-guided supervision for OpenQA with ColBERT. arXiv preprint arXiv:2007.00814.
+Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7.
+Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics.
+Shaobo Li, Xiaoguang Li, Lifeng Shang, Xin Jiang, Qun Liu, Chengjie Sun, Zhenzhou Ji, and Bingquan Liu. 2020. HopRetriever: Retrieve hops over Wikipedia to answer complex questions. arXiv preprint arXiv:2012.15534.
+
+Yuanhua Lv and ChengXiang Zhai. 2011. When documents are very long, BM25 fails! In Proceedings of the 34th international ACM SIGIR conference on Research and development in Information Retrieval, pages 1103-1104.
+Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations, pages 55-60.
+Sewon Min, Victor Zhong, Luke Zettlemoyer, and Hannaneh Hajishirzi. 2019. Multi-hop reading comprehension through question decomposition and rescoring. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6097-6109, Florence, Italy. Association for Computational Linguistics.
+Andriy Mnih and Koray Kavukcuoglu. 2013. Learning word embeddings efficiently with noise-contrastive estimation. In Advances in neural information processing systems, pages 2265-2273.
+Yixin Nie, Songhe Wang, and Mohit Bansal. 2019. Revealing the importance of semantic retrieval for machine reading at scale. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP).
+Fabio Petroni, Aleksandra Piktus, Angela Fan, Patrick Lewis, Majid Yazdani, Nicola De Cao, James Thorne, Yacine Jernite, Vladimir Karpukhin, Jean Maillard, Vassilis Plachouras, Tim Rocktäschel, and Sebastian Riedel. 2021. KILT: a benchmark for knowledge intensive language tasks. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2523-2544.
+Peng Qi, Xiaowen Lin, Leo Mehr, Zijian Wang, and Christopher D. Manning. 2019. Answering complex open-domain questions through iterative query generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP).
+Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67.
+Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable questions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers).
+
+Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing.
+Stephen E Robertson, Steve Walker, Susan Jones, Micheline M Hancock-Beaulieu, and Mike Gatford. 1994. Okapi at TREC-3. NIST Special Publication, pages 109-126.
+Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. In International Conference on Learning Representations.
+Alon Talmor and Jonathan Berant. 2018. The web as a knowledge-base for answering complex questions. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 641-651.
+Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2016. NewsQA: A machine comprehension dataset. arXiv preprint arXiv:1611.09830.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008.
+Shuohang Wang, Mo Yu, Xiaoxiao Guo, Zhiguo Wang, Tim Klinger, Wei Zhang, Shiyu Chang, Gerald Tesauro, Bowen Zhou, and Jing Jiang. 2018a. $\mathrm{R}^3$: Reinforced ranker-reader for open-domain question answering. In AAAI Conference on Artificial Intelligence.
+Shuohang Wang, Mo Yu, Jing Jiang, Wei Zhang, Xiaoxiao Guo, Shiyu Chang, Zhiguo Wang, Tim Klinger, Gerald Tesauro, and Murray Campbell. 2018b. Evidence aggregation for answer re-ranking in open-domain question answering. In International Conference on Learning Representations.
+Zhiguo Wang, Patrick Ng, Xiaofei Ma, Ramesh Nallapati, and Bing Xiang. 2019. Multi-passage BERT: A globally normalized BERT model for open-domain question answering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP).
+Johannes Welbl, Pontus Stenetorp, and Sebastian Riedel. 2018. Constructing datasets for multi-hop reading comprehension across documents. Transactions of the Association for Computational Linguistics, pages 287-302.
+Wenhan Xiong, Xiang Li, Srini Iyer, Jingfei Du, Patrick Lewis, William Yang Wang, Yashar Mehdad, Scott Yih, Sebastian Riedel, Douwe Kiela, and Barlas Oguz. 2021. Answering complex open-domain questions with multi-hop dense retrieval. In International Conference on Learning Representations.
+Wei Yang, Yuqing Xie, Aileen Lin, Xingyu Li, Luchen Tan, Kun Xiong, Ming Li, and Jimmy Lin. 2019. End-to-end open-domain question answering with BERTserini. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), Minneapolis, Minnesota. Association for Computational Linguistics.
+Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing.
+Yuyu Zhang, Ping Nie, Arun Ramamurthy, and Le Song. 2021. IDRQA: Iterative document reranking for open-domain multi-hop question answering. In SIGIR.
+Chen Zhao, Chenyan Xiong, Xin Qian, and Jordan Boyd-Graber. 2020a. Complex factoid question answering with a free-text knowledge graph. In Proceedings of The Web Conference 2020, pages 1205-1216.
+Chen Zhao, Chenyan Xiong, Corby Rosset, Xia Song, Paul Bennett, and Saurabh Tiwary. 2019. Transformer-XH: Multi-evidence reasoning with extra hop attention. In International Conference on Learning Representations.
+Tiancheng Zhao, Xiaopeng Lu, and Kyusong Lee. 2020b. Sparta: Efficient open-domain question answering via sparse transformer matching retrieval. arXiv preprint arXiv:2009.13013.
+Mantong Zhou, Zhouxing Shi, Minlie Huang, and Xiaoyan Zhu. 2020. Knowledge-aided open-domain question answering. arXiv preprint arXiv:2006.05244.
+
+# A Data processing
+
+In this section, we describe how we process the English Wikipedia and the SQuAD dataset for training and evaluating IRRR.
+
+For the standard benchmarks (SQuAD Open and HotpotQA fullwiki), we use the Wikipedia corpora prepared by Chen et al. (2017) and Yang et al. (2018), respectively, so that our results are comparable with previous work on these benchmarks. Specifically, for SQuAD Open, we use the processed English Wikipedia released by Chen et al. (2017) which was accessed in 2016, and contains 5,075,182 documents. For HotpotQA, Yang et al. (2018) released a processed set of Wikipedia introductory paragraphs from the English Wikipedia originally accessed in October 2017.
+
+While it is established practice to repurpose the SQuAD dev set as the test set for SQuAD Open for ease of evaluation, most previous work makes use of the entire training set during training, and as a result a proper development set for SQuAD Open does not exist. We therefore resplit the SQuAD training set into a held-out development set that is not used during training, and a reduced training set that we use for all of our experiments. As a result, although IRRR is evaluated on the same test set as previous systems, it is likely disadvantaged by the reduced amount of training data and by hyperparameter tuning on this new dev set. We split the training set by first grouping questions and paragraphs by the Wikipedia entity/title they belong to, then randomly selecting entities to add to the dev set until it contains roughly as many questions as the test set (the original SQuAD dev set). The statistics of our resplit of SQuAD can be found in Table 6. We make our resplit publicly available to the community at https://beerqa.github.io/.
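The entity-grouped resplit described above can be sketched as follows. This is a minimal illustration of the procedure, not the released code; the function and field names (`resplit_squad`, `title`) are our own.

```python
import random
from collections import defaultdict

def resplit_squad(examples, target_dev_size, seed=0):
    """Split SQuAD training examples into new train/dev sets by Wikipedia
    entity, so that no entity's questions appear in both splits."""
    by_entity = defaultdict(list)
    for ex in examples:
        by_entity[ex["title"]].append(ex)  # group questions by Wikipedia title

    entities = sorted(by_entity)
    random.Random(seed).shuffle(entities)  # deterministic shuffle of entities

    dev, train = [], []
    for entity in entities:
        # keep adding whole entities to dev until it reaches the target size
        if len(dev) < target_dev_size:
            dev.extend(by_entity[entity])
        else:
            train.extend(by_entity[entity])
    return train, dev
```

Because whole entities are assigned to one side or the other, the dev set may slightly overshoot the target size, which is acceptable since the goal is only a rough size match with the test set.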
+
+| Split | Origin | # Entities | # QAs |
+| --- | --- | --- | --- |
+| train | train | 387 | 77,087 |
+| dev | train | 55 | 10,512 |
+| test | dev | 48 | 10,570 |
+
+Table 6: Statistics of the resplit SQuAD dataset for proper training and evaluation in the SQuAD Open setting.
+
+For the unified benchmark, we started by processing the English Wikipedia with WikiExtractor (Attardi, 2015). We then tokenized this dump, along with the supporting context used in SQuAD and HotpotQA, with Stanford CoreNLP 4.0.0 (Manning et al., 2014) to look for paragraphs in the 2020 Wikipedia dump that might correspond to the context paragraphs in these datasets. Since many Wikipedia articles have been renamed or removed since these datasets were released, we begin by following Wikipedia redirects to locate the current title of the corresponding page (e.g., the page "Madonna (entertainer)" has been renamed "Madonna"). Once the correct Wikipedia article is located, we look for combinations of one to two consecutive paragraphs in the 2020 Wikipedia dump that have high overlap with the context paragraphs in these datasets. We calculate the recall of words and phrases from the original context paragraph (because Wikipedia paragraphs are often expanded with more details over time), and pick the best-matching combination of paragraphs from the article. If the best candidate covers more than $66\%$ of the unigrams in the original context, or if there is a common subsequence between the two that covers more than $50\%$ of the original context, we consider the match successful and map the answers onto the new context paragraphs. The main causes of mismatches are: a) Wikipedia pages that have been permanently removed (due to copyright issues, failure to meet notability standards, etc.); b) pages significantly edited to improve presentation (see Figure 7(a)); and c) pages significantly edited because the world has changed (see Figure 7(b)).
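As a rough sketch, the matching criterion above might be implemented as follows. The tokenizer and the use of `difflib`'s matching blocks as a common-subsequence approximation are our simplifications, not the paper's actual matching code; the thresholds follow the $66\%$/$50\%$ criteria described above.

```python
import re
from difflib import SequenceMatcher

def tokens(text):
    """Lowercased word tokens; a stand-in for the paper's CoreNLP tokenization."""
    return re.findall(r"\w+", text.lower())

def matches(original, candidate, unigram_thresh=0.66, subseq_thresh=0.5):
    """Decide whether `candidate` (one or two concatenated 2020 paragraphs)
    matches `original` (a SQuAD/HotpotQA context paragraph)."""
    orig, cand = tokens(original), tokens(candidate)
    if not orig:
        return False
    # unigram recall: fraction of the original's tokens found in the candidate
    cand_set = set(cand)
    recall = sum(1 for t in orig if t in cand_set) / len(orig)
    if recall > unigram_thresh:
        return True
    # fallback: common token subsequence covering >50% of the original
    sm = SequenceMatcher(a=orig, b=cand, autojunk=False)
    covered = sum(block.size for block in sm.get_matching_blocks())
    return covered / len(orig) > subseq_thresh
```

Note that recall (rather than precision) is the right notion here precisely because the 2020 paragraphs are often longer than the originals.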
+
+As a result, 20,182/2,146 SQuAD train/dev examples (that is, 17,802/2,380/2,146 train/dev/test examples after the data resplit) and 15,806/1,416/1,427 HotpotQA train/dev/fullwiki test examples have been excluded from the unified benchmark. To understand the data quality after converting SQuAD Open and HotpotQA to the newer version of Wikipedia, we sampled 100 examples from the training split of each dataset. We find that $6\%$ of SQuAD questions and $10\%$ of HotpotQA questions are no longer answerable from their context paragraphs due to edits in Wikipedia or changes in the world, despite the continued presence of the answer span. We also find that $43\%$ of HotpotQA examples contain more than the minimal set of paragraphs necessary to answer the question as a result of the mapping process.
+
+Madonna Louise Ciccone (born August 16, 1958) is an American singer, songwriter, actress, and businesswoman. She achieved popularity by pushing the boundaries of lyrical content in mainstream popular music and imagery in her music videos, which became a fixture on MTV. Madonna is known for reinventing both her music and image, and for maintaining her autonomy within the recording industry. Music critics have acclaimed her musical productions, which have generated some controversy. Referred to as the "Queen of Pop", Madonna is often cited as an influence by other artists.
+
+Madonna Louise Ciccone (born August 16, 1958) is an American singer-songwriter, author, actress and record executive. She has been referred to as the "Queen of Pop" since the 1980s. Madonna is noted for her continual reinvention and versatility in music production, songwriting, and visual presentation. She has pushed the boundaries of artistic expression in popular culture, while remaining completely in charge of every aspect of her career. Her works, which incorporate social, political, sexual, and religious themes, have made a cultural impact which has generated both critical acclaim and controversy. Madonna is often cited as an influence by other artists.
+
+(a) The Wikipedia page about Madonna, on December 20, 2016 (on the left, the version SQuAD Open used) versus July 31, 2020 (on the right, the version BEERQA used).
+
+Peter Langkjaer Madsen (born 12 January 1971) is a Danish aerospace engineering enthusiast, "art engineer", submarine builder, entrepreneur, co-founder of the non-profit organization Copenhagen Suborbitals, and founder and CEO of RML Spacelab ApS. He was arrested in August 2017 for involvement in the death of Swedish journalist Kim Wall; the investigation is ongoing.
+
+Peter Langkjaer Madsen (I; born 12 January 1971) is a Danish convicted murderer. In April 2018 he was convicted of the 2017 murder of Swedish journalist Kim Wall on board his submarine, UC3 Nautilus, and sentenced to life imprisonment. He had previously been an engineer and entrepreneur.
+
+(b) The Wikipedia page about Peter Madsen, on September 27, 2017 (on the left, the version HotpotQA used) versus July 26, 2020 (on the right, the version BEERQA used).
+
+Figure 7: Changes in Wikipedia that present challenges in matching articles across years. We highlight portions of the text that have been deleted in red underlined text, that have been added in green boldface text, and that have been significantly paraphrased in orange italics, and leave near-verbatim text in the normal font and color.
+
+# B Elasticsearch Setup
+
+We set up Elasticsearch in standard benchmark settings (SQuAD Open and HotpotQA fullwiki) following practices in previous work (Chen et al., 2017; Qi et al., 2019), with minor modifications to unify these approaches.
+
+Specifically, to reduce the context size for the Transformer encoder in IRRR and avoid unnecessary computational cost, we primarily index the individual paragraphs of the English Wikipedia. To incorporate broader context from the entire article, as was done by Chen et al. (2017), we also index the full text of each Wikipedia article to help score candidate paragraphs. Each paragraph is associated with the full text of the article it originates from, and the search score is calculated as the sum of two parts: the similarity between the query terms and the paragraph text, and the similarity between the query terms and the full text of the article.
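The two-part score can be expressed in Elasticsearch as a `bool` query, since the scores of matching `should` clauses are summed. This is a sketch of the query shape only: the field names (`paragraph_text`, `article_text`) are illustrative, and the customized article-level similarity discussed below would be configured separately in the index settings.

```python
def build_query(query_terms):
    """Build an Elasticsearch bool query whose score is the sum of the
    query-paragraph similarity and the query-article similarity."""
    return {
        "query": {
            "bool": {
                "should": [
                    # query-paragraph similarity (standard BM25 on the field)
                    {"match": {"paragraph_text": query_terms}},
                    # query-article similarity (full text of the source article)
                    {"match": {"article_text": query_terms}},
                ]
            }
        }
    }
```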
+
+For query-paragraph similarity, we use the standard BM25 similarity function (Robertson et al., 1994) with default hyperparameters ($k_{1} = 1.2$, $b = 0.75$). For query-article similarity, we find BM25 to be less effective, since the length of these articles overwhelms the similarity contribution from important rare query terms, an issue also reported in the information retrieval literature (Lv and Zhai, 2011). Instead of boosting the term frequency score as considered by Lv and Zhai (2011), we extend BM25 by squaring the IDF term and setting the TF length-normalization term to zero ($b = 0$), which is similar to the TF-IDF implementation of Chen et al. (2017) that was shown to be effective on SQuAD Open.
+
+Specifically, given a document $D$ and query $Q$ , the score is calculated as
+
+$$
+\operatorname {s c o r e} (D, Q) = \sum_ {i = 1} ^ {n} \operatorname {I D F} _ {+} ^ {2} \left(q _ {i}\right) \cdot \frac {f (D , q _ {i}) \cdot \left(1 + k _ {1}\right)}{f (D , q _ {i}) + k _ {1}}, \tag {1}
+$$
+
+where $\mathrm{IDF}_{+}(q_{i}) = \max (0, \log ((N - n(q_{i}) + 0.5) / (n(q_{i}) + 0.5)))$, with $N$ denoting the total number of documents, $n(q_{i})$ the document frequency of query term $q_{i}$, and $f(D, q_{i})$ the term frequency of query term $q_{i}$ in document $D$. We set $k_{1} = 1.2$ in all of our experiments. Intuitively, compared to standard BM25, this scoring function puts more emphasis on overlaps with important, rare terms while being less dampened by document length, making it ideal for an initial sift to find relevant documents for open-domain question answering.
+
+| Parameter | Value |
+| --- | --- |
+| Learning rate | $3 \times 10^{-5}$ |
+| Batch size | 320 |
+| Iterations | 10,000 |
+| Warm-up steps | 1,000 |
+| Training tokens | $1.638 \times 10^{9}$ |
+| Reranker candidates | 5 |
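The modified BM25 of Equation 1 is straightforward to implement. The sketch below is a direct transcription of the formula; the dict-based interface for term and document frequencies is our own framing, not the actual Elasticsearch similarity plugin.

```python
import math

def idf_plus(term, doc_freq, num_docs):
    """Lower-bounded IDF: max(0, log((N - n + 0.5) / (n + 0.5)))."""
    n = doc_freq.get(term, 0)
    return max(0.0, math.log((num_docs - n + 0.5) / (n + 0.5)))

def article_score(query_terms, term_freq, doc_freq, num_docs, k1=1.2):
    """Modified BM25 of Equation 1: squared IDF, no length normalization (b = 0).

    term_freq: term -> frequency in the article D
    doc_freq:  term -> number of articles containing the term
    """
    score = 0.0
    for q in query_terms:
        f = term_freq.get(q, 0)
        score += idf_plus(q, doc_freq, num_docs) ** 2 * (f * (1 + k1)) / (f + k1)
    return score
```

Squaring the IDF term means a rare term matching once can outweigh many matches of a common term, while dropping the length normalization keeps long articles from being penalized.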
+
+# C Further Training and Prediction Details
+
+We include the hyperparameters used to train the IRRR model in Table 7 for reproducibility.
+
+For our experiments using SQuAD for training, we also follow the practice of Asai et al. (2020) of including the SQuAD 2.0 data (Rajpurkar et al., 2018) as negative examples for the reader component. Hyperparameters such as the prediction threshold of the binary classifiers in the query generator are chosen on the development set to optimize end-to-end QA performance.
+
+For completeness, we also describe how the reader model's predictions are used to stop the IRRR pipeline. Specifically, when the most likely answer is yes or no, the answerability of the reasoning path is the difference between the yes/no logit and the NOANSWER logit. For reasoning paths that are not answerable, we further train the span classifiers to predict the [CLS] token as the "output span"; thus, when the best predicted answer is a span, its answerability score also incorporates the likelihood ratio between the best span and the "[CLS] span", i.e.,
+
+$$
+\begin{array}{l} \operatorname {A n s w e r a b i l i t y} _ {\text {s p a n}} (p) = \operatorname {l o g i t} _ {\text {s p a n}} - \operatorname {l o g i t} _ {\text {N O A N S W E R}} \\ + \frac {\operatorname {l o g i t} _ {s} ^ {\text {s t a r t}} - \operatorname {l o g i t} _ {[ \mathrm {C L S} ]} ^ {\text {s t a r t}}}{2} \\ + \frac {\operatorname {l o g i t} _ {e} ^ {\text {e n d}} - \operatorname {l o g i t} _ {[ \mathrm {C L S} ]} ^ {\text {e n d}}}{2}, \tag {2} \\ \end{array}
+$$
+
+where $\mathrm{logit}_{\mathrm{span}}$ is the logit of predicting span answers from the 4-way classifier, while $\mathrm{logit}^{\mathrm{start}}$ and $\mathrm{logit}^{\mathrm{end}}$ are logits from the span classifiers for selecting the predicted span from the reasoning path.
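In code, Equation 2 amounts to a simple combination of reader logits. The key names in the sketch below are illustrative, not the actual implementation's:

```python
def answerability_span(logits):
    """Answerability score for a span prediction (Equation 2).

    `logits` maps illustrative names to the reader's outputs:
      span / noanswer       -- logits from the 4-way answer-type classifier
      start_best / end_best -- start/end logits of the best predicted span
      start_cls / end_cls   -- start/end logits of the [CLS] "no-answer span"
    """
    return (
        logits["span"] - logits["noanswer"]
        + (logits["start_best"] - logits["start_cls"]) / 2
        + (logits["end_best"] - logits["end_cls"]) / 2
    )
```

The pipeline stops once this score clears a threshold tuned on the development set.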
+
+Table 7: Hyperparameter settings for IRRR training.
+
+| Question | What team was the AFC champion? |
+| --- | --- |
+| Step 1 (Non-Gold) | However, the eventual-AFC Champion Cincinnati Bengals, playing in their first AFC Championship Game, defeated the Chargers 27-7 in what became known as the Freezer Bowl. ... |
+| Step 2 (Non-Gold) | Super Bowl XXVII was an American football game between the American Football Conference (AFC) champion Buffalo Bills and the National Football Conference (NFC) champion Dallas Cowboys to decide the National Football League (NFL) champion for the 1992 season. ... |
+| Gold | Super Bowl 50 was an American football game to determine the champion of the National Football League (NFL) for the 2015 season. The American Football Conference (AFC) champion Denver Broncos defeated the National Football Conference (NFC) champion Carolina Panthers 24-10 to earn their third Super Bowl title. ... |
+
+Figure 8: An example where there are false negative answers in Wikipedia for the question from SQuAD Open.
+
+# D Further Analyses of Model Behavior
+
+In this section, we perform further analyses and introduce further case studies to demonstrate the behavior of the IRRR system. We start by analyzing the effect of the dynamic stopping criterion for reasoning path retrieval, then move on to the end-to-end performance and leakages in the pipeline, and end with a few examples to demonstrate typical failure modes we have identified that might point to limitations with the data.
+
+Effect of Dynamic Stopping. We begin by studying the effect of using the answerability score as a criterion to stop the iterative retrieval, reading, and reranking process within IRRR. We compare the performance of a model with dynamic stopping to one that is forced to stop at exactly $K$ steps of reasoning, neither more nor fewer, for $K = 1, 2, \ldots, 5$. As can be seen in Table 8, IRRR's dynamic stopping criterion based on the answerability score is very effective in achieving good end-to-end question answering performance on questions of arbitrary complexity, without having to specify the complexity of questions ahead of time. On both SQuAD Open and HotpotQA, it achieves competitive, if not superior, question answering performance, even without knowing the true number of gold paragraphs necessary to answer each question.
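The dynamic stopping behavior can be sketched as the following loop, where `retrieve` and `read` are stand-ins for IRRR's retriever and reader components (the function names and threshold interface are illustrative, not the actual implementation):

```python
def answer_with_dynamic_stopping(question, retrieve, read, threshold=0.0, max_steps=5):
    """Retrieve one more paragraph per step; stop as soon as the reader's
    answerability score clears a threshold (tuned on the dev set)."""
    path = []  # the reasoning path: paragraphs retrieved so far
    for _ in range(max_steps):
        path.append(retrieve(question, path))
        answer, answerability = read(question, path)
        if answerability > threshold:  # dynamic stopping criterion
            return answer, len(path)
    return answer, len(path)  # fall back to the last prediction
```

This is what allows a single model to answer single-hop and multi-hop questions without being told the question's complexity in advance.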
+
+Aside from this, we note four interesting findings: (1) performance on HotpotQA does not peak at two steps of reasoning, but is instead helped by performing a third step of retrieval for the average question; (2) on both datasets, forcing the model to retrieve more paragraphs beyond a certain point consistently hurts QA performance; (3) dynamic stopping slightly hurts QA performance on SQuAD Open compared to a fixed number of reasoning steps ($K = 1$); and (4) when IRRR applies its dynamic stopping criterion to each example independently, the resulting question answering performance is better than a one-size-fits-all solution that applies the same number of reasoning steps to all examples. While the last finding confirms the effectiveness of our answerability-based stopping criterion, the causes behind the first three warrant further investigation; we present further analyses to shed light on them in the remainder of this section.
+
+| Steps | SQuAD Open EM | SQuAD Open F1 | HotpotQA EM | HotpotQA F1 |
+| --- | --- | --- | --- | --- |
+| Dynamic | 49.92 | 60.91 | 65.74 | 78.41 |
+| 1 step | 51.07 | 61.74 | 13.75 | 18.95 |
+| 2 steps | 38.74 | 48.61 | 65.12 | 77.75 |
+| 3 steps | 32.14 | 41.66 | 65.37 | 78.16 |
+| 4 steps | 29.06 | 38.33 | 63.89 | 76.72 |
+| 5 steps | 19.53 | 25.86 | 59.86 | 72.79 |
+
+Table 8: SQuAD Open and HotpotQA performance using adaptive vs. fixed-length reasoning paths, as measured by answer exact match (EM) and $\mathrm{F}_1$. The dynamic stopping criterion employed by IRRR achieves performance comparable to its fixed-step counterparts, without knowledge of the true number of gold paragraphs.
+
+Case Study for Failure Cases. Besides model inaccuracy, one common reason for IRRR to fail at finding the correct answer provided with the datasets is the existence of false negatives (see Figure 8 for an example from SQuAD Open). We estimate that there are about $9\%$ such cases in the HotpotQA portion of the training set, and $26\%$ in the SQuAD portion.
+
+These false negatives hurt the quality of data generation as well, especially when generating the SQuAD portion of the training set. We inspected randomly selected question-context pairs in the training set and find that $24\%$ of our SQuAD training set and $13\%$ of GRR's SQuAD training set are false negatives. That is, our method finds better candidate documents, but the true answers in those documents are treated as false positives. This results in worse performance for our model when it is trained with only the SQuAD portion of the training set, as shown in Table 2.
+
+# E Three+ Hop Challenge Set Analysis
+
+Although SQuAD Open and HotpotQA probe our model's ability on single-hop and two-hop questions, we lacked insight into its ability to generalize to questions that require three or more reasoning steps/hops, which is more than what our model is trained on. We therefore built a challenge set comprised of questions that require at least three hops of reasoning to answer (see Table 9 for a breakdown of the number of documents required to answer each question in the challenge set). While the vast majority of challenge set questions require three documents, questions that require four or more documents are also present, hence the name "Three+ Hop Challenge Set". Although we intend the challenge set to be used for testing only, we share a few key insights into the question sourcing process, the reasoning types required, and the answer types present.
+
+| # of documents to answer the question | 3 | 4 | 5 | 6 | 7 | 8 |
+| --- | --- | --- | --- | --- | --- | --- |
+| # of questions | 495 | 17 | 8 | 0 | 9 | 1 |
+
+Table 9: Distribution of reasoning steps for questions in the Three+ Hop Challenge Set.
+
+| Reasoning type | % |
+| --- | --- |
+| Comparison | 25.6 |
+| Bridge-Comparison | 25.3 |
+| Bridge | 49.1 |
+
+Table 10: Reasoning types required for the Three+ Hop Challenge Set.
+
+Question Sourcing Process. We annotated 530 examples that require three or more paragraphs to be answered on the 2020 Wikipedia dump. We developed roughly 50-100 question templates that cover a diverse set of topics, including science, literature, film, music, history, sports, technology, politics, and geography. We then annotated approximately ten to twenty examples from each of these question templates to ensure that the resulting challenge set contained a diverse set of topics and questions.
+
+Reasoning Types. During the annotation process for the challenge set, we recorded the types of reasoning required to answer each question (Table 10). Roughly half of the questions require chain reasoning (Bridge), where the reader must identify bridge entities that link the question to the first context paragraph, the first context paragraph to the second, and finally the second to the third, where the answer can be found. In cases where four or more hops of reasoning are required, this chain of reasoning extends past the third paragraph to the $n$-th paragraph, where the answer can be found. Additionally, approximately $25\%$ of the questions require the comparison of three or more entities (Comparison). For these questions, the reader needs to retrieve three or more context paragraphs identified in the question that are not directly connected to each other, and then compare them on certain aspects specified in the question, similar to the comparison questions in HotpotQA. The remaining $25\%$ of the questions require both chain reasoning and the comparison of two or more entities (Bridge-Comparison). For these questions, the reader must first identify a bridge entity that links the question to the first context paragraph, then identify two or more entities to compare within that paragraph. Afterwards, they retrieve context paragraphs for each of the aforementioned entities and compare them on the aspects specified in the question.
+
+| Answer type | % | Example(s) |
+| --- | --- | --- |
+| Person | 29 | Kate Elizabeth Winslet |
+| Number | 20 | 388,072; 5.5 million |
+| Yes / No | 15 | — |
+| Group / Org | 11 | CNN |
+| Date | 8 | March 28, 1930 |
+| Other Proper Noun | 7 | Boeing 747-400 |
+| Creative Work | 5 | "California Dreams" |
+| Location | 4 | New York City |
+| Common Noun | 1 | comedy-drama |
+
+Table 11: Types of answers in the Three+ Hop Challenge Set. These statistics are based on 100 randomly sampled examples.
+
+Answer Types. We also analyze the types of answers present in the challenge set. As shown in Table 11, the challenge set features a diverse set of answers. We find that roughly half of the questions ask about people ($29\%$) or numeric quantities ($20\%$). Additionally, a considerable number of questions require a yes or no answer ($15\%$), or ask about groups or organizations ($11\%$), dates ($8\%$), and other proper nouns ($7\%$). The challenge set also contains a non-negligible number of questions that ask about creative works ($5\%$), locations ($4\%$), and common nouns ($1\%$).
\ No newline at end of file
diff --git a/answeringopendomainquestionsofvaryingreasoningstepsfromtext/images.zip b/answeringopendomainquestionsofvaryingreasoningstepsfromtext/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..555621b90e99b7b1e3a0e0f5adc52e5c89b30990
--- /dev/null
+++ b/answeringopendomainquestionsofvaryingreasoningstepsfromtext/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bca45b4a5431fad9ba67f93b6b27e00ba3e0c7b8539a84ed239eba6b519e4f30
+size 540289
diff --git a/answeringopendomainquestionsofvaryingreasoningstepsfromtext/layout.json b/answeringopendomainquestionsofvaryingreasoningstepsfromtext/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..1681efd2a18551b9c1b7c80ce96e3a3ac30f883e
--- /dev/null
+++ b/answeringopendomainquestionsofvaryingreasoningstepsfromtext/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:51116109d5aa13067ab1120190c48222fc7aa99b6e875572a8c61499299d830a
+size 464820
diff --git a/apartitionfilternetworkforjointentityandrelationextraction/5b9dfdb3-a807-4ec2-9749-0a50b650ec7c_content_list.json b/apartitionfilternetworkforjointentityandrelationextraction/5b9dfdb3-a807-4ec2-9749-0a50b650ec7c_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..ccb745857b6123619bec7bbb3d58ad1c5723c255
--- /dev/null
+++ b/apartitionfilternetworkforjointentityandrelationextraction/5b9dfdb3-a807-4ec2-9749-0a50b650ec7c_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:35bd3bb4f080d6d49c065ceb871fb11d85aebcd266cde5905f683e12509891dd
+size 98225
diff --git a/apartitionfilternetworkforjointentityandrelationextraction/5b9dfdb3-a807-4ec2-9749-0a50b650ec7c_model.json b/apartitionfilternetworkforjointentityandrelationextraction/5b9dfdb3-a807-4ec2-9749-0a50b650ec7c_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..0c3982a7300274721c2f3cc6c4040338ad2946a6
--- /dev/null
+++ b/apartitionfilternetworkforjointentityandrelationextraction/5b9dfdb3-a807-4ec2-9749-0a50b650ec7c_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4d8fa8f8a83c9b36b3df211694a6025d5104d7b2219133a062145b6621c0f3d1
+size 118269
diff --git a/apartitionfilternetworkforjointentityandrelationextraction/5b9dfdb3-a807-4ec2-9749-0a50b650ec7c_origin.pdf b/apartitionfilternetworkforjointentityandrelationextraction/5b9dfdb3-a807-4ec2-9749-0a50b650ec7c_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..0fcb793c950c80e2384eb04ed3b27df2d43fdc6c
--- /dev/null
+++ b/apartitionfilternetworkforjointentityandrelationextraction/5b9dfdb3-a807-4ec2-9749-0a50b650ec7c_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d2b5747f8e29a156ac34f8ac11fd313d669251784a10d02c159fcb209268303a
+size 825315
diff --git a/apartitionfilternetworkforjointentityandrelationextraction/full.md b/apartitionfilternetworkforjointentityandrelationextraction/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..aa502298da11aa751919d44f96e7229c694c6fdf
--- /dev/null
+++ b/apartitionfilternetworkforjointentityandrelationextraction/full.md
@@ -0,0 +1,440 @@
+# A Partition Filter Network for Joint Entity and Relation Extraction
+
+Zhiheng Yan $^{1}$ , Chong Zhang $^{1}$ , Jinlan Fu $^{1,2}$ , Qi Zhang $^{1*}$ and Zhongyu Wei $^{3}$
+
+1School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing,
+
+Fudan University, Shanghai, China
+
+$^{2}$ National University of Singapore, Singapore
+
+$^{3}$ School of Data Science, Fudan University, Shanghai, China
+
+{zhyan20, chongzhang20, qz, zywei}@fudan.edu.cn
+
+jinlanjonna@gmail.com
+
+# Abstract
+
+In joint entity and relation extraction, existing work either sequentially encodes task-specific features, leading to an imbalance in inter-task feature interaction where features extracted later have no direct contact with those that come first, or encodes entity and relation features in parallel, meaning that feature representation learning for each task is largely independent of the other except for input sharing. We propose a partition filter network to properly model two-way interaction between the tasks, in which feature encoding is decomposed into two steps: partition and filter. In our encoder, we leverage two gates, an entity gate and a relation gate, to segment neurons into two task partitions and one shared partition. The shared partition represents inter-task information valuable to both tasks and is evenly shared across the two tasks to ensure proper two-way interaction. The task partitions represent intra-task information and are formed through the concerted efforts of both gates, making sure that the encoding of task-specific features is mutually dependent. Experiment results on six public datasets show that our model performs significantly better than previous approaches. In addition, contrary to what previous work has claimed, our auxiliary experiments suggest that relation prediction contributes to named entity prediction in a non-negligible way. The source code can be found at https://github.com/Coopercoppers/PFN.
+
+# 1 Introduction
+
+Joint entity and relation extraction intends to simultaneously extract entity and relation facts in the given text to form relational triples (s, r, o). The extracted information supplements many downstream applications, such as knowledge graph construction (Riedel et al., 2013), question answering (Diefenbach et al., 2018) and text summarization (Gupta and Lehal, 2010).
+
+
+Figure 1: Partition process of cell neurons. Entity and relation gates divide neurons into task-related and task-unrelated ones. Neurons related to both tasks form the shared partition, while the rest form the two task partitions.
+
+Conventionally, Named Entity Recognition (NER) and Relation Extraction (RE) are performed in a pipelined manner (Zelenko et al., 2002; Chan and Roth, 2011). These approaches are flawed in that they do not consider the intimate connection between NER and RE; error propagation is another drawback of pipeline methods. To overcome these issues, jointly extracting entities and relations was proposed and has demonstrated stronger performance on both tasks. In early work, joint methods mainly relied on elaborate feature engineering to establish interaction between NER and RE (Yu and Lam, 2010; Li and Ji, 2014; Miwa and Sasaki, 2014). Recently, end-to-end neural networks have proved successful in extracting relational triples (Zeng et al., 2014; Gupta et al., 2016; Katiyar and Cardie, 2017; Shen et al., 2021) and have since become the mainstream of joint entity and relation extraction.
+
+According to their differences in encoding task-specific features, most existing methods can be divided into two categories: sequential encoding and parallel encoding. In sequential encoding, task-specific features are generated sequentially, which means features extracted first are not affected by those extracted later. Zeng et al. (2018) and Wei et al. (2020) are typical examples of this category; their methods extract features for different tasks in a predefined order. In parallel encoding, task-specific features are generated independently from shared input. Compared with sequential encoding, models built on this scheme do not need to worry about the implications of encoding order. For example, Fu et al. (2019) encode entity and relation information separately using common features derived from their GCN encoder. Since both sets of task-specific features are extracted through isolated submodules, this approach falls into the category of parallel encoding.
+
+However, both encoding designs above fail to properly model two-way interaction between the NER and RE tasks. In sequential encoding, interaction is only unidirectional with a specified order, resulting in different amounts of information being exposed to the NER and RE tasks. In parallel encoding, although encoding order is no longer a concern, interaction is present only in input sharing. To add two-way interaction to feature encoding, we adopt an alternative design: joint encoding. This design encodes task-specific features jointly with a single encoder that contains a mutual section for inter-task communication.
+
+In this work, we instantiate joint encoding with a partition filter encoder. Our encoder first sorts and partitions each neuron according to its contribution to individual tasks with entity and relation gates. During this process, two task partitions and one shared partition are formed (see figure 1). Then individual task partitions and shared partition are combined to generate task-specific features, filtering out irrelevant information stored in the opposite task partition.
+
+Task interaction in our encoder is achieved in two ways. First, the partitions, especially the task-specific ones, are formed through the concerted efforts of the entity and relation gates, allowing for interaction between the formation of entity and relation features determined by these partitions. Second, the shared partition, which represents information useful to both tasks, is equally accessible to the formation of both task-specific features, ensuring balanced two-way interaction. The contributions of our work are summarized below:
+
+1. We propose partition filter network, a framework designed specifically for joint encoding. This method is capable of encoding task-specific features and guarantees proper two-way interaction between NER and RE.
+
+2. We conduct extensive experiments on six datasets. The main results show that our method is superior to other baseline approaches, and the ablation study provides insight into what works best for our framework.
+3. Contrary to what previous work has claimed, our auxiliary experiments suggest that relation prediction is contributory to named entity prediction in a non-negligible way.
+
+# 2 Related Work
+
+In recent years, joint entity and relation extraction approaches have focused on tackling the triple overlapping problem and modelling task interaction. Solutions to these issues have been explored in recent works (Zheng et al., 2017; Zeng et al., 2018, 2019; Fu et al., 2019; Wei et al., 2020). The triple overlapping problem refers to triples sharing the same entity (SEO, i.e. SingleEntityOverlap) or entities (EPO, i.e. EntityPairOverlap). For example, in "Adam and Joe were born in the USA", since the triples (Adam, birthplace, USA) and (Joe, birthplace, USA) share only one entity "USA", they are categorized as SEO triples; in "Adam was born in the USA and lived there ever since", the triples (Adam, birthplace, USA) and (Adam, residence, USA) share both entities at the same time, and are thus categorized as EPO triples. Generally, there are two ways of tackling the problem. One is through generative methods like seq2seq (Zeng et al., 2018, 2019), where entity and relation mentions can be decoded multiple times in the output sequence; the other is to model each relation separately with sequences (Wei et al., 2020), graphs (Fu et al., 2019) or tables (Wang and Lu, 2020). Our method uses relation-specific tables (Miwa and Sasaki, 2014) to handle each relation separately.
+
+Task interaction modeling, however, has not been well handled by most previous work. In some previous approaches, task interaction is achieved by having entity and relation prediction share the same features (Tran and Kavuluru, 2019; Wang et al., 2020b). This could be problematic, as information about entities and relations can sometimes be contradictory. Also, since models that use sequential encoding (Bekoulis et al., 2018b; Eberts and Ulges, 2019; Wei et al., 2020) or parallel encoding (Fu et al., 2019) lack proper two-way interaction in feature extraction, predictions made on these features suffer from improper interaction. In our work, the partition filter encoder is built on joint encoding and handles the communication of inter-task information more appropriately, avoiding the problems of sequential and parallel encoding (exposure bias and insufficient interaction) while keeping intra-task information away from the opposite task to mitigate negative transfer between the tasks.
+
+# 3 Problem Formulation
+
+Our framework splits joint entity and relation extraction into two sub-tasks: NER and RE. Formally, given an input sequence $s = \{w_1, \ldots, w_L\}$ with $L$ tokens, $w_i$ denotes the $i$-th token in sequence $s$. For NER, we aim to extract all typed entities, whose set is denoted as $S$: $\langle w_i, e, w_j \rangle \in S$ signifies that tokens $w_i$ and $w_j$ are the start and end tokens of an entity typed $e \in \mathcal{E}$, where $\mathcal{E}$ represents the set of entity types. For RE, the goal is to identify all head-only triples, whose set is denoted as $T$: each triple $\langle w_i, r, w_j \rangle \in T$ indicates that tokens $w_i$ and $w_j$ are the start tokens of the subject and object entities with relation $r \in \mathcal{R}$, where $\mathcal{R}$ represents the set of relation types. Combining the results of NER and RE, we can extract relational triples with complete entity spans.
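
As a hypothetical illustration of these output sets (the sentence follows the example in the related-work section; the type and relation labels are invented for illustration):

```python
# Sentence: "Adam was born in the USA" -> tokens w_0 ... w_5
tokens = ["Adam", "was", "born", "in", "the", "USA"]

# NER output S: (start index, entity type, end index)
S = {(0, "PER", 0), (5, "LOC", 5)}

# RE output T: (subject start index, relation, object start index)
T = {(0, "birthplace", 5)}

# Combining S and T recovers the full triple (Adam, birthplace, USA)
subject, obj = tokens[0], tokens[5]
assert (subject, obj) == ("Adam", "USA")
```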
+
+# 4 Model
+
+We describe our model design in this section. Our model consists of a partition filter encoder and two task units, namely NER unit and RE unit. The partition filter encoder is used to generate task-specific features, which will be sent to task units as input for entity and relation prediction. We will discuss each component in detail in the following three sub-sections.
+
+# 4.1 Partition Filter Encoder
+
+Similar to LSTM, the partition filter encoder is a recurrent feature encoder with information stored in intermediate memories. In each time step, the encoder first divides neurons into three partitions: entity partition, relation partition and shared partition. Then it generates task-specific features by selecting and combining these partitions, filtering out information irrelevant to each task. As shown in figure 2, this module is designed specifically to jointly extract task-specific features, which strictly follows two steps: partition and filter.
+
+Partition This step divides cell neurons into three partitions: two task partitions storing intra-task information, namely the entity partition and the relation partition, as well as one shared partition storing inter-task information. The neurons to be divided are the candidate cell $\tilde{c}_t$, representing current information, and the previous cell $c_{t-1}$, representing history information. $c_{t-1}$ is the direct input from the last time step, and $\tilde{c}_t$ is calculated in the same manner as in LSTM:
+
+$$
+\tilde{c}_{t} = \tanh\left(\operatorname{Linear}\left(\left[x_{t}; h_{t-1}\right]\right)\right) \tag{1}
+$$
+
+where Linear stands for the operation of linear transformation.
+
+We leverage entity gate $\tilde{e}$ and relation gate $\tilde{r}$, referred to as master gates in Shen et al. (2019), for neuron partition. As illustrated in figure 1, each gate, representing one specific task, divides neurons into two segments according to their usefulness for the designated task. For example, entity gate $\tilde{e}$ separates neurons into two partitions: NER-related and NER-unrelated. The shared partition is formed by combining the partition results from both gates. Neurons in the shared partition can be regarded as information valuable to both tasks. In order to model two-way interaction properly, inter-task information in the shared partition is evenly accessible to both tasks (discussed in the filter subsection). In addition, information valuable to only one task is invisible to the opposing task and is stored in the individual task partitions. The gates are calculated using the cummax activation function $\operatorname{cummax}(\cdot) = \operatorname{cumsum}(\operatorname{softmax}(\cdot))$, whose output can be seen as an approximation of a binary gate of the form $(0,\dots,0,1,\dots,1)$:
+
+$$
+\begin{array}{l} \tilde{e} = \operatorname{cummax}\left(\operatorname{Linear}\left([x_{t}; h_{t-1}]\right)\right) \\ \tilde{r} = 1 - \operatorname{cummax}\left(\operatorname{Linear}\left([x_{t}; h_{t-1}]\right)\right) \end{array} \tag{2}
+$$
+
+The intuition behind equation (2) is to identify two cut-off points, displayed as scissors in figure 2, which naturally divide a set of neurons into three segments.
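
As an illustration, the cummax activation alone can be sketched in a few lines of NumPy; this is a sketch of the activation, not the authors' implementation:

```python
import numpy as np

def cummax(logits):
    """cumsum(softmax(x)): a soft, monotone approximation of a
    binary gate of the form (0, ..., 0, 1, ..., 1)."""
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return np.cumsum(exp / exp.sum())

gate = cummax(np.array([0.1, 5.0, 0.2, 0.3]))
# the gate is monotonically non-decreasing and ends at exactly 1
assert np.all(np.diff(gate) >= 0) and np.isclose(gate[-1], 1.0)
```

Because the softmax mass concentrates around one position, the cumulative sum jumps from near 0 to near 1 at that position, which plays the role of the cut-off point.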
+
+As a result, the gates will divide neurons into three partitions, entity partition $\rho_{e}$ , relation partition $\rho_{r}$ and shared partition $\rho_{s}$ . Partitions for
+
+
+Figure 2: (a) Overview of PFN (Framework of Partition Filter Network). The framework consists of three components: the partition filter encoder, the NER unit and the RE unit. In the task units, we use table filling for word-pair prediction. Orange, yellow and green represent NER-related, shared and RE-related components or features. (b) Inner mechanism of the partition filter: a detailed depiction of the partition filter encoder in a single time step. We decompose feature encoding into two steps: partition and filter (shown in the gray area). In partition, we first segment neurons into two task partitions and one shared partition. Then, in filter, partitions are selected and combined to form task-specific features and shared features, filtering out information irrelevant to each task.
+
+previous cell $c_{t - 1}$ are formulated as below:
+
+$$
+\rho_ {s, c _ {t - 1}} = \tilde {e} _ {c _ {t - 1}} \circ \tilde {r} _ {c _ {t - 1}}
+$$
+
+$$
+\rho_ {e, c _ {t - 1}} = \tilde {e} _ {c _ {t - 1}} - \rho_ {s, c _ {t - 1}} \tag {3}
+$$
+
+$$
+\rho_ {r, c _ {t - 1}} = \tilde {r} _ {c _ {t - 1}} - \rho_ {s, c _ {t - 1}}
+$$
+
+Note that the three partitions do not sum to one. This means that some information is discarded during forward message passing, ensuring that the memory is not overloaded, similar to the forgetting mechanism in LSTM.
+
+Then, we aggregate the partition information from both target cells to form the three final partitions, adding up the related information from both cells:
+
+$$
+\rho_ {e} = \rho_ {e, c _ {t - 1}} \circ c _ {t - 1} + \rho_ {e, \tilde {c} _ {t}} \circ \tilde {c} _ {t}
+$$
+
+$$
+\rho_ {r} = \rho_ {r, c _ {t - 1}} \circ c _ {t - 1} + \rho_ {r, \tilde {c} _ {t}} \circ \tilde {c} _ {t} \tag {4}
+$$
+
+$$
+\rho_ {s} = \rho_ {s, c _ {t - 1}} \circ c _ {t - 1} + \rho_ {s, \tilde {c} _ {t}} \circ \tilde {c} _ {t}
+$$
+
+Filter We propose three types of memory block: entity memory, relation memory and shared memory, denoted as $\mu_{e}$, $\mu_{r}$ and $\mu_{s}$ respectively. In $\mu_{e}$, information in the entity partition and the shared partition is selected; in contrast, information in the relation partition, which we assume is irrelevant or even harmful to the named entity recognition task, is filtered out. The same logic applies to $\mu_{r}$, where information in the entity partition is filtered out and the rest is kept. In addition, information in the shared partition is stored in $\mu_{s}$:
+
+$$
+\mu_ {e} = \rho_ {e} + \rho_ {s}; \mu_ {r} = \rho_ {r} + \rho_ {s}; \mu_ {s} = \rho_ {s} \tag {5}
+$$
+
+Note that inter-task information in the shared partition is accessible to both the entity memory and the relation memory, allowing balanced interaction between NER and RE. In sequential and parallel encoding, by contrast, relation features have no direct impact on the formation of entity features.
+
+After updating the information in each memory, entity features $h_e$, relation features $h_r$ and shared features $h_s$ are generated from the corresponding memories:
+
+$$
+h _ {e} = \tanh \left(\mu_ {e}\right)
+$$
+
+$$
+h _ {r} = \tanh \left(\mu_ {r}\right) \tag {6}
+$$
+
+$$
+h _ {s} = \tanh (\mu_ {s})
+$$
+
+Following the partition and filter steps, the information in all three memories is used to form the cell state $c_{t}$, which is then used to generate the hidden state $h_{t}$ (the hidden and cell states at time step $t$ are input to the next time step):
+
+$$
+c_{t} = \operatorname{Linear}\left(\left[\mu_{e,t}; \mu_{r,t}; \mu_{s,t}\right]\right) \tag{7}
+$$
+
+$$
+h_{t} = \tanh(c_{t})
+$$
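
Under toy dimensions and randomly initialized stand-in weights (not the authors' trained parameters, and with one illustrative choice: a separate gate pair per target cell, matching the per-cell gate subscripts in equation (3)), one encoder time step can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                       # toy hidden size
x_t, h_prev, c_prev = rng.normal(size=(3, d))
xh = np.concatenate([x_t, h_prev])          # [x_t; h_{t-1}]
W = 0.1 * rng.normal(size=(5, d, 2 * d))    # stand-ins for the Linear layers

def cummax(x):
    e = np.exp(x - x.max())
    return np.cumsum(e / e.sum())

def partition(W_e, W_r):
    """Eq. (2)-(3): gate a cell's neurons into entity / relation / shared."""
    e_gate = cummax(W_e @ xh)
    r_gate = 1.0 - cummax(W_r @ xh)
    shared = e_gate * r_gate
    return e_gate - shared, r_gate - shared, shared

c_tilde = np.tanh(W[0] @ xh)                # eq. (1): candidate cell

pe_prev, pr_prev, ps_prev = partition(W[1], W[2])   # gates for c_{t-1}
pe_new, pr_new, ps_new = partition(W[3], W[4])      # gates for c~_t

# eq. (4): aggregate each partition over both cells
rho_e = pe_prev * c_prev + pe_new * c_tilde
rho_r = pr_prev * c_prev + pr_new * c_tilde
rho_s = ps_prev * c_prev + ps_new * c_tilde

# eq. (5)-(6): filter - each memory keeps its own partition plus the shared one
mu_e, mu_r, mu_s = rho_e + rho_s, rho_r + rho_s, rho_s
h_e, h_r, h_s = np.tanh(mu_e), np.tanh(mu_r), np.tanh(mu_s)

# eq. (7): recombine the memories into the next cell and hidden state
W_c = 0.1 * rng.normal(size=(d, 3 * d))
c_t = W_c @ np.concatenate([mu_e, mu_r, mu_s])
h_t = np.tanh(c_t)

# per neuron, the three partition coefficients sum to at most one,
# so some information is dropped, as noted in the text
assert np.all(pe_prev + pr_prev + ps_prev <= 1.0 + 1e-9)
```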
+
+# 4.2 Global Representation
+
+In our model, we employ a unidirectional encoder for feature encoding. The backward encoder of the bidirectional setting is replaced with a task-specific global representation to capture the semantics of future context; empirically, this proves more effective. For each task, the global representation is a combination of task-specific features and shared features, computed as:
+
+$$
+h_{g_e,t} = \tanh\left(\operatorname{Linear}\left([h_{e,t}; h_{s,t}]\right)\right)
+$$
+
+$$
+h_{g_r,t} = \tanh\left(\operatorname{Linear}\left([h_{r,t}; h_{s,t}]\right)\right) \tag{8}
+$$
+
+$$
+h_{g_e} = \operatorname{maxpool}\left(h_{g_e,1}, \dots, h_{g_e,L}\right)
+$$
+
+$$
+h_{g_r} = \operatorname{maxpool}\left(h_{g_r,1}, \dots, h_{g_r,L}\right)
+$$
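
Equation (8) amounts to fusing per-token task and shared features, then max-pooling over the time axis; a minimal sketch with toy sizes and a random stand-in weight matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
L, d = 5, 8                              # toy sentence length and feature size
h_e = rng.normal(size=(L, d))            # per-token entity features h_{e,t}
h_s = rng.normal(size=(L, d))            # per-token shared features h_{s,t}
W = 0.1 * rng.normal(size=(d, 2 * d))    # stand-in for the Linear layer

# fuse task and shared features per token, then max-pool over time
h_ge_t = np.tanh(np.concatenate([h_e, h_s], axis=-1) @ W.T)  # shape (L, d)
h_ge = h_ge_t.max(axis=0)                # one global vector per sentence
assert h_ge.shape == (d,)
```

The pooled vector lets every word-pair prediction see a summary of the whole sentence, standing in for the backward pass of a bidirectional encoder.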
+
+# 4.3 Task Units
+
+Our model consists of two task units: the NER unit and the RE unit. In the NER unit, the objective is to identify and categorize all entity spans in a given sentence. More specifically, the task is treated as a type-specific table-filling problem. Given an entity type set $\mathcal{E}$, for each type $k$ we fill out a table whose element $e_{ij}^{k}$ represents the probability of words $w_{i}$ and $w_{j}$ being the start and end positions of an entity with type $k$. For each word pair $(w_{i}, w_{j})$, we concatenate the word-level entity features $h_{i}^{e}$ and $h_{j}^{e}$, as well as the sentence-level global features $h_{g_e}$, before feeding them into a fully-connected layer with ELU activation to get the entity span representation $h_{ij}^{e}$:
+
+$$
+h _ {i j} ^ {e} = \operatorname {E L U} \left(\operatorname {L i n e a r} \left(\left[ h _ {i} ^ {e}; h _ {j} ^ {e}; h _ {g _ {e}} \right]\right)\right) \tag {9}
+$$
+
+With the span representation, we can predict whether the span is an entity with type $k$ by feeding it into a feed forward neural layer:
+
+$$
+\begin{array}{l} e _ {i j} ^ {k} = p \left(e = \left\langle w _ {i}, k, w _ {j} \right\rangle \mid e \in S\right) \tag {10} \\ = \sigma \bigl (\operatorname {L i n e a r} \bigl (h _ {i j} ^ {e} \bigr) \bigr), \forall k \in \mathcal {E} \\ \end{array}
+$$
+
+where $\sigma$ represents sigmoid activation function.
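
The table-filling computation of equations (9)-(10) can be sketched with toy sizes and random stand-in weights (the RE unit follows the same pattern with its own features and tables):

```python
import numpy as np

rng = np.random.default_rng(2)
L, d, n_types = 4, 8, 3                   # toy sizes: tokens, features, entity types
h_e = rng.normal(size=(L, d))             # per-token entity features
h_ge = rng.normal(size=d)                 # sentence-level global entity features
W1 = 0.1 * rng.normal(size=(d, 3 * d))    # stand-in span projection (eq. 9)
W2 = 0.1 * rng.normal(size=(n_types, d))  # stand-in per-type scorer (eq. 10)

elu = lambda x: np.where(x > 0, x, np.exp(np.minimum(x, 0)) - 1)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# fill a probability table e^k_{ij} for every entity type k
scores = np.zeros((n_types, L, L))
for i in range(L):
    for j in range(L):
        span = elu(W1 @ np.concatenate([h_e[i], h_e[j], h_ge]))
        scores[:, i, j] = sigmoid(W2 @ span)

assert scores.shape == (n_types, L, L) and np.all((scores > 0) & (scores < 1))
```

Each cell is scored independently with a sigmoid, so a sentence may contain any number of (possibly overlapping) entities per type.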
+
+Computation in the RE unit is mostly symmetrical to the NER unit. Given a set of gold relation triples denoted as $T$, this unit aims to identify all triples in the sentence. We only predict the starting word of each entity in this unit, as entity span prediction is already covered by the NER unit. Similar to NER, we treat relation extraction as a relation-specific table-filling problem. Given a relation label set $\mathcal{R}$, for each relation $l\in \mathcal{R}$ we fill out a table whose element $r_{ij}^{l}$ represents the probability of words $w_{i}$ and $w_{j}$ being the starting words of the subject and object entities. In this way, we can extract all triples revolving around relation $l$ with one relation table. For each triple $(w_{i},l,w_{j})$, similar to the NER unit, the triple representation $h_{ij}^{r}$ and relation score $r_{ij}^{l}$ are calculated as follows:
+
+$$
+\begin{array}{l} h _ {i j} ^ {r} = \operatorname {E L U} \left(\operatorname {L i n e a r} \left(\left[ h _ {i} ^ {r}; h _ {j} ^ {r}; h _ {g _ {r}} \right]\right)\right) \\ r _ {i j} ^ {l} = p \left(r = \left\langle w _ {i}, l, w _ {j} \right\rangle | r \in T\right) \tag {11} \\ = \sigma \left(\operatorname {L i n e a r} \left(h _ {i j} ^ {r}\right)\right), \forall l \in \mathcal {R} \\ \end{array}
+$$
+
+# 4.4 Training and Inference
+
+For a given training dataset, the loss function $L$ that guides the model during training consists of two parts: $L_{ner}$ for NER unit and $L_{re}$ for RE unit:
+
+$$
+L_{ner} = \sum_{\hat{e}_{ij}^{k}\in S}\mathrm{BCELoss}(e_{ij}^{k},\hat{e}_{ij}^{k})
+$$
+
+$$
+L_{re} = \sum_{\hat{r}_{ij}^{l}\in T}\mathrm{BCELoss}\left(r_{ij}^{l},\hat{r}_{ij}^{l}\right) \tag{12}
+$$
+
+$\hat{e}_{ij}^{k}$ and $\hat{r}_{ij}^{l}$ are the ground-truth labels of the entity table and relation table, respectively, and $e_{ij}^{k}$ and $r_{ij}^{l}$ are the predicted ones. We adopt BCELoss for each task. The training objective is to minimize the loss function $L$, computed as $L_{ner} + L_{re}$.
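
A minimal sketch of the per-cell binary cross-entropy in equation (12), on a hypothetical 2x2 entity table for a single type:

```python
import numpy as np

def bce(pred, gold, eps=1e-9):
    """Element-wise binary cross-entropy, applied per table cell."""
    return -(gold * np.log(pred + eps) + (1 - gold) * np.log(1 - pred + eps))

# hypothetical predicted probabilities and gold labels for one entity table
pred_e = np.array([[0.9, 0.1], [0.2, 0.8]])
gold_e = np.array([[1.0, 0.0], [0.0, 1.0]])

L_ner = bce(pred_e, gold_e).sum()        # L_re is computed identically
assert L_ner > 0.0
assert bce(gold_e, gold_e).sum() < 1e-6  # perfect predictions give ~zero loss
```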
+
+During inference, we extract relational triples by combining results from both NER and RE unit. For each legitimate triple prediction $(s_{i,j}^{k}, l, o_{m,n}^{k'})$ where $l$ is the relation label, $k$ and $k'$ are the entity type labels, and the indexes $i, j$ and $m, n$ are respectively starting and ending index of subject entity $s$ and object entity $o$ , the following conditions should be satisfied:
+
+$$
+e _ {i j} ^ {k} \geq \lambda_ {e}; e _ {m n} ^ {k ^ {\prime}} \geq \lambda_ {e}; r _ {i m} ^ {l} \geq \lambda_ {r} \tag {13}
+$$
+
+$\lambda_{e}$ and $\lambda_{r}$ are threshold hyper-parameters for entity and relation prediction, both set to be 0.5 without further fine-tuning.
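
A sketch of the decoding step in equation (13), on hypothetical score tables for one entity type and one relation type (entity spans are thresholded first, then pairs of spans are linked through their head tokens):

```python
import numpy as np

lam_e = lam_r = 0.5                      # thresholds from eq. (13)
L = 3

# hypothetical score tables: e_scores[i, j] spans, r_scores[i, m] head pairs
e_scores = np.zeros((L, L))
e_scores[0, 0] = 0.9                     # single-token entity at position 0
e_scores[2, 2] = 0.8                     # single-token entity at position 2
r_scores = np.zeros((L, L))
r_scores[0, 2] = 0.7                     # relation between heads 0 and 2

# decode entity spans, then link span pairs via their start (head) tokens
entities = [(i, j) for i in range(L) for j in range(L) if e_scores[i, j] >= lam_e]
triples = [(s, o) for s in entities for o in entities
           if r_scores[s[0], o[0]] >= lam_r]

assert triples == [((0, 0), (2, 2))]
```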
+
+# 5 Experiment
+
+# 5.1 Dataset, Evaluation and Implementation Details
+
+We evaluate our model on six datasets. NYT (Riedel et al., 2010), WebNLG (Zeng et al., 2018),
+
+ADE (Gurulingappa et al., 2012), SciERC (Luan et al., 2018), ACE04 and ACE05 (Walker et al., 2006). Descriptions of the datasets can be found in Appendix A.
+
+Following previous work, we assess our model on NYT/WebNLG under partial match, where only the tail of an entity is annotated. Besides, as entity type information is not annotated in these datasets, we set the type of all entities to a single label "NONE", so entity types are not predicted by our model. On ACE05, ACE04, ADE and SciERC, we assess our model under exact match, where both the head and tail of an entity are annotated. For ADE and ACE04, 10-fold and 5-fold cross-validation are used respectively, and $15\%$ of the training set is used to construct the development set. For evaluation metrics, we report F1 scores for both NER and RE. In NER, an entity is considered correct only if its type and boundary are correct. In RE, a triple is considered correct only if the types and boundaries of both entities and their relation type are correct. In addition, we report Macro-F1 on ADE and Micro-F1 on the other datasets.
+
+We choose our model parameters based on performance on the development set (the best average F1 score of NER and RE) and report the results on the test set. More details of the hyperparameters can be found in Appendix B.
+
+# 5.2 Main Result
+
+Table 1 shows the comparison of our model with existing approaches. On the partially annotated datasets WebNLG and NYT, under the BERT setting, our model achieves a $1.7\%$ RE improvement on WebNLG, while its RE performance on NYT is only slightly better than the previous SOTA TpLinker (Wang et al., 2020b), by a $0.5\%$ margin. We argue that this is because NYT is generated with distant supervision, and annotations for entities and relations are often incomplete or wrong. Compared to TpLinker, the strength of our method is reinforcing two-way interaction between entities and relations. However, when dealing with noisy data, this strength might be counter-productive, as error propagation between the two tasks is amplified as well.
+
+For NER, our method shows a distinct advantage over baselines that report the figures. Compared to Casrel (Wei et al., 2020), a competitive method, our F1 scores are $2.3\% / 2.5\%$ higher in NYT/WebNLG. This proves that exposing relation information to
+
| Method | NER | RE |
| --- | --- | --- |
| NYT △ | | |
| CopyRE (Zeng et al., 2018) | 86.2 | 58.7 |
| GraphRel (Fu et al., 2019) | 89.2 | 61.9 |
| CopyRL (Zeng et al., 2019) | - | 72.1 |
| Casrel (Wei et al., 2020)† | (93.5) | 89.6 |
| TpLinker (Wang et al., 2020b)† | - | 91.9 |
| PFN† | 95.8 | 92.4 |
| WebNLG △ | | |
| CopyRE (Zeng et al., 2018) | 82.1 | 37.1 |
| GraphRel (Fu et al., 2019) | 91.9 | 42.9 |
| CopyRL (Zeng et al., 2019) | - | 61.6 |
| Casrel (Wei et al., 2020)† | (95.5) | 91.8 |
| TpLinker (Wang et al., 2020b)† | - | 91.9 |
| PFN† | 98.0 | 93.6 |
| ADE ▲ | | |
| Multi-head (Bekoulis et al., 2018b) | 86.4 | 74.6 |
| Multi-head + AT (Bekoulis et al., 2018a) | 86.7 | 75.5 |
| Rel-Metric (Tran and Kavuluru, 2019) | 87.1 | 77.3 |
| SpERT (Eberts and Ulges, 2019)† | 89.3 | 79.2 |
| Table-Sequence (Wang and Lu, 2020)‡ | 89.7 | 80.1 |
| PFN† | 89.6 | 80.0 |
| PFN‡ | 91.3 | 83.2 |
| ACE05 △ | | |
| Structured Perceptron (Li and Ji, 2014) | 80.8 | 49.5 |
| SPTree (Miwa and Bansal, 2016) | 83.4 | 55.6 |
| Multi-turn QA (Li et al., 2019)† | 84.8 | 60.2 |
| Table-Sequence (Wang and Lu, 2020)‡ | 89.5 | 64.3 |
| PURE (Zhong and Chen, 2021)‡ | 89.7 | 65.6 |
| PFN‡ | 89.0 | 66.8 |
| ACE04 △ | | |
| Structured Perceptron (Li and Ji, 2014) | 79.7 | 45.3 |
| SPTree (Miwa and Bansal, 2016) | 81.8 | 48.4 |
| Multi-turn QA (Li et al., 2019)† | 83.6 | 49.4 |
| Table-Sequence (Wang and Lu, 2020)‡ | 88.6 | 59.6 |
| PURE (Zhong and Chen, 2021)‡ | 88.8 | 60.2 |
| PFN‡ | 89.3 | 62.5 |
| SciERC △ | | |
| SPE (Wang et al., 2020a)§ | 68.0 | 34.6 |
| PURE (Zhong and Chen, 2021)§ | 66.6 | 35.6 |
| PFN§ | 66.8 | 38.4 |
+
+Table 1: Experiment results on six datasets. †, ‡ and § denotes the use of BERT, ALBERT and SCIBERT (Devlin et al., 2019; Lan et al., 2020; Beltagy et al., 2019) pre-trained embedding. △ and ▲ denotes the use of micro-F1 and macro-F1 score. NER results of Casrel are its reported average score of head and tail entity. Results of PURE are reported in single-sentence setting for fair comparison.
+
+NER, which is not present in Casrel, leads to better performance in entity recognition.
+
+Furthermore, our model demonstrates strong
+
+performance in fully annotated datasets ADE, ACE05, ACE04 and SciERC. For ADE, our model surpasses table-sequence (Wang and Lu, 2020) by $1.6\% /3.1\%$ in NER/RE. For ACE05, our model surpasses PURE (Zhong and Chen, 2021) by $1.2\%$ in RE but results in weaker performance in NER by $0.7\%$ . We argue that it could be attributed to the fact that, unlike the former three datasets, ACE05 contains many entities that do not belong to any triple. Thus utilizing relation information for entity prediction might not be as fruitful as that in other datasets (PURE is a pipeline approach where relation information is unseen to entity prediction). In ACE04, our model surpasses PURE by $0.5\% /2.3\%$ in NER/RE. In SciERC, our model surpasses PURE by $0.2\% /2.8\%$ in NER/RE. Overall, the performance of our model shows remarkable improvement against previous baselines.
+
+# 5.3 Ablation Study
+
+In this section, we take a closer look and check the effectiveness of our framework in relation extraction concerning five different aspects: number of encoder layer, bidirectional versus unidirectional, encoding scheme, partition granularity and decoding strategy.
+
+Number of Encoder Layers Similar to a recurrent neural network, our partition filter encoder can be stacked with an arbitrary number of layers. Here we only examine frameworks with no more than three layers. As shown in table 2, adding layers to the partition filter encoder leads to no improvement in F1 score, showing that one layer is sufficient for encoding task-specific features.
+
+Bidirection Vs Unidirection Normally we would need two partition filter encoders (one in reverse order) to model interaction between forward and backward context. However, as discussed in section 4.2, our model replaces the backward encoder with a global representation to make future context visible to each word, achieving a similar effect to bidirectional settings. In order to find out which works best, we compare the two methods in our ablation study. From table 2, we find that the unidirectional encoder with global representation outperforms the bidirectional encoder without global representation, showing that global representation is more suitable than a backward encoder for providing future context for each word. In addition, when global representation is involved,
+
+| Ablation | Settings | P | R | F |
+| --- | --- | --- | --- | --- |
+| Layers | N=1 | **40.6** | **36.5** | **38.4** |
+| | N=2 | 39.9 | 35.7 | 37.7 |
+| | N=3 | 40.0 | 36.2 | 38.0 |
+| Bidirection vs. Unidirection | Unidirection (w/ gl.) | **40.6** | **36.5** | **38.4** |
+| | Unidirection (w/o gl.) | 40.5 | 34.6 | 37.3 |
+| | Bidirection (w/ gl.) | 40.4 | 36.2 | 38.2 |
+| | Bidirection (w/o gl.) | 39.9 | 35.3 | 37.5 |
+| Encoding Scheme | Joint | **40.6** | **36.5** | **38.4** |
+| | Sequential | 40.0 | 34.2 | 36.9 |
+| | Parallel | 36.0 | 34.4 | 35.1 |
+| Partition Granularity | Fine-grained | **40.6** | **36.5** | **38.4** |
+| | Coarse | 39.3 | 35.5 | 37.3 |
+| Decoding Strategy | Universal | **40.6** | **36.5** | **38.4** |
+| | Selective | 38.5 | 36.3 | 37.4 |
+
+Table 2: Ablation study on SciERC. P, R and F denote precision, recall and relation F1-score. The best results are marked in bold. In the second experiment, gl. is short for global representation.
+
+the unidirectional encoder achieves an F1 score similar to that of the bidirectional encoder, indicating that the global representation alone is sufficient for capturing the semantics of future context.
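As a toy sketch of this design: each position carries its forward (prefix-only) state plus a shared sentence-level summary, so future context is visible without a backward pass. The scalar "hidden states" and mean-pooling aggregation below are illustrative assumptions, not the paper's exact formulation.

```python
def forward_encode(tokens):
    """Hypothetical forward encoder: each state sees only the prefix."""
    states, acc = [], 0.0
    for t in tokens:
        acc += t            # stand-in for a recurrent update
        states.append(acc)
    return states

def with_global(tokens):
    """Pair each forward state with a sentence-level summary, so future
    context becomes visible to every position without a backward encoder."""
    states = forward_encode(tokens)
    g = sum(tokens) / len(tokens)   # assumed aggregation: mean over inputs
    return [(h, g) for h in states]

reps = with_global([1.0, 2.0, 3.0])
# every position carries the same global summary, here g = 2.0
```

The point of the sketch is only the dataflow: one forward sweep plus one shared vector replaces the second (backward) encoder.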
+
+Encoding Scheme We replace our partition filter encoder with two LSTM variants to examine the effectiveness of our encoder. In the parallel setting, we use two LSTM encoders to learn task-specific features separately, with no interaction allowed except for sharing the same input. In the sequential setting, where only one-way interaction is allowed, entity features generated by the first LSTM encoder are fed into the second to produce relation features. From Table 2, we observe that our partition filter outperforms the LSTM variants by a large margin, demonstrating the effectiveness of our encoder in modelling two-way interaction compared with the other two encoding schemes.
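The dataflow difference between the two ablation variants can be sketched schematically; `encode()` below is a hypothetical stand-in for an LSTM encoder, not real model code.

```python
def encode(features, tag):
    """Stand-in for an LSTM encoder; records which encoder touched the input."""
    return [f"{tag}({f})" for f in features]

def parallel(x):
    # two encoders share the input but never interact with each other
    ner_feat = encode(x, "ner")
    re_feat = encode(x, "re")
    return ner_feat, re_feat

def sequential(x):
    # one-way interaction: entity features feed the relation encoder
    ner_feat = encode(x, "ner")
    re_feat = encode(ner_feat, "re")
    return ner_feat, re_feat
```

In the sequential variant the relation features depend on the entity features but not vice versa; the partition filter encoder, by contrast, generates both jointly so each can condition on the other.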
+
+Partition Granularity Similar to Shen et al. (2019), we split neurons into several chunks and perform partition within each chunk. All chunks share the same entity gate and relation gate, so the partition result is identical across chunks. For example, with a 300-dimension neuron set split into 10 chunks of 30 neurons each, only two 30-dimension gates are needed for neuron partition. We refer to this operation as coarse partition. In contrast, our fine-grained partition can be seen as the special case in which all neurons form a single chunk. We compare our fine-grained partition (chunk size = 300) with coarse partition
+
+| Dataset | Entity Type | P | R | F | Ratio |
+| --- | --- | --- | --- | --- | --- |
+| ACE05 | Total | 89.3 | 88.8 | 89.0 | 1.00 |
+| | In-triple | 95.9 | 92.1 | 94.0 | 0.36 |
+| | Out-of-triple | 85.8 | 86.9 | 86.3 | 0.64 |
+| | Diff | 10.1 | 5.2 | 7.7 | - |
+| ACE04 | Total | 89.1 | 89.6 | 89.3 | 1.00 |
+| | In-triple | 94.3 | 91.2 | 92.7 | 0.71 |
+| | Out-of-triple | 87.1 | 89.2 | 88.1 | 0.29 |
+| | Diff | 7.2 | 3.0 | 4.6 | - |
+| SciERC | Total | 64.8 | 69.0 | 66.8 | 1.00 |
+| | In-triple | 78.0 | 71.1 | 74.4 | 0.78 |
+| | Out-of-triple | 38.9 | 61.7 | 47.8 | 0.22 |
+| | Diff | 39.1 | 9.4 | 26.6 | - |
+
+Table 3: NER results for different entity types. Entities are split into two groups, In-triple and Out-of-triple, based on whether they appear in relational triples. Diff is the performance difference between In-triple and Out-of-triple. Ratio is the number of entities of the given type divided by the total number of entities in the test set (train, dev and test sets combined for ACE04). Results for ACE04 are averaged over 5 folds.
+
+(chunk size = 10). Table 2 shows that fine-grained partition performs better than coarse partition. This is not surprising: in coarse partition, the assumption that every chunk is partitioned identically might be too strong for the encoder to separate information for each task properly.
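The gate-sharing difference can be sketched as follows. The dimensions and gate values are illustrative (small numbers instead of 300), and in the actual model the gates are produced by learned cummax-style gating (Shen et al., 2019) rather than given as constants.

```python
def coarse_gates(hidden_size, chunk_size, chunk_gate):
    """Coarse partition: one small gate (of length chunk_size) is tiled
    across all chunks, so every chunk is partitioned identically."""
    assert hidden_size % chunk_size == 0 and len(chunk_gate) == chunk_size
    return chunk_gate * (hidden_size // chunk_size)

def fine_gates(full_gate):
    """Fine-grained partition: a single chunk covers all neurons,
    i.e. every neuron gets its own independent gate value."""
    return list(full_gate)

g = coarse_gates(6, 3, [0.1, 0.5, 0.9])
# g == [0.1, 0.5, 0.9, 0.1, 0.5, 0.9]: the same partition repeats per chunk
```

With a 300-dimension neuron set and 10 chunks, the coarse setting needs only a 30-dimension entity gate and a 30-dimension relation gate, but every chunk is forced into the same partition; the fine-grained setting pays for full-width gates and gains per-neuron flexibility.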
+
+Decoding Strategy In pipeline-like methods, relation prediction is performed only on entities that the system considers valid in its entity prediction. We argue that a better strategy for relation prediction is to take all word pairs, including invalid ones, into account. We refer to the former strategy as selective decoding and the latter as universal decoding. In selective decoding, we predict relation scores only for entities deemed valid by the entity scores computed in the NER unit. Table 2 shows that universal decoding, where all negative instances are included, is better than selective decoding. Apart from mitigating error propagation, we argue that universal decoding resembles contrastive learning, as negative instances help to identify positive instances through implicit comparison.
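A minimal sketch of the two strategies, using made-up entity and relation scores and a hypothetical 0.5 validity threshold (in the model the scores come from the NER and RE units):

```python
ent_score = {"A": 0.9, "B": 0.3, "C": 0.8}      # hypothetical entity scores
rel_score = {("A", "B"): 0.7, ("A", "C"): 0.6}  # hypothetical relation scores
THRESH = 0.5

def selective(rel_score, ent_score):
    """Only keep relation scores for pairs whose both spans were
    accepted as entities: an NER error propagates into RE."""
    return {p: s for p, s in rel_score.items()
            if ent_score[p[0]] > THRESH and ent_score[p[1]] > THRESH}

def universal(rel_score):
    """Score every pair, valid or not; at training time the negative
    pairs act like contrastive instances."""
    return dict(rel_score)

# selective() drops ("A", "B") because B was rejected as an entity,
# even though the relation score 0.7 is high
```

The sketch makes the error-propagation argument concrete: in selective decoding a single entity misclassification silently removes a correct relation from consideration.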
+
+# 6 Effects of Relation Signal on Entity Recognition
+
+It is widely accepted that entity recognition helps in predicting relations, but the effect of relation signals on entity prediction remains a point of disagreement among researchers.
+
+Through two auxiliary experiments, we find that the absence of relation signals has a considerable bearing on entity recognition.
+
+# 6.1 Analysis on Entity Prediction of Different Types
+
+In Table 1, the NER performance of our model is consistently better than the other baselines except on ACE05, where it falls short by a non-negligible margin. We argued that this could be attributed to the fact that ACE05 contains many entities that do not belong to any triple.
+
+To corroborate this claim, in this section we quantify the gap in entity prediction between entities that belong to some triple and those that have no relation with other entities. We refer to the former as In-triple entities and the latter as Out-of-triple entities. We split the entities into the two groups and test the NER performance of each group on ACE05/ACE04/SciERC. Since Out-of-triple entities are non-existent in NYT/WebNLG/ADE, we do not evaluate on those datasets.
+
+As shown in Table 3, there is a huge gap between In-triple and Out-of-triple entity prediction, especially on SciERC, where the diff score reaches $26.6\%$ . We argue that this might be because entity prediction in SciERC is generally harder: it involves identifying scientific terms, and the entities are longer on average. Another observation is that the diff score is largely attributable to the difference in precision, which means that without guidance from relation signals, our model tends to be over-optimistic in its entity predictions.
+
+In addition, comparing with PURE (Zhong and Chen, 2021), we find that the overall NER performance is negatively correlated with the percentage of Out-of-triple entities in the dataset. In particular, on ACE05, where the performance of our model is relatively weak, over $64\%$ of the entities are Out-of-triple. This phenomenon exposes a weakness of joint models: joint modeling of NER and RE might be somewhat harmful to entity prediction, since the inference patterns of In-triple and Out-of-triple entities differ, and so does the dynamic between relation information and entity prediction for the two entity types.
+
+| Model | ConcatSent Ori→Aug | Decline | CrossCategory Ori→Aug | Decline | EntTypos Ori→Aug | Decline | OOV Ori→Aug | Decline | SwapLonger Ori→Aug | Decline | Average Decline |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| BiLSTM-CRF | 83.0→82.2 | 0.8 | 82.9→43.5 | 39.4 | 82.5→73.5 | 9.0 | 82.9→64.2 | 18.7 | 82.9→67.7 | 15.2 | 16.6 |
+| BERT-base (cased) | 87.3→86.2 | 1.1 | 87.4→48.1 | 39.3 | 87.5→83.1 | 4.1 | 87.4→79.0 | 8.4 | 87.4→82.1 | 5.3 | 11.6 |
+| BERT-base (uncased) | 88.8→88.7 | 0.1 | 88.7→46.0 | 42.7 | 89.1→83.0 | 6.1 | 88.7→74.6 | 14.1 | 88.7→78.5 | 10.2 | 14.6 |
+| TENER | 84.2→83.4 | 0.8 | 84.7→39.6 | 45.1 | 84.5→76.6 | 7.9 | 84.7→51.5 | 33.2 | 84.7→31.1 | 53.6 | 28.1 |
+| Flair | 85.5→85.2 | 0.3 | 84.6→44.9 | 39.7 | 86.1→81.5 | 4.6 | 84.6→81.3 | 3.3 | 84.6→73.1 | 11.5 | 11.9 |
+| PFN | 89.1→87.9 | 1.2 | 89.0→80.5 | 8.5 | 89.6→86.9 | 2.7 | 89.0→80.4 | 8.6 | 89.0→84.3 | 4.7 | 5.1 |
+
+Table 4: Robustness test of NER against input perturbation on ACE05. Baseline results and test files are taken from https://www.textflint.io/.
+
+# 6.2 Robustness Test on Named Entity Recognition
+
+We use a robustness test to evaluate our model under adverse circumstances, applying the domain transformation methods for NER from Wang et al. (2021). The compared baselines are all relation-free models, including BiLSTM-CRF (Huang et al., 2015), BERT (Devlin et al., 2019), TENER (Yan et al., 2019) and Flair embeddings (Akbik et al., 2019). Descriptions of the transformation methods can be found in Appendix D.
+
+From Table 4, we observe that our model is generally more resilient to input perturbations than the other baselines, especially in the CrossCategory setting. This is probably because the relation signals used during training impose type constraints on entities, so entity-type inference is less affected by the semantics of the target entity itself and more driven by the (relational) context surrounding it.
+
+# 6.3 Does Relation Signal Help in Predicting Entities?
+
+Contrary to the claim of Zhong and Chen (2021) that relation signals have minimal effect on entity prediction, we find several clues that suggest otherwise. First, in Section 6.1, we observe that In-triple entities are much easier to predict than Out-of-triple entities, which suggests that relation signals are useful for entity prediction. Second, in Section 6.2, we perform a robustness test on NER to evaluate our model's resilience to input perturbation, comparing our method, the only joint model, with relation-free baselines. The results suggest that our method is much more resilient under adverse circumstances, which can be (at least partially) explained by the introduction of relation signals. In sum, we find that relation signals do have a non-negligible effect on entity prediction.
+
+Zhong and Chen (2021) most probably concluded that relation information has minimal influence on entity prediction because of selection bias: the evaluated dataset, ACE05, contains a large proportion $(64\%)$ of Out-of-triple entities, which in essence do not require any relation signal.
+
+# 7 Conclusion
+
+In this paper, we encode task-specific features for joint entity and relation extraction with our newly proposed model, the Partition Filter Network. Instead of extracting task-specific features sequentially or in parallel, we employ a partition filter encoder that generates them jointly, in order to model two-way inter-task interaction properly. We conduct extensive experiments on six datasets to verify the effectiveness of our model; overall, the results demonstrate that it is superior to previous baselines in both entity and relation prediction. Furthermore, the dissection of several aspects of our model in the ablation study sheds light on what works best in our framework. Lastly, contrary to what previous work has claimed, our auxiliary experiments suggest that relation prediction contributes to named entity prediction in a non-negligible way.
+
+# 8 Acknowledgements
+
+The authors wish to thank the anonymous reviewers for their helpful comments. This work was partially funded by China National Key R&D Program (No.2018YFB1005104), National Natural Science Foundation of China (No.62076069, 61976056), Shanghai Municipal Science and Technology Major Project (No.2021SHZDZX0103).
+
+# References
+
+Alan Akbik, Tanja Bergmann, Duncan Blythe, Kashif Rasul, Stefan Schweter, and Roland Vollgraf. 2019.
+
+FLAIR: An easy-to-use framework for state-of-the-art NLP. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 54-59, Minneapolis, Minnesota. Association for Computational Linguistics.
+Giannis Bekoulis, Johannes Deleu, Thomas Demeester, and Chris Develder. 2018a. Adversarial training for multi-context joint entity and relation extraction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2830-2836, Brussels, Belgium. Association for Computational Linguistics.
+Giannis Bekoulis, Johannes Deleu, Thomas Demeester, and Chris Develder. 2018b. Joint entity recognition and relation extraction as a multi-head selection problem. Expert Systems with Applications, 114:34-45.
+Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciBERT: A pretrained language model for scientific text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3615-3620, Hong Kong, China. Association for Computational Linguistics.
+Yee Seng Chan and Dan Roth. 2011. Exploiting syntactico-semantic structures for relation extraction. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 551-560, Portland, Oregon, USA. Association for Computational Linguistics.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
+Dennis Diefenbach, Vanessa Lopez, Kamal Singh, and Pierre Maret. 2018. Core techniques of question answering systems over knowledge bases: a survey. Knowledge and Information systems, 55(3):529-569.
+Markus Eberts and Adrian Ulges. 2019. Span-based joint entity and relation extraction with transformer pre-training. arXiv preprint arXiv:1909.07755.
+Tsu-Jui Fu, Peng-Hsuan Li, and Wei-Yun Ma. 2019. GraphRel: Modeling text as relational graphs for joint entity and relation extraction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1409-1418, Florence, Italy. Association for Computational Linguistics.
+
+Pankaj Gupta, Hinrich Schütze, and Bernt Andrassy. 2016. Table filling multi-task recurrent neural network for joint entity and relation extraction. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 2537-2547, Osaka, Japan. The COLING 2016 Organizing Committee.
+Vishal Gupta and Gurpreet Singh Lehal. 2010. A survey of text summarization extractive techniques. Journal of Emerging Technologies in Web Intelligence, 2:258-268.
+Harsha Gurulingappa, Abdul Mateen Rajput, Angus Roberts, Juliane Fluck, Martin Hofmann-Apitius, and Luca Toldo. 2012. Development of a benchmark corpus to support the automatic extraction of drug-related adverse effects from medical case reports. Journal of biomedical informatics, 45(5):885-892.
+Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional lstm-crf models for sequence tagging. arXiv preprint arXiv:1508.01991.
+Arzoo Katiyar and Claire Cardie. 2017. Going out on a limb: Joint extraction of entity mentions and relations without dependency trees. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 917-928, Vancouver, Canada. Association for Computational Linguistics.
+Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
+Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: A lite BERT for self-supervised learning of language representations. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
+Qi Li and Heng Ji. 2014. Incremental joint extraction of entity mentions and relations. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 402-412, Baltimore, Maryland. Association for Computational Linguistics.
+Xiaoya Li, Fan Yin, Zijun Sun, Xiayu Li, Arianna Yuan, Duo Chai, Mingxin Zhou, and Jiwei Li. 2019. Entity-relation extraction as multi-turn question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1340-1350, Florence, Italy. Association for Computational Linguistics.
+Yi Luan, Luheng He, Mari Ostendorf, and Hannaneh Hajishirzi. 2018. Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction. In Proceedings
+
+of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3219-3232, Brussels, Belgium. Association for Computational Linguistics.
+Makoto Miwa and Mohit Bansal. 2016. End-to-end relation extraction using LSTMs on sequences and tree structures. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1105-1116, Berlin, Germany. Association for Computational Linguistics.
+Makoto Miwa and Yutaka Sasaki. 2014. Modeling joint entity and relation extraction with table representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1858-1869, Doha, Qatar. Association for Computational Linguistics.
+Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions without labeled text. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 148-163. Springer.
+Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M. Marlin. 2013. Relation extraction with matrix factorization and universal schemas. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 74-84, Atlanta, Georgia. Association for Computational Linguistics.
+Yikang Shen, Shawn Tan, Alessandro Sordoni, and Aaron C. Courville. 2019. Ordered neurons: Integrating tree structures into recurrent neural networks. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.
+Yongliang Shen, Xinyin Ma, Yechun Tang, and Weiming Lu. 2021. A trigger-sense memory flow framework for joint entity and relation extraction. In Proceedings of the Web Conference 2021, pages 1704-1715.
+Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, 15(1):1929-1958.
+Tung Tran and Ramakanth Kavuluru. 2019. Neural metric learning for fast end-to-end relation extraction. arXiv preprint arXiv:1905.07458.
+Christopher Walker, Stephanie Strassel, Julie Medero, and Kazuaki Maeda. 2006. Ace 2005 multilingual training corpus. Linguistic Data Consortium, Philadelphia, 57:45.
+Jue Wang and Wei Lu. 2020. Two are better than one: Joint entity and relation extraction with table-sequence encoders. In Proceedings of the
+
+2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1706-1721, Online. Association for Computational Linguistics.
+Xiao Wang, Qin Liu, Tao Gui, Qi Zhang, Yicheng Zou, Xin Zhou, Jiacheng Ye, Yongxin Zhang, Rui Zheng, Zexiong Pang, et al. 2021. Textflint: Unified multilingual robustness evaluation toolkit for natural language processing.
+Yijun Wang, Changzhi Sun, Yuanbin Wu, Junchi Yan, Peng Gao, and Guotong Xie. 2020a. Pretraining entity relation encoder with intra-span and inter-span information. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1692-1705, Online. Association for Computational Linguistics.
+Yucheng Wang, Bowen Yu, Yueyang Zhang, Tingwen Liu, Hongsong Zhu, and Limin Sun. 2020b. TPLinker: Single-stage joint extraction of entities and relations through token pair linking. In Proceedings of the 28th International Conference on Computational Linguistics, pages 1572-1582, Barcelona, Spain (Online). International Committee on Computational Linguistics.
+Zhepei Wei, Jianlin Su, Yue Wang, Yuan Tian, and Yi Chang. 2020. A novel cascade binary tagging framework for relational triple extraction. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1476-1488, Online. Association for Computational Linguistics.
+Hang Yan, Bocao Deng, Xiaonan Li, and Xipeng Qiu. 2019. Tener: adapting transformer encoder for named entity recognition. arXiv preprint arXiv:1911.04474.
+Xiaofeng Yu and Wai Lam. 2010. Jointly identifying entities and extracting relations in encyclopedia text via a graphical model approach. In Coling 2010: Posters, pages 1399-1407, Beijing, China. Coling 2010 Organizing Committee.
+Dmitry Zelenko, Chinatsu Aone, and Anthony Richardella. 2002. Kernel methods for relation extraction. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP 2002), pages 71-78. Association for Computational Linguistics.
+Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, and Jun Zhao. 2014. Relation classification via convolutional deep neural network. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 2335-2344, Dublin, Ireland. Dublin City University and Association for Computational Linguistics.
+Xiangrong Zeng, Shizhu He, Daojian Zeng, Kang Liu, Shengping Liu, and Jun Zhao. 2019. Learning the extraction order of multiple relational facts
+
+| Dataset | Train | Dev | Test | $\vert\mathcal{E}\vert$ | $\vert\mathcal{R}\vert$ |
+| --- | --- | --- | --- | --- | --- |
+| NYT | 56,195 | 5,000 | 5,000 | - | 24 |
+| WebNLG | 5,019 | 500 | 703 | - | 170 |
+| ADE | 4,272 (10-fold) | | | 2 | 1 |
+| ACE05 | 10,051 | 2,424 | 2,050 | 7 | 6 |
+| ACE04 | 8,683 (5-fold) | | | 7 | 6 |
+| SciERC | 1,861 | 275 | 551 | 6 | 7 |
+
+Table 5: Statistics of datasets (number of sentences per split). $|\mathcal{E}|$ and $|\mathcal{R}|$ are the numbers of entity and relation types, respectively. In NYT and WebNLG, entity type information is not annotated.
+
+in a sentence with reinforcement learning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 367-377, Hong Kong, China. Association for Computational Linguistics.
+
+Xiangrong Zeng, Daojian Zeng, Shizhu He, Kang Liu, and Jun Zhao. 2018. Extracting relational facts by an end-to-end neural model with copy mechanism. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 506-514, Melbourne, Australia. Association for Computational Linguistics.
+
+Suncong Zheng, Feng Wang, Hongyun Bao, Yuexing Hao, Peng Zhou, and Bo Xu. 2017. Joint extraction of entities and relations based on a novel tagging scheme. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1227-1236, Vancouver, Canada. Association for Computational Linguistics.
+
+Zexuan Zhong and Danqi Chen. 2021. A frustratingly easy approach for entity and relation extraction. In North American Association for Computational Linguistics (NAACL).
+
+# A Dataset
+
+We evaluate our model on six datasets. NYT (Riedel et al., 2010) is sampled from New York Times news articles and annotated by distant supervision. WebNLG was originally created for the natural language generation task and was adapted by Zeng et al. (2018) into a relation extraction dataset. ACE05 and ACE04 (Walker et al., 2006) are collected from various sources, including news articles and online forums. ADE (Gurulingappa et al., 2012) contains medical descriptions of adverse effects of drug use. SciERC (Luan et al., 2018) is collected from 500 AI paper abstracts originally used for scientific knowledge graph
+
+construction. Following previous work, we filter out samples containing overlapping entities in ADE, which make up only $2.8\%$ of the whole dataset. Statistics of the datasets can be found in Table 5.
+
+# B Implementation Details
+
+We leverage pre-trained language models as our embedding layer. Following previous work, we use bert-base-cased, albert-xlarge-v1 and scibert-scivocab-uncased. Batch size and learning rate are set to 4/20 and 1e-5/2e-5 for SciERC/other datasets, respectively. To prevent overfitting, dropout (Srivastava et al., 2014) with rate 0.1 is applied to the word embeddings and to the entity span and triple representations of the task units. We use Adam (Kingma and Ba, 2015) to optimize the model parameters and train for 100 epochs. To prevent gradient explosion, gradient clipping is applied during training.
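The setup above can be summarized as follows. The hyperparameters are the ones quoted in the text; the clip-by-global-norm helper is a generic formulation for illustration, not necessarily the exact clipping variant used in the released code.

```python
import math

# Hyperparameters quoted in the text above.
CONFIG = {
    "batch_size": {"SciERC": 4, "others": 20},
    "learning_rate": {"SciERC": 1e-5, "others": 2e-5},
    "dropout": 0.1,
    "epochs": 100,
}

def clip_by_global_norm(grads, max_norm):
    """Rescale a flat list of gradient values so that their global
    L2 norm does not exceed max_norm (generic clipping sketch)."""
    norm = math.sqrt(sum(g * g for g in grads))
    if norm > max_norm:
        scale = max_norm / norm
        grads = [g * scale for g in grads]
    return grads

# A gradient of norm 5.0 is rescaled to norm 1.0; smaller gradients pass through.
clipped = clip_by_global_norm([3.0, 4.0], 1.0)
```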
+
+# C Analysis on Overlapping Pattern and Triple Number
+
+For a more comprehensive evaluation, we assess our model on NYT/WebNLG under different triple overlapping patterns (see Section 2 for a detailed description of these patterns) and on sentences containing different numbers of triples. Previous work does not report results by overlapping pattern or triple number on ADE/ACE05/ACE04/SciERC, since EPO triples are non-existent in these datasets, so no comparison is included for them.
+
+As shown in Figure 3, our model is mostly superior to the other two baselines in all three categories. Interestingly, in the Normal class our model performs significantly better on WebNLG, but its score on NYT is basically on par with TPLinker. We argue that this is probably because NYT, generated by distant supervision, is much noisier than WebNLG. Besides, sentences with Normal triples are likely to be noisier than sentences with EPO and SEO triples, since there is a higher chance of incomplete annotation. It is therefore unsurprising that no significant improvement is achieved in predicting Normal triples on NYT.
+
+Besides, from Figure 4 we observe that our model performs better on sentences with five or more triples on both datasets, where the interaction between entities and relations becomes very complex. The strong performance on those sentences confirms the
+
+Figure 3: F1-score of relation triple extraction on sentences with three different overlapping patterns: (a) NYT; (b) WebNLG.
+
+Figure 4: F1-score of relational triple extraction on sentences containing $N$ triples, with $N$ ranging from 1 to $\geq 5$ : (a) NYT; (b) WebNLG.
+
+superiority of our model against other baselines.
+
+# D Details of Robustness Test
+
+Descriptions of the transformation methods used in Table 4 are listed as follows:
+
+1. ConcatSent - Concatenate sentences into a longer one.
+2. CrossCategory - Entity swap: replace entities with ones that can be labeled with different types.
+3. EntTypos - Swap/delete/add random characters in entities.
+4. OOV - Entity swap: replace entities with out-of-vocabulary ones.
+5. SwapLonger - Replace short entities with longer ones.
+
+Transformations of RE are not viable for the following reasons:
+
+1. The input is restricted to sentences containing a single triple.
+2. The methods include entity swap, which is already covered by the NER test.
+3. The methods include relation-specific transformations (Age, Employee, Birth), and ACE05 does not have these types of relations.
+
+4. The methods include inserting descriptions of entities, which is unfair because it might introduce new entities and relations.
\ No newline at end of file
diff --git a/apartitionfilternetworkforjointentityandrelationextraction/images.zip b/apartitionfilternetworkforjointentityandrelationextraction/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..b71611cc8ee10d41be573a998f163a32b53567fe
--- /dev/null
+++ b/apartitionfilternetworkforjointentityandrelationextraction/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0dbd0a064d4368d4357b2efe79057a9c7222053ef3a3243d793fdfcb7259e442
+size 601463
diff --git a/apartitionfilternetworkforjointentityandrelationextraction/layout.json b/apartitionfilternetworkforjointentityandrelationextraction/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..22e71a9e5f06182cb918dea318f90431e19df01a
--- /dev/null
+++ b/apartitionfilternetworkforjointentityandrelationextraction/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:15b3cec94a73478bec01409fe2126e6cee0960a02b86563f6c9e28e886c73dd4
+size 466708
diff --git a/apirecxcrosslibraryapirecommendationviapretrainedlanguagemodel/8dd23836-05ce-400e-8c7d-2ccfc93b0181_content_list.json b/apirecxcrosslibraryapirecommendationviapretrainedlanguagemodel/8dd23836-05ce-400e-8c7d-2ccfc93b0181_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..eb6235714956f373791a16d8d529ebd3e2db71e9
--- /dev/null
+++ b/apirecxcrosslibraryapirecommendationviapretrainedlanguagemodel/8dd23836-05ce-400e-8c7d-2ccfc93b0181_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:26bd1e23defe8bf39e60a708d7a1016ac67f64e9773175f9f1acde4171aae030
+size 83962
diff --git a/apirecxcrosslibraryapirecommendationviapretrainedlanguagemodel/8dd23836-05ce-400e-8c7d-2ccfc93b0181_model.json b/apirecxcrosslibraryapirecommendationviapretrainedlanguagemodel/8dd23836-05ce-400e-8c7d-2ccfc93b0181_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..c3ce1ef53b91760569faf040eb81bc5d583f738e
--- /dev/null
+++ b/apirecxcrosslibraryapirecommendationviapretrainedlanguagemodel/8dd23836-05ce-400e-8c7d-2ccfc93b0181_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a453a5074b7c6dcfe5a7536b66cdc30c6ef5acc257efc4f1f7d32da5e7821e08
+size 100733
diff --git a/apirecxcrosslibraryapirecommendationviapretrainedlanguagemodel/8dd23836-05ce-400e-8c7d-2ccfc93b0181_origin.pdf b/apirecxcrosslibraryapirecommendationviapretrainedlanguagemodel/8dd23836-05ce-400e-8c7d-2ccfc93b0181_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..6c8592356c7e8295689c3759b4727613ecb67e5c
--- /dev/null
+++ b/apirecxcrosslibraryapirecommendationviapretrainedlanguagemodel/8dd23836-05ce-400e-8c7d-2ccfc93b0181_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:05b01976a23062098a66f4be502ac8174a07c6cb461295b0fe2b0375f2be14ac
+size 647545
diff --git a/apirecxcrosslibraryapirecommendationviapretrainedlanguagemodel/full.md b/apirecxcrosslibraryapirecommendationviapretrainedlanguagemodel/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..a32ce5fa7a16ebb00daf63a70c0694ad843fcec7
--- /dev/null
+++ b/apirecxcrosslibraryapirecommendationviapretrainedlanguagemodel/full.md
@@ -0,0 +1,297 @@
+# APIRecX: Cross-Library API Recommendation via Pre-Trained Language Model
+
+Yuning Kang $^{1}$ , Zan Wang $^{1}$ , Hongyu Zhang $^{2}$ , Junjie Chen $^{1*}$ , Hanmo You $^{1}$
+
+1College of Intelligence and Computing, Tianjin University, Tianjin, China
+
+2The University of Newcastle, Callaghan, Australia
+
+{kangyuning, wangzan, junjiechen, youhanmo}@tju.edu.cn
+
+zhang.hongyu@newcastle.edu.au
+
+# Abstract
+
+For programmers, learning the usage of the APIs (Application Programming Interfaces) of a software library is important yet difficult. API recommendation tools can help developers by recommending which API to use next given the APIs already written. Traditionally, language models such as N-gram are applied to API recommendation. However, because software libraries keep changing and new libraries keep emerging, new APIs are common. These new APIs can be seen as OOV (out-of-vocabulary) words and cannot be handled well by existing API recommendation approaches due to the lack of training data. In this paper, we propose APIRecX, the first cross-library API recommendation approach, which uses BPE to split each API call in each API sequence and pre-trains a GPT-based language model. It then recommends APIs by fine-tuning the pre-trained model. APIRecX can migrate knowledge of existing libraries to a new library, and can recommend APIs that were previously regarded as OOV. We evaluate APIRecX on six libraries, and comparison with two typical API recommendation approaches confirms its effectiveness.
+
+# 1 Introduction
+
+Application Programming Interface (API) is an integral part of software libraries. Being familiar with APIs could help improve programming productivity. However, a library tends to contain a large number of APIs and there could be complex dependencies among APIs, and thus understanding all APIs in a library is very challenging, especially for new developers. To facilitate correct and efficient usage of APIs during programming, many API recommendation approaches (Zhong et al., 2009; Nguyen et al., 2016; Xie et al., 2019; Bruch et al., 2009; Huang et al., 2018) have been proposed. More specifically, API recommendation
+
+aims to automatically recommend a correct API call at the current programming location based on its preceding part of code information.
+
+As an example, Listing 1 shows a Java code snippet for opening a text file. Suppose a programmer forgets what to write in Line 6; an API recommendation tool can help by prompting the most likely API call to be used next. In this case, printStackTrace() will be returned. API recommendation tools do so by learning API usage patterns from a large code corpus. Some tools (Nguyen et al., 2016; Nguyen and Nguyen, 2015) use probabilistic models to learn API usage patterns, while others (Zhong et al., 2009; Wang et al., 2013) use data mining methods to find API usage patterns. Recently, deep learning based language models have been proposed to model API sequences and have obtained promising results in recommending APIs (Raychev et al., 2014; Yan et al., 2018; White et al., 2015; Nguyen and Nguyen, 2015).
+
+However, the existing API recommendation tools only focus on improving the performance of API recommendation when API usage data are sufficient (i.e., the usage data of the APIs to be recommended are sufficient in the training data). That is, they mostly ignore the OOV (out of vocabulary) problem, which can have a negative impact on the performance of API recommendation. More specifically, when some APIs are unseen in the training data, these approaches cannot recommend them correctly. The OOV problem can be more serious for a new library, since it is very difficult to collect sufficient API usage data.
+
+To conduct API recommendation for new libraries, cross-library API recommendation is a potentially feasible solution, which aims to recommend APIs in new libraries based on the usage data of APIs in other libraries, but it is still an open challenge due to the inherent OOV problem. For example, as shown in Listing 2, we may rarely (or even never) see SQLException.printStackTrace() in the training set, but the usage of Exception is very common in the training set and the usages of SQLException and Exception are similar. So if we use a word segmentation algorithm to split SQLException.printStackTrace() into the sequence SQLException - . - printStackTrace(), we can use the Exception usage pattern learned during training to predict the .printStackTrace() method and finally synthesize SQLException.printStackTrace() as the recommendation result.
+
+Listing 1: An Example of API Recommendation
+```java
+public static void main(..){
+    FileInputStream inputStream = null;
+    try{
+        File file = new File("tmp.txt");
+    } catch(Exception e){
+        e.____();  //To write a catch block
+    }
+}
+API Sequence: FileInputStream.new()-TRY-TryBlock-File.new(String)-CATCH-Exception.____()
+```
+
+Listing 2: An OOV Example in API Recommendation
+```java
+public static Connection getConnection(){
+    Connection connection = null;
+    try{
+        connection = DriverManager.get(URL, username, password);
+    } catch (SQLException e){
+        e.printStackTrace();
+    }
+    return connection;
+}
+```
+
+To achieve the goal of cross-library API recommendation, we draw lessons from the area of text generation in relieving the OOV problem (Sennrich et al., 2016; Hermann et al., 2021). More specifically, we design a framework of cross-library API recommendation, called APIRecX, which consists of three main components, i.e., API segmentation, subword language model building, and API synthesis for recommendation. Since the OOV problem at the API level hampers cross-library API recommendation, APIRecX first incorporates BPE (Byte Pair Encoding) (Provilkov et al., 2020; Sennrich et al., 2016), one of the most widely-used word segmentation methods in text generation, to split each API call into a sequence of subwords. That is, the OOV problem at the API level could be largely relieved at the subword level. Based on a large number of subword data, APIRecX then
+
+adopts the "pre-training & fine-tuning" mechanism to build a GPT-based (Generative Pre-Training) pre-trained language model, which recommends a subword in each prediction. Since the recommendation process is conducted at the subword level, it is necessary to compose a complete API call for recommendation from the predicted subwords. Here, APIRecX incorporates beam search for API synthesis.
+
+To evaluate the performance of APIRecX, we conducted an extensive study based on 1,711 Java projects from GitHub involving six libraries in three domains as subjects for mimicking new libraries in the scenario of cross-library API recommendation, and over 14,000 GitHub Java projects that do not involve the former six libraries as the training corpus. By comparing with two typical API recommendation approaches, i.e., the LSTM-based language model (Yan et al., 2018; White et al., 2015) and the N-gram-based language model (Raychev et al., 2014; Karampatsis and Sutton, 2019; Hindle et al., 2012), our experimental results demonstrate the effectiveness of APIRecX for cross-library API recommendation in terms of recommendation accuracy.
+
+To sum up, this work makes the following major contributions:
+
+- We propose the first framework for cross-library API recommendation, consisting of BPE-based API segmentation, subword language model building, and beam-search based API synthesis.
+- We are the first to build a GPT-based language model in the area of API recommendation, which is more effective than the existing language models.
+- We conduct an extensive study to evaluate our proposed approach, demonstrating its effectiveness in the scenario of cross-library API recommendation.
+
+# 2 Approach
+
+In the paper, we propose APIRecX, the first approach for cross-library API recommendation. With APIRecX, we can recommend APIs in some libraries (especially new libraries) by learning from a large amount of API usage data of some other libraries.
+
+
+Figure 1: An Overview of APIRecX
+
+# 2.1 Overview
+
+Achieving the goal of cross-library API recommendation is challenging.
+
+- First, different libraries tend to not contain APIs with the same names, and thus it is hard to adopt existing approaches to recommend APIs that are not seen in training data. That is, the first challenge is due to the OOV problem at the API level. To overcome it, APIRecX aims to recommend APIs at the subword level through API segmentation. The insight is that an API call usually consists of a set of relatively commonly-used subwords such as Exception, print, etc. Therefore, the OOV problem at the API level can be largely relieved at the subword level.
+- Second, APIRecX recommends each subword in turn and then composes a complete API call for recommendation based on predicted subwords. That means that an API call can be correctly recommended only if all the subwords in the API call are recommended correctly, which largely aggravates the recommendation difficulty. To relieve the inaccuracy of API recommendation caused by inaccurate subword prediction, APIRecX incorporates beam search to enlarge the search space of API synthesis instead of directly recommending an API call composed by Top-1 subword in each prediction.
+
+With the above two insights, we design a novel GPT-based method in APIRecX to build a subword language model. Here, APIRecX first pre-trains a
+
+subword language model based on a large amount of API usage data of other libraries in an offline process. When new libraries are released, APIRecX then directly fine-tunes the pre-trained model after collecting a certain amount of API usage data of the new libraries, which is much more efficient than retraining on all API usage data (i.e., pre-training data plus fine-tuning data). Also, to make APIRecX a light-weight approach, it does not build complex data-flow and control-flow graphs, but directly represents a method as an API sequence following the existing work (Gu et al., 2017; Yan et al., 2018; Nguyen et al., 2017). The overview of APIRecX is shown in Figure 1.
+
+# 2.2 BPE-based API Segmentation
+
+APIRecX extracts API sequences following the practice in the existing work (Gu et al., 2017), which extracts all API calls (identifier & arguments, e.g., DriverManager.getConnection(String)) and control statements with API calls in a method to form an API sequence. Here, all variables in API sequences are replaced with their types. For example, for an API call o.m() where o is an instance of a class C, APIRecX adds C.m to the API sequence.
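+
+The receiver-type replacement can be sketched as follows (a hypothetical helper, assuming variable types have already been resolved from the source code):
+
+```python
+def to_api_sequence(calls, var_types):
+    """Rewrite each call o.m(...) as C.m(...) via a variable-to-type table."""
+    out = []
+    for call in calls:
+        receiver, rest = call.split(".", 1)   # split off the receiver name
+        out.append(var_types.get(receiver, receiver) + "." + rest)
+    return out
+
+calls = ["conn.close()", "DriverManager.getConnection(String)"]
+print(to_api_sequence(calls, {"conn": "Connection"}))
+# ['Connection.close()', 'DriverManager.getConnection(String)']
+```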
+
+Although API names tend to be unique, they usually consist of a set of relatively commonly-used subwords. That is, different API names may include common subwords, and thus the OOV problem at the API level can be largely relieved at the subword level. With this insight, APIRecX splits each API call in an API sequence into a sequence of subwords, conducts the follow-up learning and prediction at the subword level, and finally composes
+
+a complete API call for recommendation based on the predicted subwords. In this way, it is possible to compose an API call unseen in the training data out of subwords, which makes cross-library API recommendation feasible.
+
+Here, APIRecX adopts BPE (Provilkov et al., 2020; Sennrich et al., 2016; Devlin et al., 2018), one of the most widely-used word segmentation methods in text generation, for splitting an API call into subwords. The reason for choosing BPE is that it achieves a good balance between effectiveness and efficiency. More specifically, compared with character segmentation (Gao et al., 2020), whitespace segmentation (Tezcan et al., 2020; Mikolov et al., 2013), and CamelCase segmentation, BPE is more effective: character segmentation is too fine-grained and thus leads to much semantic loss, while whitespace segmentation is too coarse-grained for API calls and thus cannot effectively relieve the OOV problem. Although CamelCase segmentation achieves a relatively appropriate segmentation granularity, its granularity is still larger than that of BPE, which causes more OOV words. Taking the domain of Swing as an example, $61.9\%$ of subwords are shared between training and test data under BPE, while only $50.9\%$ are shared under CamelCase segmentation. Compared with more advanced methods (e.g., WordPiece (Devlin et al., 2018) and ULM (Chen et al., 2005)), BPE is more efficient but not much less effective, since these methods need to build language models during word segmentation while BPE is based only on frequency. Besides, APIRecX adds a special subword (/t) to mark the end of each API call, which helps APIRecX determine the termination of subword recommendation for an API call. Through this step, APIRecX obtains a large amount of API usage data at the subword level. As an example, for the code in Listing 2, the API sequence after BPE-based segmentation is: Connection-.-new(/t)-TRY(/t)-Driver-Manager-.-get-Connection(/t)-CATCH(/t)-Exception-.-print-StackTrace(/t).
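+
+To make the segmentation step concrete, here is a minimal, illustrative sketch of frequency-based BPE (not the authors' implementation; the toy corpus and merge count are assumptions). Merge rules are learned from the most frequent adjacent symbol pairs, and an unseen name is then segmented with those rules:
+
+```python
+from collections import Counter
+
+def learn_bpe(corpus, num_merges):
+    """Learn an ordered list of merge rules from a list of words."""
+    vocab = Counter(tuple(word) for word in corpus)   # words as symbol tuples
+    merges = []
+    for _ in range(num_merges):
+        pairs = Counter()
+        for symbols, freq in vocab.items():
+            for a, b in zip(symbols, symbols[1:]):
+                pairs[(a, b)] += freq
+        if not pairs:
+            break
+        best = max(pairs, key=pairs.get)              # most frequent pair
+        merges.append(best)
+        new_vocab = Counter()
+        for symbols, freq in vocab.items():
+            merged, i = [], 0
+            while i < len(symbols):
+                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
+                    merged.append(symbols[i] + symbols[i + 1])
+                    i += 2
+                else:
+                    merged.append(symbols[i])
+                    i += 1
+            new_vocab[tuple(merged)] += freq
+        vocab = new_vocab
+    return merges
+
+def segment(word, merges):
+    """Apply the learned merges in order to split a (possibly unseen) word."""
+    symbols = list(word)
+    for a, b in merges:
+        i = 0
+        while i < len(symbols) - 1:
+            if symbols[i] == a and symbols[i + 1] == b:
+                symbols[i:i + 2] = [a + b]
+            else:
+                i += 1
+    return symbols
+
+corpus = ["Exception", "Exception", "SQLException", "IOException"]
+merges = learn_bpe(corpus, num_merges=8)
+print(segment("RuntimeException", merges))
+```
+
+Because "Exception" is frequent in the toy corpus, it ends up as a single subword, so the unseen name RuntimeException is segmented into known pieces instead of being OOV.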
+
+# 2.3 Building a Subword Language Model
+
+To build a subword language model, APIRecX adopts the "pre-training & fine-tuning" mechanism as presented above. That is, APIRecX first pre-trains a subword language model based on a large amount of subword data that do not involve the APIs of new libraries, and then fine-tunes the pre-trained
+
+model by including a small amount of subword data involving the APIs of the library to be recommended. Besides the efficiency benefit presented above, fine-tuning has been demonstrated to be more effective than directly training on the mixture of pre-training data and fine-tuning data (Mao et al., 2015), since the volume of API usage data of new libraries is significantly smaller than that of other libraries, making it very difficult to learn the usage patterns of the APIs of new libraries via the latter strategy.
+
+In APIRecX, we design a GPT-based subword language model building method. GPT first maps an API subword sequence $S = a_{1}, \ldots, a_{t}$ into a vector matrix through the embedding layer $Emb$, where $t$ represents the total number of subwords in the API subword sequence, and then we obtain the embedding matrix $H_{0}$ of the API subword sequence after adding the position information through the position embedding matrix $W_{p}$.
+
+$$
+H_{x} = \begin{cases} Emb(S) + W_{p} & x = 0 \\ Tblock\left(H_{x-1}\right) & 1 \leq x \leq n \end{cases} \tag{1}
+$$
+
+Then, GPT feeds the obtained embedding matrix into the decoder blocks of the Transformer for calculation, where $x$ represents the index of the Transformer layer, and the vector matrix $H_{n}$ output by the last decoder block represents the attention weight for each subword in the sequence. Then, $H_{n}$ is multiplied by the transpose of the embedding matrix and normalized by softmax to obtain $P(S)$, which represents the probabilities of all subwords in the vocabulary at each position in the sequence.
+
+$$
+P(S) = \operatorname{Softmax}\left(H_{n} * Emb^{T}\right) \tag{2}
+$$
+
+In the training phase, we calculate the loss between the ground truth and $P(S)$ through cross-entropy, and optimize GPT with the Adam optimization algorithm.
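+
+The forward pass of Equations (1) and (2) can be traced in a toy numpy sketch (random weights and illustrative sizes, not the authors' trained model; the decoder block is reduced to a single masked self-attention step):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+V, d, t, n = 20, 16, 5, 2              # vocab size, hidden size, seq len, layers
+
+Emb = rng.normal(size=(V, d)) * 0.1    # embedding layer Emb
+W_p = rng.normal(size=(t, d)) * 0.1    # position embedding matrix W_p
+
+def softmax(x):
+    x = x - x.max(axis=-1, keepdims=True)
+    e = np.exp(x)
+    return e / e.sum(axis=-1, keepdims=True)
+
+def tblock(H):
+    """Simplified decoder block: causal (masked) self-attention only."""
+    scores = H @ H.T / np.sqrt(d)
+    mask = np.triu(np.ones((t, t), dtype=bool), k=1)
+    scores = np.where(mask, -1e9, scores)   # attend to past positions only
+    return softmax(scores) @ H
+
+S = np.array([3, 7, 1, 0, 12])         # an API subword id sequence of length t
+H = Emb[S] + W_p                        # Equation (1), x = 0
+for _ in range(n):                      # Equation (1), 1 <= x <= n
+    H = tblock(H)
+P = softmax(H @ Emb.T)                  # Equation (2): next-subword distributions
+print(P.shape)                          # one distribution over V per position
+```
+
+During training, the cross-entropy between each row of P and the ground-truth next subword would be minimized with Adam.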
+
+# 2.4 Beam-search based API Synthesis
+
+With a subword language model, APIRecX recommends a subword in each prediction based on the sequence of subwords before the current position to be predicted. Suppose the API call to be recommended is denoted as $A_{m} = \{s_{m}^{1}, s_{m}^{2}, \ldots, s_{m}^{n_{m}}\}$, where $s_{m}^{j}$ refers to the $j^{th}$ subword in $A_{m}$ and $n_{m}$ refers to the number of subwords in $A_{m}$, and the API calls before $A_{m}$ are denoted as $\{A_{1}, A_{2}, \ldots, A_{m-1}\}$, where $A_{i} = \{s_{i}^{1}, s_{i}^{2}, \ldots, s_{i}^{n_{i}}\}$. When predicting at the position of $s_{m}^{1}$, APIRecX inputs $\{s_{1}^{1}, \ldots, s_{1}^{n_{1}}, \ldots, s_{m-1}^{1}, \ldots, s_{m-1}^{n_{m-1}}\}$ to the model, and when predicting at the position of $s_{m}^{j}$, APIRecX inputs $\{s_{1}^{1}, \ldots, s_{1}^{n_{1}}, \ldots, s_{m-1}^{1}, \ldots, s_{m-1}^{n_{m-1}}, w_{m}^{1}, \ldots, w_{m}^{j-1}\}$, where $w_{m}^{j-1}$ is the predicted subword at the position of $s_{m}^{j-1}$ ($w_{m}^{j-1}$ is the same as $s_{m}^{j-1}$ if the prediction is correct). That is, the currently predicted subword is used to predict subsequent subwords. When the predicted subword ends with (/t), APIRecX outputs the chain of predicted subwords as the API call for recommendation. For example, in Listing 1, when the developer enters e. on Line 6, APIRecX inputs $\{A_{1}, A_{2}, A_{3}, A_{4}, s_{5}^{1}, s_{5}^{2}\}$, where $A_{1} = \{File, Input, Stream, ., new(/t)\}$, $A_{2} = \{TRY(/t)\}$, $A_{3} = \{File, ., new(String)(/t)\}$, $A_{4} = \{CATCH(/t)\}$, $s_{5}^{1} = Exception$, and $s_{5}^{2}$ is the subword ".". Then APIRecX predicts the next subword based on the input. When APIRecX predicts a subword ending with (/t), such as in the chain $\{print, StackTrace()(/t)\}$, it merges the predicted subwords with $s_{5}^{1}$ and $s_{5}^{2}$ and returns the result to the developer.
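+
+The prediction loop above can be sketched as follows (a greedy Top-1 variant for clarity; fake_model is a hard-coded stand-in for the fine-tuned language model, and the subword values are illustrative):
+
+```python
+def recommend_api(context, predict_next, end_mark="(/t)", max_steps=10):
+    """Append Top-1 predicted subwords until one carries the (/t) end marker,
+    then return the merged chain as a complete API-call recommendation."""
+    predicted = []
+    for _ in range(max_steps):
+        sub = predict_next(context + predicted)
+        predicted.append(sub)               # predicted subwords feed back in
+        if sub.endswith(end_mark):
+            break
+    return "".join(predicted).replace(end_mark, "")
+
+# Subword context for Line 6 of Listing 1, plus a canned "model".
+context = ["FileInputStream", ".", "new(/t)", "TRY(/t)", "CATCH(/t)", "Exception", "."]
+canned = ["print", "StackTrace()(/t)"]
+
+def fake_model(seq):
+    return canned[len(seq) - len(context)]
+
+print(recommend_api(context, fake_model))   # printStackTrace()
+```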
+
+However, subword prediction aggravates the difficulty of API recommendation, since it is hard to guarantee the accurate prediction of each subword in an API call. In particular, when a wrong subword is predicted at a certain position, the predictions of all the subsequent subwords could also be affected, since the wrong subword is used to predict subsequent subwords. Actually, each subword is assigned a probability in each prediction. By considering all the subwords in each prediction and using each of them for subsequent predictions, the correct chain of subwords (used for composing a complete API call) cannot be missed, but exploring such an enormous combination space is unaffordable. Therefore, it is still challenging to recommend a complete API call based on subword-level prediction.
+
+To achieve a balance between the accuracy of API recommendation and efficiency, APIRecX adopts the widely-used beam search (Freitag and Al-Onaizan, 2017; Shu and Nakayama, 2018; Huang et al., 2017). More specifically, beam search considers the Top-K subwords (K refers to the beam size) in each prediction rather than only the Top-1 subword or all the subwords. For each of the Top-K subwords in a prediction, it then produces Top-K subwords and obtains $K^2$ chains of subwords, and then preserves the Top-K chains according to their chain probabilities for the next prediction. Following the existing work (Shu and Nakayama, 2018; Huang et al., 2017; Freitag and Al-Onaizan, 2017; Karampatsis et al., 2020), we use Formula 3 to calculate the chain probability of a chain of subwords:
+
+$$
+P\left(w_{m}^{1}, \dots, w_{m}^{i} \mid s_{1}^{1}, \dots, s_{1}^{n_{1}}, \dots, s_{m-1}^{n_{m-1}}\right) = \prod_{j=1}^{i} p\left(w_{m}^{j}\right) \tag{3}
+$$
+
+where $p(w_m^j)$ (short for $p(w_m^j \mid s_1^1, \ldots, s_1^{n_1}, \ldots, s_{m-1}^{n_{m-1}}, w_m^1, \ldots, w_m^{j-1})$) is the probability of the $j^{th}$ subword in the chain $(w_m^1, \ldots, w_m^i)$ predicted by the subword model.
+
+To relieve the effectiveness problem caused by the monotonicity of traditional beam search, APIRecX preserves the memory of poor-quality incomplete chains produced during the beam search process, following the existing work in text generation (Shu and Nakayama, 2018). More specifically, APIRecX constructs a candidate pool that stores the remaining incomplete chains, i.e., those among the $K^2$ chains produced in each prediction that are not in the Top-K. When the $K^2$ chains produced from the Top-K chains of the last prediction have smaller chain probabilities than those of chains in the candidate pool, APIRecX chooses the Top-K chains from among both the $K^2$ chains produced in the current prediction and all the chains in the candidate pool, rather than from the current $K^2$ chains only. In this way, APIRecX has a chance to make up for wrong choices in previous predictions. Besides, APIRecX improves the condition for terminating the beam search process following the existing work (Huang et al., 2017) in text generation, i.e., the search stops when the smallest chain probability among all the produced complete chains is larger than the largest chain probability among all incomplete chains (including incomplete chains in both the candidate pool and the current Top-K chains).
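+
+A compact sketch of this candidate-pool beam search, including the improved termination condition (step is a toy stand-in that returns next-subword probabilities; in APIRecX they would come from the fine-tuned model):
+
+```python
+def beam_search(step, k=2, max_iter=10, end_mark="(/t)"):
+    beams = [((), 1.0)]                  # (chain of subwords, chain probability)
+    pool, complete = [], []              # pool keeps discarded incomplete chains
+    for _ in range(max_iter):
+        expansions = []
+        for chain, p in beams:
+            for sub, q in step(chain).items():
+                cand = (chain + (sub,), p * q)   # Formula (3): product of probs
+                (complete if sub.endswith(end_mark) else expansions).append(cand)
+        expansions += pool               # reconsider previously discarded chains
+        expansions.sort(key=lambda c: c[1], reverse=True)
+        beams, pool = expansions[:k], expansions[k:]
+        best_incomplete = max((p for _, p in expansions), default=0.0)
+        # Stop once the worst complete chain beats every incomplete chain.
+        if complete and min(p for _, p in complete) > best_incomplete:
+            break
+    return sorted(complete, key=lambda c: c[1], reverse=True)
+
+# Toy next-subword distributions:
+probs = {
+    (): {"print": 0.6, "close(/t)": 0.4},
+    ("print",): {"StackTrace()(/t)": 0.9, "ln(/t)": 0.1},
+}
+top = beam_search(lambda chain: probs.get(chain, {}), k=2)
+print(top[0])                            # best complete chain and its probability
+```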
+
+# 3 Evaluation
+
+# 3.1 Experimental Setup
+
+# 3.1.1 Datasets
+
+We used six JDK libraries from three domains to mimic new libraries in the scenario of cross-library API recommendation. They are java.sql and javax.sql in the domain of JDBC (which is the domain about database operations), java.awt and javax.swing in the domain of Swing
+
+| Domain | #API | #Project | #Sequence |
+| --- | --- | --- | --- |
+| JDBC | 909 | 784 | 42,298 |
+| Swing | 10,622 | 722 | 63,249 |
+| IO | 1,192 | 205 | 15,356 |
+
+Table 1: Statistical information on three domains
+
+| Item | Value |
+| --- | --- |
+| #Projects | 14,807 |
+| #LOC | 352,312,696 |
+| #Methods | 15,201,014 |
+| #Sequence | 5,120,310 |
+
+Table 2: Statistical information on pre-train corpus
+
+(which is the domain about user interfaces), and java.io and java.nio in the domain of IO (which is the domain about stream-based inputs and outputs), respectively. Based on the three domains, we conducted three groups of experiments, each of which uses the two libraries in the corresponding domain as the new libraries for recommendation. Table 1 shows the information about the three experiments, where Column "#API" is the number of APIs in the corresponding domain libraries, Column "#Project" is the number of Java projects that are collected from GitHub and use the APIs in the domain libraries, and Column "#Sequence" is the number of API sequences extracted from the collected projects.
+
+Besides, we adopted the corpus provided by the existing work (Allamanis and Sutton, 2013) for pre-training. The corpus has over 14,000 Java projects from GitHub after removing the projects involving the above three domains. From these projects, we extracted over 5,000,000 API sequences as pre-training data. Table 2 shows the information about the pre-train corpus, where Column "#Projects" is the number of Java projects in the pre-train corpus, Column "#LOC" is the total number of lines of code, Column "#Methods" is the total number of Java methods, and Column "#Sequence" is the number of API sequences extracted from this corpus.
+
+# 3.1.2 Selecting test and fine-tune data
+
+We used 10 projects (splitting the domain projects into 10 groups and then selecting the one with the largest number of domain API calls in each group) as test projects, and extracted API call sequences from them. For each sequence of API calls, we produced a set of API call sequences, each of which contains a "hole", as test data. Specifically, we produced them by digging a "hole" at each position from the second API call in the sequence onward. Then, for each
+
+API call sequence with a "hole", we used the sequence of API calls before the "hole" as the input for predicting the API call in the "hole". After selecting the test data, we sampled a certain amount of data from the remaining data at 5 different sampling ratios, i.e., $0.2\%$, $1\%$, $10\%$, $50\%$, and $100\%$, as fine-tune data. Then we used the sampled data to fine-tune the pre-trained model following the fine-tuning process presented in Section 2.3.
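+
+The "hole" construction can be sketched as follows (the API-call names are illustrative):
+
+```python
+def make_holes(seq):
+    """Every position from the second API call onward becomes a 'hole':
+    (calls before the hole as input, the held-out call as the target)."""
+    return [(seq[:i], seq[i]) for i in range(1, len(seq))]
+
+seq = ["File.new", "FileInputStream.new", "InputStream.read", "InputStream.close"]
+pairs = make_holes(seq)
+print(len(pairs))                        # 3 holes from a 4-call sequence
+```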
+
+# 3.1.3 Baselines
+
+We adopted the traditional LSTM-based API recommendation approach (Yan et al., 2018; White et al., 2015; Zhang et al., 2019; Chen et al., 2019) and the N-gram-based API recommendation approach (Raychev et al., 2014; Karampatsis and Sutton, 2019) for comparison, in order to quantitatively investigate the superiority of APIRecX over traditional API recommendation approaches. We followed the parameter settings in these two works (Yan et al., 2018; Raychev et al., 2014) to train the baseline tools on the data we collected; the specific parameter settings are shown in Table 6.
+
+# 3.1.4 Parameters
+
+The parameters comprise the model training parameters and the beam search parameters in the API recommendation process. Table 6 lists all the parameters of APIRecX and the baselines. The structure of the original GPT contains a 12-layer transformer decoder block with 12-head attention, containing nearly 100 million parameters, which requires an extremely large amount of data to support training. However, compared with collecting text data, it is harder to collect such a huge amount of API usage data to support training such a complicated model, and thus we tailored the structure of the original GPT to match the scale of our training data. Specifically, our tailored GPT uses a 6-layer transformer decoder block with 8-head attention. Besides, GPT handles fixed-length sequences, so we set the subword-sequence length to 512. In our context, the fixed-length sequence refers to the fixed-length subword sequence processed from an API call sequence. For the API call sequences in our dataset, the average length is 41, the largest length is 2,280, and the percentage of subword sequences longer than 512 is only $0.4\%$. Moreover, the longer the sequence is, the more difficult it is to model. Therefore, our setting (512) reaches a good trade-off following the existing study (Devlin et al., 2018). The baseline
+
+model parameters were set according to the previous work (Yan et al., 2018; Raychev et al., 2014). We trained APIRecX for 15 epochs in the pre-training stage, and then adopted the early stop strategy to terminate the fine-tuning process. For the baseline approaches, we also adopted the early stop strategy to terminate the training process, following the previous work.
+
+The beam search process has two parameters: beam size and max iteration. Beam size represents the width of the beam search, and max iteration represents the maximum number of search epochs. More details of the parameter settings are given in the Appendix.
+
+# 3.1.5 Evaluation Metric
+
+To evaluate the performance of APIRecX, we adopted Top-N accuracy following the existing work on API recommendation (Xie et al., 2019; Nguyen et al., 2016; Nguyen and Nguyen, 2015). Each API recommendation approach can produce a ranking list of API calls for recommendation. Top-N accuracy measures the percentage of cases where the correct API call is included in the Top-N results among all the locations in the test set; higher Top-N accuracy indicates better performance. Following the existing work (Nguyen et al., 2016; Nguyen and Nguyen, 2015; Xie et al., 2019; Yan et al., 2018), we set $N$ to 1, 5, and 10, respectively. Note that we focus on the recommendation of domain APIs, so we only report the Top-N accuracy of domain API recommendation.
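+
+Top-N accuracy can be computed as follows (toy rankings for illustration; real rankings would come from each approach):
+
+```python
+def top_n_accuracy(rankings, truths, n):
+    """Fraction of locations whose correct API call is in the Top-N list."""
+    hits = sum(truth in ranking[:n] for ranking, truth in zip(rankings, truths))
+    return hits / len(truths)
+
+rankings = [["a", "b", "c"], ["b", "a", "c"], ["c", "b", "a"]]
+truths = ["a", "a", "a"]
+print(top_n_accuracy(rankings, truths, 1))   # 1 of 3 locations hit at Top-1
+print(top_n_accuracy(rankings, truths, 2))   # 2 of 3 locations hit at Top-2
+```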
+
+# 3.2 Results and Analysis
+
+# 3.2.1 Overall effectiveness
+
+Table 3 presents the comparison results between APIRecX and baselines under five sampling ratios in three domains, respectively.
+
+From this table, APIRecX performs better than the two baselines under all the studied sampling ratios in all three domains in terms of all the metrics. For example, under the sampling ratio of $0.2\%$ in the domain of IO, APIRecX achieves $52.9\%$ Top-1 accuracy while the two baselines achieve only $30.6\%$ and $16.5\%$; the improvements are $72.87\%$ and $220.61\%$, respectively. We also performed a Wilcoxon rank sum test to investigate whether our approach significantly outperforms LSTM and N-gram across all the domains. The results show that all the p-values are smaller than 0.004 (<0.05) regardless of Top-1/Top-5/Top-10 accuracy, demonstrating that the improvements of our approach are statistically significant.
+
+We then analyzed why APIRecX performs well, as shown in Table 4. In this table, the first three rows present the percentage of domain APIs in the test set covered by the training data, the percentage of subwords from domain APIs in the test set covered by the training data, and the percentage of unseen APIs among the correct recommendation results, while the last row presents the number of unseen API call types successfully recommended by APIRecX under the sampling ratio of $0.2\%$.
+
+From Table 4, under the sampling ratio of $0.2\%$, the API coverage is small ($10.9\sim49.3\%$; only $25.5\%$ of APIs are covered by training data on average), but the subword coverage is large ($61.9\sim89.3\%$, with an average of $77.7\%$), indicating the power of API segmentation in handling the OOV problem. Indeed, APIRecX is able to recommend APIs unseen in both the pre-training and fine-tuning data. For example, among the APIs correctly recommended by APIRecX, an average of $28.1\%$ (131.3 API call types) are unseen APIs, demonstrating its ability for cross-library API recommendation.
+
+# 3.2.2 Effectiveness of Beam Search
+
+We compare our beam search strategy in APIRecX with the traditional beam search (Freitag and Al-Onaizan, 2017; Shu and Nakayama, 2018) under different beam sizes. Here, we use the JDBC domain with the sampling ratio of $10\%$ as the representative; the comparison results are shown in Table 5. From this table, our beam search performs better than the traditional beam search under all the studied beam sizes in terms of all the metrics, demonstrating the contribution of the improved beam search strategy. Meanwhile, its contribution is more obvious in Top-5 and Top-10 accuracy than in Top-1 accuracy, because the chains of subwords rescued by the improved beam search rarely have larger chain probabilities than the Top-1 chain, due to the small probability of certain subword predictions. More specifically, the probability of a complete API call (e.g., printStackTrace() in Line 6 of Listing 1) is the product of the probabilities of a chain of subwords (e.g., print, StackTrace, ()). Although the candidate pool of the improved beam search can relieve the effectiveness problem caused by the monotonicity of traditional beam search by preserving the memory of poor-quality incomplete chains produced during the beam-search process, the small probabilities of poor-quality incomplete
+
+| Sample | Approach | JDBC Top-1 | JDBC Top-5 | JDBC Top-10 | Swing Top-1 | Swing Top-5 | Swing Top-10 | IO Top-1 | IO Top-5 | IO Top-10 |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| 0.2% | APIRecX | 37.9 | 74.7 | 81.2 | 25.0 | 43.8 | 51.2 | 52.9 | 69.5 | 73.7 |
+| 0.2% | LSTM | 26.8 | 52.6 | 65.9 | 15.1 | 31.3 | 39.1 | 30.6 | 53.4 | 63.3 |
+| 0.2% | N-gram | 11.9 | 41.5 | 56.5 | 7.9 | 26.3 | 31.6 | 16.5 | 45.9 | 57.0 |
+| 1% | APIRecX | 42.8 | 77.7 | 83.7 | 25.3 | 46.9 | 54.5 | 56.4 | 75.5 | 79.8 |
+| 1% | LSTM | 31.6 | 67.4 | 74.8 | 17.2 | 34.3 | 44.0 | 36.7 | 56.5 | 66.4 |
+| 1% | N-gram | 16.0 | 40.6 | 58.5 | 10.2 | 28.2 | 36.7 | 16.7 | 45.9 | 57.8 |
+| 10% | APIRecX | 46.9 | 79.9 | 85.7 | 40.6 | 67.8 | 74.5 | 56.9 | 75.9 | 80.5 |
+| 10% | LSTM | 33.7 | 69.1 | 75.3 | 30.6 | 53.1 | 60.9 | 36.1 | 60.8 | 70.0 |
+| 10% | N-gram | 18.6 | 43.8 | 59.3 | 16.3 | 37.5 | 46.9 | 18.1 | 48.3 | 59.2 |
+| 50% | APIRecX | 56.6 | 86.3 | 93.0 | 48.7 | 79.0 | 80.9 | 60.6 | 81.9 | 85.3 |
+| 50% | LSTM | 41.8 | 73.8 | 84.7 | 32.8 | 56.7 | 65.0 | 39.1 | 64.1 | 70.9 |
+| 50% | N-gram | 25.4 | 55.1 | 63.7 | 16.3 | 39.4 | 48.7 | 18.6 | 48.5 | 62.8 |
+| 100% | APIRecX | 60.0 | 89.4 | 94.5 | 54.8 | 77.2 | 83.7 | 63.9 | 84.2 | 88.7 |
+| 100% | LSTM | 43.4 | 76.2 | 85.6 | 36.7 | 61.1 | 69.2 | 40.1 | 67.1 | 75.3 |
+| 100% | N-gram | 28.6 | 56.1 | 65.5 | 18.7 | 41.4 | 50.7 | 21.6 | 52.8 | 68.8 |
+
+Table 3: Overall effectiveness of APIRecX
+
+| Criterion | JDBC | Swing | IO | Avg. |
+| --- | --- | --- | --- | --- |
+| API Coverage | 49.3% | 10.9% | 16.3% | 25.5% |
+| Subword Coverage | 82.0% | 61.9% | 89.3% | 77.7% |
+| OOV Correct Rate | 8.7% | 26.4% | 49.6% | 28.1% |
+| OOV Correct API Types | 14.7 | 317.6 | 61.5 | 131.3 |
+
+Table 4: Analysis of the results
+
+| Beam size | Method | Top-1 | Top-5 | Top-10 |
+| --- | --- | --- | --- | --- |
+| 10 | Ours | 45.2 | 79.4 | 84.3 |
+| 10 | Traditional | 44.1 | 69.7 | 76.1 |
+| 15 | Ours | 46.6 | 78.6 | 84.6 |
+| 15 | Traditional | 44.5 | 69.1 | 75.8 |
+| 20 | Ours | 46.9 | 79.9 | 85.7 |
+| 20 | Traditional | 44.1 | 70.1 | 77.1 |
+| 25 | Ours | 46.6 | 83.1 | 85.7 |
+| 25 | Traditional | 44.0 | 72.7 | 77.2 |
+| 30 | Ours | 45.3 | 80.1 | 86.7 |
+| 30 | Traditional | 44.3 | 71.9 | 78.0 |
+
+Table 5: The results of different beam search methods on JDBC
+
+chains could lead to a small probability of the corresponding complete API call, making it hard to rank as Top-1. Taking Line 6 in Listing 1 as an example, if "StackTrace" has a small probability, the probability of the complete API call also becomes small, making it hard to rank as Top-1. Therefore, the improved beam search shows a less apparent improvement in terms of Top-1 accuracy. Also, APIRecX performs stably under different beam sizes.
+
+# 4 Related work
+
+# 4.1 API recommendation
+
+In the literature, some statistical-learning-based (Nguyen and Nguyen, 2015; Liu et al., 2018; Raychev et al., 2014; Xie et al., 2019) and pattern-mining-based API recommendation approaches (Zhong et al., 2009; Wang et al., 2013; Fowkes and Sutton, 2016; Xie et al., 2019) have been proposed, but none of them deals with the OOV problem, and thus none can be effective in the scenario of cross-library API recommendation. For example, Xie et al. (2019) proposed HiRec, which improves pattern-mining-based approaches by utilizing the hidden information of project-specific code via call graphs in mining API usage patterns. Nguyen and Nguyen (2015) designed a graph-based statistical language model by representing source code as graphs for API recommendation. Different from them, APIRecX is the first approach for cross-library API recommendation, handling the OOV problem via a GPT-based pre-trained subword language model.
+
+# 4.2 Pre-trained models across languages
+
+Our approach is inspired by pre-training in the multilingual scenario (Chi et al., 2020; Huang et al., 2019; Yang et al., 2020a,b, 2019). For example, Lample and Conneau (2019) proposed the XLM model, which processes multiple languages via BPE so that all the languages can share subword dictionaries. Ren et al. (2019) proposed the cross-lingual masked language model, which uses more explicit cross-lingual information (such as a translation table). More specifically, they used the monolingual corpora of two languages to train monolingual N-gram vectors through FastText (Bojanowski et al., 2017), and then used an unsupervised cross-lingual word vector method, VecMap (Garneau et al., 2020), to obtain cross-lingual N-gram vectors. The translation table between the two languages is inferred from the similarity of the N-gram vectors of the two languages.
+
+Different from them, our work targets the problem of API recommendation rather than cross-lingual problems, which have different characteristics, and APIRecX builds a GPT-based subword language model for API recommendation. CodeBERT (Feng et al., 2020) obtains a general language model for programming languages by pre-training on six different programming languages and can be applied to different downstream tasks. Although CodeBERT may seem like a natural baseline, we do not use it for comparison because it requires bidirectional information, while we regard API recommendation as a one-way text generation task. When developers use an API, they usually write API calls sequentially (forward), and the task of API recommendation is to predict future API calls; there is no reverse (backward) information in practice. Therefore, CodeBERT cannot be applied to our problem.
+
+# 5 Conclusions
+
+We propose APIRecX, the first approach for cross-library API recommendation, which can automatically recommend API calls for new libraries. APIRecX first splits each API call into a sequence of subwords to relieve the OOV problem at the API level. It then pre-trains a GPT-based subword language model on a large amount of API usage data from other libraries. By fine-tuning the pre-trained model with a sample of API usage data of new libraries, APIRecX conducts subword prediction and incorporates beam search to compose a complete API call for recommendation. We conduct an extensive study based on six libraries of three domains for mimicking new libraries and 14,000 GitHub Java projects for pre-training, demonstrating the effectiveness of APIRecX. However, our work also has a limitation concerning the generalization of our results and findings. Although we invested significant time and effort to prepare datasets, conduct experiments, and analyze results, our experiments involved only one programming language with three domains. The performance of our neural architecture, and especially the findings on transfer learning, could differ for other programming languages or libraries. In the future, we will try to address this limitation by applying our approach to more languages and libraries.
+
+The source code of APIRecX and experimental data can be found in https://github.com/yuningkang/APIRecX.
+
+# 6 Acknowledgements
+
+This work was funded by the National Natural Science Foundation of China (Nos. 61872263, 62002256, 20201180) and the Intelligent Manufacturing Special Fund of Tianjin. We are also very grateful to the reviewers for their helpful comments.
+
+# References
+
+Miltiadis Allamanis and Charles Sutton. 2013. Mining source code repositories at massive scale using language modeling. In Proceedings of the 10th Working Conference on Mining Software Repositories, MSR '13, San Francisco, CA, USA, May 18-19, 2013, pages 207-216. IEEE Computer Society.
+Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomás Mikolov. 2017. Enriching word vectors with subword information. Trans. Assoc. Comput. Linguistics, 5:135-146.
+Marcel Bruch, Martin Monperrus, and Mira Mezini. 2009. Learning from examples to improve code completion systems. In Proceedings of the 7th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on The Foundations of Software Engineering, ESEC/FSE '09, page 213-222, New York, NY, USA. Association for Computing Machinery.
+Aitao Chen, Yiping Zhou, Anne Zhang, and Gordon Sun. 2005. Unigram language model for Chinese word segmentation. In Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing, SIGHAN@IJCNLP 2005, Jeju Island, Korea, 14-15, 2005. ACL.
+Chi Chen, Xin Peng, Jun Sun, Zhenchang Xing, Xin Wang, Yifan Zhao, Hairui Zhang, and Wenyun Zhao. 2019. Generative API usage code recommendation with parameter concretization. Sci. China Inf. Sci., 62(9):192103:1-192103:22.
+Zewen Chi, Li Dong, Furu Wei, Wenhui Wang, XianLing Mao, and Heyan Huang. 2020. Cross-lingual natural language generation via pre-training. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 7570-7577. AAAI Press.
+
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805.
+Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, and Ming Zhou. 2020. Codebert: A pre-trained model for programming and natural languages. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, EMNLP 2020, Online Event, 16-20 November 2020, pages 1536-1547. Association for Computational Linguistics.
+Jaroslav Fowkes and Charles Sutton. 2016. Parameter-free probabilistic api mining across github. FSE 2016, page 254-265, New York, NY, USA. Association for Computing Machinery.
+Markus Freitag and Yaser Al-Onaizan. 2017. Beam search strategies for neural machine translation. In Proceedings of the First Workshop on Neural Machine Translation, NMT@ACL 2017, Vancouver, Canada, August 4, 2017, pages 56-60. Association for Computational Linguistics.
+Yingqiang Gao, Nikola I. Nikolov, Yuhuang Hu, and Richard H. R. Hahnloser. 2020. Character-level translation with self-attention. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 1591-1604. Association for Computational Linguistics.
+Nicolas Garneau, Mathieu Godbout, David Beauchemin, Audrey Durand, and Luc Lamontagne. 2020. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings: Making the method robustly reproducible as well. In Proceedings of The 12th Language Resources and Evaluation Conference, LREC 2020, Marseille, France, May 11-16, 2020, pages 5546-5554. European Language Resources Association.
+Xiaodong Gu, Hongyu Zhang, Dongmei Zhang, and Sunghun Kim. 2017. Deep api learning.
+Enno Hermann, Herman Kamper, and Sharon Goldwater. 2021. Multilingual and unsupervised subword modeling for zero-resource languages. Comput. Speech Lang., 65:101098.
+Abram Hindle, Earl T. Barr, Zhendong Su, Mark Gabel, and Premkumar T. Devanbu. 2012. On the naturalness of software. In 34th International Conference on Software Engineering, ICSE 2012, June 2-9, 2012, Zurich, Switzerland, pages 837-847. IEEE Computer Society.
+Haoyang Huang, Yaobo Liang, Nan Duan, Ming Gong, Linjun Shou, Daxin Jiang, and Ming Zhou. 2019. Unicoder: A universal language encoder by pretraining with multiple cross-lingual tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language
+
+Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 2485-2494. Association for Computational Linguistics.
+Liang Huang, Kai Zhao, and Mingbo Ma. 2017. When to finish? optimal beam search for neural text generation (modulo beam size). In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2134-2139, Copenhagen, Denmark. Association for Computational Linguistics.
+Qiao Huang, Xin Xia, Zhenchang Xing, David Lo, and Xinyu Wang. 2018. API method recommendation without worrying about the task-api knowledge gap. In Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering, ASE 2018, Montpellier, France, September 3-7, 2018, pages 293-304. ACM.
+Rafael-Michael Karampatsis, Hlib Babii, Romain Robbes, Charles Sutton, and Andrea Janes. 2020. Big code != big vocabulary: Open-vocabulary models for source code. CoRR, abs/2003.07914.
+Rafael-Michael Karampatsis and Charles Sutton. 2019. Maybe deep neural networks are the best choice for modeling source code. CoRR, abs/1903.05734.
+Guillaume Lample and Alexis Conneau. 2019. Crosslingual language model pretraining.
+Xiaoyu Liu, LiGuo Huang, and Vincent Ng. 2018. Effective api recommendation without historical software repositories. ASE 2018, page 282-292, New York, NY, USA. Association for Computing Machinery.
+Junhua Mao, Xu Wei, Yi Yang, Jiang Wang, Zhiheng Huang, and Alan L. Yuille. 2015. Learning like a child: Fast novel visual concept learning from sentence descriptions of images. In 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015, pages 2533-2541. IEEE Computer Society.
+Tomás Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. In 1st International Conference on Learning Representations, ICLR 2013, Scottsdale, Arizona, USA, May 2-4, 2013, Workshop Track Proceedings.
+Anh Tuan Nguyen, Michael Hilton, Mihai Codoban, Hoan Anh Nguyen, Lily Mast, Eli Rademacher, Tien N. Nguyen, and Danny Dig. 2016. Api code recommendation using statistical learning from fine-grained changes. In Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering, FSE 2016, page 511-522, New York, NY, USA. Association for Computing Machinery.
+Anh Tuan Nguyen and Tien N. Nguyen. 2015. Graph-based statistical language model for code. In Proceedings of the 37th International Conference on
+
+Software Engineering - Volume 1, ICSE '15, page 858-868. IEEE Press.
+Trong Duc Nguyen, Anh Tuan Nguyen, Hung Dang Phan, and Tien N. Nguyen. 2017. Exploring API embedding for API usages and applications. In Proceedings of the 39th International Conference on Software Engineering, ICSE 2017, Buenos Aires, Argentina, May 20-28, 2017, pages 438-449. IEEE / ACM.
+Ivan Provilkov, Dmitrii Emelianenko, and Elena Voita. 2020. Bpe-dropout: Simple and effective subword regularization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 1882-1892. Association for Computational Linguistics.
+Veselin Raychev, Martin Vechev, and Eran Yahav. 2014. Code completion with statistical language models. 49(6):419-428.
+Shuo Ren, Yu Wu, Shujie Liu, Ming Zhou, and Shuai Ma. 2019. Explicit cross-lingual pre-training for unsupervised machine translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 770-779. Association for Computational Linguistics.
+Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. The Association for Computer Linguistics.
+Raphael Shu and Hideki Nakayama. 2018. Improving beam search by removing monotonic constraint for neural machine translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 339-344, Melbourne, Australia. Association for Computational Linguistics.
+Arda Tezcan, Véronique Hoste, and Lieve Macken. 2020. Estimating word-level quality of statistical machine translation output using monolingual information alone. Nat. Lang. Eng., 26(1):73-94.
+Jue Wang, Yingnong Dang, Hongyu Zhang, Kai Chen, Tao Xie, and Dongmei Zhang. 2013. Mining succinct and high-coverage API usage patterns from source code. In Proceedings of the 10th Working Conference on Mining Software Repositories, MSR '13, San Francisco, CA, USA, May 18-19, 2013, pages 319-328. IEEE Computer Society.
+Martin White, Christopher Vendome, Mario Linares-Vásquez, and Denys Poshyvanyk. 2015. Toward deep learning software repositories. In Proceedings
+
+of the 12th Working Conference on Mining Software Repositories, MSR '15, page 334-345. IEEE Press.
+Rensong Xie, Xianglong Kong, Lulu Wang, Ying Zhou, and Bixin Li. 2019. Hirec: API recommendation using hierarchical context. In 30th IEEE International Symposium on Software Reliability Engineering, ISSRE 2019, Berlin, Germany, October 28-31, 2019, pages 369-379. IEEE.
+Jinpei Yan, Yong Qi, Qifan Rao, and Hui He. 2018. Learning API suggestion via single LSTM network with deterministic negative sampling. In The 30th International Conference on Software Engineering and Knowledge Engineering, Hotel Pullman, Redwood City, California, USA, July 1-3, 2018, pages 137-136. KSI Research Inc. and Knowledge Systems Institute Graduate School.
+Jian Yang, Shuming Ma, Dongdong Zhang, Shuangzhi Wu, Zhoujun Li, and Ming Zhou. 2020a. Alternating language modeling for cross-lingual pre-training. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 9386-9393. AAAI Press.
+Jian Yang, Shuming Ma, Dongdong Zhang, Shuangzhi Wu, Zhoujun Li, and Ming Zhou. 2020b. Alternating language modeling for cross-lingual pre-training. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 9386-9393. AAAI Press.
+Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. CoRR, abs/1906.08237.
+Haoyu Zhang, Jingjing Cai, Jianjun Xu, and Ji Wang. 2019. Pretraining-based natural language generation for text summarization. In CoNLL, pages 789-797. Association for Computational Linguistics.
+Hao Zhong, Tao Xie, Lu Zhang, Jian Pei, and Hong Mei. 2009. Mapo: Mining and recommending api usage patterns. In Proceedings of the 23rd European Conference on ECOOP 2009 — Object-Oriented Programming, Genoa, page 318-343, Berlin, Heidelberg. Springer-Verlag.
+
+# Appendix
+
+# A Parameter settings
+
+| Section | Approach | Hyperparameter | Value |
| --- | --- | --- | --- |
| Model | GPT | ffn hidden | 512 |
| Model | GPT | hidden | 256 |
| Model | GPT | num_head | 8 |
| Model | GPT | num_layer | 6 |
| Model | GPT | batch size | 32 |
| Model | GPT | sequence length | 512 |
| Model | GPT | learning rate | 0.00015 |
| Model | GPT | epoch (pre-train) | 15 |
| Model | GPT | epoch (fine-tune) | Early Stop |
| Model | LSTM | hidden | 128 |
| Model | LSTM | num_layer | 2 |
| Model | LSTM | batch size | 128 |
| Model | LSTM | sequence length | 60 |
| Model | LSTM | learning rate | 0.005 |
| Model | LSTM | epoch | Early Stop |
| Model | N-gram | hidden | 300 |
| Model | N-gram | context size | 3 |
| Model | N-gram | batch size | 40000 |
| Model | N-gram | learning rate | 0.005 |
| Model | N-gram | epoch | Early Stop |
| Beam Search | - | beam size | 20 |
| Beam Search | - | max iteration | 10 |
+
+# B Retrain and pre-train
+
+Our experiments show that the "pre-train&fine-tune" mechanism is more effective and efficient than one-step training strategies. Table 8 lists the API recommendation accuracy of the model under three training strategies for each domain. "Pre-train&fine-tune" represents the strategy used in training APIRecX, introduced in Section 2.3, while "retrain" means training APIRecX from scratch using three different proportions of fine-tuning data combined with the pre-training data in the three domains.
+
+Table 6: Parameters of APIRecX and baseline
+
+| Beam size | Sample | Top-1 | Top-5 | Top-10 |
| --- | --- | --- | --- | --- |
| 10 | 0.2% | 38.4 | 71.3 | 76.5 |
| 10 | 10% | 45.2 | 79.4 | 84.3 |
| 10 | 100% | 58.7 | 88.1 | 92.7 |
| 15 | 0.2% | 38.2 | 73.1 | 79.3 |
| 15 | 10% | 46.6 | 78.6 | 84.6 |
| 15 | 100% | 58.8 | 88.1 | 93.7 |
| 20 | 0.2% | 38.2 | 74.8 | 81.2 |
| 20 | 10% | 46.9 | 79.9 | 85.7 |
| 20 | 100% | 60.0 | 89.4 | 94.5 |
| 25 | 0.2% | 38.5 | 74.1 | 81.5 |
| 25 | 10% | 46.6 | 83.1 | 85.7 |
| 25 | 100% | 59.6 | 88.7 | 93.1 |
| 30 | 0.2% | 38.4 | 73.5 | 81.9 |
| 30 | 10% | 45.3 | 80.1 | 86.7 |
| 30 | 100% | 58.8 | 88.2 | 93.9 |
+
+Table 7: Different beam size results in JDBC domain
+
+| Domain | Ratio | Strategy | Top-1 | Top-5 | Top-10 |
| --- | --- | --- | --- | --- | --- |
| JDBC | 100% | pre-train&fine-tune | 60 | 89.4 | 94.5 |
| JDBC | 100% | retrain | 54.5 | 85.6 | 91.1 |
| JDBC | 100% | scratch | 52.9 | 85.4 | 91.9 |
| JDBC | 10% | pre-train&fine-tune | 46.9 | 79.9 | 85.7 |
| JDBC | 10% | retrain | 42.1 | 71.5 | 79.4 |
| JDBC | 10% | scratch | 30.2 | 56.9 | 64.7 |
| JDBC | 0.2% | pre-train&fine-tune | 37.6 | 75 | 81.2 |
| JDBC | 0.2% | retrain | 27.7 | 50.4 | 53.1 |
| JDBC | 0.2% | scratch | 13.2 | 33.2 | 33.3 |
| Swing | 100% | pre-train&fine-tune | 54.8 | 77.2 | 83.7 |
| Swing | 100% | retrain | 44.4 | 71.3 | 77.6 |
| Swing | 100% | scratch | 48.8 | 75.3 | 80.8 |
| Swing | 10% | pre-train&fine-tune | 40.6 | 67.8 | 74.5 |
| Swing | 10% | retrain | 33.6 | 57.9 | 64.2 |
| Swing | 10% | scratch | 25.9 | 50.7 | 60.6 |
| Swing | 0.2% | pre-train&fine-tune | 25 | 43.8 | 51.2 |
| Swing | 0.2% | retrain | 23.6 | 43.1 | 47.7 |
| Swing | 0.2% | scratch | 3.2 | 4.9 | 8.8 |
| IO | 100% | pre-train&fine-tune | 63.9 | 84.2 | 88.7 |
| IO | 100% | retrain | 62.7 | 81.9 | 87.2 |
| IO | 100% | scratch | 32.4 | 62.8 | 71.4 |
| IO | 10% | pre-train&fine-tune | 56.9 | 75.9 | 80.5 |
| IO | 10% | retrain | 56.9 | 74.9 | 79.1 |
| IO | 10% | scratch | 0.7 | 10.2 | 20.8 |
| IO | 0.2% | pre-train&fine-tune | 52.9 | 69.5 | 73.7 |
| IO | 0.2% | retrain | 51.7 | 68.4 | 71.3 |
| IO | 0.2% | scratch | 0.05 | 0.05 | 1.4 |
+
+Table 8: Pre-train and retrain result
+
+"Scratch" means training APIRecX from scratch using only three different proportions of fine-tuning data. As shown in Table 8, the "pre-train&finetune" mechanism is better than the other two one-step strategy at three sampling ratios, and proves superiority under low sampling ratios.
+
+# C Beam Size evaluation
+
+We evaluate the effectiveness of different beam sizes under three different sampling ratios in the JDBC domain to find a suitable beam size. Table 7 lists the average recommendation accuracy achieved with 5 different beam sizes under three different sampling ratios in the JDBC domain. Table 7 shows that, as the beam size increases, both the running time and the accuracy increase. After the beam size reaches 20, the accuracy increases rather slowly and remains basically unchanged. To balance the performance and efficiency of APIRecX, we set the beam size to 20 for the other comparative experiments.
\ No newline at end of file
diff --git a/apirecxcrosslibraryapirecommendationviapretrainedlanguagemodel/images.zip b/apirecxcrosslibraryapirecommendationviapretrainedlanguagemodel/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..a4cbba7556eae4147bb275949ddf93c6367cc039
--- /dev/null
+++ b/apirecxcrosslibraryapirecommendationviapretrainedlanguagemodel/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:204f97ae32011e7f7cfd93045a11e124ecc78056d36e74d7b50ef2884fd656e6
+size 441970
diff --git a/apirecxcrosslibraryapirecommendationviapretrainedlanguagemodel/layout.json b/apirecxcrosslibraryapirecommendationviapretrainedlanguagemodel/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..bbc537899250a438a0e1e35368cc069723540b26
--- /dev/null
+++ b/apirecxcrosslibraryapirecommendationviapretrainedlanguagemodel/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:694e66b2320478c4b330ef56eeab7a5efe119d42b76f6ca6bd1b90979204f910
+size 360642
diff --git a/aregenderneutralqueriesreallygenderneutralmitigatinggenderbiasinimagesearch/5b416601-5a39-4506-a4db-4a88e1d8ba63_content_list.json b/aregenderneutralqueriesreallygenderneutralmitigatinggenderbiasinimagesearch/5b416601-5a39-4506-a4db-4a88e1d8ba63_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..38fd47b9acf93fcd5ad3eb0963591b205158fce6
--- /dev/null
+++ b/aregenderneutralqueriesreallygenderneutralmitigatinggenderbiasinimagesearch/5b416601-5a39-4506-a4db-4a88e1d8ba63_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4683a89d8436d0950e1620dab45921b234084c67f5acf07d91dbba2c79306018
+size 90686
diff --git a/aregenderneutralqueriesreallygenderneutralmitigatinggenderbiasinimagesearch/5b416601-5a39-4506-a4db-4a88e1d8ba63_model.json b/aregenderneutralqueriesreallygenderneutralmitigatinggenderbiasinimagesearch/5b416601-5a39-4506-a4db-4a88e1d8ba63_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..e795b12e3bea573204f4325cffd90bd333da0775
--- /dev/null
+++ b/aregenderneutralqueriesreallygenderneutralmitigatinggenderbiasinimagesearch/5b416601-5a39-4506-a4db-4a88e1d8ba63_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:18d7d742d426e0187a8777fb637c6a29a119b63e0cae25b1e205403e2c7c6d0d
+size 111141
diff --git a/aregenderneutralqueriesreallygenderneutralmitigatinggenderbiasinimagesearch/5b416601-5a39-4506-a4db-4a88e1d8ba63_origin.pdf b/aregenderneutralqueriesreallygenderneutralmitigatinggenderbiasinimagesearch/5b416601-5a39-4506-a4db-4a88e1d8ba63_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..f657d5d0ecc2d856fb9459e19aadf754ebdb06c5
--- /dev/null
+++ b/aregenderneutralqueriesreallygenderneutralmitigatinggenderbiasinimagesearch/5b416601-5a39-4506-a4db-4a88e1d8ba63_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:029742e6608878d67826612f2ae5c9de160d31a6fe1bf4a405a2b048a6b6f7be
+size 4791285
diff --git a/aregenderneutralqueriesreallygenderneutralmitigatinggenderbiasinimagesearch/full.md b/aregenderneutralqueriesreallygenderneutralmitigatinggenderbiasinimagesearch/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..671c1d9d2ba374b0d0c80f067bc86b84760777e8
--- /dev/null
+++ b/aregenderneutralqueriesreallygenderneutralmitigatinggenderbiasinimagesearch/full.md
@@ -0,0 +1,403 @@
+# Are Gender-Neutral Queries Really Gender-Neutral? Mitigating Gender Bias in Image Search
+
+Jialu Wang, Yang Liu, Xin Eric Wang
+Department of Computer Science and Engineering
+University of California, Santa Cruz
+{faldict, yangliu, xwang366}@ucsc.edu
+
+# Abstract
+
+Internet search affects people's cognition of the world, so mitigating biases in search results and learning fair models is imperative for social good. We study a unique gender bias in image search in this work: the search images are often gender-imbalanced for gender-neutral natural language queries. We diagnose two typical image search models: the specialized model trained on in-domain datasets and the generalized representation model pre-trained on massive image and text data across the internet. Both models suffer from severe gender bias. Therefore, we introduce two novel debiasing approaches: an in-processing fair sampling method to address the gender imbalance issue when training models, and a post-processing feature clipping method based on mutual information to debias the multimodal representations of pre-trained models. Extensive experiments on the MS-COCO (Lin et al., 2014) and Flickr30K (Young et al., 2014) benchmarks show that our methods significantly reduce the gender bias in image search models.
+
+# 1 Introduction
+
+Internet information is shaping people's minds. The algorithmic processes behind modern search engines, with extensive use of machine learning, have great power to determine users' access to information (Eslami et al., 2015). These information systems are biased when results are systematically slanted in unfair discrimination against protected groups (Friedman and Nissenbaum, 1996).
+
+Gender bias is a severe fairness issue in image search. Figure 1 shows an example: given a gender-neutral natural language query "a person is cooking", only 2 out of 10 images retrieved by an image search model (Radford et al., 2021) depict females, while equalized exposure for males and females is expected. Such gender-biased search results are harmful to society, as they change people's cognition and worsen gender stereotypes (Kay et al., 2015). Mitigating gender bias in image search is imperative for social good.
+
+In this paper, we formally develop a framework for quantifying gender bias in image search results, where text queries in English are made gender-neutral, and gender-balanced search images are expected for models to retrieve. To evaluate model fairness, we use the normalized difference between masculine and feminine images in the retrieved results to represent gender bias. We diagnose the gender bias of two primary families of multimodal models for image search: (1) the specialized models that are often trained on in-domain datasets to perform text-image retrieval, and (2) the general-purpose representation models that are pre-trained on massive image and text data available online and can be applied to image search. Our analysis on the MS-COCO (Lin et al., 2014) and Flickr30K (Young et al., 2014) datasets reveals that both types of models lead to serious gender bias issues (e.g., nearly $70\%$ of the retrieved images are masculine images).
+
+To mitigate gender bias in image search, we propose two novel debiasing solutions for both model families. The specialized in-domain training methods such as SCAN (Lee et al., 2018) often adopt contrastive learning to enforce image-text matching by maximizing the margin between positive and negative image-text pairs. However, the gender distribution in the training data is typically imbalanced, which results in unfair model training. Thus we introduce a fair sampling (FairSample) method to alleviate the gender imbalance during training without modifying the training data.
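
A minimal sketch of such a margin-based contrastive objective (the similarity values and margin below are illustrative, not SCAN's exact loss):

```python
def triplet_margin_loss(s_pos, s_neg, margin=0.2):
    """Hinge-style contrastive term: require the positive image-text pair's
    similarity to exceed a negative pair's similarity by at least `margin`."""
    return max(0.0, margin - s_pos + s_neg)

# A well-separated pair contributes no loss; a violating pair is penalized.
print(triplet_margin_loss(s_pos=0.9, s_neg=0.4))   # 0.0
print(triplet_margin_loss(s_pos=0.5, s_neg=0.45))  # ~0.15
```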
+
+Our second solution aims at debiasing the large, pre-trained multimodal representation models, which effectively learn pre-trained image and text representations to accomplish down-stream applications (Bachman et al., 2019; Chen et al., 2020a,c; Gan et al., 2020; Chen et al., 2020d; Radford et al., 2021). We examine whether the representative CLIP model (Radford et al., 2021) embeds human biases into multimodal representations when they are applied to the task of image search. Furthermore, we propose a novel post-processing feature clipping approach, clip, that effectively prunes out features highly correlated with gender based on their mutual information to reduce the gender bias induced by multimodal representations. The clip method does not require any training and is compatible with various pre-trained models.
+
+Figure 1: Gender bias in image search. We show the top-10 retrieved images for searching "a person is cooking" on the Flickr30K (Young et al., 2014) test set using a state-of-the-art model (Radford et al., 2021). Despite the gender-neutral query, only 2 out of 10 images depict a female cooking.
+
+We evaluate both debiasing approaches on MS-COCO and Flickr30K and find that, on both benchmarks, the proposed approaches significantly reduce the gender bias exhibited by SCAN and CLIP models when evaluated on the gender-neutral corpora, yielding fairer and more gender-balanced search results. In addition, we evaluate the similarity bias of the CLIP model in realistic image search results for occupations on the internet, and observe that the post-processing methods mitigate the discrepancy between gender groups by a large margin.
+
+Our contributions are four-fold: (1) we diagnose a unique gender bias in image search, especially for gender-neutral text queries; (2) we introduce a fair sampling method to mitigate gender bias during model training; (3) we also propose a novel post-processing clip method to debias pre-trained multimodal representation models; (4) we conduct extensive experiments to analyze the prevalent bias in existing models and demonstrate the effectiveness of our debiasing methods.
+
+# 2 Gender Bias in Image Search
+
+In an image search system, text queries may be either gender-neutral or gender-specific. Intuitively, when we search for a gender-neutral query like "a person is cooking", we expect a fair model to return approximately equal proportions of images depicting men and women. For gender-specific queries, an unbiased image search system is supposed to exclude images with misspecified gender information; this intention aligns with seeking more accurate search results and differs from the scope of measuring gender bias in the gender-neutral case. Therefore, we focus on identifying and quantifying gender bias only when searching for gender-neutral text queries.
+
+# 2.1 Problem Statement
+
+Given a text query provided by the users, the goal of an image search system is to retrieve the matching images from the curated images. In the domain of multi-modality, given the dataset $\{(v_{n},c_{n})\}_{n = 1}^{N}$ with $N$ image-text pairs, the task of image search aims at matching every image $v$ based on the provided text $c$. We use $\mathcal{V} = \{v_n\}_{n = 1}^N$ to denote the image set and $\mathcal{C} = \{c_n\}_{n = 1}^N$ to denote the text set. Given a text query $c\in \mathcal{C}$ and an image $v\in \mathcal{V}$, image retrieval models often predict the similarity score $S(v,c)$ between the image and text. One general solution is to embed the image and text into a high-dimensional representation space and compute a proper distance metric, such as Euclidean distance or cosine similarity, between the vectors (Wang et al., 2014). We take cosine similarity as an example:
+
+$$
+S(v, c) = \frac{\vec{v} \cdot \vec{c}}{\|\vec{v}\| \, \|\vec{c}\|} \tag{1}
+$$
+
+$$
+\text{s.t.} \quad \vec{v} = \mathrm{image\_encoder}(v), \quad \vec{c} = \mathrm{text\_encoder}(c)
+$$
+
+The image search system outputs a set of top- $K$ retrieved images $\mathcal{R}_K(c)$ with the highest similarity scores. In this work, we assume that when evaluating on test data, $\forall c\in \mathcal{C}$ , the text query $c$ is written in gender-neutral language.
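
As a minimal sketch of this retrieval step (the 2-D embeddings below are toy values; real encoders produce high-dimensional vectors), the top-$K$ set $\mathcal{R}_K(c)$ is obtained by ranking images by cosine similarity:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors, as in Eq. (1)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def retrieve_top_k(query_vec, image_vecs, k):
    """Indices of the K images with the highest similarity to the query."""
    ranked = sorted(range(len(image_vecs)),
                    key=lambda i: cosine(image_vecs[i], query_vec),
                    reverse=True)
    return ranked[:k]

# Toy embeddings: images 0 and 2 point in nearly the same direction as the query.
query = [1.0, 0.0]
images = [[0.9, 0.1], [0.0, 1.0], [1.0, 0.2], [-1.0, 0.0]]
print(retrieve_top_k(query, images, 2))  # [0, 2]
```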
+
+# 2.2 Measuring Gender Bias
+
+The situations of image search results are complex: there might be no people, one person, or more than one person in the images. Let $g(v) \in \{\text{male}, \text{female}, \text{neutral}\}$ represent the gender attribute of an image $v$. Note that in this study gender refers to biological sex (Larson, 2017). We use the following rules to determine $g(v)$: $g(v) = \text{male}$ when there are only men in the image, $g(v) = \text{female}$ when there are only women in the image, and otherwise $g(v) = \text{neutral}$.
+
+Portraits in image search results with different gender attributes often receive unequal exposure. Inspired by Kay et al. (2015) and Zhao et al. (2017), we measure gender bias in image search by comparing the proportions of masculine and feminine images in search results. Given the set of retrieved images $\mathcal{R}_K(c)$ , we count the images depicting males and females
+
+$$
+N_{\text{male}} = \sum_{v \in \mathcal{R}_K(c)} \mathbb{1}[g(v) = \text{male}],
+$$
+
+$$
+N_{\text{female}} = \sum_{v \in \mathcal{R}_K(c)} \mathbb{1}[g(v) = \text{female}],
+$$
+
+and define the gender bias metric as:
+
+$$
+\Delta_K(c) = \begin{cases} 0, & \text{if } N_{\text{male}} + N_{\text{female}} = 0 \\ \frac{N_{\text{male}} - N_{\text{female}}}{N_{\text{male}} + N_{\text{female}}}, & \text{otherwise} \end{cases} \tag{2}
+$$
+
+We do not take the absolute value, so the sign of $\Delta_K(c)$ preserves the direction of the skew: $\Delta_K(c) > 0$ means the results skew towards males. Note that a similar gender bias definition, $\frac{N_{\mathrm{male}}}{N_{\mathrm{male}} + N_{\mathrm{female}}}$ , in Zhao et al. (2017) is equivalent to $(1 + \Delta_K(c)) / 2$ . However, our definition also covers the special case in which none of the retrieved images is gender-specific, i.e., $N_{\mathrm{male}} + N_{\mathrm{female}} = 0$ . For the whole test set, we measure the mean difference over all text queries:
+
+$$
+\operatorname{Bias}@K = \frac{1}{|\mathcal{C}|} \sum_{c \in \mathcal{C}} \Delta_K(c) \tag{3}
+$$
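Equations (2) and (3) translate directly into code; the sketch below assumes each query's retrieved images come pre-labeled with their gender attributes:

```python
def delta_k(genders, k):
    """Gender bias Delta_K(c) over the top-K retrieved images (Equation 2).

    `genders` lists the attribute g(v) of each retrieved image, ranked by score.
    """
    top = genders[:k]
    n_male = sum(g == "male" for g in top)
    n_female = sum(g == "female" for g in top)
    if n_male + n_female == 0:
        return 0.0          # special case: no gender-specific image retrieved
    return (n_male - n_female) / (n_male + n_female)

def bias_at_k(results_per_query, k):
    """Bias@K averaged over all text queries (Equation 3)."""
    return sum(delta_k(genders, k) for genders in results_per_query) / len(results_per_query)
```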
+
+# 3 Mitigating Gender Bias in Image Search
+
+There are two fashions of multimodal models for the image search task. One is to build a specialized model that embeds images and text into representation vectors with measurable similarity scores. The other is to use general-purpose image-text representations pre-trained on sufficiently large data and compute a particular distance metric. We focus on one representative model for each fashion: SCAN (Lee et al., 2018) and CLIP (Radford et al., 2021). For the first fashion, we propose an in-processing learning approach that ameliorates the unfairness caused by the imbalanced gender distribution in training examples; it builds on contrastive learning but extends it with a fair sampling step, and it requires full training on in-domain data examples. For the second fashion, we propose a post-processing feature clipping technique that mitigates bias from an information-theoretic perspective; it is compatible with pre-trained models and is lightweight to apply, without repeating any training steps.
+
+# 3.1 In-processing Debiasing: Fair Sampling
+
+Image search models in the first fashion are often trained under the contrastive learning framework (Le-Khac et al., 2020). For our in-processing debiasing approach, we now explain the two primary components, contrastive learning and fair sampling, within our context.
+
+Contrastive Learning We start by formally introducing the standard contrastive learning framework commonly used in previous works (Lee et al., 2018; Chen et al., 2020b) for image-text retrieval. Given a batch of $N$ image-text pairs $\mathcal{B} = \{(v_n, c_n)\}_{n=1}^N$ , the model aims to maximize the similarity scores of matched image-text pairs (positive pairs) while minimizing those of mismatched pairs (negative pairs). The representative SCAN model (Lee et al., 2018), which outputs a similarity score $S(v, c)$ between an image and a text, is optimized with a standard hinge-based triplet loss:
+
+$$
+\mathcal{L}_{i-t} = \sum_{(v, c) \in \mathcal{B}} [\gamma - S(v, c) + S(v, \tilde{c})]_{+} \tag{4}
+$$
+
+$$
+\mathcal{L}_{t-i} = \sum_{(v, c) \in \mathcal{B}} [\gamma - S(v, c) + S(\tilde{v}, c)]_{+} \tag{5}
+$$
+
+where $\gamma$ is the margin, $\tilde{v}$ and $\tilde{c}$ are negative examples, and $[\cdot]_{+}$ denotes the ramp function. $\mathcal{L}_{i-t}$
+
+corresponds to image-to-text retrieval, while $\mathcal{L}_{t - i}$ corresponds to text-to-image retrieval (or image search). Common negative sampling strategies include selecting all negatives (Huang et al., 2017), selecting the hard negatives with the highest similarity scores within the mini-batch (Faghri et al., 2018), and selecting hard negatives from the whole training data (Chen et al., 2020b). Minimizing the margin-based triplet loss pulls positive image-text pairs closer to each other than to the negative samples in the joint embedding space.
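As an illustration of Equation (5) with in-batch hard negatives (the strategy of Faghri et al., 2018), here is a NumPy sketch operating on a precomputed score matrix; the matrix layout is our own convention:

```python
import numpy as np

def triplet_loss_t2i(scores: np.ndarray, margin: float = 0.2) -> float:
    """Text-to-image hinge triplet loss (Equation 5) with in-batch hard negatives.

    `scores[i, j]` holds S(v_i, c_j); the diagonal holds the positive pairs.
    For each text c, the hardest negative image in the mini-batch is used.
    """
    pos = np.diag(scores)              # S(v, c) for matched pairs
    neg = scores.copy()
    np.fill_diagonal(neg, -np.inf)     # exclude the positive image itself
    hard_neg = neg.max(axis=0)         # hardest negative image per text c
    return float(np.maximum(margin - pos + hard_neg, 0.0).sum())
```

The `[·]₊` ramp function becomes `np.maximum(..., 0.0)`, so well-separated pairs contribute zero loss.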
+
+Fair Sampling One major issue in the contrastive learning framework is that the gender distribution in a batch of image-text pairs is typically imbalanced. Hence, the negative samples will slant towards the majority group, leading to systematic discrimination. To address this problem, we propose a fair sampling strategy. We split the batch of image-text pairs into masculine and feminine pairs based on the image's gender attribute:
+
+$$
+\mathcal{V}_{\text{male}} = \{v \mid g(v) = \text{male}, (v, c) \in \mathcal{B}\}
+$$
+
+$$
+\mathcal{V}_{\text{female}} = \{v \mid g(v) = \text{female}, (v, c) \in \mathcal{B}\}
+$$
+
+$$
+\mathcal{V}_{\text{neutral}} = \{v \mid g(v) = \text{neutral}, (v, c) \in \mathcal{B}\}
+$$
+
+For every positive image-text pair $(v, c) \in \mathcal{B}$ , we identify the gender information contained in the query $c$ . If the natural language query is gender-neutral, we sample a negative image from the set of male images and the set of female images with probability $\frac{1}{2}$ each. Otherwise, we keep the primitive negative sampling strategy to preserve the model's generalization on gender-specific queries. Let $\mathcal{B}^*$ be the batch of gender-neutral image-text pairs; the image search loss with fair sampling is:
+
+$$
+\begin{aligned} \mathcal{L}_{t-i}^{fair} = & \sum_{(v, c) \in \mathcal{B}^{*}} \Big( \frac{1}{2} \mathbb{E}_{\bar{v} \in \mathcal{V}_{\mathrm{male}}} [\gamma - S(v, c) + S(\bar{v}, c)]_{+} \\ & + \frac{1}{2} \mathbb{E}_{\bar{v} \in \mathcal{V}_{\mathrm{female}}} [\gamma - S(v, c) + S(\bar{v}, c)]_{+} \Big) \\ & + \sum_{(v, c) \in \mathcal{B} \setminus \mathcal{B}^{*}} [\gamma - S(v, c) + S(\tilde{v}, c)]_{+} \end{aligned} \tag{6}
+$$
+
+Empirically, we find that thoroughly applying the fair sampling strategy degrades recall too much. To obtain a better tradeoff, we use a weight $\alpha$ to combine the two objectives
+
+$$
+\alpha \mathcal {L} _ {t - i} ^ {f a i r} + (1 - \alpha) \mathcal {L} _ {t - i}
+$$
+
+as the final text-to-image loss function. We do not alter the sentence retrieval loss $\mathcal{L}_{i - t}$ during training for preserving generalization.
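The fair sampling step for a single positive pair can be sketched as below; the function signature and the fallback for an empty gender pool are our own simplifications, not part of the paper's specification:

```python
import random

def sample_fair_negative(genders, positive_idx, query_is_neutral, rng=random):
    """Sketch of the fair negative sampling step (Section 3.1).

    `genders[i]` is g(v_i) for image i in the mini-batch. For a gender-neutral
    query the negative image is drawn from V_male or V_female with probability
    1/2 each; otherwise any other in-batch image may serve as the negative.
    """
    candidates = [i for i in range(len(genders)) if i != positive_idx]
    if query_is_neutral:
        pool_gender = rng.choice(["male", "female"])      # fair coin flip
        pool = [i for i in candidates if genders[i] == pool_gender]
        if pool:                                          # fall back if pool is empty
            return rng.choice(pool)
    return rng.choice(candidates)                         # primitive in-batch strategy
```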
+
+Algorithm 1 clip algorithm
+Require: Index set $\Omega = \{1,\dots,d\}$ , number of clipped features $0\leq m < d$
+  $\mathcal{Z}\gets \emptyset$
+  for $i = 1$ to $d$ do
+    Estimate mutual information $I(V_{i};g(V))$
+  end for
+  for $j = 1$ to $m$ do
+    $z\gets \arg \max \{I(V_i;g(V)):i\in \Omega /\mathcal{Z}\}$
+    $\mathcal{Z}\gets \mathcal{Z}\cup \{z\}$
+  end for
+  return Index set of clipped features $\mathcal{Z}$
+
+# 3.2 Post-processing Debiasing: Feature Clipping based on Mutual Information
+
+Pre-training methods have shown promising zero-shot performance on extensive NLP and computer vision benchmarks. The recently introduced CLIP model (Radford et al., 2021) was pre-trained on an enormous number of image-text pairs found across the internet to connect text and images. CLIP encodes images and text into $d$ -dimensional embedding vectors, on which cosine similarity quantifies the similarity of image-text pairs. In this work, we find that the pre-trained CLIP model reaches state-of-the-art performance but exhibits large gender bias due to training on uncurated image-text pairs collected from the internet. Although Radford et al. (2021) released the pre-trained CLIP model, reproducing the training process is practically infeasible given its computational cost and massive training data.
+
+In order to avoid re-training the CLIP model, we introduce a novel post-processing mechanism to mitigate the representation bias in the CLIP model. We propose to "clip" the dimensions of the feature embeddings that are highly correlated with gender information. This idea is motivated by the fact that an unbiased retrieval implies independence between the covariates (active features) and the sensitive attribute (gender) (Barocas et al., 2019). Clipping the highly correlated covariates leaves a relatively independent and neutral set of features that encodes little hidden gender bias.
+
+The proposed clip algorithm is presented in Algorithm 1, and we explain the key steps below. Let $\Omega = \{1,\dots,d\}$ be the full index set. We use $V = V_{\Omega} = [V_1,V_2,\dots,V_d]$ to denote the variable of $d$ -dimensional encoded image vectors and $g(V)\in \{\mathrm{male},\mathrm{female},\mathrm{neutral}\}$ to denote the corresponding gender attribute. The goal is to output the index set $\mathcal{Z}$ of clipped covariates that reduces the dependence between representations $V_{\Omega /\mathcal{Z}}$
+
+and gender attributes $g(V)$ . We measure the correlation between each dimension $V_{i}$ and gender attribute $g(V)$ by estimating their mutual information $I(V_{i};g(V))$ (Gao et al., 2017):
+
+$$
+I(V_i; g(V)) = D_{\mathrm{KL}}\left(\mathbb{P}_{(V_i, g(V))} \,\|\, \mathbb{P}_{V_i} \otimes \mathbb{P}_{g(V)}\right) \tag{7}
+$$
+
+where $D_{\mathrm{KL}}$ is the KL divergence (Kullback and Leibler, 1951), $\mathbb{P}_{(V_i,g(V))}$ denotes the joint distribution, and $\mathbb{P}_{V_i}$ and $\mathbb{P}_{g(V)}$ denote the marginals. Next, we greedily clip the $m$ covariates with the highest mutual information and construct $(d - m)$ -dimensional embedding vectors $V_{\Omega /\mathcal{Z}}$ . Here $m$ is a hyper-parameter chosen experimentally to best trade off accuracy against reduced gender bias; we show how the selection of $m$ affects performance in Section 5.3. To project text representations, denoted by the variable $C$ , into the same embedding space, we also apply the index set $\mathcal{Z}$ to obtain clipped text embedding vectors $C_{\Omega /\mathcal{Z}}$ .
+
+The clipped image and text representations, denoted by $\vec{v}^*$ and $\vec{c}^*$ , will have a relatively low correlation with gender attributes due to the "loss" of mutual information. Then we compute the cosine similarity between image and text by substituting $\vec{v}^*$ and $\vec{c}^*$ into Equation (1):
+
+$$
+S (v, c) = \frac {\vec {v} ^ {*} \cdot \vec {c} ^ {*}}{\| \vec {v} ^ {*} \| \| \vec {c} ^ {*} \|} \tag {8}
+$$
+
+Finally, we rank the images based on the cosine similarity between the clipped representations.
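Algorithm 1 reduces to estimating one mutual information value per dimension and keeping the indices of the $m$ largest. The sketch below substitutes a simple histogram-based MI estimate for the mixed discrete-continuous estimator of Gao et al. (2017) that the paper uses:

```python
import numpy as np

def mutual_information(feature: np.ndarray, labels: np.ndarray, bins: int = 16) -> float:
    """Histogram-based estimate of I(V_i; g(V)); a simple stand-in for the
    estimator of Gao et al. (2017)."""
    # Discretize the continuous feature dimension into histogram bins.
    edges = np.histogram_bin_edges(feature, bins=bins)
    binned = np.digitize(feature, edges[1:-1])
    mi = 0.0
    for b in np.unique(binned):
        for g in np.unique(labels):
            p_joint = np.mean((binned == b) & (labels == g))
            if p_joint > 0:
                p_b = np.mean(binned == b)
                p_g = np.mean(labels == g)
                mi += p_joint * np.log(p_joint / (p_b * p_g))
    return mi

def clip_features(image_vecs: np.ndarray, genders: np.ndarray, m: int) -> np.ndarray:
    """Algorithm 1: return the index set Z of the m highest-MI dimensions to clip."""
    mi = np.array([mutual_information(image_vecs[:, i], genders)
                   for i in range(image_vecs.shape[1])])
    return np.argsort(-mi)[:m]
```

The returned index set is then removed from both image and text embeddings before computing Equation (8).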
+
+# 4 Experimental Setup
+
+# 4.1 Datasets
+
+We evaluate our approaches on the standard MS-COCO (Chen et al., 2015) and Flickr30K (Young et al., 2014) datasets. Following Karpathy and Fei-Fei (2017) and Faghri et al. (2018), we split MS-COCO captions dataset into 113,287 training images, 5,000 validation images and 5,000 test images. Each image corresponds to 5 human-annotated captions. We report the results on the test set by averaging over five folds of 1K test images or evaluating the full 5K test images. Flickr30K consists of 31,000 images collected from Flickr. Following the same split of Karpathy and Fei-Fei (2017); Lee et al. (2018), we select 1,000 images for validation, 1,000 images for testing, and the rest of the images for training.
+
+Identifying Gender Attributes of Images Sensitive attributes such as gender are often not explicitly annotated in large-scale datasets such as MS-COCO and Flickr30K, but we observe that implicit gender attributes of images can be extracted from their associated human-annotated captions. Therefore, we pre-define a set of masculine words and a set of feminine words. Following Zhao et al. (2017) and Burns et al. (2018), we use the ground-truth annotated captions to identify the gender attributes of images. An image is labeled "male" if at least one of its captions contains masculine words and no caption includes feminine words. Similarly, an image is labeled "female" if at least one of its captions contains feminine words and no caption includes masculine words. Otherwise, the image is labeled "gender-neutral".
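The caption-based labeling rule can be sketched as follows; the word lists here are small illustrative stand-ins for the pre-defined masculine and feminine lexicons:

```python
# Illustrative lexicons only; the paper's full word lists are larger.
MASCULINE = {"man", "men", "male", "boy", "boys", "gentleman", "father", "he"}
FEMININE = {"woman", "women", "female", "girl", "girls", "lady", "mother", "she"}

def label_image(captions):
    """Assign g(v) to an image from its human-annotated captions."""
    words = {w.strip(".,").lower() for c in captions for w in c.split()}
    has_masc = bool(words & MASCULINE)
    has_fem = bool(words & FEMININE)
    if has_masc and not has_fem:
        return "male"
    if has_fem and not has_masc:
        return "female"
    return "neutral"   # both genders mentioned, or no gendered word at all
```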
+
+# 4.2 Models
+
+We compare the fairness performance of the following approaches:
+
+- SCAN (Lee et al., 2018): we use the official implementation for training and evaluation.
+- FairSample: we apply the fair sampling method proposed in Section 3.1 to the SCAN framework and adopt the same hyper-parameters suggested by Lee et al. (2018) for training.
+- CLIP (Radford et al., 2021): we use the pretrained CLIP model released by OpenAI. The model uses a Vision Transformer (Dosovitskiy et al., 2021) as the image encoder and a masked self-attention Transformer (Vaswani et al., 2017) as the text encoder. The original model produces 512-dimensional image and text vectors.
+- CLIP-clip: we apply the feature pruning algorithm in Section 3.2 to the image and text features generated by the CLIP model. We set $m = 100$ and clip the image and text representations into 412-dimensional vectors.
+
+Note that SCAN and FairSample are trained and tested on the in-domain MS-COCO and Flickr30K datasets, while the pre-trained CLIP model is directly tested on MS-COCO and Flickr30K test sets without fine-tuning on their training sets (same for CLIP-clip as it simply drops CLIP features).
+
+| Before Pre-processing | After Pre-processing |
+| --- | --- |
+| A man with a red helmet on a small moped on a dirt road. | A person with a red helmet on a small moped on a dirt road. |
+| A little girl is getting ready to blow out a candle on a small dessert. | A little child is getting ready to blow out a candle on a small dessert. |
+| A female surfboarder dressed in black holding a white surfboard. | A surfboarder dressed in black holding a white surfboard. |
+| A group of young men and women sitting at a table. | A group of young people sitting at a table. |
+
+Table 1: Samples of the constructed gender-neutral captions. For evaluation, we convert gender-specific captions to gender-neutral ones by replacing or removing the gender-specific words.
+
+
+Figure 2: Gender bias analysis with different top- $K$ results. (a) MS-COCO 1K test set; (b) MS-COCO 5K test set; (c) Flickr30K test set.
+
+# 4.3 Evaluation
+
+Gender-Neutral Text Queries In this study, we focus on equalizing the search results of gender-neutral text queries. In addition to the existing gender-neutral captions in the test sets, we pre-process the gender-specific captions to construct a purely gender-neutral test corpus, which guarantees a fair and large-scale evaluation. For every caption, we identify all gender-specific words and remove them or replace them with corresponding gender-neutral words. We show some pre-processing examples in Table 1.
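A minimal sketch of this pre-processing step, using illustrative single-word mappings; phrase-level rules (e.g., mapping "men and women" to "people", as in Table 1) would need additional handling:

```python
# Illustrative mappings only; the paper's full word lists are larger.
NEUTRAL_MAP = {
    "man": "person", "woman": "person",
    "men": "people", "women": "people",
    "boy": "child", "girl": "child",
}
DROPPED = {"male", "female"}   # used adjectivally, e.g. "a female surfboarder"

def neutralize(caption):
    """Rewrite a caption into gender-neutral language, word by word."""
    out = []
    for token in caption.split():
        word = token.strip(".,").lower()
        if word in DROPPED:
            continue                      # remove the gendered adjective
        out.append(NEUTRAL_MAP.get(word, token))
    return " ".join(out)
```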
+
+Metrics As introduced in Section 2.2, we employ the fairness metric in Equation (3), Bias@K, to measure the gender bias among the top-K images. In addition, following standard practice, we measure the retrieval performance by Recall@K, defined as the fraction of queries for which the correct image is retrieved among the top-K images.
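A small sketch of Recall@K as defined here, assuming each query has exactly one correct image:

```python
def recall_at_k(ranked_results, correct, k):
    """Recall@K: fraction of queries whose correct image appears among the top K.

    `ranked_results[q]` lists image indices ranked by score for query q;
    `correct[q]` is that query's ground-truth image index.
    """
    hits = sum(correct[q] in ranked[:k] for q, ranked in enumerate(ranked_results))
    return hits / len(ranked_results)
```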
+
+# 5 Debiasing Results
+
+# 5.1 Main Results on MS-COCO & Flickr30K
+
+We report the results comparing our debiasing methods and the baseline methods in Table 2.
+
+Model Bias Although the pre-trained CLIP model is evaluated without fine-tuning, we observe that it achieves recall performance comparable to the SCAN model on MS-COCO and dominates on the Flickr30K dataset. However, both models suffer from severe gender bias. Notably, the Bias@10 of the SCAN model on Flickr30K is 0.3960, meaning nearly $70\%$ of the retrieved gender-specific images portray men and only $30\%$ portray women. Similarly, the CLIP model reaches a gender bias of 0.2648 on the MS-COCO 1K test set, indicating that about 6.3 out of 10 retrieved gender-specific images portray men while about 3.7 portray women. Given that all test text queries are gender-neutral, these results show that severe implicit gender bias exists in image search models.
+
+Debiasing Effectiveness As shown in Table 2, both the in-processing sampling strategy FairSample and the post-processing feature pruning algorithm clip consistently mitigate gender bias on the test data. For instance, among the top-10 retrieved images, SCAN with FairSample reduces gender bias from 0.3960 to 0.3537 (a $10.7\%$ decrease) on Flickr30K. Using the clipped CLIP features for image search (CLIP-clip), the gender bias drops from 0.2648 to 0.2057 ( $22.3\%$ ) on MS-COCO 1K, from 0.2131 to 0.1611 ( $24.4\%$ ) on MS-COCO 5K, and from 0.3586 to 0.2951 ( $17.7\%$ ) on Flickr30K. As a tradeoff, CLIP-clip slightly sacrifices recall (from $93.6\%$ Recall@10 to $91.3\%$ on Flickr30K). SCAN with FairSample, on the other hand, achieves recall comparable to SCAN.
+
+| Dataset | Method | Bias@1↓ | Bias@5↓ | Bias@10↓ | Recall@1↑ | Recall@5↑ | Recall@10↑ |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| COCO 1K | SCAN | .1250 | .2044 | .2506 | 47.7 | 82.0 | 91.0 |
+| COCO 1K | FairSample | .1140 | .1951 | .2347 | 49.7 | 82.5 | 90.9 |
+| COCO 1K | CLIP | .0900 | .2024 | .2648 | 48.2 | 77.9 | 88.0 |
+| COCO 1K | CLIP-clip | .0670 | .1541 | .2057 | 46.1 | 75.2 | 86.0 |
+| COCO 5K | SCAN | .1379 | .2133 | .2484 | 25.4 | 54.1 | 67.8 |
+| COCO 5K | FairSample | .1133 | .1916 | .2288 | 26.8 | 55.3 | 68.5 |
+| COCO 5K | CLIP | .0770 | .1750 | .2131 | 28.7 | 53.9 | 64.7 |
+| COCO 5K | CLIP-clip | .0672 | .1474 | .1611 | 27.3 | 50.8 | 62.0 |
+| Flickr30K | SCAN | .1098 | .3341 | .3960 | 41.4 | 69.9 | 79.1 |
+| Flickr30K | FairSample | .0744 | .2699 | .3537 | 35.8 | 67.5 | 77.7 |
+| Flickr30K | CLIP | .1150 | .3150 | .3586 | 67.2 | 89.1 | 93.6 |
+| Flickr30K | CLIP-clip | .0960 | .2746 | .2951 | 63.9 | 85.4 | 91.3 |
+
+Table 2: Results on MS-COCO (1K and 5K) and Flickr30K test sets. We compare the baseline models (SCAN (Lee et al., 2018) and CLIP (Radford et al., 2021)) and our debiasing methods (FairSample and CLIP-clip) on both the gender bias metric Bias@K and the retrieval metric Recall@K.
+
+# 5.2 Gender Bias at Different Top-K Results
+
+We plot how gender bias varies across different values of $K$ (1-10) for all compared methods in Figure 2. We observe that when $K < 5$ , the gender bias has higher variance because few images are retrieved. When $K \geq 5$ , the curves tend to flatten. This result suggests that Bias@10 is preferable to Bias@1 for measuring gender bias, as it is more stable. It is also noticeable that CLIP-clip consistently achieves the best fairness performance in terms of Bias@10 on all three test sets compared to the other models.
+
+# 5.3 Tradeoff between Recall and Bias
+
+There is an inherent tradeoff between fairness and accuracy in fair machine learning (Zhao and Gordon, 2019). To achieve the best recall-bias tradeoff with our methods, we further examine the effect of the controlling hyper-parameters: the weight $\alpha$ in FairSample and the number of clipped dimensions $m$ in CLIP-clip.
+
+Figure 3 demonstrates the recall-bias curve with the fair sampling weight $\alpha \in [0,1]$ . Models with higher recall often suffer higher gender bias, but in FairSample models the fairness improvement outweighs the drop in recall. For example, the model fully trained with fair sampling $(\alpha = 1)$ has the lowest bias and the largest recall drop: on Flickr30K it relatively reduces Bias@10 by $22.5\%$ while decreasing Recall@10 by only $10.9\%$ . We choose $\alpha = 0.4$ for the final model, which better retains the recall performance.
+
+Figure 3: The Pareto frontier of the recall-bias tradeoff curve for FairSample on (a) the MS-COCO 1K test set and (b) the Flickr30K test set.
+
+Figure 4: Effect of the number of clipped dimensions $m$ on (a) recall and (b) gender bias on MS-COCO 1K.
+
+As shown in Figure 4, we set the range of the clipping dimension $m$ between 100 and 400 on MS-COCO 1K. We find that clipping too many covariates (1) harms the expressiveness of the image and text representations (Recall@1 drops from $46.1\%$ to $11.3\%$ , Recall@5 from $75.2\%$ to $25.4\%$ , and Recall@10 from $86.0\%$ to $34.2\%$ ), and (2) causes high standard deviation in gender bias. In light of the harm to expressiveness, we select $m = 100$ for conventional use.
+
+# 5.4 Evaluation on Internet Image Search
+
+Figure 5: Gender bias evaluation of internet image search results on occupations (Kay et al., 2015) for (a) CLIP and (b) CLIP-clip. We visualize the similarity biases on 18 occupations, marking whether each occupation is biased towards males or towards females. The clip algorithm mitigates gender bias for a variety of occupations.
+
+The aforementioned evaluation results on the MS-COCO and Flickr30K datasets are limited in that they rely on gender labels extracted from human captions. In this sense, it is important to measure gender biases on a benchmark where the gender labels are identified by crowd annotators. To this end, we further evaluate on the occupation dataset (Kay et al., 2015), which collects the top 100 Google Image Search results for each gender-neutral occupation search term. Each image is associated with the crowd-sourced gender attribute of the participant portrayed in it. Inspired by Burns et al. (2018) and Tang et al. (2020), we measure gender bias by computing the difference of the expected cosine similarity between male and female occupational images. Given an occupation $o$ , the similarity bias is formulated as
+
+$$
+\operatorname{Bias} = \mathbb{E}_{v \in \mathcal{V}_{\text{male}}^{o}} S(v, o) - \mathbb{E}_{v \in \mathcal{V}_{\text{female}}^{o}} S(v, o) \tag{9}
+$$
+
+where $\mathcal{V}_{\text{male}}^{o}$ and $\mathcal{V}_{\text{female}}^{o}$ are the sets of images for occupation $o$ labeled as "male" and "female", respectively.
+
+Figure 5 demonstrates the absolute similarity bias of CLIP and CLIP-clip on the occupation dataset for 18 occupations. We observe that the CLIP model exhibits severe similarity discrepancy for some occupations, including telemarketer, chemist, and housekeeper, while the clip algorithm alleviates this problem effectively. Note that for doctor and police officer, the CLIP-clip model exaggerates the similarity discrepancy, but the similarity bias is still less than 0.01. In general, CLIP-clip is effective for mitigating similarity bias and obtains a $42.3\%$ lower mean absolute bias of the 100 occupations than the CLIP model (0.0064 vs. 0.0111).
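Equation (9) and the mean-absolute-bias summary used above can be sketched as:

```python
import numpy as np

def similarity_bias(male_scores, female_scores):
    """Equation 9: difference of expected similarity S(v, o) between
    male- and female-labeled images for one occupation term."""
    return float(np.mean(male_scores) - np.mean(female_scores))

def mean_absolute_bias(per_occupation_scores):
    """Mean absolute similarity bias over occupations, the summary
    statistic reported at the end of Section 5.4."""
    return float(np.mean([abs(similarity_bias(m, f))
                          for m, f in per_occupation_scores]))
```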
+
+# 6 Related Work
+
+Fairness in Machine Learning A number of unfair treatments by machine learning models have been reported recently (Angwin et al., 2016; Buolamwini and Gebru, 2018; Bolukbasi et al., 2016; Otterbacher et al., 2017), and the literature has seen growing demand for and interest in proposing defenses, including regularizing disparate impact (Zafar et al., 2015) and disparate treatment (Hardt et al., 2016), promoting fairness through causal inference (Kusner et al., 2017), and adding fairness guarantees in recommendations and information retrieval (Beutel et al., 2019; Biega et al., 2018; Morik et al., 2020). Existing fair machine learning solutions can be broadly categorized as pre-processing, in-processing, and post-processing approaches. Pre-processing algorithms typically re-weight and repair the training data to remove label bias or historical discrimination (Kamiran and Calders, 2012; Feldman et al., 2015; Calmon et al., 2017). In-processing algorithms modify the training objective with additional fairness constraints or regularization terms (Zafar et al., 2017; Agarwal et al., 2018; Cotter et al., 2019). Post-processing algorithms enforce fairness constraints by applying a post hoc correction to a (pre-)trained classifier (Hardt et al., 2016; Calmon et al., 2017). In this work, the fair sampling strategy designed for the contrastive learning framework can be considered an in-processing treatment, while the clip algorithm is in the post-processing regime and features an information-theoretic clipping procedure. Our contribution highlights new challenges in reducing gender bias in a multimodal task and specializes new in-processing and post-processing ideas to the domain of image search.
+
+Social Bias in Multi-modality Implicit social bias related to gender and race has been discussed in multimodal tasks including image captioning (Burns et al., 2018; Tang et al., 2020), visual question answering (Manjunatha et al., 2019), face recognition (Buolamwini and Gebru, 2018), and unsupervised image representation learning (Steed and Caliskan, 2021). For example, Zhao et al. (2017) show that models trained on unbalanced data can amplify bias and that injecting corpus-level Lagrangian constraints can calibrate the bias amplification. Caliskan et al. (2017) demonstrate that the association between word embeddings of occupations and gendered concepts correlates with the imbalanced gender distribution in text corpora. There is also a series of debiasing techniques in this area. Bolukbasi et al. (2016) propose to surgically alter the embedding space by identifying the gender subspace from gendered word pairs. Manzini et al. (2019) extend the bias-component removal approach to the setting where the
+
+sensitive attribute is non-binary. Data augmentation approaches remove the implicit bias in the training corpora and train the models on the balanced datasets (Zhao et al., 2018). Our work complements this line of research by examining gender bias induced by multimodal models in image search results. Our focus on gender bias in the gender-neutral language would offer new insights for a less explored topic to the community.
+
+Gender Bias in Online Search Systems Our work is also closely connected to studies in the HCI community showing gender inequality in online image search results. Kay et al. (2015) compare gender proportions in occupational image search results and discuss how the bias affects people's perceptions of the prevalence of men and women in each occupation. Singh et al. (2020) examine the prevalence of gender stereotypes on various digital media platforms. Otterbacher et al. (2017) identify gender bias with character traits. Nonetheless, these works do not attempt to mitigate gender bias in search algorithms. Our work extends these studies toward understanding how gender biases enter search algorithms and provides novel solutions for mitigating gender bias in two typical model families for image search.
+
+# 7 Conclusion
+
+In this paper, we examine gender bias in image search models when search queries are gender-neutral. As an initial attempt to study this critical problem, we formally identify and quantify gender bias in image search. To mitigate the gender bias perpetuated by the two representative fashions of image search models, we propose two novel debiasing algorithms, one in-processing and one post-processing. When training a new image search model, the in-processing FairSample method can be used to learn a fairer model from scratch. Meanwhile, the clip algorithm can be used for lightweight deployment of pre-trained representation models when gender information is accessible.
+
+# Broader Impact
+
+The algorithmic processes behind modern search engines, with their extensive use of machine learning algorithms, have great power to determine users' access to information (Eslami et al., 2015). Our research provides evidence that unintentionally deploying image search models trained either on in-domain image retrieval datasets or on massive corpora from across the internet may lead to unequal inclusion of males and females in image search results, even when the search terms are gender-neutral. This inequity can and does have significant impact on shaping and exaggerating gender stereotypes in people's minds (Kay et al., 2015).
+
+This work offers new methods for mitigating gender bias in multimodal models, and we believe the algorithms proposed in this paper have the potential to be deployed in real-world systems. We conjecture that our methods may contribute to driving the development of responsible image search engines with respect to other fairness issues. For instance, we would encourage future work to understand and mitigate the risks arising from other social biases, such as racial bias, in image search results. We would also encourage researchers to explore whether the methodology presented in this work could be generalized to quantify and mitigate other bias measures.
+
+Our work has limitations. The gender bias measures and the debiasing methods proposed in this study require acquiring the gender labels of images. Our method for identifying the gender attributes of people portrayed in the images is limited: we make use of the contextual cues in the human-annotated captions from the image datasets. The accuracy of such a proxy-based method heavily relies on the coverage of gendered nouns and the inclusiveness of gendered language in the original human annotations. The corruption of gender labels, due to missing gendered words or inappropriate text preprocessing steps, may introduce biases we have not foreseen into the evaluated metrics. Additionally, the gendered word lists are collected from English corpora and may differ in other languages or cultures. It is possible that blind application of our methods by improperly acquiring the gender labels may create image search models that produce even greater inequality, which is very much discouraged. This limitation arises from the unavailability of such sensitive attributes in the source datasets. The lack of relevant data for studying gender bias in image search, and the concerns about how to acquire the gender attributes while preserving the privacy of people concerned, is itself an important question in this area. We believe this research would benefit when richer datasets become available.
+
+# Acknowledgements
+
+The authors would like to thank anonymous reviewers for their constructive comments. This work is supported by the UC Santa Cruz Startup Funding, and the National Science Foundation (NSF) under grants IIS-2040800 and CCF-2023495.
+
+# References
+
+Alekh Agarwal, Alina Beygelzimer, Miroslav Dudík, John Langford, and Hanna M. Wallach. 2018. A reductions approach to fair classification. In ICML, volume 80 of Proceedings of Machine Learning Research, pages 60-69. PMLR.
+Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner. 2016. Machine bias. ProPublica, May, 23:2016.
+Philip Bachman, R Devon Hjelm, and William Buchwalter. 2019. Learning representations by maximizing mutual information across views. In Advances in Neural Information Processing Systems, volume 32, pages 15535-15545. Curran Associates, Inc.
+Solon Barocas, Moritz Hardt, and Arvind Narayanan. 2019. *Fairness and Machine Learning*. fairmlbook.org. http://www.fairmlbook.org.
+Alex Beutel, J. Chen, T. Doshi, H. Qian, L. Wei, Y. Wu, L. Heldt, Zhe Zhao, L. Hong, Ed Huai hsin Chi, and Cristos Goodrow. 2019. Fairness in recommendation ranking through pairwise comparisons. Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining.
+Asia J. Biega, K. Gummadi, and G. Weikum. 2018. Equity of attention: Amortizing individual fairness in rankings. The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval.
+Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, and Adam Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In Proceedings of the 30th International Conference on Neural Information Processing Systems, NIPS'16, page 4356-4364, Red Hook, NY, USA. Curran Associates Inc.
+Joy Buolamwini and Timnit Gebru. 2018. Gender shades: Intersectional accuracy disparities in commercial gender classification. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency, volume 81 of Proceedings of Machine Learning Research, pages 77-91, New York, NY, USA. PMLR.
+Kaylee Burns, Lisa Anne Hendricks, Trevor Darrell, and Anna Rohrbach. 2018. Women also snowboard: Overcoming bias in captioning models. In ECCV.
+
+Aylin Caliskan, J. Bryson, and A. Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356:183 - 186.
+Flavio Calmon, Dennis Wei, Bhanukiran Vinzamuri, Karthikeyan Natesan Ramamurthy, and Kush R Varshney. 2017. Optimized pre-processing for discrimination prevention. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 3992-4001. Curran Associates, Inc.
+Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, and Ilya Sutskever. 2020a. Generative pretraining from pixels. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 1691-1703. PMLR.
+T. Chen, Jiajun Deng, and Jiebo Luo. 2020b. Adaptive offline quintuplet loss for image-text matching. ArXiv, abs/2003.03669.
+Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020c. A simple framework for contrastive learning of visual representations. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 1597-1607. PMLR.
+Xinlei Chen, H. Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollár, and C. L. Zitnick. 2015. Microsoft coco captions: Data collection and evaluation server. ArXiv, abs/1504.00325.
+Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. 2020d. Uniter: Universal image-text representation learning. In ECCV.
+Andrew Cotter, Heinrich Jiang, Serena Wang, Taman Narayan, M. Gupta, S. You, and K. Sridharan. 2019. Optimization with non-differentiable constraints with applications to fairness, recall, churn, and other goals. ArXiv, abs/1809.04198.
+Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. 2021. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations.
+Motahhare Eslami, Aimee Rickman, Kristen Vaccaro, Amirhossein Aleyasen, Andy Vuong, Karrie Karahalios, Kevin Hamilton, and Christian Sandvig. 2015. "i always assumed that i wasn't really that close to [her]": Reasoning about invisible algorithms in news feeds. In CHI 2015 - Proceedings of the 33rd Annual CHI Conference on Human
+
+Factors in Computing Systems, Conference on Human Factors in Computing Systems - Proceedings, pages 153-162. Association for Computing Machinery. 33rd Annual CHI Conference on Human Factors in Computing Systems, CHI 2015; Conference date: 18-04-2015 Through 23-04-2015.
+Fartash Faghri, David J Fleet, Jamie Ryan Kiros, and Sanja Fidler. 2018. Vse++: Improving visual-semantic embeddings with hard negatives.
+Michael Feldman, Sorelle A. Friedler, John Moeller, Carlos Scheidegger, and Suresh Venkatasubramanian. 2015. Certifying and removing disparate impact. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '15, page 259-268, New York, NY, USA. Association for Computing Machinery.
+Batya Friedman and Helen Nissenbaum. 1996. Bias in computer systems. ACM Trans. Inf. Syst., 14(3):330-347.
+Zhe Gan, Yen-Chun Chen, Linjie Li, Chen Zhu, Yu Cheng, and Jingjing Liu. 2020. Large-scale adversarial training for vision-and-language representation learning. In NeurIPS.
+Weihao Gao, Sreeram Kannan, Sewoong Oh, and Pramod Viswanath. 2017. Estimating mutual information for discrete-continuous mixtures. In Advances in Neural Information Processing Systems, volume 30, pages 5986-5997. Curran Associates, Inc.
+Moritz Hardt, Eric Price, and Nathan Srebro. 2016. Equality of opportunity in supervised learning. In Proceedings of the 30th International Conference on Neural Information Processing Systems, NIPS'16, page 3323-3331, Red Hook, NY, USA. Curran Associates Inc.
+Yan Huang, Wei Wang, and Liang Wang. 2017. Instance-aware image and sentence matching with selective multimodal LSTM. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pages 7254-7262. IEEE Computer Society.
+Faisal Kamiran and Toon Calders. 2012. Data preprocessing techniques for classification without discrimination. Knowledge and Information Systems.
+A. Karpathy and Li Fei-Fei. 2017. Deep visual-semantic alignments for generating image descriptions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39:664-676.
+Matthew Kay, Cynthia Matuszek, and Sean A. Munson. 2015. Unequal Representation and Gender Stereotypes in Image Search Results for Occupations, page 3819-3828. Association for Computing Machinery, New York, NY, USA.
+
+S. Kullback and R. A. Leibler. 1951. On information and sufficiency. Ann. Math. Statist., 22(1):79-86.
+Matt J Kusner, Joshua Loftus, Chris Russell, and Ricardo Silva. 2017. Counterfactual fairness. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 4066-4076. Curran Associates, Inc.
+Brian Larson. 2017. Gender as a variable in natural-language processing: Ethical considerations. In Proceedings of the First ACL Workshop on Ethics in Natural Language Processing, pages 1-11, Valencia, Spain. Association for Computational Linguistics.
+P. H. Le-Khac, G. Healy, and A. F. Smeaton. 2020. Contrastive representation learning: A framework and review. IEEE Access, 8:193907-193934.
+Kuang-Huei Lee, Xi Chen, Gang Hua, Houdong Hu, and Xiaodong He. 2018. Stacked cross attention for image-text matching. arXiv preprint arXiv:1803.08024.
+Tsung-Yi Lin, M. Maire, Serge J. Belongie, James Hays, P. Perona, D. Ramanan, Piotr Dollar, and C. L. Zitnick. 2014. Microsoft coco: Common objects in context. In ECCV.
+Varun Manjunatha, Nirat Saini, and Larry S. Davis. 2019. Explicit bias discovery in visual question answering models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
+Thomas Manzini, Lim Yao Chong, Alan W Black, and Yulia Tsvetkov. 2019. Black is to criminal as caucasian is to police: Detecting and removing multiclass bias in word embeddings. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 615-621, Minneapolis, Minnesota. Association for Computational Linguistics.
+Marco Morik, Ashudeep Singh, Jessica Hong, and Thorsten Joachims. 2020. Controlling Fairness and Bias in Dynamic Learning-to-Rank, page 429-438. Association for Computing Machinery, New York, NY, USA.
+Jahna Otterbacher, Jo Bates, and Paul Clough. 2017. Competent Men and Warm Women: Gender Stereotypes and Backlash in Image Search Results, page 6620-6631. Association for Computing Machinery, New York, NY, USA.
+A. Radford, J. W. Kim, Chris Hallacy, Aditya Ramesh, G. Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, J. Clark, G. Krueger, and Ilya Sutskever. 2021. Learning transferable visual models from natural language supervision. Technical report, OpenAI.
+
+Vivek K. Singh, Mary Chayko, Raj Inamdar, and Diana Floegel. 2020. Female librarians and male computer programmers? gender bias in occupational images on digital media platforms. J. Assoc. Inf. Sci. Technol., 71(11):1281-1294.
+Ryan Steed and Aylin Caliskan. 2021. Image representations learned with unsupervised pre-training contain human-like biases. In Conference on Fairness, Accountability, and Transparency (FAccT '21), New York, NY, USA.
+Ruixiang Tang, Mengnan Du, Yuening Li, Zirui Liu, and X. Hu. 2020. Mitigating gender bias in captioning systems. *ArXiv*, abs/2006.08315.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30, pages 5998-6008. Curran Associates, Inc.
+J. Wang, Yang Song, Thomas Leung, C. Rosenberg, J. Philbin, Bo Chen, and Y. Wu. 2014. Learning fine-grained image similarity with deep ranking. 2014 IEEE Conference on Computer Vision and Pattern Recognition, pages 1386-1393.
+P. Young, A. Lai, M. Hodosh, and J. Hockenmaier. 2014. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics, 2:67-78.
+M. Zafar, I. Valera, M. Gomez-Rodriguez, and K. Gummadi. 2017. Fairness constraints: Mechanisms for fair classification. In AISTATS.
+M. Zafar, I. Valera, M. G. Rodriguez, and K. Gummadi. 2015. Learning fair classifiers. arXiv: Machine Learning.
+Han Zhao and Geoff Gordon. 2019. Inherent tradeoffs in learning fair representations. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc.
+Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2017. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2979-2989, Copenhagen, Denmark. Association for Computational Linguistics.
+Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018. Gender bias in coreference resolution: Evaluation and debiasing methods. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 15–20, New Orleans, Louisiana. Association for Computational Linguistics.
+
+# A Gender Word Lists
+
+We show the word lists for identifying the gender attributes of a caption in Table 3.
+
+| Category | Words |
+| --- | --- |
+| feminine words | woman, women, female, girl, lady, mother, mom, sister, daughter, wife, girlfriend |
+| masculine words | man, men, male, boy, gentleman, father, brother, son, husband, boyfriend |
+| gender-neutral words | person, people, human, adult, baby, child, kid, children, guy, teenage, crowd |
+
+Table 3: Gender word lists. We identify the gender attributes of captions based on the occurrence of gender-specific words appearing in the sentences.
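The lookup this rule describes can be sketched as a small function (a minimal illustration built on the word lists of Table 3; the authors' released code may tokenize captions or resolve mixed cases differently):

```python
# Gender word lists from Table 3.
FEMININE = {"woman", "women", "female", "girl", "lady", "mother", "mom",
            "sister", "daughter", "wife", "girlfriend"}
MASCULINE = {"man", "men", "male", "boy", "gentleman", "father", "brother",
             "son", "husband", "boyfriend"}

def caption_gender(caption: str) -> str:
    """Label a caption by which gender-specific words occur in it."""
    tokens = set(caption.lower().split())
    has_f = bool(tokens & FEMININE)
    has_m = bool(tokens & MASCULINE)
    if has_f and not has_m:
        return "female"
    if has_m and not has_f:
        return "male"
    return "neutral"  # both genders mentioned, or only gender-neutral words
```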
+
+# B Implementation Details
+
+# B.1 Computing Infrastructure
+
+We use a GPU server with 4 NVIDIA RTX 2080 Ti GPUs for training and evaluation.
+
+# B.2 Computational Time Costs
+
+We find that SCAN (Lee et al., 2018) and SCAN with fair sampling need about 20 hours to train for 30 epochs on MS-COCO and 8-10 minutes for testing on the 1K test set. In comparison, pre-trained CLIP (Radford et al., 2021) and CLIP-clip can be evaluated within 1 minute on the MS-COCO 1K test set.
+
+# C Qualitative Examples
+
+We conduct a qualitative study of the image search results. We show the results of searching "a person riding a bike" in Figure 6. The first row presents the top-5 retrieved images for SCAN, the second row for SCAN+FairSample, the third row for CLIP, and the last row for CLIP-clip. While all the models retrieve relevant images, we find that FairSample ranks images depicting females higher.
+
+
+Figure 6: Qualitative analysis of gender bias in image search results. The text query is "a person riding a bike". The first row presents the top-5 retrieved images for SCAN, the second row presents the top-5 retrieved images for SCAN+FairSample, the third row presents the top-5 retrieved images for CLIP, and the last row presents the top-5 retrieved images for CLIP-clip.
\ No newline at end of file
diff --git a/aregenderneutralqueriesreallygenderneutralmitigatinggenderbiasinimagesearch/images.zip b/aregenderneutralqueriesreallygenderneutralmitigatinggenderbiasinimagesearch/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..a680186979e4b2360e78a0482868b253bac16f11
--- /dev/null
+++ b/aregenderneutralqueriesreallygenderneutralmitigatinggenderbiasinimagesearch/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a166e637b51323cb59fa66965c4af0ad7226d5173696d16391b5ab5af55024e7
+size 798112
diff --git a/aregenderneutralqueriesreallygenderneutralmitigatinggenderbiasinimagesearch/layout.json b/aregenderneutralqueriesreallygenderneutralmitigatinggenderbiasinimagesearch/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..ee4a39f0b9b4f872376240a9d82ccdb17816e53b
--- /dev/null
+++ b/aregenderneutralqueriesreallygenderneutralmitigatinggenderbiasinimagesearch/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:20d36df04f1ee7b4b808d47572a0e3e34d23ae6090d3c690f9bd51529fd5232d
+size 461789
diff --git a/arelationorientedclusteringmethodforopenrelationextraction/14d058dd-098d-4f0a-800b-e1a5efe5c6ba_content_list.json b/arelationorientedclusteringmethodforopenrelationextraction/14d058dd-098d-4f0a-800b-e1a5efe5c6ba_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..49edfc2c77bb2b0414a01dd10930e9c3c443a1ba
--- /dev/null
+++ b/arelationorientedclusteringmethodforopenrelationextraction/14d058dd-098d-4f0a-800b-e1a5efe5c6ba_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:11f515047b46c393503f99d6aa275c9b0fd24386a179707b3708ffe8b93a8f8e
+size 83891
diff --git a/arelationorientedclusteringmethodforopenrelationextraction/14d058dd-098d-4f0a-800b-e1a5efe5c6ba_model.json b/arelationorientedclusteringmethodforopenrelationextraction/14d058dd-098d-4f0a-800b-e1a5efe5c6ba_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..de3e191381a86b6a158901bb7781691da5dccdd6
--- /dev/null
+++ b/arelationorientedclusteringmethodforopenrelationextraction/14d058dd-098d-4f0a-800b-e1a5efe5c6ba_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f3df90f19e47f28d41288ed7298dbbafb4d468b11443c4be035ff32113faceb2
+size 98054
diff --git a/arelationorientedclusteringmethodforopenrelationextraction/14d058dd-098d-4f0a-800b-e1a5efe5c6ba_origin.pdf b/arelationorientedclusteringmethodforopenrelationextraction/14d058dd-098d-4f0a-800b-e1a5efe5c6ba_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..5c4b4bef857e02f23e2188ae66b64374cf9d4acf
--- /dev/null
+++ b/arelationorientedclusteringmethodforopenrelationextraction/14d058dd-098d-4f0a-800b-e1a5efe5c6ba_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cbacda77194fbfa5a3a3d2ba2ff5354e3f8df003f102dcf7c6f55a7b92a79f5c
+size 973592
diff --git a/arelationorientedclusteringmethodforopenrelationextraction/full.md b/arelationorientedclusteringmethodforopenrelationextraction/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..8029fe2c44a0bbe109972b56b051ed244fb940f8
--- /dev/null
+++ b/arelationorientedclusteringmethodforopenrelationextraction/full.md
@@ -0,0 +1,360 @@
+# A Relation-Oriented Clustering Method for Open Relation Extraction
+
+Jun Zhao $^{1}$ , Tao Gui $^{2*}$ , Qi Zhang $^{1*}$ , Yaqian Zhou $^{1}$
+
+$^{1}$ School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing,
+
+Fudan University, Shanghai, China
+
+$^{2}$ Institute of Modern Languages and Linguistics, Fudan University
+
+{zhaoj19, tgui, qz, zhouyaqian}@fudan.edu.cn
+
+# Abstract
+
+The clustering-based unsupervised relation discovery method has gradually become one of the important methods of open relation extraction (OpenRE). However, high-dimensional vectors can encode complex linguistic information, which leads to the problem that the derived clusters cannot explicitly align with the relational semantic classes. In this work, we propose a relation-oriented clustering model and use it to identify the novel relations in the unlabeled data. Specifically, to enable the model to learn to cluster relational data, our method leverages the readily available labeled data of pre-defined relations to learn a relation-oriented representation. We minimize the distance between instances with the same relation by gathering the instances towards their corresponding relation centroids to form a cluster structure, so that the learned representation is cluster-friendly. To reduce the clustering bias on predefined classes, we optimize the model by minimizing a joint objective on both labeled and unlabeled data. Experimental results show that our method reduces the error rate by $29.2\%$ and $15.7\%$ on two datasets, respectively, compared with current SOTA methods.
+
+# 1 Introduction
+
+Relation extraction (RE), a crucial basic task in the field of information extraction, is of the utmost practical interest to various fields including web search (Xiong et al., 2017), knowledge base completion (Bordes et al., 2013), and question answering (Yu et al., 2017). However, conventional RE paradigms such as supervision and distant supervision are generally designed for pre-defined relations, which cannot deal with new emerging relations in the real world.
+
+Under this background, open relation extraction (OpenRE) has been widely studied for its use
+
+
+Figure 1: Although both instances $S_{2}$ and $S_{3}$ express founded relation while $S_{1}$ expresses CEO relation, the distance between $S_{1}$ and $S_{2}$ is still smaller than that between $S_{2}$ and $S_{3}$ . This is because there may be more similar surface information (e.g. word overlapping) or syntactic structure between $S_{1}$ and $S_{2}$ , thus the derived clusters cannot explicitly align with relations.
+
+in extracting new emerging relational types from open-domain corpora. The approaches used to handle open relations roughly fall into one of two groups. The first group is open information extraction (OpenIE) (Etzioni et al., 2008; Yates et al., 2007; Fader et al., 2011), which directly extracts related phrases as representations of different relational types. However, if not properly canonicalized, the extracted relational facts can be redundant and ambiguous. The second group is unsupervised relation discovery (Yao et al., 2011; Shinyama and Sekine, 2006; Simon et al., 2019). In this type of research, much attention has been focused on unsupervised clustering-based RE methods, which cluster and recognize relations from high-dimensional representations (Elsahar et al., 2017). Recently, the self-supervised signals in pretrained language model are further exploited for clustering optimization (Hu et al., 2020).
+
+However, many studies show that high-dimensional embeddings can encode complex linguistic information such as morphological (Peters et al., 2018), local syntactic (Hewitt and Manning, 2019), and longer range semantic
+
+information (Jawahar et al., 2019). Consequently, the distance between representations is not completely consistent with relational semantic similarity. Although Hu et al. (2020) use self-supervised signals to optimize clustering, there is still no guarantee that the learned clusters will explicitly align with the desired relational semantic classes (Xing et al., 2002). As shown in Figure 1, we use the method proposed by Hu et al. (2020) to get the instance representations. Although both instances $S_{2}$ and $S_{3}$ express the founded relation, the Euclidean distance between them is larger than that between $S_{1}$ and $S_{2}$, which express different relations. Obviously, the clustering algorithm tends to group instances $S_{1}$ and $S_{2}$ together, rather than $S_{2}$ and $S_{3}$, which express the same relation.
+
+In this work, we propose a relation-oriented clustering method. To enable the model to learn to cluster relational data, pre-defined relations and their existing labeled instances are leveraged to optimize a non-linear mapping, which transforms high-dimensional entity pair representations into relation-oriented representations. Specifically, we minimize the distance between instances with the same relation by gathering the instance representations towards their corresponding relation centroids to form the cluster structure, so that the learned representation is cluster-friendly. In order to reduce the clustering bias on the predefined classes, we iteratively train the entity pair representations by optimizing a joint objective function on the labeled and unlabeled subsets of the data, improving both the supervised classification of the labeled data and the clustering of the unlabeled data. In addition, the proposed method can be easily extended to incremental learning by classifying the pre-defined and novel relations with a unified classifier, which is often desirable in real-world applications. Our experimental results show that our method outperforms current state-of-the-art methods for OpenRE. Our codes are publicly available at Github*.
+
+To summarize, the main contributions of our work are as follows: (1) we propose a novel relation-oriented clustering method, RoCORE, to enable the model to learn to cluster relational data; (2) the proposed method achieves incremental learning of unlabeled novel relations, which is often desirable in real-world applications; (3) experimental results show that our method reduces
+
+the error rate by $29.2\%$ and $15.7\%$ , on two real-world datasets respectively, compared with current state-of-the-art OpenRE methods.
+
+# 2 Related Work
+
+Open Relation Extraction. To meet the needs of extracting new emerging relation types, many efforts have been undertaken to explore methods for open relation extraction (OpenRE). The first line of research is Open Information Extraction (Etzioni et al., 2008; Yates et al., 2007; Fader et al., 2011), in which relation phrases are extracted directly to represent different relation types. However, using surface forms to represent relations results in an associated lack of generality, since many surface forms can express the same relation. Recently, unsupervised clustering-based RE methods are attracting much attention. Elsahar et al. (2017) proposed to extract and cluster open relations by re-weighting word embeddings and using the types of named entities as additional features. Hu et al. (2020) proposed to exploit weak, self-supervised signals in pretrained language models for adaptive clustering on contextualized relational features. However, the self-supervised signals are sensitive to the initial representation (Gansbeke et al., 2020), and there is still no guarantee that the learned clusters will align with the relational semantic classes (Xing et al., 2002). Wu et al. (2019) proposed learning relation similarity metrics from labeled data, and then transferring the relational knowledge to identify novel relations in unlabeled data. Different from them, we propose a relation-oriented method that explicitly clusters data based on relational information.
+
+Knowledge in High-Dimensional Vector. Pretrained static and contextual word representations can provide valuable prior knowledge for constructing relational representations (Soares et al., 2019; Elsahar et al., 2017). Peters et al. (2018) showed that different neural architectures (e.g., LSTM, CNN, and Transformers) can hierarchically structure linguistic information that varies with network depth. Recently, many studies (Jawahar et al., 2019; Clark et al., 2019; Goldberg, 2019) have shown that such hierarchy also exists in pretraining models like BERT. These results suggest that high-dimensional embeddings, independent of model architecture, learn much about the structure of language. Directly clustering on these high-dimensional embeddings should hardly produce
+
+
+Figure 2: Overview of our RoCORE method. In the first step, we encode both the labeled and unlabeled instances into entity pair representations. Then the entity pair representations are transformed to relation-oriented representations by gathering towards their relational centroids in the second step. Finally, based on the pseudo labels generated by clustering on the unlabeled data, we optimize the entity pair representations and the classifier by minimizing a joint objective function to reduce the clustering bias on the predefined classes. The above three steps are performed iteratively to gradually improve model performance on novel relations.
+
+ideal clusters in our desired way, which motivates us to extend current unsupervised clustering-based RE methods to learn the representations tailored for clustering relational data.
+
+# 3 Approach
+
+In this work, we propose a relation-oriented clustering method, which takes advantage of the relational information in the existing labeled data to enable the model to learn to cluster relational data. In order to reduce the clustering bias on the predefined classes, we iteratively train the entity pair representations by optimizing a joint objective function on the labeled and unlabeled subsets of the data, improving both the supervised classification of the labeled data and the clustering of the unlabeled data. The proposed method is shown in Figure 2.
+
+Specifically, given an unlabeled dataset $\mathcal{D}^u = \{s_i^u\}_{i=1,\dots,M}$ of relational instances $s_i^u$ , our goal is to automatically cluster the relational instances into a number of classes $C^u$ , which we assume to be known a priori. To enable the model to learn to cluster data, we incorporate a second labeled dataset of pre-defined relations $\mathcal{D}^l = \{(s_i^\ell, y_i^\ell)\}_{i=1,\dots,N}$ where $y_i^\ell \in \{1,\dots,C^\ell\}$ is the relational label for instance $s_i^\ell$ .
+
+# 3.1 Method Overview
+
+We approach the problem by learning a relation-oriented representation, from which the derived clusters can be explicitly aligned with the desired relational semantic classes. As illustrated in Figure 2, we learn the representation and optimize the model by performing three iterative steps:
+
+(1) First, we encode the relation instances in $\mathcal{D}^{\ell}$ and $\mathcal{D}^u$ using the entity pair encoder, implemented as pretrained BERT (Devlin et al., 2018), which takes relation instances $\{s_i^\ell\}_{i=1,\dots,N}$ and $\{s_j^u\}_{j=1,\dots,M}$ as input and outputs relation representations $h_i^\ell$, $h_j^u$. However, the high-dimensional $h$ can encode a mixture of various aspects of linguistic features, and the clusters derived from $h$ cannot explicitly align with the desired relational classes. (2) To make the distance between the representations accurately reflect the relational semantic similarity, the obtained $h_i^\ell$ are transformed to low-dimensional relation-oriented representations $h_i^{\ell'}$ by a non-linear mapping $g$. Under the supervision of the labels $y_i^\ell$ in $\mathcal{D}^\ell$, $g$ is optimized by gathering the $h_i^{\ell'}$ towards their relational centroids to form a cluster structure; we then obtain $h_j^{u'}$ from the unlabeled data using the optimized $g$ and generate the pseudo labels $\hat{y}^u$ by clustering on $h_j^{u'}$. (3) Because using labeled data to guide the $h'$ towards their relational centroids will produce
+
+clustering bias on pre-defined relations, it is difficult to directly generate high-quality pseudo labels. To reduce the negative effect of errors in the pseudo labels, we optimize the classifier and the entity pair representations by minimizing a joint objective function, containing terms for both pre-defined and novel relations, using the given labels $y^{\ell}$ and the generated pseudo labels $\hat{y}^{u}$ respectively. Based on the refined entity pair representation $h$, which encodes more contextual relational information, the above three steps are performed iteratively to gradually improve the quality of the pseudo labels $\hat{y}^{u}$ and the model performance.
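The iterative three-step procedure can be summarized in a training-loop sketch (a schematic outline, not the released implementation: `encode`, `transform`, `kmeans`, and `joint_update` are placeholders for the entity pair encoder $f$, the mapping $g$, clustering, and the joint optimization on labeled plus pseudo-labeled data):

```python
def rocore_training_loop(labeled, unlabeled, num_iters,
                         encode, transform, kmeans, joint_update):
    """Schematic of RoCORE's loop: encode -> cluster -> joint update."""
    history = []
    for _ in range(num_iters):
        h_l = [encode(s) for s, _ in labeled]          # step 1: entity pair reps
        h_u = [encode(s) for s in unlabeled]
        z_u = [transform(h) for h in h_u]              # step 2: relation-oriented reps
        pseudo = kmeans(z_u)                           # pseudo labels on unlabeled data
        joint_update(labeled, list(zip(unlabeled, pseudo)))  # step 3: joint objective
        history.append(pseudo)
    return history
```

With stubbed-out components, the loop simply re-clusters the unlabeled data each iteration; in the full method, refining the representations changes the clustering across iterations.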
+
+# 3.2 Entity Pair Encoder
+
+Given a relation instance $s_i = (x_i, h_i, t_i)$, which consists of a sentence $x_i = \{x_1, x_2, \dots, x_n\}$ and two entity spans $h_i = (s_h, e_h)$, $t_i = (s_t, e_t)$ marking the positions of the entity pair, the entity pair encoder $f$ aims to map the relation instance $s_i$ to a fixed-length embedding $h_i = f(s_i) \in \mathbb{R}^d$ that encodes the contextual information in $s_i$. We adopt BERT (Devlin et al., 2018) as the implementation of our encoder $f$ due to its strong performance on extracting contextual information. Formally:
+
+$$
+\boldsymbol{h}_1^r, \dots, \boldsymbol{h}_n^r = \operatorname{BERT}^r\left(x_1, \dots, x_n\right) \tag{1}
+$$
+
+$$
+\boldsymbol{h}_{ent} = \operatorname{MAXPOOL}\left(\boldsymbol{h}_s^r, \dots, \boldsymbol{h}_e^r\right) \tag{2}
+$$
+
+$$
+\boldsymbol{h}_i = \boldsymbol{h}_{\text{head}} \oplus \boldsymbol{h}_{\text{tail}}, \tag{3}
+$$
+
+where $r$ is a hyperparameter that denotes the output layer of BERT, and $s$ and $e$ represent the start and end positions of the corresponding entity, respectively. $\oplus$ denotes the concatenation operator. This structure of entity pair representation encoder has been widely used in previous RE methods (Wang et al., 2021; Hu et al., 2020).
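Given per-token embeddings from layer $r$, the pooling and concatenation in Eqs. (2)-(3) reduce to a span-wise max followed by concatenation. A minimal NumPy sketch (the token embedding array passed in stands in for BERT's layer-$r$ outputs):

```python
import numpy as np

def entity_pair_rep(token_embs: np.ndarray, head_span, tail_span) -> np.ndarray:
    """Max-pool each entity span over its tokens, then concatenate head and tail.

    token_embs: (n, d) array of layer-r token embeddings (BERT in the paper).
    head_span / tail_span: (start, end) token indices, end inclusive.
    """
    def span_maxpool(span):
        s, e = span
        return token_embs[s:e + 1].max(axis=0)  # element-wise max over the span
    h_head = span_maxpool(head_span)
    h_tail = span_maxpool(tail_span)
    return np.concatenate([h_head, h_tail])     # h_i = h_head ⊕ h_tail, shape (2d,)
```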
+
+# 3.3 Relation-Oriented Clustering Module
+
+In order to make the distance between representations accurately reflect relational semantic similarity, the obtained $\{h_i^\ell\}_{i=1,\dots,N}$ are transformed to low-dimensional relation-oriented representations $h_i^{\ell'}$ by a non-linear mapping $g(\cdot): \mathbb{R}^d \to \mathbb{R}^m$. Under the supervision of the labels $y_i^\ell$ in $\mathcal{D}^\ell$, $g$ is optimized by gathering the $h_i^{\ell'}$ towards their relational centroids as follows:
+
+$$
+\mathcal{L}_{\text{center}} = \frac{1}{2N} \sum_{i=1}^{N} \left\| \boldsymbol{h}_i^{\ell\prime} - \boldsymbol{c}_{y_i} \right\|_2^2 \tag{4}
+$$
+
+$$
+\boldsymbol{c}_r = \frac{1}{|\mathcal{D}_r|} \sum_{i \in \mathcal{D}_r} \boldsymbol{h}_i^{\ell\prime}, \tag{5}
+$$
+
+where $c_r$ denotes the centroid of relation $r$. The center loss $\mathcal{L}_{center}$ seems reasonable, but it is problematic: a globally optimal solution minimizing $\mathcal{L}_{center}$ is $g(h_i) = 0$, which is far from being desired. This motivates us to incorporate a reconstruction term to prevent the semantic space from collapsing. Specifically, a decoding network $d(\cdot)$ is used to map the representation $h_i'$ back to the original representation $h_i$. Thus, we can derive the following loss function:
+
+$$
+\mathcal{L}_C = \frac{1}{2N} \sum_{i=1}^{N} \ell\left(\boldsymbol{d}\left(\boldsymbol{h}_i^{\prime}\right), \boldsymbol{h}_i\right) + \lambda \mathcal{L}_{\text{center}}, \tag{6}
+$$
+
+where both the encoder $g(h_{i})$ and decoder $d(h_{i}^{\prime})$ are implemented as DNNs. The function $\ell(\cdot, \cdot): \mathbb{R}^{d} \times \mathbb{R}^{d} \to \mathbb{R}$ is the least-squares loss $\ell(x, y) = \| x - y \|_{2}^{2}$ that measures the reconstruction error; other choices such as the $\ell_{1}$-norm can also be considered. $\lambda$ is a hyper-parameter that balances the reconstruction error against the center loss.
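Equations (4)-(6) can be evaluated directly on a batch of transformed representations. A NumPy sketch (the DNN encoder/decoder are abstracted away: the caller supplies the original representations $h$, the transformed $h'$, and the reconstructions $d(h')$; $\lambda$ is a free hyper-parameter):

```python
import numpy as np

def center_loss(h_prime: np.ndarray, labels: np.ndarray) -> float:
    """Eq. (4)-(5): mean squared distance of each h'_i to its class centroid."""
    total = 0.0
    for r in np.unique(labels):
        members = h_prime[labels == r]
        c_r = members.mean(axis=0)            # Eq. (5): centroid of relation r
        total += ((members - c_r) ** 2).sum()
    return total / (2 * len(h_prime))         # Eq. (4)

def joint_loss(h, h_prime, h_recon, labels, lam=1.0) -> float:
    """Eq. (6): reconstruction error plus weighted center loss."""
    recon = ((h_recon - h) ** 2).sum(axis=1).mean() / 2.0
    return recon + lam * center_loss(h_prime, labels)
```

Note that collapsing all $h'$ to zero drives `center_loss` to 0 but inflates the reconstruction term, which is exactly the degenerate solution the decoder is meant to rule out.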
+
+Finally, we obtain $\{h_j^{u'}\}_{j = 1,\dots ,M}$ using the optimized $\pmb{g}$ and generate pseudo labels $\hat{y}^u$ using the k-means algorithm as follows:
+
+$$
+\hat{y}^u = \text{k-means}\left(\boldsymbol{h}^{u\prime}\right), \tag{7}
+$$
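Pseudo-label generation in Eq. (7) is plain k-means on the transformed representations; a compact Lloyd's-algorithm sketch in NumPy (the fixed seed and iteration cap are illustrative choices, not from the paper):

```python
import numpy as np

def kmeans_labels(x: np.ndarray, k: int, iters: int = 50, seed: int = 0) -> np.ndarray:
    """Assign each row of x to one of k clusters (Lloyd's algorithm)."""
    rng = np.random.default_rng(seed)
    centroids = x[rng.choice(len(x), size=k, replace=False)]
    labels = np.zeros(len(x), dtype=int)
    for _ in range(iters):
        # squared distance of every point to every centroid, shape (n, k)
        d = ((x[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():           # leave empty clusters unchanged
                centroids[j] = x[labels == j].mean(axis=0)
    return labels
```

In practice a library implementation (e.g. scikit-learn's `KMeans`) would be used; the point is only that the cluster index of each unlabeled instance becomes its pseudo label.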
+
+# 3.4 Relation Classification Module
+
+Based on the pseudo labels $\hat{y}^u$ generated by clustering, we can train the classifier and refine the entity pair representation $h$ to encode more contextual relational information. Since it is difficult to keep the order of clusters consistent across multiple clusterings, instead of using the standard cross-entropy loss, we propose to use pairwise similarities for novel relation learning.
+
+$$
+q _ {i j} = \mathbb {1} \left\{\hat {y} _ {i} ^ {u} = \hat {y} _ {j} ^ {u} \right\}, \tag {8}
+$$
+
+where the symbol $q_{ij}$ denotes whether $s_i^u$ and $s_j^u$ belong to the same cluster. If a pair is from the same cluster, the classifier $\pmb{\eta}^{u}:\mathcal{R}^{d}\to \mathcal{R}^{C^u}$ should output similar distributions for the two instances, and vice versa. Specifically, we use the pairwise KL-divergence to evaluate the distance between two relation instances. Given a pair of instances $s_i^u,s_j^u$ , their corresponding output distributions are defined as $\mathcal{P} = \pmb{\eta}^{u}(\pmb {f}(s_{i}^{u}))$ and $\mathcal{Q} = \pmb{\eta}^{u}(\pmb {f}(s_{j}^{u}))$ . For a pair from the same cluster, the cost is:
+
+$$
+\mathcal {L} ^ {+} \left(s _ {i} ^ {u}, s _ {j} ^ {u}\right) = \mathcal {D} _ {K L} \left(\mathcal {P} ^ {*} \mid \mid \mathcal {Q}\right) + \mathcal {D} _ {K L} \left(\mathcal {Q} ^ {*} \mid \mid \mathcal {P}\right) \tag {9}
+$$
+
+$$
+\mathcal{D}_{KL}\left(\mathcal{P}^{*} \,\|\, \mathcal{Q}\right) = \sum_{c=1}^{C^{u}} p_c \log \frac{p_c}{q_c}, \tag{10}
+$$
+
+where $\mathcal{P}^*$ denotes that $\mathcal{P}$ is treated as a constant, so each KL-divergence term $\mathcal{D}_{KL}(\mathcal{P}^{*}\|\mathcal{Q})$ is a unary function of $\mathcal{Q}$ whose gradient is simply $\partial \mathcal{D}_{KL}(\mathcal{P}^{*}\|\mathcal{Q}) / \partial \mathcal{Q}$ .
+
+If $s_i^u, s_j^u$ come from different clusters, their output distributions are expected to differ, which can be encoded with a hinge loss:
+
+$$
+\mathcal{L}^{-}\left(s_i^u, s_j^u\right) = L_h\left(\mathcal{D}_{KL}\left(\mathcal{P}^{*} \,\|\, \mathcal{Q}\right), \sigma\right) + L_h\left(\mathcal{D}_{KL}\left(\mathcal{Q}^{*} \,\|\, \mathcal{P}\right), \sigma\right), \tag{11}
+$$
+
+$$
+L_h(e, \sigma) = \max(0, \sigma - e), \tag{12}
+$$
+
+and the total loss can be defined as a contrastive loss:
+
+$$
+\mathcal{L}_{BCE}\left(s_i^u, s_j^u\right) = q_{ij}\, \mathcal{L}^{+}\left(s_i^u, s_j^u\right) + \left(1 - q_{ij}\right) \mathcal{L}^{-}\left(s_i^u, s_j^u\right). \tag{13}
+$$
+
+Note that $\mathcal{L}_{BCE}$ is a symmetric loss w.r.t. $s_i^u,s_j^u$ since $\mathcal{P}$ and $\mathcal{Q}$ are alternatively assumed to be constant in $\mathcal{L}^{+}$ and $\mathcal{L}^{-}$ . Finally, we get the prediction for a relation instance $s_i^u$ as follows:
+
+$$
+\hat {y} _ {i} ^ {u} = \arg \max _ {y} \left[ \boldsymbol {\eta} ^ {u} \left(\boldsymbol {f} \left(s _ {i} ^ {u}\right)\right) \right] _ {y} \tag {14}
+$$
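To make the pairwise construction concrete, here is a minimal NumPy sketch of Equations 9-13 (an illustration under the stated definitions, not the authors' implementation). In an autodiff framework the starred operand $\mathcal{P}^*$ or $\mathcal{Q}^*$ would additionally be detached from the computation graph; here both are plain arrays:

```python
import numpy as np

def kl(p, q, eps=1e-12):
    # D_KL(P* || Q) of Eq. 10; p plays the role of the constant P*.
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def pairwise_loss(p, q, same_cluster, sigma=2.0):
    # Eq. 13: symmetric KL cost for same-cluster pairs (Eq. 9),
    # hinge L_h(e, sigma) = max(0, sigma - e) otherwise (Eqs. 11-12).
    if same_cluster:
        return kl(p, q) + kl(q, p)
    return max(0.0, sigma - kl(p, q)) + max(0.0, sigma - kl(q, p))
```

With $\sigma = 2$ (the value in Table 1), two identical distributions assigned to different clusters pay the maximal penalty $2\sigma$, while identical distributions in the same cluster cost nothing.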
+
+# 3.5 Training Methods
+
+# 3.5.1 Iterative Joint Training
+
+Because using labeled data to guide $h$ towards its relational centroid introduces a clustering bias towards pre-defined relations, it is difficult to directly generate high-quality pseudo labels $\hat{y}^u$ for novel relations. To reduce the negative effect of errors in the pseudo labels, we incorporate a classifier $\eta^\ell : \mathcal{R}^d \to \mathcal{R}^{C^\ell}$ for pre-defined relations and refine $h$ by minimizing a joint objective containing terms for both pre-defined and novel relations, using the given labels $y^\ell$ and the generated pseudo labels $\hat{y}^u$ , respectively:
+
+$$
+\mathcal {L} _ {C E} = - \frac {1}{N} \sum_ {i = 1} ^ {N} \log \boldsymbol {\eta} _ {y _ {i}} ^ {\ell} (\boldsymbol {h} _ {i}) \tag {15}
+$$
+
+$$
+\mathcal {L} _ {C L S} = \mathcal {L} _ {C E} + \mathcal {L} _ {B C E}. \tag {16}
+$$
+
+The refined entity pair representation $h$ encodes more contextual relation information, which in turn promotes clustering optimization and yields pseudo labels $\hat{y}^u$ with higher accuracy. We refine the representation $h$ and optimize the clustering in an iterative manner to gradually improve the quality of the pseudo labels and the model performance. This iterative procedure is detailed in Algorithm 1.
+
+# Algorithm 1: The RoCORE Method
+
+Input: novel relation dataset $\mathcal{D}^u = \{s_j^u\}$ , pre-defined relation dataset $\mathcal{D}^\ell = \{(s_i^\ell, y_i^\ell)\}$ , model parameters $\Theta$ , $\Phi$ , $\Psi$ for the entity pair encoder, the relation-oriented clustering module, and the relation classifiers, respectively, and learning rate $\eta$ .
+
+1 for epoch $\leftarrow 1$ to $L$ do
+2 pre-train the clustering network by minimizing the reconstruction loss: $\Phi = \Phi - \eta \nabla_{\Phi}\, \ell(\boldsymbol{d}(\boldsymbol{h}_i^{\prime}), \boldsymbol{h}_i)$ ;
+3 end
+4 repeat
+5 generate pseudo labels $\hat{y}^u$ by Equation 7;
+6 refine the entity pair representation:
+7 $\Theta = \Theta -\eta \nabla_{\Theta}\mathcal{L}_{CLS}$ ;
+8 $\Psi = \Psi -\eta \nabla_{\Psi}\mathcal{L}_{CLS}$ ;
+9 optimize the clustering: $\Phi = \Phi -\eta \nabla_{\Phi}\mathcal{L}_C$ ;
+10 until convergence;
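The outer loop of the algorithm alternates pseudo-label generation, representation refinement, and clustering optimization. The following toy skeleton shows only this control flow with a simple convergence check; the three callables are stand-ins for the actual gradient-update steps, not real training code:

```python
def rocore_loop(generate_pseudo_labels, refine_representation,
                optimize_clustering, max_rounds=100, tol=1e-6):
    # Alternate: pseudo labels (Eq. 7) -> refine Theta, Psi on L_CLS
    # -> refine Phi on L_C, until the clustering loss stabilizes.
    prev_loss = None
    for _ in range(max_rounds):
        pseudo = generate_pseudo_labels()
        refine_representation(pseudo)
        loss = optimize_clustering()
        if prev_loss is not None and abs(prev_loss - loss) < tol:
            break
        prev_loss = loss
    return loss
```

The key point is the mutual dependence: better representations produce better clusters, whose pseudo labels in turn supervise the next round of representation refinement.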
+
+# 3.5.2 Incremental Learning Scheme
+
+In real-world settings, when facing a new sentence, we often do not know whether it expresses a pre-defined relation or a novel one. In this work, we explore incremental learning of novel relations to enable $\pmb{\eta}^{\ell}$ to discriminate both pre-defined and novel relations. Under the incremental learning setting, we extend the classifier $\pmb{\eta}^{\ell}$ by $C^u$ novel relation types, so that $\pmb{\eta}^{\ell}: \mathcal{R}^{d} \to \mathcal{R}^{C^{\ell} + C^{u}}$ . The model is then trained using the following cross-entropy loss instead of Equation 15:
+
+$$
+\mathcal{L}_{CE} = - \frac{1}{N} \sum_{i=1}^{N} \log \boldsymbol{\eta}_{y_i}^{\ell}\left(\boldsymbol{h}_i\right) - \frac{\mu(t)}{M} \sum_{j=1}^{M} \log \boldsymbol{\eta}_{\hat{y}_j}^{\ell}\left(\boldsymbol{h}_j\right), \tag{17}
+$$
+
+where $\hat{y}_j$ is obtained using Equation 14 and the coefficient $\mu(t)$ balances the cross-entropy losses of pre-defined and novel relations. We implement it as a ramp-up function $\mu(t) = \mu_0 e^{-5(1 - \frac{t}{T})^2}$ , where $t$ is the current epoch, $T$ is the ramp-up length, and $\mu_0 \in \mathbb{R}^+$ is a coefficient.
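The ramp-up schedule is easy to reproduce. One caveat: holding $\mu(t)$ at $\mu_0$ for $t \ge T$ is our assumption (a common convention for ramp-up schedules); the paper only specifies the formula for the ramp-up phase itself:

```python
import math

def mu(t, T=10, mu0=1.0):
    # mu(t) = mu0 * exp(-5 * (1 - t/T)^2) during ramp-up;
    # held at mu0 afterwards (assumed clamping, not stated in the paper).
    if t >= T:
        return mu0
    return mu0 * math.exp(-5.0 * (1.0 - t / T) ** 2)
```

At $t = 0$ the novel-relation term is almost switched off ($\mu(0) = \mu_0 e^{-5} \approx 0.0067$), so early training is dominated by the reliable pre-defined labels; the weight then rises smoothly to $\mu_0$ at $t = T$.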
+
+# 4 Experimental Setup
+
+In this section, we describe the datasets for training and evaluating the proposed method. We also detail the baseline models for comparison.
+
+Finally, we clarify the implementation details and hyperparameter configuration of our method.
+
+# 4.1 Datasets
+
+We conduct experiments on two relation extraction datasets.
+
+FewRel. Few-Shot Relation Classification Dataset (Han et al., 2018). FewRel is a human-annotated dataset containing 80 types of relations, each with 700 instances. We follow the setting in (Wu et al., 2019) and use the original train set of FewRel, which contains 64 relations, as the labeled set with pre-defined relations, and the original validation set of FewRel, which contains 16 new relations, as the unlabeled set with novel relations to extract. 1,600 instances are randomly selected from the unlabeled set as the test set. The rest of the labeled and unlabeled instances form the train set.
+
+TACRED. The TAC Relation Extraction Dataset (Zhang et al., 2017). TACRED is a human-annotated large-scale relation extraction dataset that covers 41 relation types. We remove the instances labeled as *no_relation* and use the remaining 21,773 instances for training and evaluation. Similar to the FewRel setting, we select relation types 0-30 as the labeled set with pre-defined relations and relation types 31-40 as the unlabeled set with novel relations. We randomly select $15\%$ of the instances from the unlabeled set as the test set. The rest of the labeled and unlabeled instances form the train set.
+
+# 4.2 Compared Methods
+
+To evaluate the effectiveness of our method, we select the following SOTA OpenRE models for comparison. Note that the first four methods are unsupervised, while RSN and RSN-BERT leverage labeled data of pre-defined relations.
+
+HAC with Re-weighted Word Embeddings (RW-HAC) (Elsahar et al., 2017). RW-HAC is a feature clustering method for OpenRE. The model constructs relational features based on weighted word embeddings as well as entity types.
+
+Discrete-state Variational Autoencoder (VAE) (Marcheggiani and Titov, 2016). VAE is a reconstruction-based method for OpenRE. The model is optimized by reconstructing entities from pairing entities and predicted relations.
+
+Entity Based URE (Etype+) (Tran et al., 2020). Etype+ is a simple and effective method relying only on entity types. The same link predictor as in (Marcheggiani and Titov, 2016) is employed, and two additional regularisers are used.
+
+| Hyper-parameter | Value |
+| --- | --- |
+| optimizer | Adam |
+| learning rate | 1e-4 |
+| batch size | 100 |
+| pre-training epochs $L$ | 10 |
+| BCE loss coefficient $\sigma$ | 2 |
+| center loss coefficient $\lambda$ for FewRel | 0.005 |
+| center loss coefficient $\lambda$ for TACRED | 0.001 |
+| ramp-up coefficient $\mu_0$ | 1.0 |
+| ramp-up length $T$ | 10 |
+
+Table 1: Hyper-parameter settings.
+
+Self-supervised Feature Learning for OpenRE (SelfORE) (Hu et al., 2020). SelfORE exploits weak, self-supervised signals by leveraging a large pretrained language model for adaptive clustering on contextualized relational features.
+
+Relational Siamese Network (RSN) (Wu et al., 2019). This method learns similarity metrics of relations from labeled data of pre-defined relations, and then transfers the relational knowledge to identify novel relations in unlabeled data.
+
+RSN with BERT Embedding (RSN-BERT). A variant of RSN in which the static word vectors are replaced by BERT embeddings for a fair comparison.
+
+# 4.3 Implementation Details
+
+Our entity pair encoder is implemented with bert-base-uncased, which consists of 12 layers; we use layer 8 as the output layer for best performance. Note that we only fine-tune the parameters of the output layer during iterative training to avoid overfitting. The non-linear mappings $g(\cdot)$ and $d(\cdot)$ are both implemented as DNNs with ReLU activations, specifically $\mathbb{R}^d\text{-}512\text{-}512\text{-}256$ for $g(\cdot)$ and $256\text{-}512\text{-}512\text{-}\mathbb{R}^d$ for $d(\cdot)$ . All experiments are conducted on a GeForce GTX 1080Ti with 11GB memory, and Table 1 shows our best hyper-parameter settings.
+
+# 5 Results and Analysis
+
+In this section, we present the experimental results of our model on two real-world datasets to demonstrate the effectiveness of our method. We also provide additional experimental results on hyperparameter analysis and relation representation visualization in appendix A and B.
+
+| Dataset | Method | $B^3$ Prec. | $B^3$ Rec. | $B^3$ F1 | V-meas. Hom. | V-meas. Comp. | V-meas. F1 | ARI |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| FewRel | VAE (Marcheggiani and Titov, 2016) | 0.309 | 0.446 | 0.365 | 0.448 | 0.500 | 0.473 | 0.291 |
+| FewRel | RW-HAC (Elsahar et al., 2017) | 0.256 | 0.492 | 0.337 | 0.391 | 0.485 | 0.433 | 0.250 |
+| FewRel | ETYPE+ (Tran et al., 2020) | 0.238 | 0.485 | 0.319 | 0.364 | 0.463 | 0.408 | 0.249 |
+| FewRel | SelfORE (Hu et al., 2020) | 0.672 | 0.685 | 0.678 | 0.779 | 0.788 | 0.783 | 0.647 |
+| FewRel | RSN (Wu et al., 2019) | 0.486 | 0.742 | 0.589 | 0.644 | 0.787 | 0.708 | 0.453 |
+| FewRel | RSN-BERT | 0.585 | 0.899 | 0.709 | 0.696 | 0.889 | 0.781 | 0.532 |
+| FewRel | RoCORE | $0.752_{17}$ | $0.846_{09}$ | $0.796_{11}$ | $0.838_{10}$ | $0.883_{06}$ | $0.860_{07}$ | $0.709_{23}$ |
+| TACRED | VAE (Marcheggiani and Titov, 2016) | 0.247 | 0.564 | 0.343 | 0.208 | 0.362 | 0.264 | 0.159 |
+| TACRED | RW-HAC (Elsahar et al., 2017) | 0.426 | 0.633 | 0.509 | 0.469 | 0.597 | 0.526 | 0.281 |
+| TACRED | ETYPE+ (Tran et al., 2020) | 0.302 | 0.803 | 0.439 | 0.260 | 0.607 | 0.364 | 0.143 |
+| TACRED | SelfORE (Hu et al., 2020) | 0.576 | 0.510 | 0.541 | 0.630 | 0.608 | 0.619 | 0.447 |
+| TACRED | RSN (Wu et al., 2019) | 0.628 | 0.634 | 0.631 | 0.624 | 0.663 | 0.643 | 0.459 |
+| TACRED | RSN-BERT | 0.795 | 0.878 | 0.834 | 0.849 | 0.870 | 0.859 | 0.756 |
+| TACRED | RoCORE | $0.871_{42}$ | $0.849_{37}$ | $0.860_{35}$ | $0.895_{32}$ | $0.881_{20}$ | $0.888_{24}$ | $0.812_{64}$ |
+
+Table 2: Main results on the two relation extraction datasets. The subscript represents the corresponding standard deviation (e.g., $0.796_{11}$ indicates $0.796 \pm 0.011$ ). Experimental results show that our method reduces the error rate by $29.2\%$ ( $0.709 \rightarrow 0.796$ ) and $15.7\%$ ( $0.834 \rightarrow 0.860$ ) on the two datasets, respectively.
+
+# 5.1 Main Results
+
+Table 2 reports model performance on the FewRel and TACRED datasets, showing that the proposed method achieves state-of-the-art results on the OpenRE task. Benefiting from the valuable information in the labeled instances of pre-defined relations, RoCORE effectively learns a relation-oriented representation from which the derived clusters explicitly align with relational semantic classes, outperforming previous clustering-based baselines such as SelfORE by a large margin. In addition, although RSN and its variant RSN-BERT also leverage the relational information in labeled data, their similarity-metric learning and clustering are mutually independent. In our method, relation representation learning and cluster optimization depend on each other, so the learned representations are tailored for clustering. As a result, our method outperforms RSN and RSN-BERT on both datasets.
+
+# 5.2 Ablation Study
+
+To study the contribution of each component of the proposed method, we conduct ablation experiments on the two datasets and display the results in Table 3. The results show that model performance degrades if $\mathcal{L}_{center}$ is removed, indicating that the supervision signals from pre-defined relations provide valuable guidance for learning relation-oriented representations. It is worth noting that the reconstruction term plays an important role in the clustering module: without it, the semantic space collapses and performance is seriously hurt. In addition, joint optimization on both the labeled and unlabeled data is also very important. The initial pseudo labels for novel relations are not accurate due to the unwanted clustering bias towards pre-defined relations. Without $\mathcal{L}_{CE}$ , errors in the pseudo labels lead the refinement of the entity pair representation in a wrong direction, which hurts model performance.
+
+| Dataset | Method | Prec. | Rec. | F1 |
+| --- | --- | --- | --- | --- |
+| FewRel | w/o center loss | $0.726_{32}$ | $0.774_{31}$ | $0.749_{31}$ |
+| FewRel | w/o reconstruction | $0.512_{38}$ | $0.573_{17}$ | $0.540_{25}$ |
+| FewRel | w/o CE | $0.662_{67}$ | $0.787_{54}$ | $0.719_{47}$ |
+| FewRel | RoCORE | $0.752_{17}$ | $0.846_{09}$ | $0.796_{11}$ |
+| TACRED | w/o center loss | $0.818_{57}$ | $0.842_{37}$ | $0.830_{41}$ |
+| TACRED | w/o reconstruction | $0.549_{35}$ | $0.483_{30}$ | $0.514_{31}$ |
+| TACRED | w/o CE | $0.706_{45}$ | $0.776_{51}$ | $0.739_{47}$ |
+| TACRED | RoCORE | $0.871_{42}$ | $0.849_{37}$ | $0.860_{35}$ |
+
+Table 3: Ablation study of our method. This table only lists results for the $B^3$ metric; for other metrics, please refer to Table 5 in Appendix C.
+
+# 5.3 The Influence of Pre-defined Relation Number on Performance
+
+In this subsection, we conduct experiments on the two datasets to explore the influence of the number of pre-defined relations on the performance of our method. For the FewRel dataset, following the setting in (Wu et al., 2019), we vary the number of pre-defined relations from 40 to 64 while fixing the total number of labeled instances at 25,000.
+
+
+Figure 3: Clustering results with different numbers of pre-defined training relations.
+
+
+Figure 4: Model performance with different amounts of labeled data.
+
+| Task | Method | Prec. | Rec. | F1 |
+| --- | --- | --- | --- | --- |
+| F → T | RSN | 0.349 | 0.590 | 0.439 |
+| F → T | RSN-BERT | 0.337 | 0.866 | 0.486 |
+| F → T | RoCORE | $0.621_{28}$ | $0.602_{51}$ | $0.611_{34}$ |
+| T → F | RSN | 0.225 | 0.529 | 0.316 |
+| T → F | RSN-BERT | 0.261 | 0.861 | 0.400 |
+| T → F | RoCORE | $0.687_{36}$ | $0.766_{46}$ | $0.724_{26}$ |
+
+Table 4: Results on the two constructed cross-domain tasks. F denotes FewRel, from the encyclopedia domain; T denotes TACRED, from the news and web domain. This table only lists results for the $B^3$ metric; for other metrics, please refer to Table 6 in Appendix C.
+
+Similarly, for the TACRED dataset, the corresponding settings are 18, 31, and 12,000, respectively.
+
+From Figure 3 we can see the following: (1) Increasing the number of pre-defined relations does improve the generalization of our method to novel relations; the models trained on 64/31 relations perform slightly better than those trained on 40/18 relations on the FewRel/TACRED dataset. (2) Our method consistently performs better than RSN and RSN-BERT as the number of pre-defined relations varies, which indicates the effectiveness of our method.
+
+# 5.4 Cross Domain Analysis
+
+In real-world settings, pre-defined relations and the novel relations of interest usually come from different domains. To study model performance in cross-domain settings, we conduct experiments on two cross-domain tasks, i.e., FewRel to TACRED and TACRED to FewRel. The pre-defined relations and their labeled instances come from the source-domain training dataset, and we evaluate performance on the target-domain test dataset.
+
+Table 4 shows the experimental results, from which we can observe the following: (1) The change of domain increases the semantic gap between the pre-defined and novel relations; as a result, the performance of models using labeled data of pre-defined relations degrades. (2) Compared with RSN and RSN-BERT, our method shows better generalization to novel relations, which indicates that the proposed iterative joint training effectively reduces the unwanted bias towards source-domain labeled data. (3) In addition, when a model tends to cluster multiple relations into one, an unbalanced precision-recall trade-off (i.e., the high recall and low precision of RSN-BERT) is produced, which is undesirable in real-world applications.
+
+# 5.5 Incremental Learning of Novel Relations
+
+In this subsection, we evaluate the effectiveness of our incremental learning scheme and explore the influence of the amount of labeled data on model performance. We use BERT with a linear softmax classifier as the baseline for comparison. We train the baseline model using the labeled data of both pre-defined and novel relations, following the supervised learning paradigm. For our method, we still use only the labels of pre-defined relations.
+
+From Figure 4 we can observe the following: (1) The performance of the models improves gradually as the amount of labeled data increases. Our method still maintains good performance when labeled data are scarce, indicating that the proposed method is robust to the reduction of labeled data. (2) Our method achieves performance similar to the supervised baseline in two settings, which use $40\%$ of the novel-relation labels on FewRel and $82\%$ on TACRED, respectively. This indicates that we successfully achieve incremental learning of novel relations.
+
+# 6 Conclusions
+
+In this work, we introduce a relation-oriented clustering method that extends current unsupervised clustering-based OpenRE methods. The proposed method leverages the labeled data of pre-defined relations to learn a relation-oriented representation from which the derived clusters explicitly align with relational classes. The iterative joint training method effectively reduces the unwanted bias on labeled data. In addition, the proposed method can be easily extended to incremental learning of novel relations. Experimental results show that our method outperforms SOTA methods for OpenRE.
+
+# Acknowledgements
+
+The authors wish to thank the anonymous reviewers for their helpful comments. This work was partially funded by China National Key R&D Program (No. 2018YFB1005104), National Natural Science Foundation of China (No. 62076069, 61976056), Shanghai Municipal Science and Technology Major Project (No.2021SHZDZX0103).
+
+# References
+
+Antoine Bordes, Nicolas Usunier, Alberto Garcia-Durán, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 2, NIPS'13, page 2787-2795, Red Hook, NY, USA. Curran Associates Inc.
+Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019. What does BERT look at? an analysis of bert's attention. CoRR, abs/1906.04341.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805.
+Hady Elsahar, Elena Demidova, Simon Gottschalk, Christophe Gravier, and Frederique Laforest. 2017. Unsupervised open relation extraction. In *The Semantic Web: ESWC 2017 Satellite Events*, pages 12-16, Cham. Springer International Publishing.
+Oren Etzioni, Michele Banko, Stephen Soderland, and Daniel S. Weld. 2008. Open information extraction from the web. Commun. ACM, 51(12):68-74.
+Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information extraction. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1535-1545, Edinburgh, Scotland, UK. Association for Computational Linguistics.
+Wouter Van Gansbeke, Simon Vandenhende, Stamatios Georgoulis, Marc Proesmans, and Luc Van Gool. 2020. Scan: Learning to classify images without labels.
+Yoav Goldberg. 2019. Assessing bert's syntactic abilities. CoRR, abs/1901.05287.
+
+Xu Han, Hao Zhu, Pengfei Yu, Ziyun Wang, Yuan Yao, Zhiyuan Liu, and Maosong Sun. 2018. FewRel: A large-scale supervised few-shot relation classification dataset with state-of-the-art evaluation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4803-4809, Brussels, Belgium. Association for Computational Linguistics.
+John Hewitt and Christopher D. Manning. 2019. A structural probe for finding syntax in word representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4129-4138, Minneapolis, Minnesota. Association for Computational Linguistics.
+Xuming Hu, Chenwei Zhang, Yusong Xu, Lijie Wen, and Philip S. Yu. 2020. Selfore: Self-supervised relational feature learning for open relation extraction.
+Ganesh Jawahar, Benoit Sagot, and Djamé Seddah. 2019. What does BERT learn about the structure of language? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3651-3657, Florence, Italy. Association for Computational Linguistics.
+Diego Marcheggiani and Ivan Titov. 2016. Discrete-state variational autoencoders for joint discovery and factorization of relations. Transactions of the Association for Computational Linguistics, 4:231-244.
+Matthew Peters, Mark Neumann, Luke Zettlemoyer, and Wen-tau Yih. 2018. Dissecting contextual word embeddings: Architecture and representation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1499-1509, Brussels, Belgium. Association for Computational Linguistics.
+Yusuke Shinyama and Satoshi Sekine. 2006. Preemptive information extraction using unrestricted relation discovery. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference, pages 304-311, New York City, USA. Association for Computational Linguistics.
+Étienne Simon, Vincent Guigue, and Benjamin Piwowarski. 2019. Unsupervised information extraction: Regularizing discriminative approaches with relation distribution losses. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1378-1387, Florence, Italy. Association for Computational Linguistics.
+Livio Baldini Soares, Nicholas FitzGerald, Jeffrey Ling, and Tom Kwiatkowski. 2019. Matching the blanks: Distributional similarity for relation learning. CoRR, abs/1906.03158.
+
+Thy Thy Tran, Phong Le, and Sophia Ananiadou. 2020. Revisiting unsupervised relation extraction. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7498-7505, Online. Association for Computational Linguistics.
+Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. Journal of Machine Learning Research, 9(86):2579-2605.
+Yijun Wang, Changzhi Sun, Yuanbin Wu, Hao Zhou, Lei Li, and Junchi Yan. 2021. ENPAR:enhancing entity and entity pair representations for joint entity relation extraction. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2877-2887, Online. Association for Computational Linguistics.
+Ruidong Wu, Yuan Yao, Xu Han, Ruobing Xie, Zhiyuan Liu, Fen Lin, Leyu Lin, and Maosong Sun. 2019. Open relation extraction: Relational knowledge transfer from supervised data to unsupervised data. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 219-228, Hong Kong, China. Association for Computational Linguistics.
+Eric P. Xing, Andrew Y. Ng, Michael I. Jordan, and Stuart Russell. 2002. Distance metric learning, with application to clustering with side-information. In Proceedings of the 15th International Conference on Neural Information Processing Systems, NIPS'02, page 521-528, Cambridge, MA, USA. MIT Press.
+Chenyan Xiong, Russell Power, and Jamie Callan. 2017. Explicit semantic ranking for academic search via knowledge graph embedding. In Proceedings of the 26th International Conference on World Wide Web, WWW '17, page 1271-1279, Republic and Canton of Geneva, CHE. International World Wide Web Conferences Steering Committee.
+Limin Yao, Aria Haghighi, Sebastian Riedel, and Andrew McCallum. 2011. Structured relation discovery using generative models. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1456-1466, Edinburgh, Scotland, UK. Association for Computational Linguistics.
+Alexander Yates, Michele Banko, Matthew Broadhead, Michael Cafarella, Oren Etzioni, and Stephen Soderland. 2007. TextRunner: Open information extraction on the web. In Proceedings of Human Language Technologies: The Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT), pages 25-26, Rochester, New York, USA. Association for Computational Linguistics.
+Mo Yu, Wenpeng Yin, Kazi Saidul Hasan, Cicero dos Santos, Bing Xiang, and Bowen Zhou. 2017. Improved neural relation detection for knowledge base question answering. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 571-581, Vancouver, Canada. Association for Computational Linguistics.
+
+Yuhao Zhang, Victor Zhong, Danqi Chen, Gabor Angeli, and Christopher D. Manning. 2017. Position-aware attention and supervised data improve slot filling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 35-45, Copenhagen, Denmark. Association for Computational Linguistics.
+
+# A Hyperparameter Analysis
+
+Figure 5: Model performance with different $\lambda$ .
+
+From the ablation study, it can be seen that the reconstruction loss and the center loss have a large impact on model performance. $\lambda$ is a key hyperparameter that balances the reconstruction loss against the center loss. In this section, we conduct experiments to study the influence of $\lambda$ on model performance. From Figure 5 we can see that: (1) As $\lambda$ gradually increases from 0, the center loss begins to affect the optimization; the model learns that instances of the same relation should be mapped to nearby positions in the representation space, and performance gradually improves. (2) When $\lambda$ exceeds a certain threshold, further increasing it leads to an unwanted bias towards the pre-defined relations, which degrades model performance.
+
+# B Relation Representation Visualization
+
+To intuitively show how the RoCORE method learns a constantly optimized relation-oriented representation, we visualize the relational representations with t-SNE (van der Maaten and Hinton, 2008); the results are shown in Figure 6.
+
+Figure 6: Visualization of the relation representations after t-SNE dimensionality reduction. The representations are colored with their ground-truth relation labels. The three panels sequentially illustrate the representations at the initial state, after reconstruction pre-training, and after training. All panels visualize the clustering result for 600 instances of 6 randomly selected novel relations on the FewRel test set.
+
+It is apparent that, before training (left), the relational representations are distributed randomly at different locations in the semantic space. After pre-training (middle), the representations are still not tailored to the relations; for example, the instances colored blue and light green may share similar syntactic or surface features, and clustering them directly would lead to poor results. After training (right), the representations are well separated, and their distribution follows the relation types.
+
+# C Detailed Results of Other Experiments
+
+In this section, the detailed results of the ablation experiments and the cross-domain analysis are listed in Table 5 and Table 6, respectively.
+
+| Dataset | Method | $B^3$ Prec. | $B^3$ Rec. | $B^3$ F1 | V-meas. Hom. | V-meas. Comp. | V-meas. F1 | ARI |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| FewRel | w/o center loss | $0.726_{32}$ | $0.774_{31}$ | $0.749_{31}$ | $0.818_{19}$ | $0.842_{18}$ | $0.830_{19}$ | $0.702_{38}$ |
+| FewRel | w/o reconstruction | $0.512_{38}$ | $0.573_{17}$ | $0.540_{25}$ | $0.665_{24}$ | $0.689_{12}$ | $0.676_{16}$ | $0.495_{40}$ |
+| FewRel | w/o CE | $0.662_{67}$ | $0.787_{54}$ | $0.719_{47}$ | $0.772_{41}$ | $0.844_{27}$ | $0.806_{28}$ | $0.617_{68}$ |
+| FewRel | RoCORE | $0.752_{17}$ | $0.846_{09}$ | $0.796_{11}$ | $0.838_{10}$ | $0.883_{06}$ | $0.860_{07}$ | $0.709_{23}$ |
+| TACRED | w/o center loss | $0.818_{57}$ | $0.842_{37}$ | $0.830_{41}$ | $0.855_{41}$ | $0.867_{32}$ | $0.864_{32}$ | $0.783_{69}$ |
+| TACRED | w/o reconstruction | $0.549_{35}$ | $0.483_{30}$ | $0.514_{31}$ | $0.589_{37}$ | $0.570_{28}$ | $0.579_{32}$ | $0.393_{54}$ |
+| TACRED | w/o CE | $0.706_{45}$ | $0.776_{51}$ | $0.739_{47}$ | $0.753_{30}$ | $0.803_{37}$ | $0.777_{32}$ | $0.656_{85}$ |
+| TACRED | RoCORE | $0.871_{42}$ | $0.849_{37}$ | $0.860_{35}$ | $0.895_{32}$ | $0.881_{20}$ | $0.888_{24}$ | $0.812_{64}$ |
+
+Table 5: The detailed results of the ablation study. The subscript represents the corresponding standard deviation (e.g., $0.749_{12}$ indicates $0.749 \pm 0.012$ ).
+
+| Task | Method | $B^3$ Prec. | $B^3$ Rec. | $B^3$ F1 | V-meas. Hom. | V-meas. Comp. | V-meas. F1 | ARI |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| F → T | RSN | 0.349 | 0.590 | 0.439 | 0.387 | 0.533 | 0.448 | 0.279 |
+| F → T | RSN-BERT | 0.337 | 0.866 | 0.486 | 0.400 | 0.777 | 0.528 | 0.352 |
+| F → T | RoCORE | $0.621_{28}$ | $0.602_{51}$ | $0.611_{34}$ | $0.642_{37}$ | $0.666_{32}$ | $0.654_{31}$ | $0.451_{65}$ |
+| T → F | RSN | 0.225 | 0.529 | 0.316 | 0.359 | 0.507 | 0.420 | 0.243 |
+| T → F | RSN-BERT | 0.261 | 0.861 | 0.400 | 0.438 | 0.822 | 0.571 | 0.263 |
+| T → F | RoCORE | $0.687_{36}$ | $0.766_{46}$ | $0.724_{26}$ | $0.796_{22}$ | $0.836_{22}$ | $0.815_{16}$ | $0.658_{43}$ |
+
+Table 6: The detailed results of cross domain analysis. The subscript represents the corresponding standard deviation (e.g., $0.724_{26}$ indicates $0.724 \pm 0.026$ )
\ No newline at end of file
diff --git a/arelationorientedclusteringmethodforopenrelationextraction/images.zip b/arelationorientedclusteringmethodforopenrelationextraction/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..0629536249ff9bb0dfb074abd31a2f36f2551d05
--- /dev/null
+++ b/arelationorientedclusteringmethodforopenrelationextraction/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cf5c6be4cd68ef426ec67365511310c5d6ef81fc8220c0d6ba96a798971664f6
+size 675575
diff --git a/arelationorientedclusteringmethodforopenrelationextraction/layout.json b/arelationorientedclusteringmethodforopenrelationextraction/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..ab5eb70e633eed32ea91b7c1c3b340f1d66a0227
--- /dev/null
+++ b/arelationorientedclusteringmethodforopenrelationextraction/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:358403a42a4818fe66ddaa96b346e15228db6967549a26dad9a2d467126ca296
+size 456679
diff --git a/aretransformersamodernversionofelizaobservationsonfrenchobjectverbagreement/a03ea783-ce65-41bf-8f30-6b4d415dd5fc_content_list.json b/aretransformersamodernversionofelizaobservationsonfrenchobjectverbagreement/a03ea783-ce65-41bf-8f30-6b4d415dd5fc_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..d7c9350ee9da6e6ca88980c9e43f257ce8f849d5
--- /dev/null
+++ b/aretransformersamodernversionofelizaobservationsonfrenchobjectverbagreement/a03ea783-ce65-41bf-8f30-6b4d415dd5fc_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ff640d47fe23001541788389a40a6cdd23cf47770ceda36d1e6a8823ad2eee13
+size 71321
diff --git a/aretransformersamodernversionofelizaobservationsonfrenchobjectverbagreement/a03ea783-ce65-41bf-8f30-6b4d415dd5fc_model.json b/aretransformersamodernversionofelizaobservationsonfrenchobjectverbagreement/a03ea783-ce65-41bf-8f30-6b4d415dd5fc_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..7041fbea165348f3cdaa1e73e0e1eea0e6ab287b
--- /dev/null
+++ b/aretransformersamodernversionofelizaobservationsonfrenchobjectverbagreement/a03ea783-ce65-41bf-8f30-6b4d415dd5fc_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:18abdf58ff4cca791a7bf37fe8af4132100766aa1617bdc0aafe1f29c01fe75e
+size 83829
diff --git a/aretransformersamodernversionofelizaobservationsonfrenchobjectverbagreement/a03ea783-ce65-41bf-8f30-6b4d415dd5fc_origin.pdf b/aretransformersamodernversionofelizaobservationsonfrenchobjectverbagreement/a03ea783-ce65-41bf-8f30-6b4d415dd5fc_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..537b18d469e9f28db6e1bae705ab0f8d2ff5af41
--- /dev/null
+++ b/aretransformersamodernversionofelizaobservationsonfrenchobjectverbagreement/a03ea783-ce65-41bf-8f30-6b4d415dd5fc_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9f70b7bbf0f900b86ed067b4d02ae0f085f8faa1072589f480748fc747ed9add
+size 275684
diff --git a/aretransformersamodernversionofelizaobservationsonfrenchobjectverbagreement/full.md b/aretransformersamodernversionofelizaobservationsonfrenchobjectverbagreement/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..2f163cb7a9d6628b5618ea31453cb65eb4bad058
--- /dev/null
+++ b/aretransformersamodernversionofelizaobservationsonfrenchobjectverbagreement/full.md
@@ -0,0 +1,250 @@
+# Are Transformers a Modern Version of ELIZA? Observations on French Object Verb Agreement
+
+Bingzhi Li and Guillaume Wisniewski and Benoit Crabbé
+
+Université de Paris, LLF, CNRS
+
+75 013 Paris, France
+
+bingzhi.li@etu.u-paris.fr
+
+{guillaume.wisniewski,benoit.crabbe}@u-paris.fr
+
+# Abstract
+
+Many recent works have demonstrated that unsupervised sentence representations of neural networks encode syntactic information by observing that neural language models are able to predict the agreement between a verb and its subject. We take a critical look at this line of research by showing that it is possible to achieve high accuracy on this agreement task with simple surface heuristics, indicating a possible flaw in our assessment of neural networks' syntactic ability. Our fine-grained analyses of results on the long-range French object-verb agreement show that contrary to LSTMs, Transformers are able to capture a non-trivial amount of grammatical structure.
+
+# 1 Introduction
+
+The long-distance agreement task is one of the most popular methods to assess neural networks' (NNs) ability to encode syntactic information: Linzen et al. (2016) showed that LSTMs are able to predict subject-verb agreement in English, initiating a very active line of research. Since then, many studies have generalized this observation to other languages (Gulordava et al., 2018) and other models such as Transformers (Goldberg, 2019; Jawahar et al., 2019), or have identified possible confounding factors that could distort the stated conclusions (Gulordava et al., 2018; Marvin and Linzen, 2018). All of these studies show that NNs are able to learn a 'substantial amount' of syntactic information (Belinkov and Glass, 2019).
+
+In this work, we propose to take an alternative look at these results by studying whether neural networks are able to predict the correct form of a verb because they are able to build an abstract, high-level (maybe hierarchical) sentence representation (Giulianelli et al., 2018; Lakretz et al., 2019) or solely because they capture surface statistical regularities, as suggested by several recent works (Sennhauser and Berwick, 2018; Chaves, 2020; Li and Wisniewski, 2021). Overall, this set of results questions one of the most fundamental assumptions in linguistics (Lakretz et al., 2021), namely that a sentence has a recursive structure (Everaert et al., 2015): while LSTMs with proper parametrization can model context-free patterns (Suzgun et al., 2019), Transformers are essentially feed-forward models relying on a large number of attention heads. Consequently, they are, in theory, not suited to model hierarchical syntactic patterns (Hahn, 2020), and explaining their capacity to accurately predict syntactic agreement patterns remains an open issue.
+
+We shed new light on this issue by identifying simple heuristics (§4) that can be used to correctly predict verbal agreement, pushing further the observation of Kuncoro et al. (2018) that a simple rule can provide highly accurate results on the task. Using our extended set of heuristics, we identify sentences for which predicting the correct verb form requires a more abstract representation of the sentence. By comparing models' performance on these examples, we show that, contrary to LSTMs, Transformers perform consistently well in these critical cases.
+
+# 2 Test Set for French Object Past-Participle Agreement
+
+We focus on the object-verb agreement (i.e., object past-participle agreement) in French: agreement in number and gender occurs between the object and the past participle when the latter is used with the auxiliary avoir (to have) and the object is located before the verb. As shown in Figure 1, this is, for instance, the case for past participles in object relatives. When agreement is required, an -s suffix (resp. -e) has to be added to the past participle for a plural (resp. feminine) object.
+
+To predict the past-participle agreement in object relatives, a model has to identify the object relative pronoun, its antecedent and the auxiliary. It also has to ignore the effect of attractors (nouns with misleading agreement features) occurring between the object and the past participle. Compared to subject-verb agreement, the French object past-participle agreement is more difficult, as the target verb form depends on a noun that is never adjacent to the verb. The auxiliary avoir before the target verb can also act as an attractor.
+
+We restrict ourselves to the number agreement between object and past participle in the case of object relatives in order to (1) design reasonably simple patterns that can easily be extracted automatically from raw texts, (2) extract a sufficiently large number of representative examples and (3) reduce the importance of the anaphoric resolution problem. These restrictions allow us to carry out a fine-grained analysis of NNs' ability to extract syntactic generalizations from non-annotated corpora (§4).
+
+Building a Test Set Sentences used in the number agreement task are extracted automatically from the 8,638,145 sentences of the French Gutenberg corpus. We use the FLAUBERT parser (Le et al., 2019) and the pretrained French model of spaCy (Honnibal et al., 2020) to automatically parse the sentences of the Gutenberg project. We also consider the gold annotations of the 45,088 sentences of the French Universal Dependency treebanks (Zeman et al., 2020) to evaluate the impact of parsing errors on the corpus quality.
+
+We extract examples of object-verb agreement from sentences' syntactic and morphological annotations using simple rules, resulting in a corpus of 104 sentences (68% singular and 32% plural) extracted from the UD treebank and of 68,794 sentences (65% singular and 35% plural) extracted from the Gutenberg project. In French, the singular is identical to the unmarked form of the past participle verbs, making the frequency statistics unbalanced in favor of singular.
+
+We evaluate the quality of our automatic extraction procedure by comparing the examples extracted using the gold annotations of the UD treebank to those extracted from predicted annotations of the UD treebank sentences (generated by FLAUBERT and spaCy). Our automatic procedure correctly picked up $98\%$ of the object past-participle agreement examples.
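As an illustration of the kind of rule involved (a simplified sketch: field names follow the CoNLL-U convention, and the real procedure additionally resolves the pronoun's antecedent and filters more cases):

```python
def find_object_participle_pairs(sentence):
    """sentence: list of token dicts with CoNLL-U-like fields
    (id, lemma, upos, feats, head, deprel).
    Returns (participle, relative-pronoun) pairs for object relatives."""
    hits = []
    for tok in sentence:
        # target: a past participle ...
        if tok["upos"] != "VERB" or tok["feats"].get("VerbForm") != "Part":
            continue
        children = [t for t in sentence if t["head"] == tok["id"]]
        # ... governed by the auxiliary avoir ...
        has_avoir = any(t["deprel"] == "aux" and t["lemma"] == "avoir"
                        for t in children)
        # ... whose object is a relative pronoun preceding the verb
        rel_obj = next((t for t in children
                        if t["deprel"] == "obj"
                        and t["feats"].get("PronType") == "Rel"
                        and t["id"] < tok["id"]), None)
        if has_avoir and rel_obj is not None:
            hits.append((tok, rel_obj))
    return hits

# "Les offres que les directeurs ont acceptées" (example (2), simplified tree)
toks = [
    {"id": 1, "lemma": "le", "upos": "DET", "feats": {}, "head": 2, "deprel": "det"},
    {"id": 2, "lemma": "offre", "upos": "NOUN", "feats": {}, "head": 0, "deprel": "root"},
    {"id": 3, "lemma": "que", "upos": "PRON", "feats": {"PronType": "Rel"}, "head": 7, "deprel": "obj"},
    {"id": 4, "lemma": "le", "upos": "DET", "feats": {}, "head": 5, "deprel": "det"},
    {"id": 5, "lemma": "directeur", "upos": "NOUN", "feats": {}, "head": 7, "deprel": "nsubj"},
    {"id": 6, "lemma": "avoir", "upos": "AUX", "feats": {}, "head": 7, "deprel": "aux"},
    {"id": 7, "lemma": "accepter", "upos": "VERB", "feats": {"VerbForm": "Part"}, "head": 2, "deprel": "acl:relcl"},
]
pairs = find_object_participle_pairs(toks)
```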
+
+# 3 Language Models
+
+We contrast two types of incremental language models in our experiments: LSTM models and incremental Transformer models. Both model the probability of a sentence $\mathbf{x}$ as:
+
+$$
+P(\mathbf{x}) = \prod_{i=1}^{n} P\left(x_i \mid x_1 \dots x_{i-1}\right) \tag{1}
+$$
+
+All neural models are trained to compute $P(x_{i}|x_{1}\ldots x_{i - 1})$ and they all use the same generic template:
+
+$$
+P\left(x_i \mid x_1 \dots x_{i-1}\right) = \operatorname{softmax}\left(\mathbf{W}_{\text{dec}} \mathbf{c}_{i-1} + \mathbf{b}\right) \tag{2}
+$$
+
+$$
+\mathbf{c}_{i-1} = \operatorname{Context}\left(\mathbf{e}_1 \dots \mathbf{e}_{i-1}\right) \tag{3}
+$$
+
+$$
+\mathbf{e}_i = \mathbf{W}_{\text{enc}} \mathbf{x}_i \tag{4}
+$$
+
+where $\mathbf{x}_i$ are one-hot word vectors; $\mathbf{W}_{enc}$ and $\mathbf{W}_{dec}$ are tied parameter matrices, the latter being the transpose of the former, encoding respectively the word embeddings and the output layer of the language model.
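As a concrete illustration of Equations (2)–(4), the prediction step with tied input/output embeddings can be sketched in NumPy (a minimal sketch: the context model is stubbed out as a mean over prefix embeddings, where the paper uses an LSTM or a masked Transformer):

```python
import numpy as np

rng = np.random.default_rng(0)
V, d = 10, 4                       # toy vocabulary size and embedding dimension
W = rng.normal(size=(V, d))        # embedding table, reused (tied) as the decoder
b = np.zeros(V)

def context(embeddings):
    # Stand-in for the CONTEXT model (an LSTM or a masked Transformer in the
    # paper); here we simply average the prefix embeddings.
    return embeddings.mean(axis=0)

def next_word_distribution(prefix_ids):
    e = W[prefix_ids]              # Eq. (4): look up the prefix embeddings
    c = context(e)                 # Eq. (3): encode the prefix into c_{i-1}
    logits = W @ c + b             # Eq. (2): tied output layer (W_dec = W_enc^T)
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()         # numerically stable softmax over the vocabulary

p = next_word_distribution([1, 3, 5])
```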
+
+A context model (CONTEXT) is either an incremental LSTM or a Transformer decoder where the sequence of embeddings $\mathbf{e}_i \ldots \mathbf{e}_n$ is masked (i.e., the probability of the $i$-th word is estimated knowing only the first $(i-1)$ words of the sentence, contrary to 'standard' Transformer models, which assume that the whole sentence is known). The context vector $\mathbf{c}$ returned by the context model is either the hidden vector of the LSTM at step $i-1$ or the vector returned by the top layer of the Transformer at step $i-1$.
+
+Our LSTM models use 2 layers, while our Transformer language model uses 16 layers and 16 heads. Both models use embeddings of size 768 and are trained on the same data. For Transformers, we add positional embeddings to the word embeddings $\mathbf{e}_i$ using the cosine scheme and weighting described by Vaswani et al. (2017). Since all the models use a word-based tokenization and not a subword tokenizer, we bound the vocabulary to the 50,000 most frequent tokens found in the training data and use a special unknown-word token to encode the remaining, less frequent tokens.
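The vocabulary truncation can be sketched as follows (the 50,000 cutoff is the paper's; the helper names and the `<unk>` token string are our own illustration):

```python
from collections import Counter

def build_vocab(tokenized_sentences, max_size=50_000, unk="<unk>"):
    """Keep the max_size most frequent tokens; all others map to the unk token."""
    counts = Counter(tok for sent in tokenized_sentences for tok in sent)
    vocab = {unk: 0}
    for tok, _ in counts.most_common(max_size):
        vocab.setdefault(tok, len(vocab))
    return vocab

def encode(sentence, vocab, unk="<unk>"):
    # Rare tokens fall back to the unknown-word index.
    return [vocab.get(tok, vocab[unk]) for tok in sentence]

# toy usage with a tiny cutoff instead of 50,000
sents = [["les", "offres", "que", "les"], ["les", "directeurs", "ont"]]
vocab = build_vocab(sents, max_size=2)
ids = encode(["les", "hapax"], vocab)
```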
+
+(1) Le nombre d'offres que le directeur a acceptées ...
+The-DET-Sg number-N-M-Sg of-ADP offers-N-F-Pl that-PRON the-DET-Sg director-N-M-Sg has-AUX-3Sg accepted-PP-F-Pl
+'The number of offers that the director has accepted...'
+
+(2) Les offres$_{h1}$ que les directeurs$_{h2}$ ont$_{h3}$ acceptées ...
+The-DET-Pl offers-N-F-Pl that-PRON the-DET-Pl directors-N-M-Pl have-AUX-3Pl accepted-PP-F-Pl
+'The offers that the directors have accepted...'
+
+Figure 1: Examples of object-verb agreement in French. The past participle in the relative clause (in blue) has to agree in gender and in number with its object (also in blue) when the latter is placed before the verb. To predict the agreement the model has to identify the antecedent of the relative pronoun (dashed arrow)
+
+| corpus | size in sentences | LSTMs | Transformers |
| --- | --- | --- | --- |
| **Original Test Set** | | | |
| overall | 68,497 | 80.8 ±1.2 | 93.5 ±1.4 |
| 4 heuristics | 32,311 | 96.4 ±0.6 | 99.0 ±0.4 |
| 3 heuristics | 13,222 | 84.0 ±1.7 | 95.1 ±1.5 |
| 2 heuristics | 8,869 | 66.5 ±2.7 | 89.5 ±2.3 |
| 1 heuristic | 10,946 | 55.7 ±3.5 | 84.2 ±3.0 |
| 0 heuristic | 3,149 | 34.9 ±6.8 | 74.1 ±4.1 |
| **Permuted Test Set** | | | |
| overall | 68,497 | 69.0 ±0.6 | 70.4 ±1.0 |
| 4 heuristics | 32,311 | 87.0 ±1.2 | 88.0 ±0.8 |
| 3 heuristics | 13,222 | 73.6 ±0.6 | 73.9 ±0.9 |
| 2 heuristics | 8,869 | 52.0 ±0.3 | 54.8 ±1.8 |
| 1 heuristic | 10,946 | 35.3 ±0.3 | 37.4 ±1.5 |
| 0 heuristic | 3,149 | 30.2 ±0.4 | 32.6 ±1.2 |
+
+Table 1: Accuracy achieved by LSTMs and Transformers on the object-verb agreement task for the Original and Permuted test sets. Results are averaged over the three best models in terms of the validation perplexity for each architecture
+
+This setting aims to provide a reasonably fair comparison between LSTMs and Transformers. To train the models, we extracted raw text from a recent French Wikipedia dump using WikiExtractor (Attardi, 2015) and then segmented and tokenized it with the Moses tokenizer (Koehn et al., 2007). We filtered out sentences with more than $5\%$ unknown words based on the lemma annotations generated by TreeTagger (Schmid, 1999). Finally, we sampled a subset containing 100M tokens and split it into training, validation and test sets with a standard 8:1:1 proportion.
+
+# 4 Experimental Results
+
+In our experiments, following Linzen et al. (2016) and Gulordava et al. (2018), we compare the probabilities a language model assigns to the singular form of the target participle and to its plural form given a prefix. We consider that the model has predicted the agreement correctly if the form with the correct number has a higher probability than the form with the incorrect number.
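Concretely, each test example reduces to one probability comparison (sketched here with a lookup table standing in for the language model; any model exposing $P(x_i \mid x_1 \dots x_{i-1})$ can be plugged in):

```python
def agreement_correct(lm_prob, prefix, correct_form, wrong_form):
    """True iff the correctly numbered participle gets the higher probability."""
    return lm_prob(prefix, correct_form) > lm_prob(prefix, wrong_form)

# toy stand-in for a trained LM: a table of conditional probabilities
table = {
    ("les offres que les directeurs ont", "acceptées"): 0.7,  # plural (correct)
    ("les offres que les directeurs ont", "acceptée"): 0.3,   # singular
}
lm_prob = lambda prefix, word: table.get((prefix, word), 0.0)

ok = agreement_correct(lm_prob, "les offres que les directeurs ont",
                       "acceptées", "acceptée")
```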
+
+Table 1 reports the accuracy of two types of models evaluated in this framework. Even for this difficult task, the models perform, overall, very well: LSTMs achieve an accuracy of $80.8\%$ , a performance similar to the one reported in the literature. With an accuracy of $93.5\%$ , Transformers perform even better. These preliminary results support the conclusion, drawn by many works, that neural networks encode syntactic information.
+
+However, we believe that this conclusion must be taken with great care: because of confounding factors, a language model could predict the correct form without actually capturing syntactic information. For instance, as our test set is unbalanced (§2), a naive model always choosing the singular form of the participle achieves an accuracy of $65\%$, a score that puts the performance of LSTMs into perspective. More importantly, Gulordava et al. (2018) and Kuncoro et al. (2018) observed that the agreement task can be partially solved by collocational information or a simple heuristic, namely the number of the first noun of the prefix. In the following, we propose several experiments to strengthen these first results.
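The naive baseline figure follows directly from the class counts (the per-class sizes are those reported in Table 3):

```python
singular, plural = 44_599, 23_898          # test-set class counts (Table 3)
baseline = singular / (singular + plural)  # accuracy of always predicting singular
# baseline is roughly 0.65
```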
+
+# 4.1 Agreement with Surface Heuristics
+
+Extending the observations of Kuncoro et al. (2018), we identify four heuristics that a model could adopt to predict the verb's number from surface information alone. Each of these heuristics assumes that the target past participle agrees systematically in number with:
+
+h1. the first noun in the prefix;
+h2. the last noun in the prefix;
+h3. the last token in the prefix bearing a number mark;
+h4. the majority number expressed in the prefix.
+
+Example (2) in Figure 1 illustrates the tokens each heuristic relies on to make its decision. These heuristics are not tailored to the prediction of the French object past-participle agreement: they could easily be applied to other agreement tasks in other languages. More complicated, task-specific heuristics could have been designed; we could, for instance, consider the first noun to the left of the relative pronoun.
+
+| Heuristics | Accuracy |
| --- | --- |
| h1: First noun | 69.5% |
| h2: Last noun | 88.6% |
| h3: Last token | 60.3% |
| h4: Majority number | 70.0% |
+
+Table 2: Heuristics' accuracy on the French object past-participle agreement task
+
+Surprisingly enough, as reported in Table 2, these heuristics achieve an accuracy on our test set between $60.3\%$ (for h3) and $88.6\%$ (for h2). These results challenge our previous conclusion: they show that the ability to predict the correct number of the verb cannot be used to prove that a model captures abstract syntactic relations, since a simple surface heuristic outperforms LSTMs and achieves an accuracy only slightly worse than that of Transformers. On the contrary, it suggests that NNs, like ELIZA, only extract and combine surface patterns to make their decisions.
+
+To shed further light on this new perspective, we use these heuristics to quantify the 'difficulty' of the task: for each example of our test set, we count the number of heuristics that predict the correct form and consider that the higher this number, the easier the prediction. We then divide our test set into five subsets according to the number of heuristics a model could rely on to predict the verb form: the 4 heuristics group gathers the 'easiest' examples, while examples in the 0 heuristic group are the most difficult, since for them the choice of the verb number cannot rely on simple surface heuristics and requires building a more abstract representation of the sentence.
+
+| corpus | size in sentences | LSTMs | Transformers |
| --- | --- | --- | --- |
| **Original Test Set** | | | |
| overall | 68,497 | 80.8 ±1.2 | 93.5 ±1.4 |
| singular | 44,599 | 96.4 ±1.1 | 98.9 ±0.4 |
| plural | 23,898 | 51.6 ±4.7 | 83.5 ±3.3 |
| **Nonce Test Set** | | | |
| overall | 68,497 × 3 | 78.1 ±1.2 | 92.6 ±1.9 |
| singular | 44,599 × 3 | 93 ±2.3 | 96.8 ±0.9 |
| plural | 23,898 × 3 | 50.3 ±6.8 | 84.7 ±3.6 |
| **Mirror Test Set** | | | |
| overall | 68,497 | 59.8 ±2.5 | 81.3 ±2.7 |
| singular | 23,898 | 90.6 ±1.8 | 91.8 ±0.7 |
| plural | 44,599 | 43.5 ±4.5 | 75.8 ±3.8 |
+
+Table 3: Accuracy achieved by LSTMs and Transformers in different experimental settings, by target verb number and averaged
+
+Table 1 reports the results achieved by our models according to the prediction difficulty. The two architectures behave very differently: while both show high agreement-prediction accuracy in the simplest case (the 4 heuristics group), LSTMs' performance drops sharply with increasing task difficulty: with an accuracy of only $34.9\%$ on the most difficult examples (the 0 heuristic group), they perform worse than random. On the contrary, even if Transformers' performance also degrades with increasing task difficulty, they perform consistently much better on all groups: they still predict the correct verb number for $74.1\%$ of the most difficult examples, suggesting that Transformers are able to extract certain abstract generalizations.
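The difficulty grouping can be sketched as follows (a minimal sketch: the `(token, pos, number)` prefix representation is our own, and h4 ties are broken arbitrarily):

```python
def heuristic_votes(prefix, gold_number):
    """prefix: list of (token, pos, number) triples with number in
    {"Sing", "Plur", None}. Returns how many of the four surface
    heuristics h1-h4 predict gold_number (0 = hardest, 4 = easiest)."""
    nouns = [num for _, pos, num in prefix if pos == "NOUN" and num]
    marked = [num for _, _, num in prefix if num]
    votes = [
        nouns[0] if nouns else None,      # h1: first noun of the prefix
        nouns[-1] if nouns else None,     # h2: last noun of the prefix
        marked[-1] if marked else None,   # h3: last number-marked token
        max(set(marked), key=marked.count) if marked else None,  # h4: majority
    ]
    return sum(v == gold_number for v in votes)

# example (2): "Les offres que les directeurs ont" -> all four heuristics succeed
easy = [("les", "DET", "Plur"), ("offres", "NOUN", "Plur"), ("que", "PRON", None),
        ("les", "DET", "Plur"), ("directeurs", "NOUN", "Plur"), ("ont", "AUX", "Plur")]

# example (1): "Le nombre d'offres que le directeur a" -> all four fail,
# since the plural object "offres" is outnumbered by singular tokens
hard = [("le", "DET", "Sing"), ("nombre", "NOUN", "Sing"), ("d'", "ADP", None),
        ("offres", "NOUN", "Plur"), ("que", "PRON", None), ("le", "DET", "Sing"),
        ("directeur", "NOUN", "Sing"), ("a", "AUX", "Sing")]
```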
+
+# 4.2 Control Experiments
+
+To corroborate these results and avoid some known pitfalls of the agreement task, we have performed four control experiments.
+
+Lexical Cues Following Gulordava et al. (2018), we convert the original test set into a nonsensical but grammatically correct test set to ensure that the model is not using collocational information to choose the correct form of the verb. Results in Table 3 show that for LSTMs (resp. Transformers), the global accuracy drops from $80.8\%$ (resp. $93.5\%$) for the original set to $78.1\%$ (resp. $92.6\%$) for the so-called nonce test set. This drop is of the same order of magnitude as that reported by Gulordava et al. (2018), showing that lexical or collocational confounds have only a moderate impact on models' performance in our agreement prediction task.
+
+Frequency Bias and Imbalanced Data Another possible confound identified in this work (§2) results from the imbalance between classes: most past participles in French are singular, and $65\%$ of the target past participles in our test set are singular. That is why, as expected, models perform better at predicting the singular form than the plural form (Original Test Set of Table 3): both LSTMs and Transformers predict singular forms almost perfectly (accuracy: $96.4\%$ and $98.9\%$), but accuracy on plural verbs drops sharply: LSTMs correctly predict $51.6\%$ of the plural forms, while Transformers appear to be more robust with an accuracy of $83.5\%$.
+
+To ensure that a model is not simply memorizing the most frequent form of a verb, we generated a mirror test set in which each plural verb is automatically transformed into singular (and vice versa), together with the corresponding object and all its adjectival and pronominal modifiers, so that the modified sentence remains grammatically correct.
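A toy sketch of this transformation (the real procedure relies on morphological lexicon lookups; the mini-lexicon, the example sentence and the explicit agreement-chain indices here are our own illustration):

```python
# minimal symmetric sg<->pl lexicon, restricted to the object's agreement chain
MIRROR = {"les": "la", "décisions": "décision", "acceptées": "acceptée"}
MIRROR.update({v: k for k, v in MIRROR.items()})

def mirror(tokens, chain):
    """Flip grammatical number on the tokens in `chain` (indices of the object,
    its determiner/modifiers and the past participle); leave the rest intact,
    so the subject and the auxiliary keep their original number."""
    return [MIRROR.get(t, t) if i in chain else t for i, t in enumerate(tokens)]

orig = "les décisions que les directeurs ont acceptées".split()
mirrored = mirror(orig, chain={0, 1, 6})
# -> "la décision que les directeurs ont acceptée"
```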
+
+The accuracy of LSTMs and Transformers on the mirror set is $59.8\%$ and $81.3\%$, respectively (Table 3). This drop suggests that more frequent forms are more likely to be predicted correctly, even though Transformers are more robust to this low-frequency bias. Compared to the nonce setting, models' performance is impacted to a much larger degree in the mirror setting. We do not have a clear explanation for this surprising observation, which needs to be explored through new experiments.
+
+Distance Following Linzen et al. (2016), we examined how models' performance on this agreement task is affected by the distance between the object and the target verb. Results, reported in Table 8 in the appendix, show that models' performance decreases slightly as the distance increases, except for the shortest distance, thus replicating the results of Linzen et al. (2016).
+
+Word Order We now test to what extent a model relies on word order to predict the verb number. We convert each original example into a scrambled example by randomly permuting its prefix. As reported in Table 1, despite the fact that syntax has been destroyed in the shuffled-prefix setting, both models still achieve high accuracy on the easy examples but worse-than-chance accuracy on the 0 and 1 heuristic groups, confirming that syntactic information is critical for models to solve the most difficult cases. For Transformers, the difference in accuracy between the original and permuted settings on the 0 heuristic group extends up to 41.5 percentage points! These results suggest that Transformers perform significantly better than surface heuristics and capture a non-trivial amount of word order information.
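The scrambling control can be sketched as follows (the seed choice is arbitrary; the bag of words is preserved while the syntax is destroyed):

```python
import random

def scramble_prefix(tokens, seed=None):
    """Return a copy of the prefix with its word order randomly permuted:
    the model then sees the same tokens but no syntactic structure."""
    shuffled = list(tokens)
    random.Random(seed).shuffle(shuffled)
    return shuffled

prefix = "les offres que les directeurs ont".split()
scrambled = scramble_prefix(prefix, seed=0)
```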
+
+# 5 Conclusions
+
+We ran a fine-grained analysis of NNs' syntactic generalization capabilities in processing French object-verb agreement for grammatical number, a phenomenon crucially depending on hierarchical syntactic structure. We designed a new evaluation protocol based on four shallow heuristics that the models could adopt to perform the number agreement task. Our experiments show that, contrary to LSTMs, Transformers extract a non-trivial amount of syntactic information.
+
+In future work, we will investigate the kind of syntactic information Transformers are encoding and the relationship between the superficial heuristics and hierarchical syntactic structure processing in Transformer models. In particular, our results intriguingly suggest that Transformers rely on word order information to predict verb agreement, despite the fact that they don't model word order explicitly beyond marking each word with its absolute-position embedding. We plan to study this question in future work.
+
+# Acknowledgments
+
+We sincerely thank the reviewers and Program Chairs for their careful reviews and insightful comments, which were of great help in improving the manuscript. This work was granted access to the HPC resources of the French Institute for Development and Resources in Intensive Scientific Computing (IDRIS) under the allocations 2020-AD011012282 and 2021-AD011012408 made by GENCI.
+
+# References
+
+Giuseppe Attardi. 2015. Wikiextractor. https://github.com/attardi/wikiextractor.
+
+Yonatan Belinkov and James Glass. 2019. Analysis methods in neural language processing: A survey. Transactions of the Association for Computational Linguistics, 7:49-72.
+Rui Chaves. 2020. What don't RNN language models learn about filler-gap dependencies? In Proceedings of the Society for Computation in Linguistics 2020, pages 1-11, New York, New York. Association for Computational Linguistics.
+Noam Chomsky. 1957. Syntactic Structures. Mouton, The Hague.
+Martin B.H. Everaert, Marinus A.C. Huybregts, Noam Chomsky, Robert C. Berwick, and Johan J. Bolhuis. 2015. Structures, not strings: Linguistics as part of the cognitive sciences. Trends in Cognitive Sciences, 19(12):729-743.
+Mario Giulianelli, Jack Harding, Florian Mohnert, Dieuwke Hupkes, and Willem Zuidema. 2018. Under the hood: Using diagnostic classifiers to investigate and improve how language models track agreement information. arXiv preprint arXiv:1808.08079.
+Yoav Goldberg. 2019. Assessing bert's syntactic abilities. CoRR, abs/1901.05287.
+Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, and Marco Baroni. 2018. Colorless green recurrent networks dream hierarchically. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1195-1205, New Orleans, Louisiana. Association for Computational Linguistics.
+Michael Hahn. 2020. Theoretical limitations of self-attention in neural sequence models. Trans. Assoc. Comput. Linguistics, 8:156-171.
+Matthew Honnibal, Ines Montani, Sofie Van Landeghem, and Adriane Boyd. 2020. spaCy: Industrial-strength Natural Language Processing in Python.
+Ganesh Jawahar, Benoit Sagot, and Djame Seddah. 2019. What does BERT learn about the structure of language? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3651-3657, Florence, Italy. Association for Computational Linguistics.
+Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th annual meeting of the association for computational linguistics companion volume proceedings of the demo and poster sessions, pages 177-180.
+
+Adhiguna Kuncoro, Chris Dyer, John Hale, and Phil Blunsom. 2018. The perils of natural behaviour tests for unnatural models: the case of number agreement. Poster presented at Learning Language in Humans and in Machines, Paris, Fr., July, pages 5-6.
+Yair Lakretz, Theo Desbordes, Jean-Rémi King, Benoit Crabbé, Maxime Oquab, and Stanislas Dehaene. 2021. Can rnns learn recursive nested subject-verb agreements? CoRR, abs/2101.02258.
+Yair Lakretz, German Kruszewski, Theo Desbordes, Dieuwke Hupkes, Stanislas Dehaene, and Marco Baroni. 2019. The emergence of number and syntax units in lstm language models. arXiv preprint arXiv:1903.07435.
+Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maxim Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoit Crabbe, Laurent Besacier, and Didier Schwab. 2019. Flaubert: Unsupervised language model pre-training for french. arXiv preprint arXiv:1912.05372.
+Bingzhi Li and Guillaume Wisniewski. 2021. Are neural networks extracting linguistic properties or memorizing training data? an observation with a multilingual probe for predicting tense. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 3080-3089, Online. Association for Computational Linguistics.
+Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of LSTMs to learn syntax-sensitive dependencies. Transactions of the Association for Computational Linguistics, 4:521-535.
+Rebecca Marvin and Tal Linzen. 2018. Targeted syntactic evaluation of language models. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1192-1202, Brussels, Belgium. Association for Computational Linguistics.
+Aaron Mueller, Garrett Nicolai, Panayiotia Petrou-Zeniou, Natalia Talmina, and Tal Linzen. 2020. Cross-linguistic syntactic evaluation of word prediction models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5523-5539, Online. Association for Computational Linguistics.
+Helmut Schmid. 1999. Improvements in part-of-speech tagging with an application to german. In Natural language processing using very large corpora, pages 13-25. Springer.
+Luzi Sennhauser and Robert Berwick. 2018. Evaluating the ability of LSTMs to learn context-free grammars. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 115-124, Brussels, Belgium. Association for Computational Linguistics.
+
+Mirac Suzgun, Yonatan Belinkov, and Stuart M. Shieber. 2019. On evaluating the generalization of LSTM models in formal languages. In Proceedings of the Society for Computation in Linguistics (SCiL) 2019.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. arXiv preprint arXiv:1706.03762.
+Daniel Zeman, Joakim Nivre, Mitchell Abrams, Elia Ackermann, Noëmi Aepli, Hamid Aghaei, Željko Agić, Amir Ahmadi, Lars Ahrenberg, Chika Kennedy Ajede, Gabriele Aleksandravicićute, Ika Alfina, Lene Antonsen, Katya Aplonova, Angelina Aquino, Carolina Aragon, Maria Jesus Aranzabe, Hórunn Arnardóttir, Gashaw Arutie, Jessica Naraiswari Arwidarasti, Masayuki Asahara, Luma Ateyah, Furkan Atmaca, Mohammed Attia, Aitziber Atutxa, Liesbeth Augustinus, Elena Badmaeva, Keerthana Balasubramani, Miguel Ballesteros, Esha Banerjee, Sebastian Bank, Verginica Barbu Mititelu, Victoria Basmov, Colin Batchelor, John Bauer, Seyyit Talha Bedir, Kepa Bengoetxea, Gözde Berk, Yevgeni Berzak, Irshad Ahmad Bhat, Riyaz Ahmad Bhat, Erica Biagetti, Eckhard Bick, Agné Bielinskière, Kristín Bjarnadóttir, Rogier Blokland, Victoria Bobicev, Loïc Boizou, Emanuel Borges Völker, Carl Börstell, Cristina Bosco, Gosse Bouma, Sam Bowman, Adriane Boyd, Kristina Brokaite, Aljoscha Burchardt, Marie Candido, Bernard Caron, Gauthier Caron, Tatiana Cavalcanti, Gülsen Cebiroğlu Eryigit, Flavio Massimiliano Cecchini, Giuseppe G. A. Celano, Slavomír Čéplö, Savas Cetin, Özlem Çetinoglu, Fabricio Chalub, Ethan Chi, Yongseok Cho, Jinho Choi, Jayeol Chun, Alessandra T. 
Cignarella, Silvie Cinková, Aurélie Collomb, Āçrì Çöltekin, Miriam Connor, Marine Courtin, Elizabeth Davidson, Marie-Catherine de Marneffe, Valeria de Paiva, Mehmet Oguz Derin, Elvis de Souza, Arantza Díaz de Ilarraza, Carly Dickerson, Arawinda Dinakaramani, Bamba Dione, Peter Dirix, Kaja Dobrovoljc, Timothy Dozat, Kira Droganova, Puneet Dwivedi, Hanne Eckhoff, Marhaba Eli, Ali Elkahky, Binyam Ephrem, Olga Erina, Tomaz Erjavec, Aline Etienne, Wograine Evelyn, Sidney Facundes, Richard Farkas, Marília Fernanda, Hector Fernandez Alcalde, Jennifer Foster, Cláudia Freitas, Kazunori Fujita, Katarína Gajdòsová, Daniel Galbraith, Marcos Garcia, Moe Gärdenfors, Sebastian Garza, Fabricio Ferraz Gerardi, Kim Gerdes, Filip Ginter, Iakes Goenaga, Koldo Gojenola, Memduh Gökirmak, Yoav Goldberg, Xavier Gómez Guinovart, Berta González Saavedra, Bernadeta Griciūte Matias Grioni, Loïc Grobol, Normunds Gržitis Bruno Guillaume, Céline Guillot-Barbance, Tunga Güngör, Nizar Habash, Hinrik Hafsteinsson, Jan Hajic, Jan Hajic jr., Mika Hämaläinenen Linh Hà Myi Na-Rae Han Muhammad Yudistira Hanifmuti Sam Hardwick Kim Harris Dag Haug Johannes Heinecke Oliver Hellwig Felix Henning Barbora Hladká Jaroslava Hlaváčová Florinel
+
+Hociung, Petter Hohle, Eva Huber, Jena Hwang, Takumi Ikeda, Anton Karl Ingason, Radu Ion, Elena Irimia, Olajide Ishola, Tomáš Jelinek, Anders Johannsen, Hildur Jónsdóttir, Fredrik Jørgensen, Markus Juutinen, Sarveswaran K, Huner Kaskara, Andre Kaasen, Nadezhda Kabaeva, Sylvain Kahane, Hiroshi Kanayama, Jenna Kanerva, Boris Katz, Tolga Kayadelen, Jessica Kenney, Václava Kettnerova, Jesse Kirchner, Elena Klementieva, Arne Köhn, Abdullatif Köksal, Kamil Kopacewicz, Timo Korkiakangas, Natalia Kotsyba, Jolanta Kovalevskaite, Simon Krek, Parameswari Krishnamurthy, Sookyoung Kwak, Veronika Laippala, Lucia Lam, Lorenzo Lambertino, Tatiana Lando, Septina Dian Larasati, Alexei Lavrentiev, John Lee, Phng Lê hong, Alessandro Lenci, Saran Lertpradit, Herman Leung, Maria Levina, Cheuk Ying Li, Josie Li, Keying Li, Yuan Li, KyungTae Lim, Krister Lindén, Nikola Ljubesic, Olga Loginova, Andry Luthfi, Mikko Luukko, Olga Lyashevskaya, Teresa Lynn, Vivien Macketanz, Aibek Makazhanov, Michael Mandl, Christopher Manning, Ruli Manurung, Catalina Márnduc, David Mareček, Katrin Marheinecke, Héctor Martínez Alonso, André Martins, Jan Mašek, Hiroshi Matsuda, Yuji Matsumoto, Ryan McDonald, Sarah McGuinness, Gustavo Mendonça, Niko Miekka, Karina Mischenkova, Margarita Misirpashayeva, Anna Missilä, Cătălin Mitelulu, Maria Mitrofan, Yusuke Miyao, Amir Hossein Mojiri Foroushani, Amirsaeid Moloodi, Simonetta Montemagni, Amir More, Laura Moreno Romero, Keiko Sophie Mori, Shinsuke Mori, Tomohiko Morioka, Shigeki Moro, Bjartur Mortensen, Bohdan Moskalevskyi, Kadri Muischnek, Robert Munro, Yugo Murawaki, Kaili Mūrisep, Pinkey Nainwani, Mariam Nakhlé, Juan Ignacio Navarro Horniacek, Anna Nedoluzhko, Gunta Nespore-Bérzkalne, Lng Nguyen Thi, Huyen Nguyen Thi Minh, Yoshihiro Nikaido, Vitaly Nikolaev, Rattima Nitisaroj, Alireza Nourian, Hanna Nurmi, Stina Ojala, Atul Kr. 
Ojha, Adédayo Olúokun, Mai Omura, Emeka Onwuegbuzia, Petya Osenova, Robert Östling, Lilja Øvrelid, Şaziye Betül Özateş, Arzucan Özgür, Balkız Öztürk Başaran, Niko Partanen, Elena Pascual, Marco Passarotti, Agnieszka Patejuk, Guilherme Paulino-Passos, Angelika Peljak-Łapińska, Siyao Peng, Cenel-Augusto Perez, Natalia Perkova, Guy Perrier, Slav Petrov, Daria Petrova, Jason Phelan, Jussi Piitulainen, Tommi A Pirinen, Emily Pitler, Barbara Plank, Thierry Poibeau, Larisa Ponomareva, Martin Popel, Lauma Pretkalniņa, Sophie Prévost, Prokopis Prokopidis, Adam Przepiórkowski, Tiina Puolakainen, Sampo Pyysalo, Peng Qi, Andriela Rääbis, Alexandre Rademaker, Taraka Rama, Loganathan Ramasamy, Carlos Ramisch, Fam Rashel, Mohammad Sadegh Rasooli, Vinit Ravishankar, Livy Real, Petru Rebeja, Siva Reddy, Georg Rehm, Ivan Riabov, Michael Rießler, Erika Rimkutė, Larissa Rinaldi, Laura Rituma, Luisa Rocha, Eiríkur Rögnvaldsson, Mykhailo Romanenko, Rudolf Rosa, Valentin Roșca, Davide Rovati, Olga Rudina, Jack Rueter, Kristján Rúnarsson, Shoval Sadde, Pegah Safari, Benoît Sagot, Aleksi Sahala, Shadi Saleh
+
+Alessio Salomoni, Tanja Samardžić, Stephanie Samson, Manuela Sanguinetti, Dage Särg, Baiba Saulite, Yanin Sawanakunanon, Kevin Scannell, Salvatore Scarlata, Nathan Schneider, Sebastian Schuster, Djame Seddah, Wolfgang Seeker, Mojgan Seraji, Mo Shen, Atsuko Shimada, Hiroyuki Shirasu, Muh Shohibussirri, Dmitry Sichinava, Einar Freyr Sigurðsson, Aline Silveira, Natalia Silveira, Maria Simi, Radu Simionescu, Katalin Simko, María Šimková, Kiril Simov, Maria Skachedubova, Aaron Smith, Isabela Soares-Bastos, Carolyn Spadine, Steinhör Steingrímsson, Antonio Stella, Milan Straka, Emmett Strickland, Jana Strnadóva, Alane Suhr, Yogi Lesmana Sulestio, Umut Sulubacak, Shingo Suzuki, Zsolt Szántó, Dima Taji, Yuta Takahashi, Fabio Tamburini, Mary Ann C. Tan, Takaaki Tanaka, Samson Tella, Isabelle Tellier, Guillaume Thomas, Liisi Torga, Marsida Toska, Trond Trosterud, Anna Trukhina, Reut Tsarfaty, Utku Türk, Francis Tyers, Sumire Uematsu, Roman Untilov, Zdenka Urešová, Larraitz Uria, Hans Uszkoreit, Andrius Utka, Sowmya Vajjala, Daniel van Niekerk, Gertjan van Noord, Viktor Varga, Eric Villemonte de la Clergerie, Veronika Vincze, Aya Wakasa, Joel C. Wallenberg, Lars Wallin, Abigail Walsh, Jing Xian Wang, Jonathan North Washington, Maximilan Wendt, Paul Widmer, Seyi Williams, Mats Wiren, Christian Wittern, Tsegay Woldemariam, Tak-sum Wong, Alina Wróblewska, Mary Yako, Kayo Yamashita, Naoki Yamazaki, Chunxiao Yan, Koichi Yasuoka, Marat M. Yavrumyan, Zhuoran Yu, Zdeněk Žabokrtský, Shorouq Zahra, Amir Zeldes, Hanzhi Zhu, and Anna Zhuravleva. 2020. Universal dependencies 2.7. LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (UFAL), Faculty of Mathematics and Physics, Charles University.
+
+# A Language Models
+
+Hyperparameters and perplexities The results reported in the paper are averaged over the three best models in terms of validation perplexity, after 40 training epochs for the LSTMs and 50 training epochs for the Transformers. Detailed information on the top 3 LSTM and Transformer models is given in Table 4.
+
+For the LSTM models, we used embeddings of size 768 and 2 layers, for a total of 47,900,241 parameters. We explored the following hyperparameters, for a total of 12 combinations:
+
+1. batch size: 32, 64 (only for learning rate 0.0001)
+2. dropout rate: 0.0, 0.1, 0.2, 0.3
+3. learning rate: 0.001, 0.0001
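
As a sanity check on the count, the grid above can be enumerated explicitly (a sketch; the variable names are ours):

```python
from itertools import product

# LSTM grid described above: batch size 64 was only paired with
# learning rate 0.0001, hence 4*2 + 4 = 12 runs in total.
dropouts = [0.0, 0.1, 0.2, 0.3]
lstm_grid = [(32, d, lr) for d, lr in product(dropouts, [0.001, 0.0001])]
lstm_grid += [(64, d, 0.0001) for d in dropouts]
```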
+
+For the Transformer models, we used embeddings of size 768 and 16 layers, each with 16 heads, for a total of 126,674,513 parameters. Training was performed with stochastic gradient descent. The initial learning rate was fixed to 0.02, with cosine scheduling over 50 epochs without annealing. The first epoch was dedicated to warmup, with a linearly increasing learning rate. Batches of size 64 were run in parallel on 8 GPUs, except during warmup, where the batch size was fixed to 8. We explored initial learning rates of 0.01 and 0.02 and dropout rates of 0.0, 0.1, and 0.2, for a total of 6 combinations.
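
The schedule can be sketched per epoch as follows (a simplification of the description above: warmup is modeled per epoch rather than per step, and the cosine endpoints are our assumptions):

```python
import math

def lr_at(epoch, total_epochs=50, warmup_epochs=1, max_lr=0.02, min_lr=0.0):
    """Per-epoch learning rate: linear warmup during the first epoch,
    then cosine decay over the remaining epochs (illustrative sketch)."""
    if epoch < warmup_epochs:
        return max_lr * (epoch + 1) / warmup_epochs
    progress = (epoch - warmup_epochs) / max(1, total_epochs - warmup_epochs)
    return min_lr + 0.5 * (max_lr - min_lr) * (1 + math.cos(math.pi * progress))
```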
+
+# B Surface heuristics
+
+We defined four heuristics that a model could adopt to predict the verb's number from surface information alone. We then divided the test set into five subsets based on the number of applicable heuristics; Table 5 gives an example for each subset.
+
+# C Construction of test sets
+
+Extraction procedure Extraction of the object-verb agreement examples is based on the dependency structure and morphological information of sentences. Concretely, a valid example has to include a NOUN and a VERB connected by an acl:relcl dependency arc, as well as a direct object que (that); the auxiliary connected to the target verb has to be avoir (to have). Using the morphological information, we filtered out sentences in which the noun and the verb do not agree in number and gender, as well as sentences in which not all words from the antecedent to the target (inclusive)
+
+occur in the language model's vocabulary. To reduce the impact of anaphora resolution problems, we ruled out complex and ambiguous cases: long-distance dependencies (first example in Figure 2) and coordinated object noun phrases as antecedents (second example in Figure 2). We did not exclude the prepositional-phrase-as-antecedent case, because there is no ambiguity in determining the antecedent of the relative pronoun, as illustrated by the third example in Figure 2.
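
The filter can be sketched over UD-style token annotations (the helper name and dict representation are ours, not the authors' code; field names follow CoNLL-U conventions):

```python
def find_object_verb_agreement(tokens):
    """Return (noun_id, verb_id) for a valid object-verb agreement example,
    or None. `tokens` is one sentence as a list of dicts ordered by 1-based
    "id", with UD-style fields "id", "lemma", "upos", "deprel", "head", "feats".
    """
    for t in tokens:
        if t["upos"] != "VERB" or t["deprel"] != "acl:relcl":
            continue
        noun = tokens[t["head"] - 1]  # antecedent candidate
        if noun["upos"] != "NOUN":
            continue
        deps = [d for d in tokens if d["head"] == t["id"]]
        has_que = any(d["lemma"] == "que" and d["deprel"] == "obj" for d in deps)
        has_avoir = any(d["lemma"] == "avoir" and d["deprel"] == "aux" for d in deps)
        agrees = (noun["feats"].get("Number") == t["feats"].get("Number")
                  and noun["feats"].get("Gender") == t["feats"].get("Gender"))
        if has_que and has_avoir and agrees:
            return noun["id"], t["id"]
    return None
```

Applied to a parse of Les offres que le directeur a acceptées, the helper returns the ids of offres and acceptées.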
+
+Qualitative evaluation of extraction procedure Our automatic extraction procedure correctly identified 102 examples from automatically parsed UD treebank sentences, out of the 104 examples obtained with the gold annotation of the French UD treebanks. The first missed example was excluded because the parser annotated the intervening relative pronoun que (that) as a conjunction: formule qu'avec un sens de la nuance plus marseillais que britannique, le président de l'académie a appliquée (formula_Fem-Sg with a sense of nuance more Marseillais than British, that the president of the academy applied_Fem-Sg). For the second one, une manière de révolution sur lui-même, qu'il a opérée... (a way of revolution_Fem-Sg on himself, that he operated_Fem-Sg...), the automatic parse erroneously identified the antecedent as 'way' instead of 'revolution'. These two missed examples also reflect the difficulty of this task for a model.
+
+Nonce test set To test the extent to which lexical or collocational information contributes to the models' performance on the number agreement task, we adapted the generation procedure of Gulordava et al. (2018) to generate three "colorless green ideas" (Chomsky, 1957) sentences for each original sentence: each content word of the original sentence is replaced with a random word from the same syntactic category (i.e., with the same POS and morphological features). During the substitution procedure, we excluded word forms that appear in the treebank with more than one POS, to make sure that the random words all have an unambiguous POS (e.g., ... can be a plural noun (data) or the plural past participle of the verb donner (to give)). To respect argument structure constraints, the target verb could only be replaced by another random transitive verb. The Nonce Test Set thus retains the grammatical syntax of the original sentences but is highly semantically implausible.
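
The substitution step can be sketched as follows (a simplification; the lexicon of POS-unambiguous forms is a hypothetical input, not the authors' data structure):

```python
import random

def make_nonce(sentence, lexicon, rng=None):
    """Replace each content word with a random word sharing its POS and
    morphological features (a sketch of the generation procedure above).

    sentence -- list of (form, pos, feats) triples
    lexicon  -- maps a (pos, feats) signature to POS-unambiguous word forms
    """
    rng = rng or random.Random(0)
    content_pos = {"NOUN", "VERB", "ADJ", "ADV"}
    out = []
    for form, pos, feats in sentence:
        candidates = lexicon.get((pos, feats), [])
        if pos in content_pos and candidates:
            out.append(rng.choice(candidates))
        else:
            out.append(form)  # function words are kept as-is
    return out
```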
+
+| model | hidden/embedding size | layers | batch size | dropout rate | learning rate | best epoch | ppl |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| LSTM | 768 | 2 | 32 | 0.1 | 0.001 | 21 | 40.5 |
+| LSTM | 768 | 2 | 32 | 0.2 | 0.0001 | 38 | 39.3 |
+| LSTM | 768 | 2 | 64 | 0.2 | 0.0001 | 36 | 37.9 |
+| Transformer | 768 | 16 | 64 | 0.0 | 0.02 | 41 | 31.4 |
+| Transformer | 768 | 16 | 64 | 0.2 | 0.01 | 50 | 28.5 |
+| Transformer | 768 | 16 | 64 | 0.1 | 0.01 | 49 | 28.2 |
+
+Table 4: Hyperparameters and perplexities of the top 3 LSTM and Transformer models used in this work
+
+| Subsets | Examples | Heuristics | Class |
+| --- | --- | --- | --- |
+| 4 | h4Les offresh1 que les directeursh2 onth3 acceptées... | h1, h2, h3, h4 | Plural |
+| | The offers_Pl that the_Pl directors_Pl have_Pl accepted_Pl ... | | |
+| 3 | h4Le nombre d'offresh2 qu'ils onth3 acceptées... | h2, h3, h4 | Plural |
+| | The number_Sg of offers_Pl that they_Pl have_Pl accepted_Pl ... | | |
+| 2 | Les offresh1h2 qu'il a acceptées... | h1, h2 | Plural |
+| | The offers_Pl that he_Sg has_Sg accepted_Pl ... | | |
+| 1 | Les offresh1 que le directeur a acceptées... | h1 | Plural |
+| | The offers_Pl that the_Sg director_Sg has_Sg accepted_Pl ... | | |
+| 0 | Le nombre d'offres que le directeur a acceptées... | none | Plural |
+| | The number_Sg of offers_Pl that the_Sg director_Sg has_Sg accepted_Pl ... | | |
+
+Table 5: Examples of five subsets according to the number of heuristics that a model could rely on to predict the verb form
+
+
+Figure 2: The test set excluded the complex long distance dependencies (1) and ambiguous coordinated object noun phrase (2), but kept the prepositional phrase as antecedent cases like (3)
+
+| corpus | size in sentences | LSTMs | Transformers |
+| --- | --- | --- | --- |
+| Nonce Test Set overall | 68,497 | 78.1 ±1.2 | 92.6 ±1.9 |
+| 4 heuristics | 32,311 | 94.3 ±1.1 | 98.3 ±0.7 |
+| 3 heuristics | 13,222 | 80.3 ±2.5 | 93.5 ±1.9 |
+| 2 heuristics | 8,869 | 63.2 ±2.1 | 89.1 ±2.9 |
+| 1 heuristic | 10,946 | 53.0 ±5.1 | 84.0 ±3.5 |
+| 0 heuristics | 3,149 | 32.3 ±11 | 69.1 ±4.5 |
+
+Table 6: Accuracy achieved by LSTMs and Transformers on the nonce test set, based on prediction difficulty
+
+Table 7 gives an example of a nonsensical sentence converted from its original version.
+
+Mirror test set We generated a singular version of each plural-object sentence and vice versa, by substituting the antecedent and target verb of each original sentence with their opposite number forms. We also converted the adjective and pronoun modifiers of the antecedent to their opposite number forms when present. The result is an "inverted copy" of the original set in terms of class distribution: $35\%$ singular and $65\%$ plural, compared to $65\%$ singular and $35\%$ plural in the original set. Table 7 gives an example of a mirror sentence converted from its original version.
+
+# D Detailed results
+
+Nonce Set The detailed results on Nonce Test Set are reported in Table 6.
+
+Distance Table 8 reports the average prediction accuracy on the Original Test Set as a function of the distance between the antecedent and the target verb. The shortest distance (i.e., constructions with only two intervening tokens: the relative pronoun and the auxiliary verb) is more challenging for both LSTMs and Transformers, due to the attraction effect of the auxiliary. In this non-canonical construction (1,599 examples), the embedded subject of the object relative clause occurs after its predicate. Our fine-grained analysis shows that in this non-canonical case, when the number of the intervening auxiliary differs from that of the past participle, LSTMs' performance drops to $41.9\%$ while Transformers still achieve an accuracy of $80\%$, suggesting that Transformers are more robust to the lure of an adjacent auxiliary attractor.
+
+| Test sets | Examples | Label |
+| --- | --- | --- |
+| Original | Les offres que le directeur a acceptées... | Pl |
+| | The offers_Pl that the director has accepted_Pl ... | |
+| Nonce | Les omelettes que le professeur a attachés... | Pl |
+| | The omelettes_Pl that the professor has attached_Pl ... | |
+| Mirror | L'offre que le directeur a acceptée... | Sg |
+| | The offer_Sg that the director has accepted_Sg ... | |
+| Permuted | directeur a Les que offres le acceptées ... | Pl |
+| | director has The that offers_Pl the accepted_Pl ... | |
+
+Table 7: Examples of test sets used in original and control experiments
+
+| distance | 2 tokens | 3-4 | 5-6 | 7-8 | 9-10 | 11-12 | 13-14 |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| LSTMs | 73.1 ±0.9 | 82.9 ±1.5 | 78.7 ±1.2 | 75.9 ±0.6 | 74.1 ±0.3 | 72.0 ±0.6 | 69.3 ±1.2 |
+| Transformers | 88.0 ±3.0 | 95.1 ±1.2 | 92.4 ±1.6 | 89.7 ±1.9 | 87.8 ±2.2 | 85.2 ±2.2 | 83.1 ±1.7 |
+| # examples | 1,599 | 44,012 | 14,945 | 4,799 | 1,729 | 756 | 327 |
+
+Table 8: Accuracy as a function of distance (i.e. number of tokens) between the antecedent and the target verb
\ No newline at end of file
diff --git a/aretransformersamodernversionofelizaobservationsonfrenchobjectverbagreement/images.zip b/aretransformersamodernversionofelizaobservationsonfrenchobjectverbagreement/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..65660f4a579c4273f99d35aa6a604fb6d0579d2b
--- /dev/null
+++ b/aretransformersamodernversionofelizaobservationsonfrenchobjectverbagreement/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b9eb0410d18d6792e3660e907908df39d7bdf7b3217006da1ea5ee94df168e25
+size 396587
diff --git a/aretransformersamodernversionofelizaobservationsonfrenchobjectverbagreement/layout.json b/aretransformersamodernversionofelizaobservationsonfrenchobjectverbagreement/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..335637bb9a59ac8343ebb5aa7db58208544429d4
--- /dev/null
+++ b/aretransformersamodernversionofelizaobservationsonfrenchobjectverbagreement/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:07ff22d3c300c99441ace77c0921dfb727abbc7be6b388c85ce449493ec24426
+size 281067
diff --git a/argumentpairextractionwithmutualguidanceandintersentencerelationgraph/9acfa347-5261-499d-bd28-d02f4a47f45d_content_list.json b/argumentpairextractionwithmutualguidanceandintersentencerelationgraph/9acfa347-5261-499d-bd28-d02f4a47f45d_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..628e8b8ad6caa54aa2f6c9544ecd7927bbb46dc3
--- /dev/null
+++ b/argumentpairextractionwithmutualguidanceandintersentencerelationgraph/9acfa347-5261-499d-bd28-d02f4a47f45d_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:03ad0f13885df8fca281b90c64d51b858e58c0412776fbab5d00cb148e3b8173
+size 89090
diff --git a/argumentpairextractionwithmutualguidanceandintersentencerelationgraph/9acfa347-5261-499d-bd28-d02f4a47f45d_model.json b/argumentpairextractionwithmutualguidanceandintersentencerelationgraph/9acfa347-5261-499d-bd28-d02f4a47f45d_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..18483fdb3a66a0142a6f6ad7e446f2ece5d996d0
--- /dev/null
+++ b/argumentpairextractionwithmutualguidanceandintersentencerelationgraph/9acfa347-5261-499d-bd28-d02f4a47f45d_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:462f176c6908dff9e16a1f7a16eba768284669cc77f32e1d2a32839b1acd76f2
+size 107228
diff --git a/argumentpairextractionwithmutualguidanceandintersentencerelationgraph/9acfa347-5261-499d-bd28-d02f4a47f45d_origin.pdf b/argumentpairextractionwithmutualguidanceandintersentencerelationgraph/9acfa347-5261-499d-bd28-d02f4a47f45d_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..5352f669582beff841872f88fbbb815834755232
--- /dev/null
+++ b/argumentpairextractionwithmutualguidanceandintersentencerelationgraph/9acfa347-5261-499d-bd28-d02f4a47f45d_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7fdc25cd1cd5ee2399a70c4d4adbf311bb4db9912855044ffe72231a649a8738
+size 675846
diff --git a/argumentpairextractionwithmutualguidanceandintersentencerelationgraph/full.md b/argumentpairextractionwithmutualguidanceandintersentencerelationgraph/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..4dce67eb3b641b0d2fa5e2e408cb8b737974d6d9
--- /dev/null
+++ b/argumentpairextractionwithmutualguidanceandintersentencerelationgraph/full.md
@@ -0,0 +1,386 @@
+# Argument Pair Extraction with Mutual Guidance and Inter-sentence Relation Graph
+
+Jianzhu Bao $^{1,2}$ , Bin Liang $^{1,2}$ , Jingyi Sun $^{1,2}$ , Yice Zhang $^{1,2}$ , Min Yang $^{3}$ , Ruifeng Xu $^{1,4*}$
+
+1Harbin Institute of Technology (Shenzhen), China
+
+$^{2}$ Joint Lab of China Merchants Securities and HITSZ
+
+$^{3}$ Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences $^{4}$ Peng Cheng Laboratory, Shenzhen, China
+
+jianzhubao@gmail.com, bin.liang@stu.hit.edu.cn
+
+sunjingyihit@gmail.com, zhangyc_hit@163.com
+
+min.yang@siat.ac.cn, xuruifeng@hit.edu.cn
+
+# Abstract
+
+Argument pair extraction (APE) aims to extract interactive argument pairs from two passages of a discussion. Previous work studied this task in the context of peer review and rebuttal, and decomposed it into a sequence labeling task and a sentence relation classification task. However, despite the promising performance, such an approach obtains the argument pairs implicitly from the two decomposed tasks, lacking explicit modeling of the argument-level interactions between argument pairs. In this paper, we tackle the APE task with a mutual guidance framework, which can utilize the information of an argument in one passage to guide the identification of arguments that can form pairs with it in another passage. In this manner, two passages can mutually guide each other in the process of APE. Furthermore, we propose an inter-sentence relation graph to effectively model the interrelations between two sentences and thus facilitate the extraction of argument pairs. Our proposed method can better represent the holistic argument-level semantics and thus explicitly capture the complex correlations between argument pairs. Experimental results show that our approach significantly outperforms the current state-of-the-art model.
+
+# 1 Introduction
+
+Argumentation mining has received increasing research attention in recent years. Existing studies can be categorized into monological argumentation (Stab and Gurevych, 2014; Eger et al., 2017; Potash et al., 2017; Kuribayashi et al., 2019) and dialogical argumentation (Swanson et al., 2015; Morio and Fujita, 2018; Chakrabarty et al., 2019), with the former identifying the argumentation structure of a single monological document, and the latter focusing on the analysis of argumentation in debates or discussions.
+
+Argument pair extraction (APE) is a new task within the field of dialogical argumentation, aiming at extracting interactive argument pairs from two argumentative passages of a discussion. Cheng et al. (2020) investigated this task in the context of peer review and rebuttal, as they involve rich argumentative and interactive discussions. An example of APE is shown in Figure 1, where a review passage and its corresponding rebuttal passage are segmented into arguments and non-arguments at sentence level. The arguments in review can form argument pairs with the arguments in rebuttal, according to the points they discuss.
+
+APE is a highly challenging task because we need to understand not only the argumentation structure presented by each side of the discussion, but also the interaction of arguments between the participants. The interactions between arguments can be complicated, for example, one argument may be paired with multiple other arguments, forming one-to-many relations. This task is essential for understanding the structure of dialogical argumentation and can also support other related tasks, such as argument generation (Hua et al., 2019a) and debate summarization (Chowanda et al., 2017). Due to the rich interaction of complex arguments, peer review and rebuttal are perfect resources for APE, and have also been exploited in other tasks (Hua et al., 2019b; Fromm et al., 2020).
+
+Cheng et al. (2020) proposed to tackle APE by decomposing it into a sequence labeling task and a sentence relation classification task, with the first subtask extracting the arguments in each review or rebuttal, and the second subtask determining whether two sentences belong to the same pair of arguments. These two subtasks are jointly optimized within a multi-task learning framework, and
+
+
+Figure 1: An example of APE. A review passage is shown on the left, and its corresponding rebuttal passage is shown on the right. Sent- $i$ denotes the $i$ -th sentence in the review/rebuttal, and Rev: $\text{Arg-}i/\text{Rep}$ : $\text{Arg-}i$ denotes the $i$ -th argument in the review/rebuttal. Each argument consists of one or more consecutive sentences. Arg-Pair- $i$ denotes the $i$ -th argument pair. In this example, two argument pairs are colored in green and blue respectively.
+
+then the argument pairs are obtained indirectly by combining the results of the two subtasks. However, this method is suboptimal for APE, because it lacks explicit modeling of the argument-level interactive relations between argument pairs, and the two subtasks might not adapt well to each other.
+
+When humans perform this task, we first identify an argument in the review passage. Then, keeping this argument in mind, we seek out the corresponding arguments in the rebuttal passage to obtain argument pairs. This process can also be reversed: we first identify an argument in the rebuttal passage, and then identify the review argument guided by it. Inspired by this, we design a mutual guidance framework (MGF) to address APE. Our approach first identifies the arguments in the review and rebuttal with a non-guided sequence tagger. Then, incorporating the representations of the identified arguments, a review-argument-guided sequence tagger and a rebuttal-argument-guided sequence tagger are used to determine argument pairs. Furthermore, we introduce an inter-sentence relation graph (ISRG) to better characterize the complex interactions between review and rebuttal. Unlike the previous two-subtask method, our approach can explicitly exploit argument-level semantic information to extract argument pairs more precisely.
+
+Experimental results show that our method significantly outperforms the state-of-the-art methods. Further analysis reveals the effectiveness of mutual guidance and ISRG. Also, our method shows a clear advantage when extracting one-to-many pairs.
+
+# 2 Task Definition
+
+Following the work of Cheng et al. (2020), we aim to automatically extract interactive argument pairs from peer review and rebuttal. Formally, given a review passage $\mathcal{V} = (s_1^v,s_2^v,\dots ,s_m^v)$ consisting of $m$ sentences and a rebuttal passage $\mathcal{B} = (s_1^b,s_2^b,\dots ,s_n^b)$ consisting of $n$ sentences, we first need to identify each argument in the review and rebuttal, obtaining a review argument spans set $\hat{\mathrm{X}}^{v} = \{\hat{\alpha}_{1}^{v},\hat{\alpha}_{2}^{v},\ldots \}$ and a rebuttal argument spans set $\hat{\mathrm{X}}^{b} = \{\hat{\alpha}_{1}^{b},\hat{\alpha}_{2}^{b},\ldots \}$ , where $\hat{\alpha}_i^v$ and $\hat{\alpha}_i^b$ are sentence-level spans in the review and rebuttal, respectively. Then, a set of interactive argument pairs $\hat{\mathrm{P}} = \{\hat{p}_1,\hat{p}_2,\dots \}$ should be extracted, where $\hat{p}_i\in \hat{\mathrm{X}}^v\times \hat{\mathrm{X}}^b$ is an interactive argument pair. For example, in Figure 1, the review argument spans set $\hat{\mathrm{X}}^v$ is $\{\hat{\alpha}_1^v,\hat{\alpha}_2^v\} = \{(3,5),(6,9)\}$ and the rebuttal argument spans set $\hat{\mathrm{X}}^b$ is $\{\hat{\alpha}_1^b,\hat{\alpha}_2^b\} = \{(2,3),(4,5)\}$ . The argument pairs set $\hat{\mathrm{P}}$ is $\{(\hat{\alpha}_1^v,\hat{\alpha}_1^b),(\hat{\alpha}_2^v,\hat{\alpha}_2^b)\}$ .
+
+# 3 Proposed Approach
+
+We present a mutual guidance framework with an inter-sentence relation graph for APE, named MGF. Our approach can better utilize the holistic argument-level semantics and thus explicitly capture the complex correlations between argument pairs. The overall architecture is shown in Figure 2. In the following, we first introduce the inter-sentence relation graph, then describe the mutual guidance framework.
+
+
+Figure 2: The architecture of MGF.
+
+# 3.1 Inter-sentence Relation Graph
+
+In order to facilitate argument pair extraction, we capture the latent sentence relations between review and rebuttal by an inter-sentence relation graph. This graph regards every sentence in review and rebuttal as nodes, and is constructed from two perspectives: 1) From the in-passage perspective, we build edges among the sentences of individual review/rebuttal passage (in-passage edges) based on the relative positions between them. This kind of edge can emphasize the correlation between two sentences with close distance, as they may be in the same argument. 2) From the cross-passage perspective, we build edges between review sentences and rebuttal sentences (cross-passage edges) based on the co-occurring words between two sentences. Intuitively, two arguments in an argument pair are likely to share certain words since they are discussing the same point. Also, we find that there are co-occurring words in more than $80\%$ of the argument pairs of the Review-Rebuttal dataset (Cheng et al., 2020) (ignoring the stop words). Thus, this kind of edge could help capture the interactions between argument pairs by modeling the cross-passage sentence relations.
+
+In-passage Edge. Based on the relative positions between two sentences, the weights of the edge between every two in-passage sentences $\omega^{I}(s_{i}, s_{j})$
+
+can be computed as:
+
+$$
+\omega^{I}\left(s_{i}, s_{j}\right) = \left\{ \begin{array}{ll} 1 + \left(1 - \frac{\mathcal{D}\left(s_{i}, s_{j}\right)}{\rho}\right) & \mathcal{D}\left(s_{i}, s_{j}\right) \leq \rho \\ 0 & \text{otherwise} \end{array} \right. \tag{1}
+$$
+
+where $s_i$ and $s_j$ are two sentences within an individual review/rebuttal passage, and $\mathcal{D}(s_i, s_j)$ denotes the relative distance between them. $\rho$ is the in-passage sentence distance threshold, and two sentences are connected only if their relative distance is not greater than $\rho$ . Since most passages are very long, this threshold $\rho$ can control the farthest retention distance, so as to reduce noise.
+
+Cross-passage Edge. Based on the co-occurring words between two sentences, the weights of the edge between every two cross-passage sentences $\omega^{C}(s_{i}, s_{j})$ can be computed as:
+
+$$
+\omega^{C}\left(s_{i}, s_{j}\right) = \left\{ \begin{array}{ll} 1 + \frac{\mathcal{C}\left(s_{i}, s_{j}\right)}{\mathcal{C}_{\text{max}}} & \mathcal{C}\left(s_{i}, s_{j}\right) > \varphi \\ 0 & \text{otherwise} \end{array} \right. \tag{2}
+$$
+
+where $s_i$ and $s_j$ are two sentences from two different passages, and $\mathcal{C}(s_i, s_j)$ denotes the number of co-occurring words of them. $\mathcal{C}_{\text{max}}$ is the maximum co-occurring words number of the corpus. $\varphi$ indicates the co-occurring words number threshold, and two passage sentences are connected only when the number of their co-occurring words is greater than $\varphi$ . Note that when calculating $\mathcal{C}(s_i, s_j)$ , we ignore the stop words.
+
+With the in-passage edges and the cross-passage edges defined above, the inter-sentence relation graph (ISRG) of review $\mathcal{V}$ and rebuttal $\mathcal{B}$ could be constructed, where the nodes are all sentences of review and rebuttal. Here, the adjacency matrix $\mathbf{A} \in \mathbb{R}^{(m + n) \times (m + n)}$ of ISRG can be derived as:
+
+$$
+\mathrm {A} _ {i j} = \left\{ \begin{array}{l l} \omega^ {I} \left(s _ {i}, s _ {j}\right) & s _ {i}, s _ {j} \in \mathcal {V} \\ \omega^ {I} \left(s _ {i}, s _ {j}\right) & s _ {i}, s _ {j} \in \mathcal {B} \\ \omega^ {C} \left(s _ {i}, s _ {j}\right) & s _ {i} \in \mathcal {V}, s _ {j} \in \mathcal {B} \\ \omega^ {C} \left(s _ {i}, s _ {j}\right) & s _ {i} \in \mathcal {B}, s _ {j} \in \mathcal {V} \end{array} \right. \tag {3}
+$$
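
As an illustration, the adjacency matrix of Eqs. (1)-(3) can be assembled as below (a minimal numpy sketch; the function, its arguments, and taking $\mathcal{C}_{\text{max}}$ over a single passage pair instead of the corpus are our own simplifications):

```python
import numpy as np

def build_isrg(m, n, cooccur, rho, phi):
    """Sketch of the ISRG adjacency matrix A (Eqs. 1-3).

    m, n     -- number of review / rebuttal sentences
    cooccur  -- (m, n) array of non-stopword co-occurrence counts C(s_i, s_j)
    rho, phi -- distance / co-occurrence thresholds
    """
    A = np.zeros((m + n, m + n))
    c_max = max(cooccur.max(), 1)
    # In-passage edges (Eq. 1): review nodes [0, m), rebuttal nodes [m, m + n).
    for lo, hi in ((0, m), (m, m + n)):
        for i in range(lo, hi):
            for j in range(lo, hi):
                d = abs(i - j)
                if d <= rho:
                    A[i, j] = 1 + (1 - d / rho)
    # Cross-passage edges (Eq. 2), placed symmetrically as in Eq. 3.
    for i in range(m):
        for j in range(n):
            if cooccur[i, j] > phi:
                A[i, m + j] = A[m + j, i] = 1 + cooccur[i, j] / c_max
    return A
```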
+
+# 3.2 Mutual Guidance Framework
+
+Our proposed mutual guidance framework (MGF) first encodes the sentences and employs a non-guided sequence tagger to identify the arguments in the review and rebuttal. Then, after obtaining relation-oriented sentence representations by graph convolution, two mutually guided taggers are used to extract argument pairs.
+
+Sentence Encoder. We apply BERT (Devlin et al., 2019) to obtain the representation of each sentence and use LSTM (Hochreiter and Schmidhuber, 1997) to encode the contextual long-term dependencies of sentences. Specifically, for each sentence $s_i$ from $\mathcal{V}$ or $\mathcal{B}$ , we feed it into BERT and get the sentence embedding $\mathbf{e}_i \in \mathbb{R}^{d_b}$ by mean pooling over all token representations, where $d_b$ is the vector dimension of the last layer of BERT. Hence, the sentences in $\mathcal{V}$ and $\mathcal{B}$ can be represented as $\mathbf{V} = (\mathbf{e}_1^v, \mathbf{e}_2^v, \dots, \mathbf{e}_m^v)$ and $\mathbf{B} = (\mathbf{e}_1^b, \mathbf{e}_2^b, \dots, \mathbf{e}_n^b)$ . Subsequently, $\mathbf{V}$ and $\mathbf{B}$ are separately fed into a bidirectional LSTM (BiLSTM), and the hidden states from both directions of each sentence are concatenated as the contextual sentence representation. In this way, the contextual sentence representation matrix of $\mathcal{V}$ and $\mathcal{B}$ can be derived:
+
+$$
+\mathbf {H} ^ {v} = \left(\mathbf {h} _ {1} ^ {v}, \mathbf {h} _ {2} ^ {v}, \dots , \mathbf {h} _ {m} ^ {v}\right) \tag {4}
+$$
+
+$$
+\mathbf {H} ^ {b} = \left(\mathbf {h} _ {1} ^ {b}, \mathbf {h} _ {2} ^ {b}, \dots , \mathbf {h} _ {n} ^ {b}\right) \tag {5}
+$$
+
+where $\mathbf{h}_i^v/\mathbf{h}_i^b\in \mathbb{R}^{2d_l}$ is the contextual sentence representation of the $i$ -th sentence in the review/rebuttal, and $d_{l}$ is the hidden size of the LSTM.
+
+Non-guided Tagger. We use a CRF sequence tagger, named the non-guided tagger, to identify all potential arguments; it provides explicit argument span information for the subsequent argument pair extraction. Concretely, we feed the contextual sentence representations $\mathbf{H}^v$ and $\mathbf{H}^b$ into
+
+this CRF tagger, and the predicted label sequences for review and rebuttal could be obtained:
+
+$$
+Y ^ {v} = \left(y _ {1} ^ {v}, y _ {2} ^ {v}, \dots , y _ {m} ^ {v}\right) \tag {6}
+$$
+
+$$
+Y ^ {b} = \left(y _ {1} ^ {b}, y _ {2} ^ {b}, \dots , y _ {n} ^ {b}\right) \tag {7}
+$$
+
+where $y_{i}^{v} / y_{i}^{b}$ is the IOBES label for the $i$ -th sentence of review/rebuttal.
+
+According to these two label sequences, we could obtain the potential argument spans for review and rebuttal, i.e. $\mathrm{X}^v = \{\alpha_1^v,\alpha_2^v,\ldots \}$ and $\mathrm{X}^b = \{\alpha_1^b,\alpha_2^b,\ldots \}$ , where $\alpha_{i}^{v} / \alpha_{i}^{b}$ is the $i$ -th predicted argument span of review/rebuttal.
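Decoding spans from an IOBES label sequence can be sketched as follows (an illustrative decoder, not the authors' code):

```python
def iobes_to_spans(labels):
    """Decode a sentence-level IOBES sequence into (begin, end) argument spans,
    1-indexed and inclusive."""
    spans, begin = [], None
    for i, lab in enumerate(labels, start=1):
        if lab == "S":            # single-sentence argument
            spans.append((i, i))
            begin = None
        elif lab == "B":          # argument begins
            begin = i
        elif lab == "E" and begin is not None:  # argument ends
            spans.append((begin, i))
            begin = None
    return spans
```

Applied to the sequence O O B I E B I I E, this recovers the review spans $\{(3,5),(6,9)\}$ of the example in Section 2.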
+
+Graph Aggregation Layer. Based on the inter-sentence relation graph constructed in Section 3.1, we use the contextual sentence representations $\mathbf{H}^v\in \mathbb{R}^{m\times 2d_l}$ and $\mathbf{H}^b\in \mathbb{R}^{n\times 2d_l}$ as the feature vectors of the $(m + n)$ nodes in this graph. Then, we employ a graph convolutional network (GCN) (Kipf and Welling, 2017) to conduct information exchange between nodes:
+
+$$
+\mathbf {G} ^ {(0)} = \left[ \mathbf {H} ^ {v}; \mathbf {H} ^ {b} \right] \tag {8}
+$$
+
+$$
+\mathbf {G} ^ {(l + 1)} = \sigma (\widetilde {\mathbf {A}} \mathbf {G} ^ {(l)} \mathbf {W} ^ {(l)} + \mathbf {b} ^ {(l)}) \tag {9}
+$$
+
+where $\mathbf{G}^{(l)}\in \mathbb{R}^{(m + n)\times 2d_l}$ contains all node vectors in the $l$ -th layer of the GCN and $\widetilde{\mathbf{A}}$ is the normalized adjacency matrix. $\mathbf{W}^{(l)}$ and $\mathbf{b}^{(l)}$ are a learnable parameter matrix and bias term, and $\sigma (\cdot)$ is the ReLU activation function commonly used in GCNs.
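
One GCN layer (Eq. 9) can be sketched with numpy as below (the symmetric degree normalization of $\widetilde{\mathbf{A}}$ is one common choice; the paper does not spell out its normalization):

```python
import numpy as np

def gcn_layer(A, G, W, b):
    """One GCN layer (Eq. 9): ReLU(A_norm @ G @ W + b),
    with A_norm = D^{-1/2} A D^{-1/2} (assumed normalization)."""
    deg = A.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    nz = deg > 0
    d_inv_sqrt[nz] = deg[nz] ** -0.5
    A_norm = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    return np.maximum(0.0, A_norm @ G @ W + b)
```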
+
+We keep the node vectors of the last layer of the GCN as the relation-oriented sentence representations of sentences for review $(\mathbf{G}^v)$ and rebuttal $(\mathbf{G}^b)$ :
+
+$$
+\mathbf {G} ^ {(L)} = \left[ \mathbf {G} ^ {v}; \mathbf {G} ^ {b} \right] \tag {10}
+$$
+
+$$
+\mathbf {G} ^ {v} = \left(\mathbf {g} _ {1} ^ {v}, \mathbf {g} _ {2} ^ {v}, \dots , \mathbf {g} _ {m} ^ {v}\right) \tag {11}
+$$
+
+$$
+\mathbf {G} ^ {b} = \left(\mathbf {g} _ {1} ^ {b}, \mathbf {g} _ {2} ^ {b}, \dots , \mathbf {g} _ {n} ^ {b}\right) \tag {12}
+$$
+
+where $\mathbf{g}_i^v /\mathbf{g}_i^b\in \mathbb{R}^{d_g}$ is the relation-oriented representation for the $i$ -th sentence in review/rebuttal, and $d_{g}$ is the output feature dimension of GCN.
+
Mutually Guided Taggers. With the argument span sets ( $\mathbf{X}^v$ and $\mathbf{X}^b$ ) produced by the non-guided tagger and the relation-oriented sentence representations ( $\mathbf{G}^v$ and $\mathbf{G}^b$ ) produced by the GCN, we can extract argument pairs with two mutually guided taggers, i.e. the review-argument-guided (RVAG) tagger and the rebuttal-argument-guided (RBAG) tagger. These two taggers guide each other and cooperate to extract argument pairs.
+
For the RVAG tagger, we first use the review argument span set $\mathbf{X}^v$ to produce a representation of each potential review argument from $\mathbf{G}^v$ by mean pooling over the sentence representations within each argument span. Specifically, for the $k$ -th argument span $\alpha_k^v = (b_k, e_k)$ in $\mathbf{X}^v$ , the contextual representation of this argument, $\mathbf{a}_k^v \in \mathbb{R}^{d_g}$ , is obtained by:
+
+$$
+\mathbf {a} _ {k} ^ {v} = \frac {1}{e _ {k} - b _ {k} + 1} \sum_ {i = b _ {k}} ^ {e _ {k}} \mathbf {g} _ {i} ^ {v} \tag {13}
+$$
+
In this way, the review argument representations can be collected as $\mathbf{Q}^v = (\mathbf{a}_1^v,\mathbf{a}_2^v,\ldots)$ . Subsequently, to let the $k$ -th review argument guide the identification of its paired rebuttal arguments, we concatenate $\mathbf{a}_k^v$ to each rebuttal sentence representation $\mathbf{g}_i^b$ and apply another BiLSTM to obtain the RVAG rebuttal sentence representations:
+
$$
\overrightarrow {\mathbf {h}} _ {i} ^ {b, g} = \overrightarrow {\operatorname {LSTM}} \left(\mathbf {g} _ {i} ^ {b} \oplus \mathbf {a} _ {k} ^ {v}, \overrightarrow {\mathbf {h}} _ {i - 1} ^ {b, g}\right) \tag {14}
$$

$$
\overleftarrow {\mathbf {h}} _ {i} ^ {b, g} = \overleftarrow {\operatorname {LSTM}} \left(\mathbf {g} _ {i} ^ {b} \oplus \mathbf {a} _ {k} ^ {v}, \overleftarrow {\mathbf {h}} _ {i + 1} ^ {b, g}\right) \tag {15}
$$

$$
\mathbf {h} _ {i} ^ {b, g} = \overrightarrow {\mathbf {h}} _ {i} ^ {b, g} \oplus \overleftarrow {\mathbf {h}} _ {i} ^ {b, g} \tag {16}
$$
+
where $\mathbf{h}_i^{b,g}\in \mathbb{R}^{2d_l}$ is the RVAG representation of the $i$ -th sentence in the rebuttal. In this way, the RVAG rebuttal sentence representation matrix $\mathbf{H}^{b,g} = (\mathbf{h}_1^{b,g},\mathbf{h}_2^{b,g},\dots ,\mathbf{h}_n^{b,g})$ is obtained. Then, we feed $\mathbf{H}^{b,g}$ into a CRF layer to identify the arguments that form pairs with the $k$ -th review argument $\alpha_{k}^{v}$ .
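The feature construction for the RVAG tagger (Eq. 13 plus the guidance concatenation) can be sketched as follows; `span_mean` and `rvag_inputs` are hypothetical helpers, and the toy matrices stand in for the GCN outputs:

```python
import numpy as np

# Sketch of Eq. (13) and the RVAG input construction: mean-pool the
# relation-oriented sentence vectors inside the k-th review span, then
# concatenate that argument vector to every rebuttal sentence vector
# before the guided BiLSTM-CRF.
def span_mean(g, span):
    b, e = span                        # inclusive sentence indices
    return g[b:e + 1].mean(axis=0)     # Eq. (13)

def rvag_inputs(g_rebuttal, a_k):
    # [g_i^b (+) a_k^v] for every rebuttal sentence i
    return np.hstack([g_rebuttal, np.tile(a_k, (g_rebuttal.shape[0], 1))])

d_g, m, n = 4, 5, 3
g_v = np.arange(m * d_g, dtype=float).reshape(m, d_g)  # review sentences
g_b = np.ones((n, d_g))                                # rebuttal sentences
a_k = span_mean(g_v, (1, 2))       # review argument spanning sentences 1-2
print(rvag_inputs(g_b, a_k).shape)  # (3, 8)
```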
+
+Similarly, the RBAG tagger can be conducted in the same manner, except that each identified rebuttal argument is used to guide the identification of its paired review arguments.
+
+# 3.3 Training
+
+The loss function of MGF consists of two parts, one for AM and the other for APE.
+
+For AM, we maximize the log-likelihood of the non-guided tagger:
+
+$$
+\mathcal {L} _ {a m} = \log p \left(\hat {\mathrm {Y}} ^ {v} | \mathcal {V}\right) + \log p \left(\hat {\mathrm {Y}} ^ {b} | \mathcal {B}\right) \tag {17}
+$$
+
+where $\hat{\mathrm{Y}}^v$ and $\hat{\mathrm{Y}}^b$ are the ground-truth IOBES label sequences of the review and rebuttal.
+
For APE, the log-likelihoods of the RVAG tagger and the RBAG tagger are as follows:
+
$$
\mathcal {L} _ {a p e} = \sum_ {i} \log p \left(\hat {\mathrm {Y}} _ {i} ^ {b, r} \mid \mathcal {B}, \mathrm {X} ^ {v}\right) + \sum_ {i} \log p \left(\hat {\mathrm {Y}} _ {i} ^ {v, r} \mid \mathcal {V}, \mathrm {X} ^ {b}\right) \tag {18}
$$
+
where $\hat{\mathrm{Y}}_i^{v,r}$ and $\hat{\mathrm{Y}}_i^{b,r}$ are the $i$ -th relation-oriented ground-truth IOBES label sequences of the review and rebuttal. Concretely, all review arguments derived from the label sequence $\hat{\mathrm{Y}}_i^{v,r}$ are paired with the $i$ -th argument of the rebuttal.
+
We sum the loss functions of the above two parts to obtain the final training objective of MGF $^1$ :
+
+$$
+\mathcal {L} = \mathcal {L} _ {a m} + \mathcal {L} _ {a p e} \tag {19}
+$$
+
+# 3.4 Inference
+
During inference, we fuse the predictions of the RVAG tagger and the RBAG tagger to obtain argument pairs. Specifically, let $\mathrm{Y}_k^{v,r}$ denote the relation-oriented label sequence predicted by the RBAG tagger, from which all review argument spans paired with the $k$ -th rebuttal argument can be obtained. We denote these review argument spans as $\mathrm{X}_k^{v,r} = (\alpha_{k,1}^v,\alpha_{k,2}^v,\ldots)$ and the $k$ -th rebuttal argument span as $\alpha_{k}^{b}$ . Accordingly, the argument pairs derived from $\mathrm{Y}_k^{v,r}$ can be denoted as $\mathrm{P}_k^{v,r} = ((\alpha_{k,1}^v,\alpha_k^b),(\alpha_{k,2}^v,\alpha_k^b),\ldots)$ . Further, we can obtain all argument pairs predicted by the RBAG tagger, $\mathrm{P}^{rbag}$ , by:
+
+$$
+\mathrm {P} ^ {r b a g} = \bigcup_ {k} \mathrm {P} _ {k} ^ {v, r} \tag {20}
+$$
+
Similarly, all argument pairs predicted by the RVAG tagger, $\mathrm{P}^{rvag}$ , can be obtained in the same manner.
+
Then, we take the union of $\mathrm{P}^{rvag}$ and $\mathrm{P}^{rbag}$ as the final prediction of argument pairs, i.e. $\mathrm{P} = \mathrm{P}^{rvag} \cup \mathrm{P}^{rbag}$ . Our preliminary experimental results show that this approach can effectively fuse the predictions of the RVAG tagger and the RBAG tagger.
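The fusion step reduces to a set union over predicted pairs, as in this sketch; the spans below are illustrative, with each pair a tuple of (review span, rebuttal span):

```python
# Sketch of the inference-time fusion (Section 3.4): the final
# prediction is the union of the two taggers' pair sets.
p_rvag = {((0, 2), (1, 1)), ((4, 5), (3, 4))}   # pairs from RVAG tagger
p_rbag = {((0, 2), (1, 1)), ((0, 2), (6, 6))}   # pairs from RBAG tagger

p_final = p_rvag | p_rbag                       # P = P^rvag U P^rbag
print(sorted(p_final))
# [((0, 2), (1, 1)), ((0, 2), (6, 6)), ((4, 5), (3, 4))]
```

Duplicate pairs predicted by both taggers are counted once, so agreement between the taggers never hurts precision.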
+
+# 4 Experimental Setup
+
+# 4.1 Dataset
+
We conduct experiments on the Review-Rebuttal (RR) dataset proposed by Cheng et al. (2020). This dataset contains 4,764 review-rebuttal passage pairs of ICLR submissions collected from openreview.net. Cheng et al. (2020) provided two versions of splitting the RR dataset, namely RR-submission and RR-passage. In both versions, the RR dataset is split with a ratio of 8:1:1 for training, development, and testing. In RR-submission, multiple review-rebuttal passage pairs of the same paper submission are in
+
| | Statistic | Value |
| --- | --- | --- |
| RR | # Review-rebuttal pairs | 4,764 |
| | # Argument pairs | 18.6K |
| | # One-to-one argument pairs | 13.0K |
| | # One-to-many argument pairs | 5.6K |
| Rev | # Sentences | 99.8K |
| | # Arguments | 23.2K |
| | Avg. # sentences per passage | 21.0 |
| | Avg. # sentences per argument | 2.5 |
| Reb | # Sentences | 94.9K |
| | # Arguments | 17.7K |
| | Avg. # sentences per passage | 19.9 |
| | Avg. # sentences per argument | 3.8 |
+
+Table 1: Statistics of RR dataset.
+
the same train/development/test set, whereas RR-passage does not guarantee this. This distinction makes RR-submission more challenging, so our further experiments are conducted on RR-submission. The detailed statistics of the RR dataset are summarized in Table 1.
+
+# 4.2 Implementation Details
+
We evaluate performance on two subtasks, namely argument mining (AM) and argument pair extraction (APE). Unlike Cheng et al. (2020), we do not use sentence pairing as an evaluation metric, since we extract argument pairs directly instead of treating sentence pairing as a subtask. We employ precision (Pre.), recall (Rec.), and $\mathrm{F}_1$ scores to measure the performance on AM and APE. All experiments are performed 5 times with different random seeds, and the scores are averaged.
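One plausible reading of these metrics is exact-match scoring over predicted spans (for AM) or pairs (for APE), sketched below; the exact matching criteria of the evaluation script may differ, and the data is illustrative:

```python
# Sketch of exact-match precision/recall/F1: a predicted span (or pair)
# counts as a true positive only if it exactly matches a gold one.
def prf1(pred, gold):
    tp = len(set(pred) & set(gold))
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

pred = [(0, 2), (4, 5), (7, 7)]   # predicted argument spans
gold = [(0, 2), (4, 6)]           # gold argument spans
print(prf1(pred, gold))           # (0.3333333333333333, 0.5, 0.4)
```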
+
Regarding the implementation of our model $^2$ , we adopt the uncased $\mathrm{BERT}_{\mathrm{Base}}$ as our base encoder, which is fine-tuned during training. All LSTMs used in our model have one layer with a hidden size of 512. Note that the parameters of the LSTMs and CRFs used in the three taggers are not shared. The AdamW optimizer (Kingma and Ba, 2015) is employed for parameter optimization, and the initial learning rates for the BERT layer and the other layers are set to 1e-5 and 1e-3, respectively. The dropout rate (Srivastava et al., 2014) is set to 0.5 and the batch size to 2. Our model is implemented in PyTorch (Paszke et al., 2019) and trained on an NVIDIA Tesla V100 GPU. We train our model for 10 epochs with an early stopping strategy, and choose the best model parameters based on performance on the development set (the average of the $F_1$ scores of AM and APE).
+
+# 4.3 Baselines
+
+To evaluate our mutual guidance framework (MGF), we compare it with several baselines:
+
PL-H-LSTM-CRF (Cheng et al., 2020) independently trains a sequence labeling model and a sentence relation classification model, and then pipelines their results together to obtain argument pairs.
+
MT-H-LSTM-CRF (Cheng et al., 2020) is similar to PL-H-LSTM-CRF, except that it trains the two subtasks in a multi-task framework. This is the current state-of-the-art method on the RR dataset. Note that the BERT encoder used in this model is not fine-tuned during training.
+
+Besides, we implemented two additional baselines for further comparisons:
+
Two-Step is another pipeline model. Unlike PL-H-LSTM-CRF, this model first identifies all potential arguments by sequence labeling, and then matches review arguments and rebuttal arguments via the Cartesian product to determine argument pairs. Both steps are based on BERT.
+
+Non-FT-MGF is the implementation of our framework based on the sentence encoding method of MT-H-LSTM-CRF. It does not fine-tune BERT for a fair comparison with MT-H-LSTM-CRF.
+
+# 5 Results and Analysis
+
+# 5.1 Main Results
+
The overall performance of our proposed framework and the baselines is shown in Table 2. Our model achieves the best performance on both RR-submission and RR-passage. On RR-submission, our model outperforms the current state-of-the-art model MT-H-LSTM-CRF by $1.01\%$ and $7.94\%$ in $\mathrm{F_1}$ score on AM and APE, respectively. On RR-passage, our model also outperforms MT-H-LSTM-CRF, obtaining $0.79\%$ and $7.01\%$ higher $\mathrm{F_1}$ scores on AM and APE.
+
We also report the results of Non-FT-MGF, where the sentence encoder of MGF is replaced by that of MT-H-LSTM-CRF. Even without BERT fine-tuning, Non-FT-MGF still outperforms MT-H-LSTM-CRF, which demonstrates that the performance gains we achieve are not solely due to BERT fine-tuning. Comparing MGF with Non-FT-MGF further shows that BERT fine-tuning brings additional improvements.
+
| Data | Method | AM Pre. | AM Rec. | AM F1 | APE Pre. | APE Rec. | APE F1 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| RR-submission | PL-H-LSTM-CRF | 67.63 | 68.51 | 68.06 | 19.86 | 19.94 | 19.90 |
| | MT-H-LSTM-CRF | 70.09 | 70.14 | 70.12 | 26.69 | 26.24 | 26.46 |
| | Two-Step | **70.94** | 70.77 | 70.86 | 33.11 | 24.67 | 28.27 |
| | Non-FT-MGF | 69.18 | 69.94 | 69.55 | 33.12 | 33.69 | 33.40 |
| | MGF (Ours) | 70.40 | **71.87** | **71.13** | **34.23** | **34.57** | **34.40** |
| RR-passage | PL-H-LSTM-CRF | 73.10 | 67.65 | 70.27 | 21.24 | 19.30 | 20.23 |
| | MT-H-LSTM-CRF | 71.85 | 71.01 | 71.43 | 30.08 | 29.55 | 29.81 |
| | Two-Step | 71.94 | **71.51** | 71.72 | 34.31 | 26.87 | 30.14 |
| | Non-FT-MGF | 71.22 | 70.49 | 70.85 | 35.20 | 34.11 | 34.65 |
| | MGF (Ours) | **73.62** | 70.88 | **72.22** | **38.03** | **35.68** | **36.82** |

Table 2: Comparison results with baselines on RR-submission and RR-passage (\%). The best scores are in bold.

Figure 3: Detailed results of AM $(\%)$ . * indicates the results we replicated, as the authors of MT-H-LSTM-CRF did not provide these results.
+
| Method | APE F1 | Δ |
| --- | --- | --- |
| MGF (Ours) | **34.40** | - |
| w/o RVAG Tagger | 33.11 | -1.29 |
| w/o RBAG Tagger | 31.94 | -2.46 |
| w/o ISRG | 30.65 | -3.75 |
| w/o IPE | 33.12 | -1.28 |
| w/o CPE | 32.33 | -2.07 |
+
+# 5.2 Detailed Results of Argument Mining
+
Figure 3 shows the detailed results of AM on RR-submission. Here, we compare the performance of MGF and MT-H-LSTM-CRF on review passages and rebuttal passages, respectively. Since rebuttal passages are more clearly arranged and structured than review passages (Cheng et al., 2020), both models perform better on the former. Although our MGF yields AM results similar to MT-H-LSTM-CRF on rebuttal passages, it shows significant improvement on the more complex review passages.
+
+# 5.3 Ablation Study
+
+As shown in Table 3, we conduct ablation experiments to further evaluate the contribution of each
+
+Table 3: The results of ablation experiments on RR-submission $(\%)$ . The best scores are in bold.
+
| Type of pairs | Method | APE Rec. |
| --- | --- | --- |
| All | MT-H-LSTM-CRF* | 26.05 |
| | MGF (Ours) | 34.57 |
| One-to-one | MT-H-LSTM-CRF* | 35.86 |
| | MGF (Ours) | 41.37 |
| One-to-many | MT-H-LSTM-CRF* | 11.09 |
| | MGF (Ours) | 17.71 |
+
+Table 4: Results of extracting one-to-many pairs on RR-submission (\%). Similar to Figure 3, * denotes the results that we replicated.
+
component in our proposed MGF. The $\mathrm{F_1}$ score decreases heavily without mutual guidance. Specifically, the $\mathrm{F_1}$ score of APE decreases by $2.46\%$ if only the RVAG tagger is used ( $w/o$ RBAG Tagger). Similarly, using only the RBAG tagger ( $w/o$ RVAG Tagger) decreases the $\mathrm{F_1}$ score by $1.29\%$ . These results validate the effectiveness of our proposed mutual guidance framework. Furthermore, we observe that using only the RBAG tagger performs better than using only the RVAG tagger. This is likely because, on the AM task, the identification of rebuttal arguments is more accurate than that of review arguments (Figure 3), leading to better results when identified rebuttal arguments are used to guide argument pair extraction.
+
It can be observed that without our proposed inter-sentence relation graph ( $w/o$ ISRG), the $\mathrm{F_1}$ score drops heavily $(-3.75\%)$ . Going one step further, excluding only the in-passage edges ( $w/o$ IPE) decreases the $\mathrm{F_1}$ score by $1.28\%$ , indicating the necessity of capturing interactions between sentences that are close to each other. Likewise, incorporating the cross-passage edges (cf. $w/o$ CPE) brings an even larger $\mathrm{F_1}$ improvement $(2.07\%)$ , because cross-passage edges model the sentence relations across the two passages and thus facilitate the identification of interactive argument pairs.
+
+
+Figure 4: Impacts of graph parameters.
+
+# 5.4 Results of Extracting One-to-many Pairs
+
+We further compare the results on extracting one-to-many pairs on RR-submission in Table 4. We divide argument pairs of the test set into two subsets: one subset contains only one-to-one argument pairs, and the other subset contains only one-to-many argument pairs. Then, we compare the recall of MT-H-LSTM-CRF and MGF on the two subsets.
+
It can be seen that our MGF model consistently outperforms MT-H-LSTM-CRF on both subsets. Furthermore, MGF is relatively more effective for one-to-many argument pairs, with a recall improvement of $6.62\%$ . This improvement stems from our model's ability to take the entire review/rebuttal sequence into account when extracting argument pairs, so that multiple arguments forming pairs with the guiding argument can be extracted simultaneously through sequence tagging.
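The subset split used here can be sketched as follows; `split_pairs` is a hypothetical helper, the pair data is illustrative, and we assume a pair counts as one-to-many whenever either of its arguments participates in more than one gold pair:

```python
from collections import Counter

# Sketch of the subset split in Section 5.4: a pair is "one-to-many"
# if its review argument or its rebuttal argument appears in more than
# one pair; otherwise it is "one-to-one".
def split_pairs(pairs):
    rev = Counter(p[0] for p in pairs)   # how often each review arg occurs
    reb = Counter(p[1] for p in pairs)   # how often each rebuttal arg occurs
    one_to_many = [p for p in pairs if rev[p[0]] > 1 or reb[p[1]] > 1]
    one_to_one = [p for p in pairs if rev[p[0]] == 1 and reb[p[1]] == 1]
    return one_to_one, one_to_many

pairs = [("r1", "b1"), ("r2", "b2"), ("r2", "b3")]
print(split_pairs(pairs))
# ([('r1', 'b1')], [('r2', 'b2'), ('r2', 'b3')])
```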
+
+# 5.5 Impacts of Graph Parameters
+
The inter-sentence relation graph for modeling latent inter-sentence relations is a critical part of our model. Therefore, we further investigate the impact of the graph parameters on the performance of MGF, including the threshold on the in-passage sentence distance $\rho$ , the threshold on the number of co-occurring words $\varphi$ , and the number of GCN layers $l$ . The detailed results are shown in Figure 4.
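Under the construction these parameters control (in-passage edges between sentences within distance $\rho$, cross-passage edges between sentence pairs sharing more than $\varphi$ words), a toy edge builder could look like the following; tokenization by whitespace and all names are illustrative assumptions:

```python
# Sketch of the inter-sentence relation graph (Section 3.1 parameters):
# nodes 0..m-1 are review sentences, nodes m..m+n-1 are rebuttal
# sentences; edges are returned as (smaller index, larger index) tuples.
def build_edges(review, rebuttal, rho=1, phi=2):
    edges = set()
    for sents in (review, rebuttal):
        base = 0 if sents is review else len(review)
        for i in range(len(sents)):
            for j in range(i + 1, min(i + rho + 1, len(sents))):
                edges.add((base + i, base + j))      # in-passage edge
    for i, sv in enumerate(review):
        for j, sb in enumerate(rebuttal):
            if len(set(sv.split()) & set(sb.split())) > phi:
                edges.add((i, len(review) + j))      # cross-passage edge
    return edges

review = ["the proposed model lacks ablation studies", "please add them"]
rebuttal = ["we add ablation studies for the proposed model", "thanks"]
print(sorted(build_edges(review, rebuttal)))  # [(0, 1), (0, 2), (2, 3)]
```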
+
From Figure 4(a), our approach achieves the best performance with $\rho$ set to 1. With this setting, each sentence node in the graph is directly connected to the two sentence nodes adjacent to it in the passage. This phenomenon is consistent with our observation in Table 1 that each argument contains 3.1 sentences on average. Since the majority of arguments contain only a few sentences, we should not connect sentences that are far apart; otherwise, the semantic representations of arguments will be distorted.
+
According to Figure 4(b), we find it most appropriate to set $\varphi$ to 2. This suggests that two sentences with more than 2 co-occurring words are more likely to come from two inter-related arguments. If we set $\varphi$ too small, too much noise will be introduced. Conversely, if we set $\varphi$ too large, many sentence pairs from inter-related arguments will be ignored by the graph.
+
For the number of GCN layers $l$ , our approach performs best with a single GCN layer, indicating that the inter-sentence relations can be modeled sufficiently without stacking many GCN layers.
+
+# 5.6 Error Analysis
+
To gain deeper insight into our method, we analyze the predictions of our model. Specifically, we randomly sampled 100 examples from the test set of RR-submission and manually inspected the prediction results. We summarize the two major causes of errors below.
+
+- It is difficult to extract argument pairs if there are no co-occurring or semantically similar words in two arguments. In this scenario, our proposed ISRG based on co-occurring words cannot provide valid information. Also, it is hard for the pre-trained model to capture the association between such argument pairs.
+- In some cases, our model identifies only a few important sentences instead of a complete argument. However, in some other cases, multiple consecutive arguments are identified as one argument. The reason is that we frame both AM and APE as sentence-level sequence tagging tasks. For such a task, the boundaries of arguments are often diverse and difficult to determine, so the model often misidentifies them.
+
+# 6 Related Work
+
Most existing studies in the field of argumentation mining focus on monological argumentation, such as argumentation structure parsing (Stab and Gurevych, 2017; Afantenos et al., 2018; Kuribayashi et al., 2019; Hua et al., 2019b; Morio et al., 2020), automated essay scoring (Wachsmuth et al., 2016; Ke et al., 2018; Song et al., 2020), argument quality assessment (Wachsmuth et al., 2017; Gretz et al., 2020; Lauscher et al., 2020), argumentation strategy modeling (Khatib et al., 2016, 2017), etc.
+
Since real-life argumentation usually takes the form of dialogue, some prior work focuses on dialogical argumentation. Morio and Fujita (2018) employed a pointer network to predict argumentation structures in discussion threads. Chakrabarty et al. (2019) studied the relations between argument components in online discussion forums with pre-trained models and discourse relations. Ji et al. (2019) proposed a discrete argument representation learning method to extract argument pairs. However, these studies assumed that the boundaries of arguments are given. Recently, Cheng et al. (2020) presented a new task named argument pair extraction, which is more challenging as it requires both identifying arguments in plain text and extracting the interactive argument pairs.
+
Our work is closely related to the argument relation prediction task. Many studies of argumentation structure parsing include argumentative relation prediction as a subtask (Kuribayashi et al., 2019; Morio et al., 2020; Bao et al., 2021). Since argument relation prediction is highly challenging, more and more researchers have recently studied it as an independent task (Chen et al., 2018; Opitz and Frank, 2019; Cocarascu et al., 2020; Jo et al., 2021). Despite the strong connection, the APE task is more challenging than argument relation prediction. Specifically, in argument relation prediction, the arguments are given. In APE, only two plain documents without any pre-labeled information are given, and we need to identify arguments in the two documents and determine argument relations simultaneously.
+
Graph neural networks (GNNs) have shown promising performance in many NLP tasks, such as text classification (Yao et al., 2019; Ragesh et al., 2021), question answering (Tu et al., 2019; Qiu et al., 2019), sentiment analysis (Liang et al., 2021, 2020), text summarization (Xu et al., 2020; Yasunaga et al., 2017), etc. Recently, some works have attempted to introduce GNNs into argumentation mining. Morio and Fujita (2019) performed argument component identification and classification with syntactic graph convolutional networks. Huang et al. (2021) proposed a heterogeneous argument attention network for argumentation persuasiveness prediction. In this paper, our proposed inter-sentence relation graph effectively models the inter-relations between two sentences, thus facilitating APE.
+
+# 7 Conclusion
+
In this paper, we propose an effective mutual guidance framework for argument pair extraction, named MGF, which enables arguments from two passages to guide each other in extracting interactive argument pairs. In addition, we introduce an inter-sentence relation graph into MGF, which effectively models the inter-relations between two sentences and thereby improves the extraction of argument pairs. The experimental results demonstrate the effectiveness of our method. In the future, we plan to apply our method to datasets from more diverse domains beyond peer review and rebuttal, such as social networks, debate competitions, etc.
+
+# Acknowledgments
+
+This work was partially supported by the National Natural Science Foundation of China (61632011, 61876053, 62006062, 62176076), the Guangdong Province Covid-19 Pandemic Control Research Funding (2020KZDZX1224), the Shenzhen Foundational Research Funding (JCYJ20180507183527919 and JCYJ20200109113441941), China Postdoctoral Science Foundation (2020M670912), Joint Lab of HITSZ and China Merchants Securities, Youth Innovation Promotion Association of CAS China (No. 2020357), and Shenzhen Science and Technology Innovation Program (Grant No. KQTD20190929172835662).
+
+# References
+
+Stergos D. Afantenos, Andreas Peldszus, and Manfred Stede. 2018. Comparing decoding mechanisms for parsing argumentative structures. Argument Comput., 9(3):177-192.
+Jianzhu Bao, Chuang Fan, Jipeng Wu, Yixue Dang, Jiachen Du, and Ruifeng Xu. 2021. A neural transition-based model for argumentation mining. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1:
+
+Long Papers), Virtual Event, August 1-6, 2021, pages 6354-6364. Association for Computational Linguistics.
+Tuhin Chakrabarty, Christopher Hidey, Smaranda Muresan, Kathy McKeown, and Alyssa Hwang. 2019. AMPERSAND: argument mining for persuasive online discussions. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 2933-2943. Association for Computational Linguistics.
+Di Chen, Jiachen Du, Lidong Bing, and Ruifeng Xu. 2018. Hybrid neural attention for agreement/disagreement inference in online debates. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 665-670. Association for Computational Linguistics.
+Liying Cheng, Lidong Bing, Qian Yu, Wei Lu, and Luo Si. 2020. APE: argument pair extraction from peer review and rebuttal via multi-task learning. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 7000-7011. Association for Computational Linguistics.
+Alan Darmasaputra Chowanda, Albert Richard Sanyoto, Derwin Suhartono, and Criscentia Jessica Setiadi. 2017. Automatic debate text summarization in online debate forum. In ICCSCI, pages 11-19.
+Oana Cocarascu, Elena Cabrio, Serena Villata, and Francesca Toni. 2020. Dataset independent baselines for relation prediction in argument mining. In Computational Models of Argument - Proceedings of COMMA 2020, Perugia, Italy, September 4-11, 2020, volume 326 of Frontiers in Artificial Intelligence and Applications, pages 45-52. IOS Press.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171-4186. Association for Computational Linguistics.
+Steffen Eger, Johannes Daxenberger, and Iryna Gurevych. 2017. Neural end-to-end learning for computational argumentation mining. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 11-22. Association for Computational Linguistics.
+
+Michael Fromm, Evgeniy Faerman, Max Berrendorf, Siddharth Bhargava, Ruoxia Qi, Yao Zhang, Lukas Dennert, Sophia Selle, Yang Mao, and Thomas Seidl. 2020. Argument mining driven analysis of peerreviews. CoRR, abs/2012.07743.
+Shai Gretz, Roni Friedman, Edo Cohen-Karlik, Assaf Toledo, Dan Lahav, Ranit Aharonov, and Noam Slonim. 2020. A large-scale dataset for argument quality ranking: Construction and analysis. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 7805-7813. AAAI Press.
+Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Comput., 9(8):1735-1780.
+Xinyu Hua, Zhe Hu, and Lu Wang. 2019a. Argument generation with retrieval, planning, and realization. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 2661-2672. Association for Computational Linguistics.
+Xinyu Hua, Mitko Nikolov, Nikhil Badugu, and Lu Wang. 2019b. Argument mining for understanding peer reviews. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 2131-2137. Association for Computational Linguistics.
+Kuo Yu Huang, Hen-Hsen Huang, and Hsin-Hsi Chen. 2021. HARGAN: heterogeneous argument attention network for persuasiveness prediction. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pages 13045-13054. AAAI Press.
+Lu Ji, Zhongyu Wei, Jing Li, Qi Zhang, and Xuanjing Huang. 2019. Discrete argument representation learning for interactive argument pair identification. CoRR, abs/1911.01621.
+Yohan Jo, Seojin Bang, Chris Reed, and Eduard H. Hovy. 2021. Classifying argumentative relations using logical mechanisms and argumentation schemes. Trans. Assoc. Comput. Linguistics, 9:721-739.
+Zixuan Ke, Winston Carlile, Nishant Gurrapadi, and Vincent Ng. 2018. Learning to give feedback: Modeling attributes affecting argument persuasiveness in student essays. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial
+
+Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden, pages 4130-4136. ijcai.org.
+Khalid Al Khatib, Henning Wachsmuth, Matthias Hagen, and Benno Stein. 2017. Patterns of argumentation strategies across topics. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 1351-1357. Association for Computational Linguistics.
+Khalid Al Khatib, Henning Wachsmuth, Johannes Kiesel, Matthias Hagen, and Benno Stein. 2016. A news editorial corpus for mining argumentation strategies. In COLING 2016, 26th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers, December 11-16, 2016, Osaka, Japan, pages 3433-3443. ACL.
+Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
Thomas N. Kipf and Max Welling. 2017. Semi-supervised classification with graph convolutional networks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.
+Tatsuki Kuribayashi, Hiroki Ouchi, Naoya Inoue, Paul Reisert, Toshinori Miyoshi, Jun Suzuki, and Kentaro Inui. 2019. An empirical study of span representations in argumentation structure parsing. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 4691-4698. Association for Computational Linguistics.
Anne Lauscher, Lily Ng, Courtney Napoles, and Joel R. Tetreault. 2020. Rhetoric, logic, and dialectic: Advancing theory-based argument quality assessment in natural language processing. In Proceedings of the 28th International Conference on Computational Linguistics, COLING 2020, Barcelona, Spain (Online), December 8-13, 2020, pages 4563-4574. International Committee on Computational Linguistics.
+Bin Liang, Yonghao Fu, Lin Gui, Min Yang, Jiachen Du, Yulan He, and Ruifeng Xu. 2021. Target-adaptive graph for cross-target stance detection. In WWW '21: The Web Conference 2021, Virtual Event / Ljubljana, Slovenia, April 19-23, 2021, pages 3453-3464. ACM / IW3C2.
+Bin Liang, Rongdi Yin, Lin Gui, Jiachen Du, and Ruifeng Xu. 2020. Jointly learning aspect-focused and inter-aspect relations with graph convolutional networks for aspect sentiment analysis. In Proceedings of the 28th International Conference on Computational Linguistics, COLING 2020, Barcelona,
+
+Spain (Online), December 8-13, 2020, pages 150-161. International Committee on Computational Linguistics.
+Gaku Morio and Katsuhide Fujita. 2018. End-to-end argument mining for discussion threads based on parallel constrained pointer architecture. In Proceedings of the 5th Workshop on Argument Mining, ArgMining@EMNLP 2018, Brussels, Belgium, November 1, 2018, pages 11-21. Association for Computational Linguistics.
+Gaku Morio and Katsuhide Fujita. 2019. Syntactic graph convolution in multi-task learning for identifying and classifying the argument component. In 13th IEEE International Conference on Semantic Computing, ICSC 2019, Newport Beach, CA, USA, January 30 - February 1, 2019, pages 271-278. IEEE.
+Gaku Morio, Hiroaki Ozaki, Terufumi Morishita, Yuta Koreeda, and Kohsuke Yanai. 2020. Towards better non-tree argument mining: Proposition-level bi-affine parsing with task-specific parameterization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 3259-3266. Association for Computational Linguistics.
+Juri Opitz and Anette Frank. 2019. Dissecting content and context in argumentative relation analysis. In Proceedings of the 6th Workshop on Argument Mining, ArgMining@ACL 2019, Florence, Italy, August 1, 2019, pages 25-34. Association for Computational Linguistics.
+Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 8024-8035.
+Peter Potash, Alexey Romanov, and Anna Rumshisky. 2017. Here's my point: Joint pointer architecture for argument mining. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 1364-1373. Association for Computational Linguistics.
+Lin Qiu, Yunxuan Xiao, Yanru Qu, Hao Zhou, Lei Li, Weinan Zhang, and Yong Yu. 2019. Dynamically fused graph network for multi-hop reasoning. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 6140-6150. Association for Computational Linguistics.
+
+Rahul Ragesh, Sundararajan Sellamanickam, Arun Iyer, Ramakrishna Bairi, and Vijay Lingam. 2021. Hetegcn: Heterogeneous graph convolutional networks for text classification. In WSDM '21, The Fourteenth ACM International Conference on Web Search and Data Mining, Virtual Event, Israel, March 8-12, 2021, pages 860-868. ACM.
+Wei Song, Ziyao Song, Lizhen Liu, and Ruiji Fu. 2020. Hierarchical multi-task learning for organization evaluation of argumentative student essays. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI 2020, pages 3875-3881. ijcai.org.
+Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. J. Mach. Learn. Res., 15(1):1929-1958.
+Christian Stab and Iryna Gurevych. 2014. Annotating argument components and relations in persuasive essays. In COLING 2014, 25th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers, August 23-29, 2014, Dublin, Ireland, pages 1501-1510. ACL.
+Christian Stab and Iryna Gurevych. 2017. Parsing argumentation structures in persuasive essays. Comput. Linguistics, 43(3):619-659.
+Reid Swanson, Brian Ecker, and Marilyn A. Walker. 2015. Argument mining: Extracting arguments from online dialogue. In Proceedings of the SIG-DIAL 2015 Conference, The 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, 2-4 September 2015, Prague, Czech Republic, pages 217-226. The Association for Computer Linguistics.
+Ming Tu, Guangtao Wang, Jing Huang, Yun Tang, Xiaodong He, and Bowen Zhou. 2019. Multi-hop reading comprehension across multiple documents by reasoning over heterogeneous graphs. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 2704-2713. Association for Computational Linguistics.
+Henning Wachsmuth, Khalid Al Khatib, and Benno Stein. 2016. Using argument mining to assess the argumentation quality of essays. In COLING 2016, 26th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers, December 11-16, 2016, Osaka, Japan, pages 1680-1691. ACL.
+Henning Wachsmuth, Nona Naderi, Yufang Hou, Yonatan Bilu, Vinodkumar Prabhakaran, Tim Alberdingk Thijm, Graeme Hirst, and Benno Stein. 2017. Computational argumentation quality assessment in natural language. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017, Valencia, Spain, April 3-7, 2017, Volume 1: Long Papers, pages 176-187. Association for Computational Linguistics.
+Jiacheng Xu, Zhe Gan, Yu Cheng, and Jingjing Liu. 2020. Discourse-aware neural extractive text summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 5021-5031. Association for Computational Linguistics.
+Liang Yao, Chengsheng Mao, and Yuan Luo. 2019. Graph convolutional networks for text classification. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 7370-7377. AAAI Press.
+Michihiro Yasunaga, Rui Zhang, Kshitijh Meelu, Ayush Parek, Krishnan Srinivasan, and Dragomir R. Radev. 2017. Graph-based neural multi-document summarization. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), Vancouver, Canada, August 3-4, 2017, pages 452-462. Association for Computational Linguistics.
+
+# Appendices
+
+# A Different Weights for Loss Functions
+
+| $\mathcal{L}_{AM}$ (weight) | $\mathcal{L}_{APE}$ (weight) | AM (F1) | APE (F1) |
+| --- | --- | --- | --- |
+| 0.25 | 0.75 | 70.01 | 33.98 |
+| 0.5 | 0.5 | **71.13** | **34.40** |
+| 0.75 | 0.25 | 70.51 | 34.33 |
+
+Table 5: The results of different weights for loss functions on RR-submission $(\%)$ . The best scores are in bold.
+
+As shown in Table 5, the impact of the different weights is minimal. The model performs best when the two weights are equal.
\ No newline at end of file
diff --git a/argumentpairextractionwithmutualguidanceandintersentencerelationgraph/images.zip b/argumentpairextractionwithmutualguidanceandintersentencerelationgraph/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..df46c6dca67701e3398dc81b08ae74ec6f4e9b3f
--- /dev/null
+++ b/argumentpairextractionwithmutualguidanceandintersentencerelationgraph/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:38a3c3feb05ac5ddbaf11fc816907e075bbaf8b865660197a0e65af9ef38c8c8
+size 545436
diff --git a/argumentpairextractionwithmutualguidanceandintersentencerelationgraph/layout.json b/argumentpairextractionwithmutualguidanceandintersentencerelationgraph/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..b6a675c0028c4f886234b9dd09f25a37a373e33a
--- /dev/null
+++ b/argumentpairextractionwithmutualguidanceandintersentencerelationgraph/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ee5e0122a4eb06391624ee7fa817ecfa2aa3e61e8fc09b258e75ec7bd57e7158
+size 476987
diff --git a/armanpretrainingwithsemanticallyselectingandreorderingofsentencesforpersianabstractivesummarization/5a06d39a-9d59-462c-9974-74edf51a8450_content_list.json b/armanpretrainingwithsemanticallyselectingandreorderingofsentencesforpersianabstractivesummarization/5a06d39a-9d59-462c-9974-74edf51a8450_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..7727f8ac5a22f8ecc3d12238e33ca385111862d1
--- /dev/null
+++ b/armanpretrainingwithsemanticallyselectingandreorderingofsentencesforpersianabstractivesummarization/5a06d39a-9d59-462c-9974-74edf51a8450_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bcc63b002f881c01493ecd4a3fa1664a6a7332e6fa3e79ee9886aea7cec1d024
+size 119422
diff --git a/armanpretrainingwithsemanticallyselectingandreorderingofsentencesforpersianabstractivesummarization/5a06d39a-9d59-462c-9974-74edf51a8450_model.json b/armanpretrainingwithsemanticallyselectingandreorderingofsentencesforpersianabstractivesummarization/5a06d39a-9d59-462c-9974-74edf51a8450_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..70bf838e08d4938094ac60b4dd5b85fd4203cb9d
--- /dev/null
+++ b/armanpretrainingwithsemanticallyselectingandreorderingofsentencesforpersianabstractivesummarization/5a06d39a-9d59-462c-9974-74edf51a8450_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b095014f129aa8f0b568c61d4db3e129d3f5b7eadd6639bbf9768387421de20c
+size 137876
diff --git a/armanpretrainingwithsemanticallyselectingandreorderingofsentencesforpersianabstractivesummarization/5a06d39a-9d59-462c-9974-74edf51a8450_origin.pdf b/armanpretrainingwithsemanticallyselectingandreorderingofsentencesforpersianabstractivesummarization/5a06d39a-9d59-462c-9974-74edf51a8450_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..f0655b533ba846fc50f6fc0cccb5f430cd6d0312
--- /dev/null
+++ b/armanpretrainingwithsemanticallyselectingandreorderingofsentencesforpersianabstractivesummarization/5a06d39a-9d59-462c-9974-74edf51a8450_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fee295ba6d0b50631894e276b17be8f6204209ce96f5ec613b9fa1c134f73808
+size 855697
diff --git a/armanpretrainingwithsemanticallyselectingandreorderingofsentencesforpersianabstractivesummarization/full.md b/armanpretrainingwithsemanticallyselectingandreorderingofsentencesforpersianabstractivesummarization/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..9c7d7c38a6703ce6f802208124055450dd692089
--- /dev/null
+++ b/armanpretrainingwithsemanticallyselectingandreorderingofsentencesforpersianabstractivesummarization/full.md
@@ -0,0 +1,466 @@
+# ARMAN: Pre-training with Semantically Selecting and Reordering of Sentences for Persian Abstractive Summarization
+
+Alireza Salemi $^{1}$ , Emad Kebriaei $^{1}$ , Ghazal Neisi Minaei $^{1}$ , Azadeh Shakery $^{1,2}$
+
+$^{1}$ School of Electrical and Computer Engineering
+
+College of Engineering, University of Tehran, Tehran, Iran
+
+$^{2}$ School of Computer Science
+
+Institute for Research in Fundamental Sciences (IPM), Iran
+
+{alireza.salemi, emad.kebriaei, ghazal.minaei, shakery}@ut.ac.ir
+
+# Abstract
+
+Abstractive text summarization is one of the areas influenced by the emergence of pre-trained language models. Current pre-training works in abstractive summarization reward summaries that have more words in common with the main text and pay less attention to the semantic similarity between the generated sentences and the original document. We propose ARMAN, a Transformer-based encoder-decoder model pre-trained with three novel objectives to address this issue. In ARMAN, salient sentences from a document are selected according to a modified semantic score to be masked and form a pseudo summary. To summarize more accurately and more in line with human writing patterns, we applied modified sentence reordering. We evaluated our proposed models on six downstream Persian summarization tasks. Experimental results show that our proposed model achieves state-of-the-art performance on all six summarization tasks measured by ROUGE and BERTScore. Our models also outperform prior works in textual entailment, question paraphrasing, and multiple-choice question answering. Finally, we conducted a human evaluation and show that using the semantic score significantly improves summarization results.
+
+# 1 Introduction
+
+Abstractive text summarization is the task of generating a short, fluent, and concise text that contains novel words and phrases other than the original document, preserving the primary subjects in the document. In contrast with extractive summarization, which aims to select the most important parts of the text to generate a summary, in abstractive summarization, the main goal is to generate a new persuasive piece of text as the summary of a document.
+
+Earlier abstractive summarization works (Hermann et al., 2015; See et al., 2017; Rush et al., 2015) focused on training with large datasets containing pairs of documents and summaries in a supervised manner. With the introduction of the Transformer (Vaswani et al., 2017) architecture and pre-training objectives, and their positive impact on most NLP tasks, most current state-of-the-art (SOTA) methods focus on self-supervised objectives for pre-training Transformer architectures on abstractive summarization tasks (Liu and Lapata, 2019; Zhang et al., 2020a; Qi et al., 2020). However, current pre-training works reward summaries that have more words in common with the main text and pay less attention to the semantic similarity between the generated sentences and the original document.
+
+According to Simons (2017), the Persian language is one of the top 25 most spoken languages in the world. However, research on Persian document summarization is limited, and most prior works focused mainly on extractive summarization. The main focus of this work is Persian abstractive summarization. Nevertheless, our proposed method is language-independent.
+
+In this work, we first bring semantic similarity scores into a sentence selection schema to create a document's pseudo summary. Briefly, we prepare a summary corresponding to each document in a dataset by selecting important sentences based on semantic scores in a self-supervised manner. Next, we propose three novel objectives for pre-training a seq2seq Transformer. Our model, ARMAN, uses the Transformer encoder-decoder structure and introduces a new combination of masking sentences with sentence shuffling and reordering objectives. We fine-tuned the models on six downstream tasks. In one experiment, we found that letting the model copy pieces of the input text into the output summary does not lead to better results on downstream tasks. Experimental results showed that our proposed models obtained SOTA performance on all Persian abstractive summarization datasets on both ROUGE (Lin, 2004) and BERTScore (Zhang et al., 2020b). Our models generated even better summaries than the previous SOTA in zero- and few-shot settings when fine-tuned with a small number of document-summary pairs. We achieved SOTA results on two datasets with only 1K examples. Moreover, our proposed models performed well on other NLU tasks, including textual entailment, question paraphrasing, and multiple-choice question answering. Finally, to verify the significance of the improvement in summarization, we conducted a human evaluation and performed a Student's t-test on its results.
+
+The main contributions of this paper are three-fold:
+
+- We introduce a top-sentence selection algorithm based on a semantic score to make document-summary pairs in a self-supervised manner.
+- We propose three novel objectives to pre-train a Transformer encoder-decoder architecture for Persian abstractive text summarization that outperforms previous state-of-the-art models on six downstream tasks.
+- We created an abstractive summarization dataset called Tebyan.
+
+# 2 Related Work
+
+Automatic text summarization was mainly performed based on statistical methods (Nenkova, 2005), most of which strove to rank sentences by extracting their features (Svore et al., 2007; Erkan and Radev, 2004; Filippova and Altun, 2013). With the rise of sequence-to-sequence learning with neural networks (Hochreiter and Schmidhuber, 1997; Sutskever et al., 2014) and the use of the attention mechanism (Bahdanau et al., 2015) in abstractive summarization tasks (Nallapati et al., 2016), a new era in abstractive summarization began.
+
+With the introduction of the Transformer (Vaswani et al., 2017) and the Masked Language Modeling (MLM) method of BERT (Devlin et al., 2019), most NLP tasks achieved large improvements using these pre-training methods and architectures. Following BERT's approach, many other language models were trained (Liu et al., 2019; Joshi et al., 2020), with differences in the amount of data used for pre-training and some optimizations of BERT's pre-training method; most of them were encoder-only. Furthermore, encoder-decoder models were trained with a mixture of pre-training tasks; T5 (Raffel et al., 2020) and BART (Lewis et al., 2020) are two of them.
+
+Since the pre-training of Transformers was successful on most NLP tasks, some models were pre-trained for specific tasks; PEGASUS (Zhang et al., 2020a) is a model trained specifically for summarization on the C4 and HugeNews corpora. PEGASUS is trained with Gap Sentence Generation (GSG), which masks the most important sentences of a document based on their syntactic similarity to the rest of the document. ARMAN differs from PEGASUS in that we mask the most important sentences based on semantic similarity. Furthermore, we use only a single mask token for any run of consecutive sentences that should be masked. This approach helps the model learn how many sentences should be generated for each masked token in the input sequence. STEP (Zou et al., 2020) is another pre-trained summarization model, trained with MLM, Next Sentence Generation (NSG), and Sentence Reordering (SR) objectives. ARMAN uses SR as one of its pre-training methods in a modified form: we change the order of sentences in the input document, and the model should select the most important sentences using their semantic similarity to the document, then reorder them in the order in which they appeared in the original document.
+
+In the Persian language, some extractive summarization methods exist (Khademi et al., 2018; Rezaei et al., 2019; Kermani and Ghanbari, 2019; Khademi and Fakhredanesh, 2020), but to the best of our knowledge, only one prior model exists for abstractive summarization. Farahani et al. (2020b) used the ParsBERT (Farahani et al., 2020a) checkpoint with Rothe et al. (2020)'s method to train a new sequence-to-sequence model with pre-trained weights for the encoder and decoder. In this regard, ARMAN is one of the first works on abstractive summarization for the Persian language. Also, ARMAN was able to achieve SOTA results on all available datasets.
+
+# 3 Methodology
+
+This section introduces a sentence selection method based on semantic similarity scores to make a pseudo summary. Then, we propose three novel objectives for pre-training a seq2seq model for the abstractive summarization tasks.
+
+# 3.1 Top Sentence Selection (TSS)
+
+In this work, we introduce a new semantic-based approach for selecting important document sentences to make a pseudo summary. The pseudo summary consists of important sentences of a given document, and the models are supposed to generate an output similar to the pseudo summary corresponding to the document. For comparison, we also use a syntactic-based metric to select sentences from the original document. Inspired by recent work on generating pseudo summaries (Zhang et al., 2020a), we select sentences from a document based on two strategies and concatenate them to create a pseudo summary. For each document in a data collection, we make a summary as described in Algorithm 1. First, we calculate a score function for each pair of (sentence, document \ sentence). Then we select the top $m$ sentences and merge them to make the pseudo summary. The parameter $m$ is calculated based on the number of sentences in the document.
+
+Algorithm 1: Top Sentence Selection
+Input: Document
+Output: Text, Summary
+for $s_i$ in Document do
+  $r_i \coloneqq$ score_func($s_i$, Document $\backslash\ s_i$)
+end for
+Summary $\coloneqq \emptyset$
+Text $\coloneqq$ Document
+for $j \gets 1$ to $m$ do
+  $k \coloneqq \operatorname{argmax} \{r_i\}_{\forall s_i \notin Summary}$
+  Summary $\coloneqq$ Summary $\cup \{s_k\}$
+  Text $\coloneqq$ Text $\backslash \{s_k\}$
+end for
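Assuming a generic `score_func`, Algorithm 1 can be sketched in Python as follows (function and variable names are illustrative, not taken from the paper's released code):

```python
def top_sentence_selection(document, score_func, m):
    """Greedy selection of the m highest-scoring sentences (Algorithm 1).

    document   -- list of sentences
    score_func -- callable(sentence, remaining_sentences) -> float
    Returns (text, summary): the leftover sentences and the pseudo summary,
    both kept in their original document order.
    """
    # Score each sentence against the rest of the document.
    scores = [score_func(s, document[:i] + document[i + 1:])
              for i, s in enumerate(document)]
    # Indices of the m best-scoring sentences.
    ranked = sorted(range(len(document)), key=lambda i: scores[i], reverse=True)
    top = set(ranked[:m])
    summary = [s for i, s in enumerate(document) if i in top]
    text = [s for i, s in enumerate(document) if i not in top]
    return text, summary
```

Plugging in a ROUGE-based or embedding-based `score_func` yields the syntactic and semantic variants described below, respectively.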
+
+Syntactic-based approach: In this strategy, we create a pseudo summary by selecting and merging sentences from a document using a syntactic-based approach. ROUGE is a mainly used metric that calculates the similarity between a candidate sentence and a collection of reference sentences based on the overlap of N-grams (Lin, 2004). The higher the ROUGE score between two pieces of text, the more similar they are. The score-function in Algorithm 1 calculates the ROUGE1-F1 score between the sentence and remaining sentences of the document. PEGASUS (Zhang et al., 2020a) has used such a method as Gap Sentence Generation.
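As a concrete illustration, a minimal ROUGE-1 F1 between two token sequences can be computed from clipped unigram counts. This is a simplified sketch, not the official ROUGE implementation (which adds stemming and other options):

```python
from collections import Counter

def rouge1_f1(candidate, reference):
    """ROUGE-1 F1: clipped unigram overlap between candidate and reference."""
    cand, ref = Counter(candidate.split()), Counter(reference.split())
    overlap = sum((cand & ref).values())   # clipped match counts
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

Used as the `score_func` of Algorithm 1, this measures how much a sentence's unigrams overlap with the rest of the document.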
+
+Semantic-based approach: Although selecting sentences based on the ROUGE metric is simple, cost-effective, and usable in low-resource languages, ROUGE comes with some drawbacks (Kryscinski et al., 2019). In particular, ROUGE does not account for different words with the same meaning since it only calculates syntactic matches. Thus, if we have two sentences with the same meaning but expressed with different words, they will be assigned a low ROUGE score. To the best of our knowledge, this paper is the first to study semantic similarity in creating pseudo summaries and its effect on the quality of generated summaries.
+
+To consider the semantic score in calculating the similarity of two sentences, we used the recent BERTScore metric. BERTScore computes a similarity score for each token in the candidate sentence with each token in the reference sentence using contextual embeddings (Zhang et al., 2020b). Due to the high computational cost of calculating this metric for each pair of (sentence, document \ sentence), we used FastText (Bojanowski et al., 2017) pre-trained embeddings instead of BERT contextual embeddings. According to BERTScore, for a reference $x$ and a candidate $\hat{x}$, the recall, precision, and F1 scores are:
+
+$$
+R_{FT} = \frac{1}{|x|} \sum_{x_i \in x} \max_{\hat{x}_j \in \hat{x}} \mathbf{x}_i^{\top} \hat{\mathbf{x}}_j,
+$$
+
+$$
+P_{FT} = \frac{1}{|\hat{x}|} \sum_{\hat{x}_j \in \hat{x}} \max_{x_i \in x} \mathbf{x}_i^{\top} \hat{\mathbf{x}}_j,
+$$
+
+$$
+F1_{FT} = \frac{2\, P_{FT} \cdot R_{FT}}{P_{FT} + R_{FT}}.
+$$
+
+To apply the semantic score, the score function in Algorithm 1 calculates $F1_{FT}$.
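With unit-normalized token embeddings stacked into matrices, the three scores reduce to row- and column-wise maxima of a similarity matrix. A minimal NumPy sketch, assuming the FastText vectors are already L2-normalized:

```python
import numpy as np

def embedding_f1(ref_vecs, cand_vecs):
    """R_FT, P_FT, F1_FT from greedy matching over cosine similarities.

    ref_vecs  -- (|x|, d) matrix of reference-token embeddings
    cand_vecs -- (|x_hat|, d) matrix of candidate-token embeddings
    Both are assumed unit-normalized, so dot products are cosines.
    """
    sim = ref_vecs @ cand_vecs.T        # pairwise token similarities
    recall = sim.max(axis=1).mean()     # best candidate match per reference token
    precision = sim.max(axis=0).mean()  # best reference match per candidate token
    f1 = 2 * precision * recall / (precision + recall)
    return recall, precision, f1
```

In practice one would embed each token of the sentence and of the remaining document with FastText, normalize the rows, and feed the two matrices to this function.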
+
+# 3.2 Pre-training Objectives
+
+In this work, we propose new pre-training objectives and compare our models with the closely similar work of PEGASUS (Zhang et al., 2020a). We use Transformer encoder-decoder structure and introduce a new combination of masking sentences plus shuffling and reordering objectives. The general procedure of pre-training with the proposed objectives is shown in Figure 1.
+
+# TSS-ROUGE
+
+In this objective, we implemented PEGASUS for the Persian language to compare with our proposed models. The base architecture of this model is a Transformer encoder-decoder. Instead of masking words, we mask sentences with tokens. In order to generate pseudo summaries as input to this structure, the syntactic-based approach using the ROUGE metric is applied.
+
+Figure 1: The procedure of making input and output for pre-training the Seq2Seq Transformer. TSS selects the salient sentences and divides the original document into text and summary parts. The summary part is the desired output that the Transformer should generate.
+
+# TSS-Semantic Similarity (SS)
+
+This objective takes semantically created pseudo summaries into account. The procedure is the same as TSS-ROUGE, except that the semantic-based approach using the modified BERTScore is applied to generate the pseudo summaries fed to the structure. The masking criterion also differs slightly from TSS-ROUGE: we put only one token for any number of consecutive sentences that should be masked. In this way, the model also learns to guess the number of sentences. In $20\%$ of the cases, instead of masking a sentence, we keep it in place; this makes the model learn to bring some pieces of the document into the summary. We call the model trained with this objective ARMAN(SS-80).
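The masking rule just described (one mask token per run of consecutive selected sentences, with a 20% chance of keeping a selected sentence in place) might be implemented as below; the mask token string and the rng interface are our own assumptions:

```python
import random

def build_masked_input(sentences, selected, mask_token="[MASK]",
                       keep_prob=0.2, rng=random):
    """Collapse each run of consecutive selected sentences into one mask token.

    selected  -- set of sentence indices chosen by the semantic score
    keep_prob -- chance of leaving a selected sentence in place (SS-80 uses 0.2)
    """
    out, in_run = [], False
    for i, sent in enumerate(sentences):
        if i in selected and rng.random() >= keep_prob:
            if not in_run:          # first sentence of a masked run
                out.append(mask_token)
                in_run = True
        else:
            out.append(sent)        # kept sentence breaks the current run
            in_run = False
    return " ".join(out)
```

The selected (masked) sentences, concatenated in document order, would then serve as the decoder target.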
+
+# TSS-Shuffling (SH)
+
+In addition to considering a semantic-based approach for creating a pseudo summary, this objective combines span shuffling within a sentence with the masking objective. In particular, instead of masking sentences in $20\%$ of the cases, we shuffle a span of them. The intuition is that the model will learn not to simply copy sentences into the final summary and to be sensitive to the relative order of spans. We call the model trained with this objective ARMAN(SH).
+
+# TSS-Modified Sentence Reordering (MSR)
+
+In this objective, we apply masking as in the TSS-Semantic Similarity objective for $90\%$ of documents; in the remaining $10\%$, we shuffle all sentences. In the latter case, the model should reorder the sentences and keep only the top $30\%$ most important sentences of the original document according to the semantic scores. The idea behind this method is that the model will learn to arrange the sentences in the correct order in the final summary. Moreover, the model will learn to focus on important pieces of the document. In addition to enriching the summary semantically, this objective also encourages brevity. We call the model trained with this objective ARMAN(MSR).
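A sketch of how a reordering training example could be constructed under this objective; the 30% cut-off follows the description above, while function names and the rng interface are illustrative assumptions:

```python
import random

def build_reordering_example(sentences, scores, keep_ratio=0.3, rng=random):
    """Input: all sentences shuffled. Target: the top-scoring keep_ratio
    fraction of sentences, restored to their original document order."""
    m = max(1, int(len(sentences) * keep_ratio))
    ranked = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)
    top = set(ranked[:m])
    shuffled = list(sentences)
    rng.shuffle(shuffled)               # encoder input loses the order
    target = [s for i, s in enumerate(sentences) if i in top]
    return " ".join(shuffled), " ".join(target)
```

The model thus sees a shuffled document and must both rank and reorder: only the important sentences appear in the target, and they appear in their original order.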
+
+# 4 Data Collection
+
+This section introduces the datasets used for pretraining and fine-tuning models and the procedure of cleaning corpora.
+
+# 4.1 Pre-training Datasets
+
+We merged four large Persian corpora from different sources for pre-training models, which contained formal and informal texts.
+
+irBlogs (AleAhmad et al., 2016) is a collection of $5\mathrm{M}+$ posts from $600\mathrm{K}+$ Persian weblogs. Some blogs use informal language for their posts, so this dataset has an enormous amount of informal texts, which could help our models become familiar with this type of Persian speech.
+
+MirasText (Sabeti et al., 2018) is an automatically produced text corpus for the Persian language by crawling over 250 Persian websites. This corpus contains around 2.8M articles and 1.4B words in all of the articles.
+
+CC100 (Conneau et al., 2020; Wenzek et al., 2020) is a monolingual dataset for $100+$ languages constructed from Commoncrawl snapshots. This dataset contains about 111GB of Persian raw text with 13.3B different tokens.
+
+YJC News is a collection of articles gathered from the Young Journalist Club website. This dataset contains news on various subjects, including 1M+ articles.
+
+# 4.2 Downstream Datasets
+
+For the summarization task, five datasets were used. All datasets are publicly available and can be used to reproduce our results. Following Grusky et al. (2018), the extractive density and coverage for each summarization dataset have been reported in Appendix A. Moreover, we used a Natural Language Understanding (NLU) dataset to test our models' performance on language modeling tasks.
+
+PN-Summary (Farahani et al., 2020b) is an abstractive summarization dataset consisting of 93,207 articles of various news categories crawled from six news agency websites.
+
+Wiki-Summary (Farahani, 2020b) is a dataset that was extracted from Wikipedia dump files. The main task of this dataset is to generate highlights for each article. There are two versions of this dataset; we used the first version in our experiments, consisting of 56,363 articles and highlight pairs.
+
+VOA Dataset (Farahani, 2020a) is a medium-sized corpus of 7.9 million words consisting of 38,952 articles from the VOA website from 2003 to 2008. The main task performed on this dataset was generating a headline for each article.
+
+PerKey (Doostmohammadi et al., 2018) is a key phrase extraction dataset for the Persian language crawled from six Persian news agencies. There are $553\mathrm{k}$ articles available in this dataset. Some of these articles have summaries, and all of them have titles.
+
+Tebyan Dataset accumulates 92,289 document-summary pairs that we collected from the Tebyan website. These articles cover various subjects and are not limited to news. More information about this dataset is provided in Appendix A.
+
+ParsiNLU (Khashabi et al., 2020) is a collection of NLU tasks for the Persian language including Textual Entailment, Sentiment Analysis, Question Paraphrasing, Multiple Choice Question Answering, and Reading Comprehension tasks. We have fine-tuned our models on most of them to test their performances on NLU tasks.
+
+# 4.3 Preprocessing
+
+Due to the massive amount of data needed for pre-training language models, we had to collect large datasets, and those datasets need to be cleaned. We adopted a heuristic function to produce an automatic pipeline for cleaning our pre-training datasets. First, for each document in each dataset, we separated the sentences and removed those with any of the following characteristics: 1) sentences with fewer than five words; 2) sentences that do not end with valid Persian end-of-sentence marks; 3) sentences that contain specific keywords from Persian webpages and JavaScript code.
+
+Furthermore, we omitted documents with fewer than three sentences after the above cleaning. Next, we used the langdetect package to filter out any document not identified as Persian with a probability of 0.99. Lastly, we removed duplicate paragraphs of documents. More information about the size of each corpus after cleaning is reported in Appendix A. Our heuristic was inspired by methods from Raffel et al. (2020)'s work. This preprocessing procedure was used only for the pre-training datasets.
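The sentence-level filters can be sketched as below; the end-of-sentence marks and the boilerplate keyword list are illustrative stand-ins, since the exact lists are not published here:

```python
# Illustrative stand-ins for the unpublished filter lists.
END_MARKS = (".", "!", "?", "؟")          # valid Persian sentence endings
BAD_KEYWORDS = ("javascript", "cookie")   # webpage/boilerplate markers

def clean_document(sentences, min_words=5, min_sentences=3):
    """Apply the three sentence filters; drop the whole document if fewer
    than min_sentences survive (returns None in that case)."""
    kept = []
    for sent in sentences:
        if len(sent.split()) < min_words:
            continue                       # filter 1: too short
        if not sent.rstrip().endswith(END_MARKS):
            continue                       # filter 2: no valid end mark
        if any(k in sent.lower() for k in BAD_KEYWORDS):
            continue                       # filter 3: boilerplate keywords
        kept.append(sent)
    return kept if len(kept) >= min_sentences else None
```

Deduplication and language identification (e.g. via langdetect) would run after this per-sentence pass, on the surviving documents.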
+
+# 5 Experiments
+
+In this section, we compare ARMAN with previous works and conduct several experiments to assess the performance of the proposed methods. The code for pre-training and fine-tuning all models is publicly available on GitHub.
+
+# 5.1 Pre-training and Implementation
+
+Our model is based on Transformer (Vaswani et al., 2017) encoder-decoder structure. We pre-trained ARMAN, which contained a 12 layer encoder and a 12 layer decoder with 768 embedding/hidden size, 3072 feed-forward filter size, and 12 self-attention heads. ARMAN and PEGASUS were trained on
+
+| Model | PN-Summary R1/R2/RL | Wiki-Summary R1/R2/RL | VOA R1/R2/RL | PerkeySummary R1/R2/RL | Perkey(title) R1/R2/RL | Tebyan R1/R2/RL |
+| --- | --- | --- | --- | --- | --- | --- |
+| Transformerbase | 34.49/16.03/28.91 | 23.96/6.14/17.66 | 31.53/11.71/27.41 | 55.86/43.49/52.22 | 45.33/29.88/42.85 | 23.82/6.79/18.55 |
+| PEGASUSbase | 45.67/27.81/39.71 | 31.98/11.63/23.79 | 47.55/28.68/43.57 | 62.82/51.96/59.48 | 53.99/39.3/51.72 | 37.2/21.23/31.47 |
+| ParsBERTbase | 44.01/25.07/37.76 | 27.34/7.1/25.5 | 43.54/24.24/40.76 | - | - | - |
+| mT5small | 42.25/24.36/35.94 | 15.2/4.73/12.64 | 42.32/25.57/38.99 | 33.88/19.17/28.75 | 28.5/12.55/25.91 | 27.16/12.08/21.27 |
+| ARMAN(SS)base | 45.98/28.2/40.09 | 32.27/11.72/23.91 | 47.91/28.9/43.75 | 62.97/52.11/59.64 | 54.18/39.39/51.84 | 37.53/21.73/31.77 |
+| ARMAN(SH)base | 45.89/28.03/39.89 | 32.04/11.78/23.83 | 46.96/27.88/42.93 | 63.47/52.71/60.16 | 54.5/39.9/52.19 | 37.6/21.77/31.82 |
+| ARMAN(MSR)base | 46.19/28.41/40.27 | 32.48/11.86/24.08 | 48.23/29.52/44.27 | 63.59/52.87/60.3 | 54.81/40.17/52.51 | 37.79/21.85/31.98 |
+
+Table 1: A comparison of results for ARMAN(SS), ARMAN(SH), and ARMAN(MSR) with other pre-trained models on downstream tasks. These results are reported using ROUGE metrics.
+
+| Model | PN-Summary P/R/F1 | Wiki-Summary P/R/F1 | VOA P/R/F1 | PerkeySummary P/R/F1 | Perkey(title) P/R/F1 | Tebyan P/R/F1 |
+| --- | --- | --- | --- | --- | --- | --- |
+| PEGASUSbase | 79.86/79.67/79.7 | 74.29/71.31/72.64 | 80.84/81.13/80.92 | 86.13/86.01/86.01 | 83.68/83.31/83.45 | 75.26/75.17/75.14 |
+| ARMAN(SS)base | 80.08/79.74/79.85 | 74.24/71.48/72.71 | 81.02/81.13/81 | 86.27/86.01/86.09 | 83.65/83.36/83.46 | 75.48/75.32/75.32 |
+| ARMAN(SH)base | 79.95/79.69/79.76 | 74.25/71.43/72.68 | 80.64/80.91/80.71 | 86.46/86.22/86.29 | 83.85/83.49/83.62 | 75.48/75.28/75.29 |
+| ARMAN(MSR)base | 80.14/79.84/79.93 | 74.67/71.55/72.95 | 81.1/81.35/81.16 | 86.54/86.24/86.33 | 83.93/83.59/83.71 | 75.49/75.46/75.4 |
+
+Table 2: A comparison of results for ARMAN(SS), ARMAN(SH), and ARMAN(MSR) with other pre-trained models on downstream tasks. These results are reported using the original BERTScore metric.
+
+the mentioned pre-training corpora in section 4.1. The batch size and the training steps of pre-training were set to 128 and 1M, respectively. Adafactor (Shazeer and Stern, 2018) with square root learning rate decay and a dropout rate of 0.1 was used in pre-training and fine-tuning. Pre-training experiments were carried out on the Google Colab platform with TPU v2-8. It took almost 11 days for 1M steps to train ARMAN. Also, we sampled 1M documents from the CC100 dataset and used the SentencePiece Unigram algorithm (Kudo, 2018) to generate the vocabulary for our models. The size of the vocabulary was $96\mathrm{K}$ in all experiments.
+
+# 5.2 Fine-tuning on Text Summarization
+
+Abstractive summarization aims to produce a short, fluent, and concise text using advanced natural language techniques to extract essential information from the original document. We fine-tuned our pre-trained models on six downstream tasks. In all experiments, we set the input length $(L_{input})$ to 512 and the output length to 256. Also, we used beam search following Wu et al. (2016)'s approach, with a beam size of 8 and a length penalty of 0.8. More information about the experiments' setup is reported in Appendix B.
+
Table 1 shows results based on standard ROUGE metrics. To compare summaries generated by our models against the state-of-the-art PEGASUSbase with a text generation evaluation metric, we report results based on the original BERTScore (Zhang et al., 2020b) (using bert-base-multilingual-cased as pre-trained contextual embeddings) in Table 2. Both tables show the performance improvements of ARMAN(MSR) $_{\text{base}}$ on all downstream datasets. According to Tables 1 and 2, even ARMAN(SS) $_{\text{base}}$ , our most basic proposed method, outperforms PEGASUS $_{\text{base}}$ on all datasets. These results show that considering semantic similarity in the pre-training objective is critical for improving the final summary.
+
In ARMAN(MSR)base, we encouraged the model to learn the correct relative order of sentences by reordering at the sentence level. The results of this model show that the reordering objective improves summarization. Our second model, ARMAN(SH)base, does not help improve summary quality, so we conclude that shuffling at the span level leads to a sub-optimal solution, as reported in Raffel et al. (2020).
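The sentence-level reordering idea can be sketched as a self-supervised (source, target) pair: the input is the document with its sentences shuffled, and the target is the document in its original order. This is a simplified illustration under our own naming; the actual MSR objective combines reordering with sentence masking:

```python
import random

def reordering_example(sentences, seed=0):
    """Build a (source, target) pair for a sentence-reordering objective.

    The model sees the shuffled document and must regenerate the
    sentences in their original order.
    """
    rng = random.Random(seed)
    shuffled = list(sentences)
    rng.shuffle(shuffled)
    return " ".join(shuffled), " ".join(sentences)
```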
+
+# 5.3 To copy or not to copy!
+
We observed that $\mathrm{PEGASUS}_{\mathrm{large}}$ tends to copy sentences from the document into the generated summary when it is not fine-tuned on any summarization dataset. The intuition is that when the pre-training task rewards copying a sentence, the model becomes biased toward copying sentences to increase the probability of a significant match. In other words, it always copies some sentences from the input to the output in the hope that they will match the target, because this decreases the loss.

| Model | PN-Summary R1/R2/RL | Wiki-Summary R1/R2/RL | VOA R1/R2/RL | PerkeySummary R1/R2/RL | Perkey(title) R1/R2/RL | Tebyan R1/R2/RL |
| ARMAN(SS-80)base | 45.98/28.2/40.09 | 32.27/11.72/23.91 | 47.91/28.9/43.75 | 62.97/52.11/59.64 | 54.18/39.39/51.84 | 37.53/21.73/31.77 |
| ARMAN(SS-100)base | 46.33/28.57/40.38 | 32.36/11.78/24.1 | 47.73/28.95/43.89 | 62.83/51.92/59.53 | 54.25/39.51/51.92 | 37.64/21.78/31.94 |

Table 3: Comparison of ARMAN(SS-80) and ARMAN(SS-100) results on downstream tasks using ROUGE metrics.

| Model | PN-Summary Dens/Cov | Wiki-Summary Dens/Cov | VOA Dens/Cov | PerkeySummary Dens/Cov | Perkey(title) Dens/Cov | Tebyan Dens/Cov |
| ARMAN(MSR)base | 8.29486/0.87188 | 2.55229/0.68437 | 4.59273/0.89648 | 13.08480/0.84591 | 2.50826/0.81320 | 18.56819/0.86931 |
| PEGASUSbase | 8.73796/0.87553 | 2.60724/0.68463 | 4.35264/0.88661 | 13.48538/0.84700 | 2.51945/0.81221 | 18.23422/0.87605 |

Table 4: Comparison of ARMAN(MSR) and PEGASUS results on downstream tasks using the Density (Dens) and Coverage (Cov) metrics (Grusky et al., 2018). ARMAN has lower density and coverage on 4 out of 6 datasets.

| Model | PN-Summary F1-T/P-S | Wiki-Summary F1-T/P-S | VOA F1-T/P-S | PerkeySummary F1-T/P-S | Perkey(title) F1-T/P-S | Tebyan F1-T/P-S |
| ARMAN(MSR)base | 0.71184/0.82143 | 0.29325/0.62210 | 0.58415/0.95609 | 0.58555/0.85361 | 0.61337/0.90528 | 0.33835/0.95138 |
| PEGASUSbase | 0.62983/0.81188 | 0.28139/0.60744 | 0.57440/0.94425 | 0.64304/0.85691 | 0.54963/0.89825 | 0.30000/0.86349 |

Table 5: Comparison of ARMAN(MSR) and PEGASUS results on downstream tasks using the F1-Target (F1-T) and Precision-Source (P-S) metrics (Nan et al., 2021). ARMAN has a higher F1-Target and Precision-Source on 5 out of 6 datasets.
+
We set up an experiment to observe the behavior of our models when they are not encouraged to copy sentences of the input into the output. All proposed methods select the top-ranked $30\%$ of sentences according to the semantic score. In this experiment, we pre-trained ARMAN(SS)base with two different masking rates in the TSS objective: 1) ARMAN(SS-80)base masked only $80\%$ of the important sentences and left the other $20\%$ unchanged in the input text; 2) ARMAN(SS-100)base masked all of the important sentences, without copying any sentence from the input text into the pseudo-summary.
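A minimal sketch of how such a pre-training pair could be built (the sentence scoring and the exact masking scheme are simplified; all function and variable names here are ours, not the paper's):

```python
import random

def build_pretraining_pair(sentences, scores, mask_rate=1.0,
                           select_ratio=0.3, mask_token="[MASK]", seed=0):
    """Select the top-scoring sentences as a pseudo-summary and mask a
    fraction of them in the source (SS-80: mask_rate=0.8, SS-100: 1.0)."""
    rng = random.Random(seed)
    n_select = max(1, round(len(sentences) * select_ratio))
    ranked = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)
    selected = sorted(ranked[:n_select])
    # In SS-80, the unmasked selected sentences stay visible in the input
    # and also appear in the target, i.e. the model is rewarded for copying.
    masked = set(rng.sample(selected, round(len(selected) * mask_rate)))
    source = [mask_token if i in masked else s for i, s in enumerate(sentences)]
    target = [sentences[i] for i in selected]
    return " ".join(source), " ".join(target)
```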
+
Results in Figure 2 show that, in a zero-shot setting, ARMAN(SS-100) $_{\text{base}}$ produces higher ROUGE scores when we do not include copying in the pre-training objective. Additionally, we fine-tuned ARMAN(SS-100) and ARMAN(SS-80) on the downstream tasks. Results in Table 3 and Figure 2 show that ARMAN(SS-100) $_{\text{base}}$ performs better than ARMAN(SS-80) both before and after fine-tuning. Given these results, we used this more effective criterion in our best model, ARMAN(MSR) $_{\text{base}}$ .
+
+# 5.4 Factual Consistency and Abstractiveness
+
From another perspective, we compared the abstractiveness and factual consistency of our best model, ARMAN(MSR), with PEGASUS on the downstream summarization tasks, because these are important factors for assessing the quality of summaries.

To compare the abstractiveness of the models, we calculated the coverage and density (Grusky et al., 2018) of the summaries generated by each model. A higher coverage indicates that the summary uses fewer novel words, and a higher density indicates a more extractive summary. The average density and coverage of ARMAN(MSR) and PEGASUS on each dataset are reported in Table 4. The results show that ARMAN has lower density and coverage than PEGASUS on 4 out of 6 tasks. On the Tebyan dataset, ARMAN has a higher density but lower coverage, which means it uses more novel words than PEGASUS. Therefore, we conclude that ARMAN's summaries are more abstractive than PEGASUS's.
+
To compare the factual consistency of the models, we calculated the precision-source and F1-target metrics (Nan et al., 2021). While these metrics evaluate factual consistency only at the entity level, they still give considerable information about the factual consistency of the models. To extract named entities, we used the ParsBERT (Farahani et al., 2020a) model, which was trained on the PAYMA (Shahshahani et al., 2019) dataset. The average precision-source and F1-target of ARMAN(MSR) and PEGASUS on each dataset are reported in Table 5. The results show that ARMAN has higher F1-target and precision-source scores than PEGASUS in 5 out of 6 tasks. Therefore, it seems ARMAN is more factually consistent than PEGASUS.

Figure 2: A comparison of results for ARMAN(SS-80), ARMAN(SS-100), ARMAN(SH), ARMAN(MSR), and PEGASUS on zero-shot learning using ROUGE metrics. ARMAN(SS-100) achieved remarkably better results on most downstream tasks in the zero-shot experiments. More details are reported in Appendix C.

Figure 3: Results of fine-tuning ARMAN(MSR) with 0, 10, 100, 1K, and 10K examples of each downstream dataset for 2K steps, alongside Transformerbase trained on the whole dataset for 150K steps and the previous SOTA (where available). The results for the other models are reported in Appendix C.
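Given sets of named entities extracted from the generated summary, the source document, and the reference summary, precision-source and F1-target reduce to simple set arithmetic. A minimal sketch (function and variable names are ours; Nan et al. (2021) define further variants):

```python
def factual_consistency(summary_ents, source_ents, target_ents):
    """Entity-level consistency scores for one generated summary.

    summary_ents: entities extracted from the generated summary
    source_ents:  entities extracted from the source document
    target_ents:  entities extracted from the reference summary
    """
    summary, source, target = set(summary_ents), set(source_ents), set(target_ents)
    # Precision-source: fraction of generated entities found in the source
    precision_source = len(summary & source) / len(summary) if summary else 0.0
    # F1-target: overlap between generated and reference-summary entities
    p = len(summary & target) / len(summary) if summary else 0.0
    r = len(summary & target) / len(target) if target else 0.0
    f1_target = 2 * p * r / (p + r) if (p + r) else 0.0
    return precision_source, f1_target
```

In practice, these per-summary scores would be averaged over a test set, as in Table 5.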
+
+# 5.5 Zero and Few Shot Summarization
+
We studied our models in zero- and few-shot settings to make abstractive summarization a practical solution for real-world tasks, where providing a large supervised collection of training and test data is laborious. In the zero-shot setting, we pre-trained models on the pre-training datasets and evaluated them on downstream tasks without fine-tuning. Results in Figure 2 show that our models outperformed PEGASUS. In the few-shot setting, we fed our best model with $10^{k}$ ( $k = 1, 2, 3, 4$ ) examples to study its behavior in low-resource scenarios. In this experiment, $\mathrm{Transformer}_{\mathrm{base}}$ and $\mathrm{ARMAN}(\mathrm{MSR})_{\mathrm{base}}$ were trained for 150K and 2K steps, respectively. According to Figure 3, on the Wiki Summary and VOA datasets our model beat the state-of-the-art model after seeing only 1K samples. On the larger Perkey dataset, our model did not surpass $\mathrm{Transformer}_{\mathrm{base}}$ , because the latter was trained on the whole dataset for more steps. We conclude that our model achieves acceptable results with far less data and computation.
+
+# 5.6 NLU Results
+
In order to study whether ARMAN works well as a language model, we tested our models on Natural Language Understanding (NLU) tasks. Following Khashabi et al. (2020), we selected multiple-choice question answering, textual entailment, sentiment analysis, and question paraphrasing tasks to examine our models' performance. For more information about these tasks and datasets, see Appendix A and Khashabi et al. (2020).
+
According to the results in Table 6, ARMAN(SH) $_{\text{base}}$ beat the other models on the natural subtasks of Textual Entailment and Question Paraphrasing. This model learned how to rearrange disordered sentences, so it makes sense that it is strong at recognizing the same sentence written in different forms. In Multiple-Choice QA, our best-performing model achieves the highest accuracy on math and logic questions, while our model with semantic similarity and the mask-only approach surpasses the others on literature questions. In the common-knowledge task, WikiBERT $_{\text{base}}$ (Pyysalo et al., 2021) outperformed the other models, as it was trained on a large Wikipedia dataset. In the Sentiment Analysis task, the proposed models could not achieve results comparable to the other models. A more detailed study of the behavior of models on NLU tasks is outside the scope of this work.
+
+# 5.7 Human Evaluation
+
Following Kryscinski et al. (2019), we conducted a human evaluation experiment, taking ROUGE's drawbacks into account. Our purpose was to determine whether semantic similarity yields better summaries than PEGASUS's GSG, and to discover which model is best from a human viewpoint. We selected 30 documents from the PN-Summary dataset and the corresponding summaries generated by the PEGASUS, ARMAN(SS-80), and ARMAN(MSR) models. We gave them to 10 participants and asked them to rank the generated summaries from best to worst, similar to Zou et al. (2020), according to the fluency, informativeness, and succinctness of the generated summaries. In order to perform statistical tests, we converted rankings into scores (score $= 4 - \text{rank}$ ). The experiment's results are reported in Table 7. Moreover, we performed Student's t-tests between model pairs; the results are reported in Table 8. They show that ARMAN(MSR) is significantly better than the other models ( $p < 0.05$ ). Furthermore, ARMAN(SS-80) is not significantly better than PEGASUS, although its p-value only barely misses the threshold ( $0.0507 > 0.05$ ).

| Model | Textual Entailment natural (accuracy) | Textual Entailment translated (accuracy) | Question Paraphrasing natural (accuracy) | Question Paraphrasing translated (accuracy) | Sentiment food (F1) | Sentiment movie (F1) | Multiple-Choice QA literature (accuracy) | Multiple-Choice QA com-know (accuracy) | Multiple-Choice QA math & logic (accuracy) |
| mBERTbase | 48.7* | 51.6* | 80.4 | 75.3 | 55.2 | 48.6 | 31.1 | 28.6 | 33.8* |
| WikiBERTbase | 52.8* | 52.6* | 80 | 75.5 | 52 | 58.5 | 34.0 | 31.4 | 32.1 |
| ParsBERTbase | 51.8* | 53.9* | 79.4 | 72 | 59.1 | 56.8 | 35.4 | 29.5 | 32.5* |
| mT5small | 51.9 | 51 | 75.2 | 72 | 54.6 | 49.4 | 33.7* | 24.9 | 39.1* |
| PEGASUSbase | 54.5 | 52.6 | 80 | 76.1 | 51.9 | 56 | 40 | 27.7 | 45.1 |
| ARMAN(SS-80)base | 54.5 | 50.6 | 82.5 | 74.8 | 51.4 | 47 | 37.7 | 25.7 | 47.7 |
| ARMAN(SS-100)base | 54.2 | 53 | 79.9 | 72.8 | 50 | 52.9 | 41.4 | 27.4 | 43.1 |
| ARMAN(SH)base | 55.5 | 52.9 | 82.6 | 75.1 | 56.7 | 42 | 34.6 | 28.6 | 45.4 |
| ARMAN(MSR)base | 54.8 | 51.8 | 79.9 | 75.9 | 52 | 46 | 36.57 | 21.7 | 49.14 |

Table 6: A comparison of results on ParsiNLU tasks. Some of the reported results (marked with *) in Khashabi et al. (2020)'s work could not be reproduced under their policies, so we report the numbers we obtained ourselves using their trained models.
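The rank-to-score conversion can be checked directly against Table 7: with score $= 4 - \text{rank}$, a model's average score is a weighted sum over its rank proportions. A quick sketch (variable names are ours; the reported proportions are rounded, so computed scores can drift by about 0.01):

```python
def average_score(rank_proportions):
    """Average score from rank proportions, with score = 4 - rank."""
    return sum((4 - rank) * p for rank, p in rank_proportions.items())

# Rank proportions taken from Table 7
pegasus = {1: 0.31, 2: 0.35, 3: 0.34}
arman_ss80 = {1: 0.3833, 2: 0.3467, 3: 0.27}
```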
+
| Model | Rank 1 | Rank 2 | Rank 3 | Score |
| PEGASUS | 31% | 35% | 34% | 1.97 |
| ARMAN(SS-80) | 38.33% | 34.67% | 27% | 2.11 |
| ARMAN(MSR) | 50.33% | 29% | 20.67% | 2.29 |

Table 7: Human evaluation results: proportions of model rankings and average scores. Different models could receive the same rank in a test if they produced the same summary.

| p-value | PEGASUS | ARMAN(SS) | ARMAN(MSR) |
| PEGASUS | - | 0.0507 | 2 × 10-5 |
| ARMAN(SS) | 0.0507 | - | 0.014 |
| ARMAN(MSR) | 2 × 10-5 | 0.014 | - |

Table 8: The p-values for pairwise model comparisons. ARMAN(MSR) significantly improves results compared with ARMAN(SS-80) and PEGASUS $(p < 0.05)$ .

# 6 Conclusion

There are few models for generating abstractive summaries in the Persian language. This work introduces ARMAN, a Transformer encoder-decoder-based model pre-trained with a new combination of sentence masking, sentence shuffling, and reordering objectives. We used semantic similarity to select important sentences when constructing document-summary input pairs in a self-supervised manner. The results show that the modified sentence selection and reordering model outperforms the most recent SOTA models on all six downstream tasks. In the low-resource setting, our model surpassed the previous SOTA with only 1K supervised examples. Finally, the human evaluation results show a significant improvement on the dataset used for that experiment.
+
In future work, it would be worth investigating the effect of using contextual embeddings to select salient sentences when producing text-summary pairs. Furthermore, the performance of our models on extractive summarization is worth scrutinizing, since our objectives select salient sentences, a process similar to extractive summarization.
+
+# Acknowledgements
+
+We would like to thank the anonymous reviewers for their thoughtful and constructive comments. This research was supported in part by a grant from the Institute for Research in Fundamental Sciences (no. CS 1399-4-286).
+
+# References
+
Abolfazl AleAhmad, MohammadSadegh Zahedi, Maseud Rahgozar, and Behzad Moshiri. 2016. irblogs: A standard collection for studying Persian bloggers. Computers in Human Behavior, 57:195-207.
+Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473.
+Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146.
+Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440-8451, Online. Association for Computational Linguistics.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
+Ehsan Doostmohammadi, Mohammad Hadi Bokaei, and Hossein Sameti. 2018. Perkey: A persian news corpus for keyphrase extraction and generation. 2018 9th International Symposium on Telecommunications (IST).
+Günes Erkan and Dragomir R. Radev. 2004. Lexrank: Graph-based lexical centrality as salience in text summarization. J. Artif. Int. Res., 22(1):457-479.
+Mehrdad Farahani. 2020a. News headline generation using bert2bert model. https://github.com/m3hrdadfi/news-headline-generation.
+Mehrdad Farahani. 2020b. Summarization using bert2bert model on wikisummary dataset. https://github.com/m3hrdadfi/wiki-summary.
+Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, and Mohammad Manthouri. 2020a. Parsbert: Transformer-based model for persian language understanding.
Mehrdad Farahani, Mohammad Gharachorloo, and Mohammad Manthouri. 2020b. Leveraging ParsBERT and pretrained mT5 for Persian abstractive text summarization.
+
+Katja Filippova and Yasemin Altun. 2013. Overcoming the lack of parallel data in sentence compression. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1481-1491, Seattle, Washington, USA. Association for Computational Linguistics.
+Max Grusky, M. Naaman, and Yoav Artzi. 2018. Newsroom: A dataset of 1.3 million summaries with diverse extractive strategies. In *NAACL*.
+Karl Moritz Hermann, Tomáš Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 1, NIPS'15, page 1693-1701, Cambridge, MA, USA. MIT Press.
+Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Comput., 9(8):1735-1780.
+Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2020. SpanBERT: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64-77.
+Fatemeh Hojati Kermani and Shirin Ghanbari. 2019. Extractive persian summarizer for news websites. In 2019 5th International Conference on Web Research (ICWR), pages 85-89.
+Mohammad Ebrahim Khademi and Mohammad Fakhredanesh. 2020. Persian automatic text summarization based on named entity recognition. *Iranian Journal of Science and Technology, Transactions of Electrical Engineering*.
+Mohammad Ebrahim Khademi, Mohammad Fakhredanesh, and Seyed Mojtaba Hoseini. 2018. Conceptual text summarizer: A new model in continuous vector space.
+Daniel Khashabi, Arman Cohan, Siamak Shakeri, Pedram Hosseini, Pouya Pezeshkpour, Malihe Alikhani, Moin Aminnaseri, Marzieh Bitaab, Faeze Brahman, Sarik Ghazarian, Mozhdeh Gheini, Arman Kabiri, Rabeeh Karimi Mahabadi, Omid Memarrast, Ahmadreza Mosallanezhad, Erfan Noury, Shahab Raji, Mohammad Sadegh Rasooli, Sepideh Sadeghi, Erfan Sadeqi Azer, Niloofar Safi Samghabadi, Mahsa Shafaei, Saber Sheybani, Ali Tazarv, and Yadollah Yaghoobzadeh. 2020. Parsinlu: A suite of language understanding challenges for persian.
+Wojciech Kryscinski, Nitish Shirish Keskar, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Neural text summarization: A critical evaluation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 540-551, Hong Kong, China. Association for Computational Linguistics.
+
+Taku Kudo. 2018. Subword regularization: Improving neural network translation models with multiple subword candidates. In ACL.
+Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics.
+Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.
+Y. Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, M. Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. ArXiv, abs/1907.11692.
+Yang Liu and Mirella Lapata. 2019. Text summarization with pretrained encoders. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3730-3740, Hong Kong, China. Association for Computational Linguistics.
Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Caglar Gulcehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-to-sequence RNNs and beyond. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 280-290, Berlin, Germany. Association for Computational Linguistics.
+Feng Nan, Ramesh Nallapati, Zhiguo Wang, Cicero Nogueira dos Santos, Henghui Zhu, Dejiao Zhang, Kathleen McKeown, and Bing Xiang. 2021. Entity-level factual consistency of abstractive text summarization. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2727-2733, Online. Association for Computational Linguistics.
+Ani Nenkova. 2005. Automatic text summarization of newswire: Lessons learned from the document understanding conference. In Proceedings of the 20th National Conference on Artificial Intelligence - Volume 3, AAAI'05, page 1436-1441. AAAI Press.
+Sampo Pyysalo, Jenna Kanerva, Antti Virtanen, and Filip Ginter. 2021. WikiBERT models: Deep transfer learning for many languages. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa), pages 1-10, Reykjavik, Iceland (Online). Linköping University Electronic Press, Sweden.
+
Weizhen Qi, Yu Yan, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang, and Ming Zhou. 2020. ProphetNet: Predicting future n-gram for sequence-to-sequence pre-training. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 2401-2410, Online. Association for Computational Linguistics.
+Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67.
+Hosein Rezaei, Seyed Amid Moeinzadeh, A. Shahgholian, and M. Saraee. 2019. Features in extractive supervised single-document summarization: Case of persian news. ArXiv, abs/1909.02776.
+Sascha Rothe, Shashi Narayan, and Aliaksei Severyn. 2020. Leveraging Pre-trained Checkpoints for Sequence Generation Tasks. Transactions of the Association for Computational Linguistics, 8:264-280.
+Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 379-389, Lisbon, Portugal. Association for Computational Linguistics.
+Behnam Sabeti, Hossein Abedi Firouzjaee, A. J. Choobbasti, S. J. Najafabadi, and Amir Vaheb. 2018. Mirastext: An automatically generated text corpus for persian. In LREC.
+Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073-1083, Vancouver, Canada. Association for Computational Linguistics.
Mahsa Sadat Shahshahani, Mahdi Mohseni, Azadeh Shakery, and Heshaam Faili. 2019. Payma: A tagged corpus of persian named entities. Signal and Data Processing, 16(1).
+Noam Shazeer and Mitchell Stern. 2018. Adafactor: Adaptive learning rates with sublinear memory cost. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 4596-4604. PMLR.
+Gary Simons. 2017. Ethnologue. SIL International, Dallas.
+Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, NIPS'14, page 3104-3112, Cambridge, MA, USA. MIT Press.
+
+Krysta Svore, Lucy Vanderwende, and Christopher Burges. 2007. Enhancing single-document summarization by combining RankNet and third-party sources. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 448-457, Prague, Czech Republic. Association for Computational Linguistics.
+
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.
+
+Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, Armand Joulin, and Edouard Grave. 2020. CCNet: Extracting high quality monolingual datasets from web crawl data. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4003-4012, Marseille, France. European Language Resources Association.
+
Y. Wu, M. Schuster, Z. Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, M. Krikun, Yuan Cao, Q. Gao, Klaus Macherey, J. Klingner, Apurva Shah, M. Johnson, X. Liu, Lukasz Kaiser, Stephan Gouws, Y. Kato, Taku Kudo, H. Kazawa, K. Stevens, George Kurian, Nishant Patil, W. Wang, C. Young, J. Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, G. Corrado, Macduff Hughes, and J. Dean. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. ArXiv, abs/1609.08144.
+
Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. In NAACL.
+
+Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter Liu. 2020a. PEGASUS: Pre-training with extracted gap-sentences for abstractive summarization. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 11328-11339. PMLR.
+
+Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020b. Bertscore: Evaluating text generation with BERT. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
+
+Yanyan Zou, Xingxing Zhang, Wei Lu, Furu Wei, and Ming Zhou. 2020. Pre-training for abstractive document summarization by reinstating source text. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3646-3660, Online. Association for Computational Linguistics.
+
+# A Datasets Statistics
+
In this section, extra information about the downstream datasets and pre-training text corpora is reported. Some of the datasets do not provide a validation split. The number of examples in the train/validation/test splits and the average lengths of articles and summaries for each dataset are reported in Table 11. Additionally, the sizes of the pre-training text corpora before and after preprocessing are reported in Table 9.
+
Following Grusky et al. (2018) and Zhang et al. (2020a), we plotted the extractive fragment density/coverage distribution for each downstream dataset in Figure 4. Grusky et al. (2018) define them as
+
$$
\mathrm{COVERAGE}(A, S) = \frac{1}{|S|} \sum_{f \in F(A, S)} |f|
$$

$$
\mathrm{DENSITY}(A, S) = \frac{1}{|S|} \sum_{f \in F(A, S)} |f|^{2}
$$
+
where $A$ is an article, $S$ is the corresponding summary, and $F(A, S)$ is the set of shared sequences of tokens in $A$ and $S$ . The density of extractive summaries is higher than that of more abstractive summaries, and lower coverage indicates more novel text fragments in the summary. Figure 4 shows that our downstream datasets range from more extractive to more abstractive summaries.
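Under these definitions, coverage and density can be computed by greedily matching the longest shared token sequences between summary and article, in the spirit of Grusky et al. (2018). A minimal sketch (a simplified greedy matcher; the original algorithm is slightly more involved):

```python
def extractive_fragments(article, summary):
    """Greedy F(A, S): at each summary position, take the longest
    subsequence that also occurs in the article."""
    frags, i = [], 0
    while i < len(summary):
        best = 0
        for j in range(len(article)):
            k = 0
            while (i + k < len(summary) and j + k < len(article)
                   and summary[i + k] == article[j + k]):
                k += 1
            best = max(best, k)
        if best > 0:
            frags.append(summary[i:i + best])
            i += best
        else:
            i += 1  # token not found in article: skip it
    return frags

def coverage(article, summary):
    frags = extractive_fragments(article, summary)
    return sum(len(f) for f in frags) / len(summary)

def density(article, summary):
    frags = extractive_fragments(article, summary)
    return sum(len(f) ** 2 for f in frags) / len(summary)
```

For example, with article tokens "the cat sat on the mat" and summary tokens "the cat sat quietly", one fragment of length 3 is shared, giving coverage 3/4 and density 9/4.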
+
The Tebyan dataset contains articles and summaries from a well-known Persian lifestyle website covering various categories. To produce the dataset, we crawled 100K pages of the site. For each page, we removed all HTML tags using beautifulsoup4; the page's primary content was stored together with the author-provided summary, with paragraphs separated by newline characters. Lastly, we used Langdetect to remove articles that were not in Persian. After this procedure, 92,289 articles and summaries remained, which we split into three parts: $85\%$ for train, $7.5\%$ for validation, and $7.5\%$ for test.
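The tag-stripping and paragraph-separation step can be approximated with the standard library alone. This sketch mirrors the cleaning described above, but under our own assumptions: the authors used beautifulsoup4, whereas `TextExtractor` and `strip_html` here are hypothetical names built on Python's `html.parser`:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect text content, separating block elements with newlines."""
    BLOCK_TAGS = {"p", "div", "br", "h1", "h2", "h3", "li"}

    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_starttag(self, tag, attrs):
        if tag in self.BLOCK_TAGS:
            self.parts.append("\n")

    def handle_data(self, data):
        self.parts.append(data)

    def text(self):
        lines = "".join(self.parts).split("\n")
        return "\n".join(line.strip() for line in lines if line.strip())

def strip_html(page: str) -> str:
    parser = TextExtractor()
    parser.feed(page)
    return parser.text()
```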
+
We tested ARMAN on NLU tasks with ParsiNLU (Khashabi et al., 2020), a Persian NLU dataset. This dataset consists of 5 main tasks plus translation as an extra task. The number of train/validation/test examples for ParsiNLU is reported in Table 10. It should be noted that we did not test ARMAN on the Reading Comprehension task of this dataset due to resource leakage. The Sentiment Analysis task of this dataset has two subtasks, sentence-level sentiment and aspect-based sentiment of a sentence; we tested ARMAN on the sentence-level subtask. For Question Paraphrasing and Textual Entailment, the dataset contains two subtasks, sentences written by humans and sentences translated from English datasets into Persian, so we report the accuracy of models on each subtask separately.

Figure 4: Density and coverage distributions across downstream datasets.
+
+# B ARMAN Hyper Parameters and Training Settings
+
In this section, we describe the pre-training and fine-tuning parameters and settings. Pre-training settings for $\mathrm{ARMAN_{base}}$ are reported in Table 12. Tables 13 and 14 contain the settings used when fine-tuning $\mathrm{ARMAN_{base}}$ and $\mathrm{Transformer_{base}}$ on summarization tasks. The fine-tuning settings for NLU tasks are reported in Table 15. Finally, the parameter counts of each model used in the summarization tasks are reported in Table 16.
+
+# C Low Resource Numbers and Settings
+
Table 17 contains the fine-tuning settings for the low-resource experiments in Section 5.5. The numbers underlying Figures 2 and 3 are reported in Table 18. We did not report the results of the low-resource experiments for ARMAN(SH), ARMAN(SS-80), ARMAN(SS-100), and PEGASUS in the main part of the paper; they are reported in Tables 19, 20, 21, and 22.
+
+# D Samples
+
Two samples of the summaries generated by ARMAN(SS), ARMAN(MSR), and PEGASUS that were used in the human evaluation test are shown in Figures 5 and 6. More than $50\%$ of participants ranked ARMAN(MSR)'s summaries as the best among all models in the human evaluation test, which indicates their high quality.
+
+| Pre-train Corpus/Dataset | Original Corpus/Dataset Size | Cleaned Corpus/Dataset Size |
| irBlogs | 7.1GB | 2.6GB |
| MirasText | 15.7GB | 6.8GB |
| YJC News | 3GB | 2.3GB |
| CC100 | 111GB | 53GB |
| Total | 136.8GB | 64.7GB |
+
+Table 9: Size of pre-training text corpora in GB for each corpus before and after cleaning.
+
+| Task | Number of Train Examples | Number of Validation Examples | Number of Test Examples |
| Reading Comprehension | 600 | 125 | 575 |
| Multiple-Choice | 1271 | 139 | 1050 |
| Sentiment Analysis | 1894 | 235 | 294 |
| Textual Entailment | 756 | 271 | 1751 |
| Question Paraphrasing | 1830 | 898 | 1916 |
+
+Table 10: Task name and the number of examples for ParsiNLU dataset.
+
+| Dataset | Train Count | Validation Count | Test Count | Article Average Length | Summary Average Length |
| PN-Summary | 82022 | 5592 | 5593 | 335 | 31 |
| Wiki-Summary | 45653 | 5073 | 5637 | 425 | 82 |
| VOA | 31550 | 3506 | 3896 | 179 | 11 |
| Perkey (summary) | 42077 | - | 19796 | 218 | 28 |
| Perkey (title) | 526445 | - | 24930 | 224 | 11 |
| Tebyan | 78445 | 6922 | 6922 | 819 | 37 |
+
+Table 11: The number of articles and summaries and their average lengths for each downstream dataset (lengths are reported in word counts).
+
+| Model | Learning rate | Label Smoothing | Steps | Batch Size | Objective | Max Input Length | Max Output Length |
| PEGASUSbase | 0.01 | 0.0 | 1M | 128 | Ind-Orig | 512 | 128 |
| ARMAN(SS)base | 0.01 | 0.0 | 1M | 128 | TSS | 512 | 128 |
| ARMAN(SH)base | 0.01 | 0.0 | 1M | 128 | TSS+Shuffling | 512 | 128 |
| ARMAN(MSR)base | 0.01 | 0.0 | 1M | 128 | TSS+MSR | 512 | 128 |
+
+Table 12: Pre-training settings for ARMANbase models. We used the PEGASUSlarge (Zhang et al., 2020a) settings for maximum input and output length, since its authors searched for the best setting.
+
+| Dataset | Learning rate | Label Smoothing | Steps | Batch Size | Beam Size | Beam alpha | Max Input | Max Output |
| Perkey (summary) | 5 × 10⁻⁴ | 0.1 | 50K | 128 | 8 | 0.8 | 512 | 256 |
| Perkey (title) | 5 × 10⁻⁴ | 0.1 | 50K | 128 | 8 | 0.8 | 512 | 256 |
| PN-Summary | 5 × 10⁻⁴ | 0.1 | 50K | 128 | 8 | 0.8 | 512 | 256 |
| Tebyan | 5 × 10⁻⁴ | 0.1 | 50K | 128 | 8 | 0.8 | 512 | 256 |
| VOA | 5 × 10⁻⁴ | 0.1 | 20K | 64 | 8 | 0.8 | 512 | 256 |
| Wiki-Summary (v1) | 5 × 10⁻⁴ | 0.1 | 50K | 64 | 8 | 0.8 | 512 | 256 |
+
+Table 13: Fine-tuning settings for ARMANbase models on downstream summarization tasks and datasets.
+
+| Dataset | Learning rate | Label Smoothing | Steps | Batch Size | Beam Size | Beam alpha | Max Input | Max Output |
| Perkey (summary) | 5 × 10⁻⁴ | 0.1 | 150K | 128 | 8 | 0.8 | 512 | 256 |
| Perkey (title) | 5 × 10⁻⁴ | 0.1 | 150K | 128 | 8 | 0.8 | 512 | 256 |
| PN-Summary | 5 × 10⁻⁴ | 0.1 | 150K | 128 | 8 | 0.8 | 512 | 256 |
| Tebyan | 5 × 10⁻⁴ | 0.1 | 150K | 128 | 8 | 0.8 | 512 | 256 |
| VOA | 5 × 10⁻⁴ | 0.1 | 150K | 64 | 8 | 0.8 | 512 | 256 |
| Wiki-Summary (v1) | 5 × 10⁻⁴ | 0.1 | 150K | 64 | 8 | 0.8 | 512 | 256 |
+
+Table 14: Fine-tuning settings for Transformerbase models on downstream summarization tasks and datasets.
+
+| Dataset | Learning rate | Label Smoothing | Steps | Batch Size | Beam Size | Beam alpha | Max Input | Max Output |
| Multiple-Choice | 5 × 10⁻⁴ | 0.1 | 20K | 48 | 8 | 0.8 | 512 | 256 |
| Sentiment Analysis | 5 × 10⁻⁴ | 0.1 | 20K | 48 | 8 | 0.8 | 512 | 256 |
| Textual Entailment | 5 × 10⁻⁴ | 0.1 | 20K | 48 | 8 | 0.8 | 512 | 256 |
| Question Paraphrasing | 5 × 10⁻⁴ | 0.1 | 20K | 48 | 8 | 0.8 | 512 | 256 |
+
+Table 15: Fine-tuning settings for ARMANbase models on NLU tasks. A batch size of 48 was chosen to match the other models trained on these tasks. We converted each classification problem into a text-to-text problem.
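As a minimal illustration of this text-to-text casting (the prompt format, label strings, and `to_text2text` helper are illustrative assumptions, not the paper's exact serialization):

```python
# Hypothetical sketch: serializing a classification example as an
# input/target text pair, in the spirit of the text-to-text conversion
# described in Table 15's caption. The prompt format and label names
# below are assumptions for illustration only.
def to_text2text(task, text, label):
    """Return a text-to-text training pair for a classification example."""
    return {"input": f"{task}: {text}", "target": label}

example = to_text2text("sentiment", "This phone is great", "positive")
print(example["input"])   # sentiment: This phone is great
print(example["target"])  # positive
```

A seq2seq model fine-tuned on such pairs simply learns to generate the label string as its output sequence, so no task-specific classification head is needed.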
+
+| Model | Parameters | Transformer Type |
| \( ARMAN_{base} \) | 223M | Vaswani et al. (2017)'s Encoder-Decoder |
| \( Transformer_{base} \) | 223M | Vaswani et al. (2017)'s Encoder-Decoder |
| \( ParsBERT_{base} \) (Farahani et al., 2020b) | 221M | Rothe et al. (2020)'s Encoder-Decoder |
| \( PEGASUS_{base} \) (Zhang et al., 2020a) | 223M | Vaswani et al. (2017)'s Encoder-Decoder |
| \( mT5_{small} \) (Xue et al., 2021) | 300M | Vaswani et al. (2017)'s Encoder-Decoder |
+
+Table 16: Parameter count for each tested model used for summarization. Reported numbers are in millions.
+
+| Dataset | Learning rate | Label Smoothing | Steps | Batch Size | Beam Size | Beam alpha | Max Input | Max Output |
| Perkey (summary) | 5 × 10⁻⁴ | 0.1 | 2K | 128 | 8 | 0.8 | 512 | 256 |
| Perkey (title) | 5 × 10⁻⁴ | 0.1 | 2K | 128 | 8 | 0.8 | 512 | 256 |
| PN-Summary | 5 × 10⁻⁴ | 0.1 | 2K | 128 | 8 | 0.8 | 512 | 256 |
| Tebyan | 5 × 10⁻⁴ | 0.1 | 2K | 128 | 8 | 0.8 | 512 | 256 |
| VOA | 5 × 10⁻⁴ | 0.1 | 2K | 64 | 8 | 0.8 | 512 | 256 |
| Wiki-Summary (v1) | 5 × 10⁻⁴ | 0.1 | 2K | 64 | 8 | 0.8 | 512 | 256 |
+
+Table 17: Fine-tuning settings for ARMANbase and PEGASUSbase models on downstream summarization tasks and datasets for low resource experiments.
+
| examples | PN-Summary R1/R2/RL | Wiki-Summary R1/R2/RL | VOA R1/R2/RL | Perkey (summary) R1/R2/RL | Perkey (title) R1/R2/RL | Tebyan R1/R2/RL |
| 0 | 25.28/11.09/19.18 | 24.14/5.05/13.89 | 26.69/13.89/23.3 | 23.83/11.39/19.22 | 16.65/6.26/13.88 | 20.93/9.25/15.85 |
| 10 | 34.01/17.19/27.83 | 24.48/5.22/15.09 | 29.85/15.82/26.39 | 34.66/19.94/29.65 | 18.96/7.31/16.13 | 25.03/9.95/19.4 |
| 100 | 38.47/20.71/32.32 | 27.87/7.24/18.76 | 40.35/23.07/36.38 | 40.24/25.7/35.67 | 28.81/12.37/26.03 | 28.95/13.33/23.32 |
| 1K | 40.96/22.66/34.78 | 29.86/9.03/21.1 | 44.67/26.08/40.42 | 43.04/28.01/38.42 | 31.43/14.32/28.61 | 32.42/16.33/26.55 |
| 10K | 43.21/24.85/37.07 | 30.09/10.73/22.89 | 46.38/27.82/42.48 | 45.43/30.71/41 | 35.18/17.83/32.32 | 35.17/19.2/29.36 |
+
+Table 18: Low resource results of ARMAN(MSR) from Figures 2 and 3. With fewer than 1000 examples, ARMAN(MSR) beat the previous SOTA on the VOA and Wiki-Summary datasets. Also, 10K examples and 2K fine-tuning steps yielded results comparable to the previous SOTA on the PN-Summary dataset.
+
+| examples | PN-Summary R1/R2/RL | Wiki-Summary R1/R2/RL | VOA R1/R2/RL | Perkey (summary) R1/R2/RL | Perkey (title) R1/R2/RL | Tebyan R1/R2/RL |
| 0 | 24.08/10.34/18.37 | 22.73/4.58/13.24 | 27.13/14.14/23.67 | 23.91/12.07/19.6 | 16.69/6.36/13.9 | 20.1/8.74/15.23 |
| 10 | 34.71/17.63/28.51 | 24.6/5.43/15.27 | 33.88/18.5/30.06 | 38.27/23.88/33.61 | 22.93/8.97/20.15 | 25.91/11.4/20.27 |
| 100 | 38.67/20.67/32.49 | 27.41/7.42/19.03 | 41.05/23.39/37.03 | 41.34/26.31/36.58 | 26.83/11.28/24.09 | 29.13/13.49/23.52 |
| 1K | 40.95/22.78/34.7 | 30/8.68/20.75 | 44.22/25.11/39.77 | 43.11/28.06/38.45 | 31.2/14.17/28.28 | 33.1/17.24/27.31 |
| 10K | 43.07/24.84/37.05 | 29.83/10.43/22.58 | 46.8/27.87/42.86 | 45.19/30.43/40.76 | 34.79/17.53/31.83 | 34.71/18.83/28.97 |
+
+Table 19: Low resource results of ARMAN(SH). With fewer than 1000 examples, ARMAN(SH) beat the previous SOTA on the VOA and Wiki-Summary datasets. Also, 10K examples and 2K fine-tuning steps yielded results comparable to the previous SOTA on the PN-Summary dataset.
+
| examples | PN-Summary R1/R2/RL | Wiki-Summary R1/R2/RL | VOA R1/R2/RL | Perkey (summary) R1/R2/RL | Perkey (title) R1/R2/RL | Tebyan R1/R2/RL |
| 0 | 18.92/7.96/15.04 | 21.87/4.14/13.22 | 21.8/10.81/19.02 | 17.19/7.36/14.02 | 13.33/4.8/11.31 | 17.67/7.05/13.7 |
| 10 | 37.1/19.18/30.54 | 24.84/5.99/16.45 | 33.84/17.6/30.09 | 35.36/21.14/30.58 | 25.17/10.47/22.21 | 27.74/12.97/22.39 |
| 100 | 39.26/21.2/33.21 | 27.54/7.28/18.89 | 41.17/23.15/37.31 | 40.6/25.89/36.22 | 28.54/12.23/25.72 | 30.83/15.42/25.33 |
| 1K | 40.51/22.38/34.43 | 29.75/8.66/20.72 | 44.09/25.51/39.9 | 42.64/27.58/38.05 | 30.73/13.87/27.89 | 32.27/16.31/26.49 |
| 10K | 43.03/24.82/36.91 | 29.36/10.2/22.33 | 46.88/27.96/42.91 | 44.94/30.18/40.51 | 34.53/17.31/31.65 | 34.78/18.74/28.94 |
+
+Table 20: Low resource results of ARMAN(SS-80). With fewer than 1000 examples, ARMAN(SS-80) beat the previous SOTA on the VOA and Wiki-Summary datasets. Also, 10K examples and 2K fine-tuning steps yielded results comparable to the previous SOTA on the PN-Summary dataset.
+
| examples | PN-Summary R1/R2/RL | Wiki-Summary R1/R2/RL | VOA R1/R2/RL | Perkey (summary) R1/R2/RL | Perkey (title) R1/R2/RL | Tebyan R1/R2/RL |
| 0 | 35.53/18.91/29.78 | 18.86/3.42/14.4 | 26.18/11.51/22.75 | 36.15/20.89/31.37 | 19.58/7.34/16.73 | 22.04/9.67/19.04 |
| 10 | 38.49/20.79/32.49 | 24.05/6.47/17.59 | 33.45/16.42/29.66 | 38.55/23.05/33.72 | 24.85/10.18/21.74 | 28.53/14.13/23.84 |
| 100 | 39.26/21.19/33.17 | 27.76/7.46/19.31 | 41.52/22.97/37.52 | 40.71/25.27/35.84 | 28.63/12.27/25.7 | 30.32/14.29/24.73 |
| 1K | 41.25/22.89/35.16 | 30.15/8.88/21.01 | 44.88/25.58/40.67 | 42.97/27.75/38.36 | 31.38/14.32/28.52 | 32.85/17.02/27.26 |
| 10K | 43.51/25.28/37.42 | 29.48/10.16/22.31 | 46.98/28.33/43.07 | 45/30.08/40.52 | 35.29/17.95/32.42 | 34.95/19/29.22 |
+
+Table 21: Low resource results of ARMAN(SS-100). With fewer than 1000 examples, ARMAN(SS-100) beat the previous SOTA on the VOA and Wiki-Summary datasets. Also, 10K examples and 2K fine-tuning steps yielded results comparable to the previous SOTA on the PN-Summary dataset.
+
| examples | PN-Summary R1/R2/RL | Wiki-Summary R1/R2/RL | VOA R1/R2/RL | Perkey (summary) R1/R2/RL | Perkey (title) R1/R2/RL | Tebyan R1/R2/RL |
| 0 | 25.32/11.25/19.45 | 23.11/4.49/13.24 | 21.51/9.56/18.36 | 23.34/10.41/18.68 | 12.93/4.2/10.81 | 19.27/8.16/14.64 |
| 10 | 35.18/17.65/28.99 | 24.06/5.49/16.36 | 30.14/14.83/26.5 | 34.49/19.2/29.36 | 16.75/6.14/14.29 | 26.81/11.39/20.98 |
| 100 | 37.94/20/31.71 | 27.27/6.81/18.32 | 40.23/22.68/36.46 | 39.97/25.06/35.4 | 26.47/11.08/23.75 | 28.86/13.12/23.27 |
| 1K | 39.91/21.88/33.82 | 29.46/8.61/20.61 | 42.67/23.8/38.53 | 42.11/27.09/37.48 | 29.97/13.46/27.19 | 31.87/16.01/26.2 |
| 10K | 42.27/24.06/36.2 | 29.3/10.13/22.22 | 46.04/27.15/41.91 | 44.72/29.97/40.37 | 33.98/16.93/31.11 | 34.59/18.73/28.91 |
+
+Table 22: Low resource results of PEGASUS.
+
+
+Figure 5: The first sample of models' generated summaries in the human evaluation tests.
+
+
+Figure 6: The second sample of models' generated summaries in the human evaluation tests.
\ No newline at end of file
diff --git a/armanpretrainingwithsemanticallyselectingandreorderingofsentencesforpersianabstractivesummarization/images.zip b/armanpretrainingwithsemanticallyselectingandreorderingofsentencesforpersianabstractivesummarization/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..d0ad44e5f7f20b11a1590b7e97e540fdbec6f1bc
--- /dev/null
+++ b/armanpretrainingwithsemanticallyselectingandreorderingofsentencesforpersianabstractivesummarization/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:479737565ae6ea0618bb42da4a3b47526d3394b71523650feaec1ca61e19415c
+size 1337378
diff --git a/armanpretrainingwithsemanticallyselectingandreorderingofsentencesforpersianabstractivesummarization/layout.json b/armanpretrainingwithsemanticallyselectingandreorderingofsentencesforpersianabstractivesummarization/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..deed34674e70caedf956e8a3a26f17918eaa5d66
--- /dev/null
+++ b/armanpretrainingwithsemanticallyselectingandreorderingofsentencesforpersianabstractivesummarization/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4bd656c33fce70dd2009a069897f540fa9da13c754a894beda7358673326e350
+size 483522
diff --git a/aroleselectedsharingnetworkforjointmachinehumanchattinghandoffandservicesatisfactionanalysis/8e543e6c-9f66-4664-82ee-ad78dc020208_content_list.json b/aroleselectedsharingnetworkforjointmachinehumanchattinghandoffandservicesatisfactionanalysis/8e543e6c-9f66-4664-82ee-ad78dc020208_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..7f9362e1f2459c03a14eb42bc1f3ff9f3333c8ef
--- /dev/null
+++ b/aroleselectedsharingnetworkforjointmachinehumanchattinghandoffandservicesatisfactionanalysis/8e543e6c-9f66-4664-82ee-ad78dc020208_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b9fb694f6bd153df043b9a83ec2765190e9a3d66ad0911c17f061117b951ac21
+size 80145
diff --git a/aroleselectedsharingnetworkforjointmachinehumanchattinghandoffandservicesatisfactionanalysis/8e543e6c-9f66-4664-82ee-ad78dc020208_model.json b/aroleselectedsharingnetworkforjointmachinehumanchattinghandoffandservicesatisfactionanalysis/8e543e6c-9f66-4664-82ee-ad78dc020208_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..078ed584bbfa90638bf6b0b86709be9b052f0a63
--- /dev/null
+++ b/aroleselectedsharingnetworkforjointmachinehumanchattinghandoffandservicesatisfactionanalysis/8e543e6c-9f66-4664-82ee-ad78dc020208_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fb4ad1b47c9a1f14aafe7d8c2baba6fa4d78c4086be973d4774bc9e19f636fed
+size 96072
diff --git a/aroleselectedsharingnetworkforjointmachinehumanchattinghandoffandservicesatisfactionanalysis/8e543e6c-9f66-4664-82ee-ad78dc020208_origin.pdf b/aroleselectedsharingnetworkforjointmachinehumanchattinghandoffandservicesatisfactionanalysis/8e543e6c-9f66-4664-82ee-ad78dc020208_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..af23e124dd128a19b31a0c562de9afe5d68539ed
--- /dev/null
+++ b/aroleselectedsharingnetworkforjointmachinehumanchattinghandoffandservicesatisfactionanalysis/8e543e6c-9f66-4664-82ee-ad78dc020208_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a24af790b5e458a5322f7f839f5d21b8b2543b1f4ca33c2916adb51e0d7612e2
+size 677861
diff --git a/aroleselectedsharingnetworkforjointmachinehumanchattinghandoffandservicesatisfactionanalysis/full.md b/aroleselectedsharingnetworkforjointmachinehumanchattinghandoffandservicesatisfactionanalysis/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..d9dbf2f11fc7c8f1fdd120ec630e1f37b3abf726
--- /dev/null
+++ b/aroleselectedsharingnetworkforjointmachinehumanchattinghandoffandservicesatisfactionanalysis/full.md
@@ -0,0 +1,315 @@
+# A Role-Selected Sharing Network for Joint Machine-Human Chatting Handoff and Service Satisfaction Analysis
+
+Jiawei Liu $^{1}$ , Kaisong Song $^{2,3}$ , Yangyang Kang $^{3}$ , Guoxiu He $^{4}$ , Zhuoren Jiang $^{5}$ , Changlong Sun $^{3,5}$ , Wei Lu $^{1*}$ , Xiaozhong Liu $^{6*}$
+
+$^{1}$ Wuhan University, Wuhan, China $^{2}$ Northeastern University, Shenyang, China $^{3}$ Alibaba Group, Hangzhou, China $^{4}$ East China Normal University, Shanghai, China $^{5}$ Zhejiang University, Hangzhou, China $^{6}$ Worcester Polytechnic Institute, Worcester, USA
+
+{laujames2017,weilu}@whu.edu.cn, {kaisong.sks,yangyang.kangyy}@alibaba-inc.com, gxhe@fem.ecnu.edu.cn, jiangzhuoren@zju.edu.cn, changlong.scl@taobao.com, xliul4@wpi.edu
+
+# Abstract
+
+Chatbots are increasingly thriving in different domains; however, because of unexpected discourse complexity and training data sparseness, potential distrust of them raises vital concerns. Recently, Machine-Human Chatting Handoff (MHCH), which predicts chatbot failure and enables human-algorithm collaboration to enhance chatbot quality, has attracted increasing attention from industry and academia. In this study, we propose a novel model, the Role-Selected Sharing Network (RSSN), which integrates both dialogue satisfaction estimation and handoff prediction in one multi-task learning framework. Unlike prior efforts in dialogue mining, by utilizing local user satisfaction as a bridge, the global satisfaction detector and the handoff predictor can effectively exchange critical information. Specifically, we decouple the relation and interaction between the two tasks by the role information after the shared encoder. Extensive experiments on two public datasets demonstrate the effectiveness of our model.
+
+# 1 Introduction
+
+Chatbots, as one of the recent palpable AI excitements, have been widely adopted to reduce the cost of customer service (Qiu et al., 2017; Ram et al., 2018; Zhou et al., 2020). However, due to the complexity of human conversation, an automatic chatbot can hardly meet all users' needs, and its potential failures invite skepticism. AI-enabled customer service, for instance, may trigger unexpected business losses because of chatbot failures (Radziwill and Benton, 2017; Rajendran et al., 2019). Moreover, for chatbot adoption in sensitive areas, such as healthcare (Chung and Park, 2019) and criminal justice (Wang et al., 2020a), any subtle statistical miscalculation may trigger serious health and legal
+
+
+Figure 1: A snippet of a moderately satisfied customer service dialogue. There is a satisfaction rating at the end of the conversation. The utterance with an orange background color denotes a transferable utterance.
+
+consequences. To address this problem, scholars have recently proposed new dialogue mining tasks to auto-assess dialogue satisfaction, a.k.a. Service Satisfaction Analysis (SSA) at the dialogue level (Song et al., 2019), and to predict potential chatbot failure via machine-human chatting handoff (MHCH) at the utterance level (Huang et al., 2018; Liu et al., 2021). In an MHCH context, an algorithm can transfer an ongoing auto-dialogue to a human agent when the current utterance is confusing.
+
+Figure 1 depicts an exemplar dialogue of online customer service. In this dialogue, the chatbot gives an unsatisfying answer about shipping, causing the customer's complaint (local dissatisfaction, $utter_{2}$ and $utter_{3}$ ). Ideally, the chatbot should detect the negative (local) emotion $(utter_{3})$ and try to appease the complaint, but this problem remains unresolved. If the chatbot continues, the customer may cancel the deal and give a negative rating (dialogue-level global dissatisfaction). With MHCH
+
+(which detects the risks of $utter_{2}$ and $utter_{3}$ ), the dialogue can be transferred to the human agent, who is better at handling, compensating, and comforting the customer, enhancing customer satisfaction. This example illustrates the cross-impact between handoff and dialogue (local + global) satisfaction. Intuitively, the MHCH and SSA tasks can be compatible and complementary given a dialogue discourse, i.e., local satisfaction is related to the quality of the conversation (Bodigutla et al., 2019a, 2020), which can support the handoff judgment and ultimately affect the overall satisfaction. On the one hand, handoff labels of utterances are highly pertinent to local satisfaction; e.g., one can utilize handoff information alone to enhance local satisfaction prediction, which ultimately contributes to the overall satisfaction estimation. On the other hand, the overall satisfaction is obtained by combining local satisfactions, which reflects quality in terms of answer generation, language understanding, and emotion perception, and subsequently helps to facilitate the handoff judgment.
+
+In recent years, researchers (Bodigutla et al., 2019a,b; Ultes, 2019; Bodigutla et al., 2020) have explored joint evaluation of turn- and dialogue-level qualities in spoken dialogue systems. For general dialogue systems, to improve the efficiency of dialogue management, Qin et al. (2020) propose a co-interactive relation layer to explicitly examine the cross-impact and model the interaction between sentiment classification and dialogue act recognition, which are relevant tasks at the same level (utterance level). However, MHCH (utterance level) and SSA (dialogue level) target satisfaction at different levels. More importantly, handoff labels of utterances are more comprehensive and pertinent to local satisfaction than sentiment polarities. Meanwhile, customer utterances have significant impacts on the overall satisfaction (Song et al., 2019), which suggests that role information can be critical for knowledge transfer between these two tasks.
+
+To address the aforementioned issues, we propose an innovative Role-Selected Sharing Network (RSSN) for handoff prediction and dialogue satisfaction estimation, which utilizes role information to selectively characterize the complex relations and interactions between the two tasks. To the best of our knowledge, this is the first investigation to leverage a multi-task learning approach for integrating MHCH and SSA. In practice, we first adopt a shared encoder to obtain the shared representations
+
+of utterances. Inspired by the co-attention mechanism (Xiong et al., 2016; Qin et al., 2020), the shared representations are then fed into the role-selected sharing module, which consists of two directional interactions: MHCH to SSA and SSA to MHCH. This module is used to obtain a fusion of the MHCH and SSA representations. We propose the role-selected sharing module based on the hypothesis that role information can benefit the tasks' performances. The satisfaction distributions of utterances from different roles (agent and customer) are different, and their effects on the tasks also differ. Specifically, the satisfaction of the agent is non-negative. The utterances from the agent can enrich the context of the customer's utterances and indirectly affect the satisfaction polarity. Thus, directly employing the agent's local satisfaction in the interaction with handoff may introduce noise. In the proposed role-selected sharing module, we adopt local satisfaction based on role information: only the local satisfaction of the customer is adopted to interact with the handoff information. By this means, we can control knowledge transfer for both tasks and make our framework more explainable. The final integrated outputs are then fed to separate decoders for handoff and satisfaction predictions.
+
+To summarize, our contributions are mainly as follows: (1) We introduce a novel multi-task learning framework combining machine-human chatting handoff and service satisfaction analysis. (2) We propose a Role-Selected Sharing Network for handoff prediction and satisfaction rating estimation, which can utilize role information to control knowledge transfer for both tasks and enhance model performance and explainability. (3) The experimental results demonstrate that our model outperforms a series of baselines consisting of the state-of-the-art (SOTA) models on each task and multi-task learning models for both tasks. To assist other scholars in reproducing the experimental outcomes, we release the code and the annotated dataset1.
+
+# 2 Related Work
+
+Due to the complexity of human conversation, current automatic chatbots are not mature enough and still fail to meet users' expectations (Brandtzaeg and Følstad, 2018; Jain et al., 2018; Chaves and Gerosa, 2020). Besides exploring novel dialogue models, dialogue quality estimation, service satisfaction analysis, and human intervention are vital strategies to enhance chatbot performance.
+
+Dialogue Quality and Service Satisfaction Analysis. Interaction Quality (IQ) (Schmitt et al., 2012) and Response Quality (RQ) (Bodigutla et al., 2019b) are dialogue quality evaluation metrics for spoken dialogue systems. Automated models that estimate IQ (Ultes et al., 2014; El Asri et al., 2014) and RQ (Bodigutla et al., 2019a,b, 2020) utilize various features derived from the dialogue content and the output of spoken language understanding components. For chat-oriented dialogue systems, Higashinaka et al. (2015a,b) introduce the Dialogue Breakdown Detection task to detect a system's inappropriate utterances that lead to dialogue breakdowns. To efficiently analyze dialogue satisfaction, Song et al. (2019) introduce the task of service satisfaction analysis (SSA) based on multi-turn customer service dialogues. Their CAMIL model predicts the sentiment of all the customer utterances and aggregates those sentiments into an overall service satisfaction polarity. Nevertheless, the sentiment of customer utterances is only one of the factors that influence service satisfaction.
+
+Machine-Human Chatting Handoff. Another way to further enhance a chatbot's performance is to combine chatbots with human agents. Recently, there have been several works on human-machine cooperation for chatbots. Huang et al. (2018) propose a crowd-powered conversational assistant architecture, namely Evorus, which integrates crowds with multiple chatbots and a voting system. Rajendran et al. (2019) utilize a reinforcement learning framework to transfer conversations to human agents once new user behaviors are encountered. Different from them, Liu et al. (2021) mainly focus on detecting transferable utterances, which are one of the keys to improving user satisfaction. They propose the DAMI network, which utilizes difficulty-assisted encoding and matching inference mechanisms to predict transferable utterances.
+
+Multi-task learning in dialogue systems. For satisfaction estimation, Bodigutla et al. (2020) propose to jointly predict turn-level RQ labels and dialogue-level ratings. They utilize features from the spoken dialogue system and a BiLSTM (Hochreiter and Schmidhuber, 1997) based model to automatically weight each turn's contribution towards the rating. Ma et al. (2018) propose a joint framework that unifies two highly pertinent tasks. Both tasks are trained jointly using weight sharing to extract
+
+the common and task-invariant features, while each task can still learn its task-specific features. To learn the correlation between the two tasks, Qin et al. (2020) propose DCR-Net, which adopts a stacked co-interactive relation layer to incorporate mutual knowledge explicitly. However, this model ignores contextual information and isolates the two types of information when performing the interaction.
+
+# 3 Methodology
+
+Figure 2 shows the overall architecture of RSSN, which consists of three parts: Shared Utterance and Matching Encoder, Role-Selected Interaction Layer, and Decoder for MHCH and SSA. In this section, we will describe them in detail.
+
+A dialogue $D = [u_{1}, \dots, u_{L}]$ consists of a sequence of $L$ utterances with corresponding handoff labels $[y_{1}^{h}, \dots, y_{L}^{h}]$ , where $y_{t}^{h} \in \Psi$ for $t \in \{1, \dots, L\}$ and $\Psi = \{\text{normal, transferable}\}$ . Transferable indicates the dialogue should be transferred to the human agent, whereas normal indicates there is no need to transfer. The satisfaction polarity of dialogue $D$ is denoted $y^{s}$ , where $y^{s} \in \Omega$ and $\Omega = \{\text{well satisfied, met, unsatisfied}\}$ . Note that we perform multi-task learning with the supervision of handoff labels and the dialogue's satisfaction only. The local satisfaction distributions of utterances are only latent estimates, which help to predict the dialogue's satisfaction.
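The supervision scheme above can be sketched as a small consistency check (the label sets follow the paper; the `check_dialogue` helper and the toy utterances are illustrative assumptions):

```python
# Label spaces from the task formulation: per-utterance handoff labels
# (MHCH) and a single per-dialogue satisfaction polarity (SSA).
HANDOFF_LABELS = {"normal", "transferable"}
SATISFACTION_LABELS = {"well satisfied", "met", "unsatisfied"}

def check_dialogue(utterances, handoff_labels, satisfaction):
    """Validate that supervision matches the formulation: one handoff
    label per utterance, one satisfaction polarity for the dialogue."""
    assert len(utterances) == len(handoff_labels)
    assert all(y in HANDOFF_LABELS for y in handoff_labels)
    assert satisfaction in SATISFACTION_LABELS
    return True

# Toy dialogue in the spirit of Figure 1: the later utterances
# should trigger a handoff, and the dialogue rating is negative.
utts = ["When will it ship?", "Sorry, I don't understand.", "This is useless!"]
labels = ["normal", "transferable", "transferable"]
print(check_dialogue(utts, labels, "unsatisfied"))  # True
```

Only the handoff labels and the dialogue-level polarity are observed; utterance-level satisfaction stays latent, exactly as the paragraph above states.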
+
+# 3.1 Shared Utterance and Matching Encoder
+
+The shared encoder consists of a bidirectional LSTM (BiLSTM) to learn the utterance representation and a masked matching layer to capture the contextual matching information.
+
+Suppose $u_{t} = [w_{1},\dots,w_{|u_{t}|}]$ represents a sequence of words in the $t$ -th utterance. These words are mapped into corresponding word embeddings $\pmb{E}_{u_t}\in \mathbb{R}^{n\times |u_t|}$ , where $n$ is the word embedding dimension. By adopting semantic composition models with word embeddings, we can learn the utterance representation. In this work, we adopt a BiLSTM model and concatenate hidden states of forward and backward LSTM to learn the context-sensitive utterance representation $\pmb{v}_t\in \mathbb{R}^{2k}$ , where $k$ is the number of hidden units of LSTM cell. Formally, we have $\pmb{v}_t = \mathrm{BiLSTM}(\pmb{E}_{u_t})$ .
+
+In a dialogue, preceding utterances for each utterance provide helpful context information to estimate local satisfaction. Thus, within a dialogue, there is a high probability of inter-dependency with
+
+
+Figure 2: The architecture of our Role-Selected Sharing Network (RSSN).
+
+respect to their context clues. To encapsulate the contextual matching and information flow in the dialogue, we feed the utterance representation into a unidirectional matching mechanism:
+
+$$
+\boldsymbol{v}_t' = \boldsymbol{v}_t^{\top} [\boldsymbol{v}_1, \boldsymbol{v}_2, \dots, \boldsymbol{v}_{t-1}] \tag{1}
+$$
+
+After masking out the future information of the present utterance, the matching features of dialogue $D$ form a lower triangular matrix with the diagonal values removed. Then we concatenate the matching features with the utterance representation to get $\hat{\boldsymbol{v}}_t = [\boldsymbol{v}_t';\boldsymbol{v}_t]$ . Finally, we obtain the initial shared utterance representations $\pmb {H} = [\hat{\pmb{v}}_1,\dots,\hat{\pmb{v}}_L]$ for MHCH and $S = [\hat{v}_1,\dots,\hat{v}_L]$ for SSA.
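A rough numpy sketch of Eq. (1) and the masking step, assuming the BiLSTM outputs are given (random placeholders stand in for the learned representations):

```python
import numpy as np

# Sketch of the unidirectional matching mechanism: each utterance is
# matched (dot product) against all preceding utterances only, so the
# stacked matching features form a strictly lower triangular matrix.
rng = np.random.default_rng(0)
L, two_k = 4, 6                     # 4 utterances, 2k-dim BiLSTM states
V = rng.normal(size=(L, two_k))     # V[t] stands in for v_t

match = V @ V.T                     # all pairwise dot products v_t . v_t'
match *= np.tri(L, k=-1)            # zero out t' >= t (diagonal and above)

# Concatenate matching features with the utterance representation,
# giving the shared representations fed to both the MHCH and SSA sides.
V_hat = np.concatenate([match, V], axis=1)

assert match[0].sum() == 0          # the first utterance has no predecessors
assert V_hat.shape == (L, L + two_k)
```

In the paper the matching features for utterance $t$ have variable length $t-1$; padding them into fixed $L$ columns with a strict lower-triangular mask, as above, is one convenient realization of that description.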
+
+# 3.2 Role-Selected Interaction Layer
+
+In customer service dialogues, different participant roles exhibit different characteristics (Song et al., 2019). Besides, we conjecture that MHCH and SSA have different impacts on each other. These two tasks indirectly establish a connection through various factors such as dialogue quality, satisfaction, and sentiment. At the same time, role information matters for both tasks. On the one hand, the utterances from the agent can enrich the context of customer utterances and indirectly affect the satisfaction polarity. In contrast, customer utterances tend to have a more direct
+
+impact on the dominating satisfaction polarity. On the other hand, the utterances of any participants can trigger machine-human chatting handoff. Thus, we propose the Role-Selected Interaction Layer, which contains two interaction directions: SSA to MHCH and MHCH to SSA, to model the relations and interactions between the two tasks separately.
+
+We first apply two Dense layers over the handoff information and the satisfaction information, respectively, to make them more task-specific, denoted as $\pmb{H}^{\prime} = \mathrm{Dense}(\pmb{H})$ and $\pmb{S}^{\prime} = \mathrm{Dense}(\pmb{S})$ , where $\pmb{H}^{\prime} \in \mathbb{R}^{L \times d}$ and $\pmb{S}^{\prime} \in \mathbb{R}^{L \times d}$ . Note that $d$ is the number of hidden units of the Dense layer.
+
+
+Figure 3: The relative handoff position distributions in three different service satisfaction ratings.
+
+SSA to MHCH. Co-attention is an effective and widely used method for capturing mutual knowledge among correlated tasks (Xiong et al., 2016; Qin et al., 2020). Inspired by the basic co-attention mechanism, we design the interaction mechanism separately according to the characteristics of the tasks, so that task-relevant knowledge can be transferred between them. Specifically, the SSA to MHCH module produces comprehensive handoff representations that incorporate local satisfaction information. Since agent utterances only indirectly affect satisfaction polarity, directly employing the local satisfaction of agent utterances in the interaction with handoff information may introduce noise. As a consequence, we only adopt the local satisfaction information of the customer to interact with the handoff information. The process is defined as follows:
+
+$$
+\boldsymbol{\alpha}^{s} = \operatorname{softmax}\left(\operatorname{Mask}_{c}\left(\boldsymbol{H}^{\prime}\left(\boldsymbol{S}^{\prime}\right)^{\top}\right)\right) \tag{2}
+$$
+
+$$
+M = \operatorname{Dense}\left(\left[\alpha^{s} S^{\prime}; H^{\prime}\right]\right) \tag{3}
+$$
+
+where $M \in \mathbb{R}^{L \times d}$ and $\operatorname{Mask}_c$ denotes that we mask out (set to $-\infty$ ) all values of the future information and the agent utterances.
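Eqs. (2)–(3) can be sketched in numpy with random task-specific representations and hypothetical role labels; $\mathrm{Mask}_c$ is realized by setting disallowed scores to $-\infty$ before the softmax:

```python
import numpy as np

def masked_softmax(scores, mask):
    """Row-wise softmax with disallowed positions set to -inf first."""
    s = np.where(mask, scores, -np.inf)
    e = np.exp(s - s.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(2)
L, d = 5, 4
Hp = rng.normal(size=(L, d))                     # H': task-specific handoff reps
Sp = rng.normal(size=(L, d))                     # S': task-specific satisfaction reps
is_customer = np.array([True, False, True, False, True])  # hypothetical role labels

# Mask_c: utterance t may only attend to non-future *customer* utterances j <= t.
mask = np.tril(np.ones((L, L), dtype=bool)) & is_customer[None, :]
alpha_s = masked_softmax(Hp @ Sp.T, mask)        # Eq. (2)

W = rng.normal(size=(2 * d, d))                  # stand-in for the Dense layer of Eq. (3)
M = np.concatenate([alpha_s @ Sp, Hp], axis=1) @ W

assert M.shape == (L, d)
assert np.allclose(alpha_s.sum(axis=1), 1.0)
```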
+
+MHCH to SSA. As shown in Figure 3, we observe that the dialogue satisfaction rating is related to the handoff position. Intuitively, a handoff can be triggered by the locally unsatisfied attitude of the customer, and a later handoff means the customer remains unsatisfied near the end of the conversation. A prior study (Song et al., 2019) also found that user satisfaction at the dialogue level is usually determined by the attitudes of the last few utterances. We can therefore infer that a handoff in the later period of the conversation may result in a lower satisfaction rating. Thus, we adjust the interactive attention with positional weights, computed as below:
+
+$$
+\beta_{t} = \operatorname{softmax}\left(\left[\frac{1}{L}, \dots, \frac{t}{L}, \dots, 1\right] \odot I_{p}\left(u_{t}\right)\right) \tag{4}
+$$
+
+where $\odot$ is the element-wise product and $I_{p}(\cdot)$ denotes a zero-masking identity matrix that masks out future information. Finally, the positional weights are stacked as $\Gamma = [\beta_1; \ldots; \beta_L]$ , where $\Gamma \in \mathbb{R}^{L \times L}$ . This mechanism gives more weight to later handoff information. We apply the positional weights to the interaction:
+
+$$
+\boldsymbol{\alpha}^{m} = \operatorname{softmax}\left(\operatorname{Mask}\left(\boldsymbol{S}^{\prime} \cdot \left(\boldsymbol{H}^{\prime}\right)^{\top} \cdot \boldsymbol{\Gamma}\right)\right) \tag{5}
+$$
+
+$$
+\boldsymbol{Q} = \operatorname{LayerNorm}\left(\boldsymbol{\alpha}^{m} \cdot \boldsymbol{H}^{\prime} + \boldsymbol{S}^{\prime}\right) \tag{6}
+$$
+
+where Mask denotes that we mask out the future information (setting to $-\infty$ ), and LayerNorm denotes the layer normalization (Ba et al., 2016).
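Eqs. (4)–(6) can be sketched in numpy under stated assumptions: random task-specific representations, and the zero-masking of future turns approximated by setting masked scores to $-\infty$ before the softmax:

```python
import numpy as np

def masked_softmax(scores, mask):
    """Row-wise softmax; positions where mask is False are set to -inf first."""
    s = np.where(mask, scores, -np.inf)
    e = np.exp(s - s.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def layer_norm(x, eps=1e-5):
    mu, var = x.mean(-1, keepdims=True), x.var(-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

rng = np.random.default_rng(3)
L, d = 5, 4
Hp = rng.normal(size=(L, d))                    # H': handoff representations
Sp = rng.normal(size=(L, d))                    # S': satisfaction representations
causal = np.tril(np.ones((L, L), dtype=bool))   # visible prefix for each turn t

# Eq. (4): beta_t = softmax over the visible prefix of [1/L, ..., t/L], so later
# turns receive larger weight; the rows of Gamma stack the beta_t.
pos = np.arange(1, L + 1) / L
Gamma = masked_softmax(np.tile(pos, (L, 1)), causal)

alpha_m = masked_softmax(Sp @ Hp.T @ Gamma, causal)   # Eq. (5)
Q = layer_norm(alpha_m @ Hp + Sp)                     # Eq. (6): residual + LayerNorm

assert Q.shape == (L, d)
assert np.all(np.diff(Gamma[-1]) > 0)   # later positions weighted more
```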
+
+# 3.3 Decoder for MHCH and SSA
+
+After the role-selected interaction layer, we can get the outputs $M = [m_1, \dots, m_L]$ and $Q = [q_1, \dots, q_L]$ . Then we adopt separate decoders to predict handoff and satisfaction rating.
+
+In terms of machine-human chatting handoff, the tendency of handoff also depends on the dialogue context. Thus, we feed the outputs of the interaction layer into an LSTM to connect the sequential information flow in the dialogue:
+
+$$
+\boldsymbol{h}_{t} = \operatorname{LSTM}\left(\boldsymbol{m}_{t}, \boldsymbol{h}_{t-1}\right) \tag{7}
+$$
+
+where $\pmb{h}_t \in \mathbb{R}^k$ is the hidden state for $u_t$ . Since there are no dependencies among labels, we simply use a softmax classifier for handoff prediction:
+
+$$
+\hat{\boldsymbol{y}}_{t}^{h} = \operatorname{softmax}\left(W_{h} \boldsymbol{h}_{t} + \boldsymbol{b}_{h}\right) \tag{8}
+$$
+
+where $W_{h}\in \mathbb{R}^{|\Psi |\times k}$ and $\pmb{b}_{h}\in \mathbb{R}^{|\Psi |}$ . $\hat{\pmb{y}}_t^h\in \mathbb{R}^{|\Psi |}$ is the predicted handoff probability distribution of $u_{t}$ .
+
+For service satisfaction analysis, we first apply a transformer block (Vaswani et al., 2017) to model the long-range context of the dialogue further. Formally, we have $\hat{Q} = \mathrm{Transformer}(Q)$ , where $\hat{Q} = \{\hat{q}_1,\dots,\hat{q}_L|\hat{q}_t\in \mathbb{R}^k\}$ .
+
+Then we utilize a softmax function for estimating local satisfaction distribution $\pmb{z}_t \in \mathbb{R}^{|\Omega|}$ of $u_t$ :
+
+$$
+\boldsymbol{z}_{t} = \operatorname{softmax}\left(W_{\xi} \hat{\boldsymbol{q}}_{t} + \boldsymbol{b}_{\xi}\right) \tag{9}
+$$
+
+where $W_{\xi}\in \mathbb{R}^{|\Omega |\times k}$ and $\pmb {b}_{\xi}\in \mathbb{R}^{|\Omega |}$ .
+
+Since only a fraction of customer utterances contribute to the final satisfaction rating, we introduce an attention strategy that enables our model to attend to customer utterances according to their importance when merging the local satisfaction distributions. Formally, we measure the importance of each customer utterance as below:
+
+$$
+\boldsymbol{\alpha} = \operatorname{softmax}\left(\operatorname{Mask}_{c}^{\prime}\left(\boldsymbol{g}^{\top} \tanh\left(W_{\mu} \hat{\boldsymbol{Q}}^{\top} + \boldsymbol{b}_{\mu}\right)\right)\right) \tag{10}
+$$
+
+where $\alpha \in \mathbb{R}^L$ . $W_{\mu} \in \mathbb{R}^{z \times k}$ , $b_{\mu} \in \mathbb{R}^z$ , and $g \in \mathbb{R}^z$ are trainable parameters, and $z$ is the number of attention units. $\mathrm{Mask}_c'$ denotes the masking function that retains only the customer utterances. $g$ can be perceived as a high-level representation of a fixed query, "Which is the critical utterance?"
+
+Finally, we obtain the overall satisfaction distribution $\hat{\pmb{y}}^s\in \mathbb{R}^{|\Omega |}$ as the weighted sum of the local satisfaction distributions of customer utterances:
+
+$$
+\hat {\boldsymbol {y}} ^ {s} = \sum_ {t = 1} ^ {L} \alpha_ {t} \boldsymbol {z} _ {t} \tag {11}
+$$
+
+where $\alpha_{t}$ is the $t$ -th weight of utterance $u_{t}$ in $\alpha$ .
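Eqs. (10)–(11) can be sketched in numpy with hypothetical role labels and randomly initialized parameters; the $\mathrm{Mask}_c'$ step is approximated by setting agent-turn scores to $-\infty$:

```python
import numpy as np

rng = np.random.default_rng(4)
L, k, z, n_cls = 5, 4, 3, 3                    # turns, hidden size, attn units, |Omega|
Q_hat = rng.normal(size=(L, k))                # transformer outputs \hat{q}_t
Z = rng.dirichlet(np.ones(n_cls), size=L)      # local satisfaction distributions z_t
is_customer = np.array([True, False, True, False, True])   # hypothetical roles

W_mu = rng.normal(size=(z, k)); b_mu = rng.normal(size=z); g = rng.normal(size=z)

# Eq. (10): score each turn with g^T tanh(W \hat{Q}^T + b), keep customer turns only.
scores = g @ np.tanh(W_mu @ Q_hat.T + b_mu[:, None])
scores = np.where(is_customer, scores, -np.inf)
alpha = np.exp(scores - scores.max()); alpha /= alpha.sum()

y_s = alpha @ Z                                # Eq. (11): weighted sum of the z_t
assert np.isclose(y_s.sum(), 1.0)              # a valid distribution over ratings
assert np.all(alpha[~is_customer] == 0.0)      # agent turns get zero weight
```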
+
+| Statistics items | Clothes | Makeup |
+| # Dialogues | 10,000 | 3,540 |
+| # US (unsatisfied) | 2,302 | 1,180 |
+| # MT (met) | 6,399 | 1,180 |
+| # WS (well satisfied) | 1,299 | 1,180 |
+| # Transferable Utterances | 16,921 | 7,668 |
+| # Normal Utterances | 237,891 | 86,778 |
+| Avg # Utterances | 25.48 | 26.67 |
+| Avg # Tokens | 7.64 | 7.87 |
+| Kappa | 0.85 | 0.88 |
+
+Table 1: Statistics of the datasets.
+
+# 3.4 Joint Training
+
+The objective function of MHCH is formulated as:
+
+$$
+\mathcal {L} _ {1} = - \frac {1}{L} \sum_ {t = 1} ^ {L} \sum_ {i = 1} ^ {| \Psi |} y _ {i, t} ^ {h} \log \left(\hat {y} _ {i, t} ^ {h}\right) \tag {12}
+$$
+
+The objective function of SSA is formulated as:
+
+$$
+\mathcal {L} _ {2} = - \sum_ {i = 1} ^ {| \Omega |} y _ {i} ^ {s} \log \left(\hat {y} _ {i} ^ {s}\right) \tag {13}
+$$
+
+Finally, we minimize the joint cross-entropy loss $\mathcal{L}$ , which is obtained as follows:
+
+$$
+\mathcal {L} (\Theta) = \mathcal {L} _ {1} + \eta * \mathcal {L} _ {2} + \delta \| \Theta \| _ {2} ^ {2} \tag {14}
+$$
+
+where $\eta \in \mathbb{R}^{+}$ denotes the trade-off parameter, $\delta$ denotes the $L_{2}$ regularization weight, and $\Theta$ denotes all the trainable parameters of the model. We use backpropagation to compute the gradients of the parameters and update them with the Adam (Kingma and Ba, 2015) optimizer.
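Eqs. (12)–(14) can be sketched with toy tensors (the label shapes are ours for illustration, and the $\eta$, $\delta$ values are placeholders, since the paper selects them by grid search):

```python
import numpy as np

def xent(y, y_hat, eps=1e-12):
    """Cross-entropy between a one-hot target and a predicted distribution."""
    return -(y * np.log(y_hat + eps)).sum(axis=-1)

rng = np.random.default_rng(5)
L, n_psi, n_omega = 6, 2, 3                       # turns, |Psi|, |Omega|
Yh = np.eye(n_psi)[rng.integers(0, n_psi, L)]     # one-hot handoff labels per turn
Yh_hat = rng.dirichlet(np.ones(n_psi), size=L)    # predicted handoff distributions
ys = np.eye(n_omega)[1]                           # one-hot satisfaction label
ys_hat = rng.dirichlet(np.ones(n_omega))          # predicted satisfaction distribution

eta, delta = 0.5, 1e-5                 # placeholder trade-off and L2 weights
theta = rng.normal(size=100)           # stand-in for all trainable parameters

L1 = xent(Yh, Yh_hat).mean()           # Eq. (12): turn-level handoff loss
L2 = xent(ys, ys_hat)                  # Eq. (13): dialogue-level SSA loss
loss = L1 + eta * L2 + delta * (theta ** 2).sum()   # Eq. (14)

assert loss > 0
```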
+
+# 4 Experiments and Results
+
+# 4.1 Dataset and Experimental Settings
+
+Our experiments are conducted on two publicly available Chinese customer service dialogue datasets, Clothes and Makeup, collected by Song et al. (2019) from Taobao. Both datasets have service satisfaction ratings from customer feedback and annotated sentiment labels of utterances. Note that the sentiment labels do not participate in our training process and are only used for testing. Meanwhile, we also annotate the transferable/normal labels for both datasets according to the existing specifications (Liu et al., 2021). Two annotators with professional linguistics knowledge participated in the annotation task.
+
+A summary of statistics, including the Kappa values (Snow et al., 2008), is given in Table 1 for both datasets. Clothes is a corpus of 10K dialogues in the clothes domain with an imbalanced satisfaction distribution at the dialogue level. Makeup is a corpus of 3,540 dialogues in the makeup domain with a balanced satisfaction distribution at the dialogue level. Note that we do not adopt the original word segmentation. Figure 3 shows the relative handoff position distributions under different satisfaction ratings, where we take explicit-request, negative-emotion, and unsatisfactory-answer handoffs into consideration. It indicates that a handoff in the later phase of the conversation is more likely to lead to a lower service satisfaction rating.
+
+Except for the BERT-based models, all texts are tokenized by jieba, a popular Chinese word segmentation utility. The datasets are partitioned into training, validation, and test sets with an 80/10/10 split. For the BERT-based methods, we fine-tune the pre-trained model. For the other methods, we use word vectors pre-trained on the Clothes and Makeup corpora with CBOW (Mikolov et al., 2013). The word embedding dimension is set to 200. Other trainable model parameters are initialized by sampling from the Glorot uniform initializer (Glorot and Bengio, 2010). The hidden state size $k$ , Dense units $d$ , attention units $z$ , and batch size are selected from {32, 64, 128, 256, 512}. The dropout (Srivastava et al., 2014) rate and the loss weight $\eta$ are selected from (0, 1) by grid search. Finally, we train the models with initial learning rates of $1.5 \times 10^{-3}$ and $2 \times 10^{-5}$ for the regular baselines and the BERT-based models, respectively. All methods run on a server configured with a Tesla V100 GPU, 32 CPUs, and 32 GB memory.
+
+# 4.2 Baselines
+
+We compare our model with 14 strong dialogue classification baseline models, which come from MHCH, SSA, and other similar tasks.
+
+Generic Baselines: HAN (Yang et al., 2016) and BERT (Devlin et al., 2019)+LSTM. We adopt the RNN outputs and its last hidden state to predict the handoff labels and the satisfaction rating, respectively.
+
+Baselines for the MHCH task: HEC (Kumar et al., 2018), DialogueRNN (Majumder et al., 2019), CASA (Raheja and Tetreault, 2019),
+
+| Models | Clothes: MHCH | Clothes: SSA | Makeup: MHCH | Makeup: SSA |
+| | F1 | Mac. F1 | GT-I | GT-II | GT-III | WS F1 | MT F1 | US F1 | Mac. F1 | Acc. | F1 | Mac. F1 | GT-I | GT-II | GT-III | WS F1 | MT F1 | US F1 | Mac. F1 | Acc. |
+| HAN | 59.8 | 78.7 | 71.7 | 73.1 | 74.0 | 51.5 | 81.7 | 70.4 | 67.9 | 75.5 | 54.3 | 75.4 | 68.5 | 70.1 | 71.3 | 68.4 | 71.3 | 84.8 | 74.8 | 74.8 |
+| BERT+LSTM | 60.4 | 78.9 | 73.4 | 74.9 | 75.9 | 42.2 | 84.2 | 72.9 | 66.4 | 77.6 | 59.1 | 78.0 | 72.0 | 73.0 | 73.7 | 66.7 | 72.9 | 87.2 | 75.6 | 76.0 |
+| HEC | 59.8 | 78.7 | 71.2 | 72.3 | 73.0 | - | - | - | - | - | 57.1 | 76.8 | 68.0 | 69.5 | 70.5 | - | - | - | - | - |
+| DialogueRNN | 60.8 | 79.2 | 73.1 | 74.6 | 75.6 | - | - | - | - | - | 58.3 | 77.4 | 68.8 | 70.5 | 71.6 | - | - | - | - | - |
+| CASA | 62.0 | 79.8 | 73.6 | 75.0 | 75.9 | - | - | - | - | - | 58.4 | 77.5 | 70.6 | 72.7 | 73.9 | - | - | - | - | - |
+| LSTMLCA | 62.6 | 80.1 | 72.4 | 73.9 | 74.8 | - | - | - | - | - | 57.4 | 77.0 | 70.2 | 71.7 | 72.6 | - | - | - | - | - |
+| CESTa | 60.6 | 79.1 | 73.4 | 74.8 | 75.6 | - | - | - | - | - | 59.3 | 78.0 | 69.6 | 71.2 | 72.2 | - | - | - | - | - |
+| DAMI | 66.7 | 82.2 | 74.2 | 75.9 | 77.1 | - | - | - | - | - | 61.1 | 79.0 | 73.3 | 74.4 | 75.2 | - | - | - | - | - |
+| MILNET | - | - | - | - | - | 38.2 | 82.3 | 70.8 | 63.8 | 75.3 | - | - | - | - | - | 72.0 | 68.9 | 84.9 | 75.3 | 75.1 |
+| HMN | - | - | - | - | - | 44.1 | 83.3 | 69.6 | 65.7 | 76.3 | - | - | - | - | - | 73.5 | 73.1 | 83.4 | 76.6 | 76.8 |
+| CAMIL | - | - | - | - | - | 55.4 | 84.4 | 71.5 | 70.4 | 78.3 | - | - | - | - | - | 73.8 | 74.5 | 87.4 | 78.6 | 78.5 |
+| MT-ES | 61.7 | 79.7 | 74.6 | 75.9 | 76.8 | 47.7 | 82.4 | 74.1 | 68.1 | 76.4 | 57.1 | 76.9 | 69.9 | 71.7 | 72.8 | 72.0 | 68.7 | 84.3 | 75.0 | 75.1 |
+| JointBiLSTM | 62.0 | 79.9 | 75.0 | 76.1 | 76.9 | 26.7 | 82.1 | 69.4 | 59.4 | 74.5 | 59.3 | 78.0 | 70.1 | 72.0 | 73.1 | 74.5 | 72.2 | 83.7 | 76.8 | 76.8 |
+| DCR-Net | 62.1 | 79.9 | 71.4 | 72.8 | 73.7 | 49.8 | 82.7 | 76.6 | 69.7 | 77.3 | 58.8 | 77.7 | 70.0 | 72.1 | 73.4 | 74.8 | 69.1 | 88.6 | 77.5 | 77.7 |
+| RSSN(ours) | 69.2* | 83.6* | 78.4* | 79.5* | 80.3* | 56.0 | 85.1 | 74.0 | 71.7* | 79.5* | 65.9* | 81.5* | 75.1* | 76.6* | 77.6* | 77.4* | 76.1* | 88.9 | 80.8* | 80.8* |
+
+LSTMLCA (Dai et al., 2020), CESTa (Wang et al., 2020b), and DAMI (Liu et al., 2021).
+
+Baselines for the SSA task: MILNET (Angelidis and Lapata, 2018), HMN (Shen et al., 2018), and CAMIL (Song et al., 2019).
+
+Multi-task baselines: MT-ES (Ma et al., 2018), JointBiLSTM (Bodigutla et al., 2020), and DCR-Net (Qin et al., 2020). Specifically, we modify DCR-Net for our tasks by keeping the core self-attention and the co-interactive relation layer.
+
+For DAMI, we adopt the open-sourced code to get the results. For DialogueRNN, we adapt the open-sourced code to MHCH by keeping the core component unchanged. For HAN, MILNET, HMN, and CAMIL on SSA, we adopt the reported results from Song et al. (2019). We re-implement the other models. For BERT+LSTM, we adopt the Chinese BERT-base model.
+
+# 4.3 Comparative Study
+
+Following Song et al. (2019), we adopt Macro F1 (Mac. F1) and Accuracy (Acc.) for evaluating the SSA task. For evaluating the MHCH task, we adopt F1, Macro F1 (Mac. F1), and Golden Transfer within Tolerance (GT-T) (Liu et al., 2021). GT-T accounts for the tolerance property of the MHCH task via a tolerance range $T$ , within which a "biased" prediction is still accepted. The adjustment coefficient $\lambda$ of GT-T penalizes early or delayed handoffs. Likewise, we set $\lambda$ to 0 and let $T$ range from 1 to
+
+Table 2: Experimental results of performance (%) comparison with baseline models on the Clothes and Makeup test datasets. Underline shows the best baseline performance. "-" means not applicable. Bold shows the best performance. * indicates statistical significance at the $p < 0.05$ level compared to the best baseline performance.
+
+| Models | Clothes: MHCH | Clothes: SSA | Makeup: MHCH | Makeup: SSA |
+| | F1 | Mac. F1 | GT-I | Mac. F1 | Acc. | F1 | Mac. F1 | GT-I | Mac. F1 | Acc. |
+| Average | 65.3 | 81.6 | 74.6 | 63.8 | 73.7 | 64.1 | 80.4 | 73.2 | 73.6 | 73.7 |
+| Voting | 61.7 | 79.7 | 71.7 | 27.5 | 61.2 | 62.0 | 79.4 | 68.3 | 34.6 | 42.1 |
+| Last | 66.4 | 82.1 | 74.6 | 67.0 | 75.6 | 62.8 | 79.8 | 71.5 | 76.6 | 76.8 |
+| w/o Interact | 65.6 | 81.6 | 70.6 | 65.5 | 72.3 | 62.0 | 79.5 | 71.4 | 74.8 | 74.6 |
+| w/o Select | 64.4 | 81.0 | 73.6 | 67.0 | 74.3 | 61.7 | 79.2 | 71.7 | 72.9 | 72.9 |
+| w/o Position | 66.1 | 81.9 | 73.2 | 68.4 | 76.0 | 64.7 | 80.8 | 73.5 | 76.8 | 76.8 |
+| Full Model | 69.2 | 83.6 | 78.4 | 71.7 | 79.5 | 65.9 | 81.5 | 75.1 | 80.8 | 80.8 |
+
+Table 3: Ablation study performance $(\%)$ on Clothes and Makeup test datasets. w/o denotes "without".
+
+3, corresponding to $GT-I$ , $GT-II$ , and $GT-III$ . The comparison results are shown in Table 2.
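The tolerance idea behind GT-T can be illustrated with a toy check; this is only our illustration of the tolerance window with $\lambda = 0$, not the published GT-T formula, which is defined in Liu et al. (2021):

```python
# Toy illustration of the tolerance window only; NOT the published GT-T metric.
def within_tolerance(gold_turns, pred_turn, T):
    """A predicted handoff counts as acceptable if some gold handoff is <= T turns away."""
    return any(abs(pred_turn - g) <= T for g in gold_turns)

assert not within_tolerance([7], 4, T=1)   # three turns early: rejected under GT-I
assert within_tolerance([7], 6, T=1)       # one turn early: accepted under GT-I
assert within_tolerance([7], 9, T=3)       # two turns late: accepted under GT-III
```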
+
+We can observe that: (1) The proposed method outperforms all state-of-the-art models specific to one task in terms of all metrics on both datasets. This indicates that our model can effectively capture useful information in both tasks by utilizing role and positional information to explicitly control the interaction between them; hence, the performance of the two tasks can be boosted mutually. (2) By integrating MHCH with SSA, the multi-task learning models obtain further improvements. Specifically, we find that the MHCH task has a positive influence on detecting unsatisfied dialogues: overall, DCR-Net and our model perform better than the standalone models on the US F1 of satisfaction prediction. Intuitively, this is mainly because the interaction with handoff information reflects local dissatisfaction more comprehensively than sentiment polarity analysis alone, which helps the joint model better identify dissatisfied dialogues in the SSA task.
+
+
+Figure 4: An example dialogue with predictions and attention distribution. $\mathbf{C}_i / \mathbf{A}_i$ denotes Customer/Chatbot utterance, followed by true labels. The sentiment labels of Customer utterances are also given along with the handoff labels. The other columns are the predictions of our model, CAMIL and DAMI, respectively. The satisfaction ratings of ground truth and predictions are in the last row of the table. N/T denotes Normal/Transferable.
+
+
+
+# 4.4 Ablation
+
+We perform several ablation tests of our model on the two datasets, and the results are recorded in Table 3. The results demonstrate the effectiveness of the different components of our model.
+
+w/o Interact: We modify the full version of our model by only sharing the parameters of the Utterance and Matching Encoder. The performance degradation demonstrates the effectiveness of modeling the relations between the two tasks with interaction.
+
+w/o Select: We remove the Role-Select mechanism and ignore the role information during the interaction process. The performance degradation indicates that straightforward interaction may bring noisy information into both tasks.
+
+w/o Position: We remove the positional weights in the MHCH to SSA sub-module. It performs well but worse than the Full Model, since the position information provides prior knowledge for controlling the context interaction.
+
+Average, Voting, and Last: Average takes the average of the local satisfaction distributions of customer utterances for classification. Voting directly maps the majority of the local satisfaction distributions of customer utterances into the satisfaction prediction. Last takes the satisfaction distribution of the last customer utterance as the classification result. All three variants are sub-optimal and perform worse than the Full Model. This is because the local satisfaction distributions contribute unequally to the overall satisfaction polarity, and the majority satisfaction polarity does not directly correlate with the overall satisfaction.
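The three merging variants can be illustrated on toy local satisfaction distributions (the numbers are ours, purely illustrative):

```python
import numpy as np

# Toy local satisfaction distributions of the customer utterances in one dialogue
# (columns: unsatisfied, met, well-satisfied).
Z = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.7, 0.1],
              [0.1, 0.2, 0.7]])

average = int(Z.mean(axis=0).argmax())                # Average: mean distribution, then argmax
voting = int(np.bincount(Z.argmax(axis=1)).argmax())  # Voting: majority of per-turn argmaxes
last = int(Z[-1].argmax())                            # Last: final customer utterance only

assert (average, voting, last) == (1, 0, 2)           # the three variants can disagree
```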
+
+# 4.5 Case Study
+
+Figure 4 illustrates our prediction results on an example dialogue, which is translated from Chinese. In this case, three utterances ( $\mathrm{A}_{4}$ , $\mathrm{C}_{5}$ and $\mathrm{C}_{6}$ ) are labeled as transferable, and two of them ( $\mathrm{C}_{5}$ and $\mathrm{C}_{6}$ ) are labeled with "negative emotion". Among them, $\mathrm{A}_{4}$ is an unsatisfactory response, which arouses negative emotions in the customer. DAMI only predicts $\mathrm{C}_{5}$ and $\mathrm{C}_{6}$ as transferable utterances, whereas our model successfully detects all the transferable utterances. By mapping the local satisfaction distributions of utterances to utterance sentiments, our model is able to predict reasonable sentiment polarities for customer utterances (detailed analysis is in Subsection 4.6). Considering the context, the customer describes his/her skin problem at $\mathrm{C}_{3}$ and asks for a recommendation. However, the chatbot does not give any recommendation and returns an irrelevant answer at $\mathrm{A}_{4}$ . We provide the attention distributions of the utterances on the right side of the example dialogue. $\alpha_{5}^{s}$ and $\alpha_{6}^{s}$ are the SSA to MHCH attention distributions of $\mathrm{C}_{5}$ and $\mathrm{C}_{6}$ ; $\alpha_{5}^{m}$ and $\alpha_{6}^{m}$ are the MHCH to SSA attention distributions of $\mathrm{C}_{5}$ and $\mathrm{C}_{6}$ . We can observe that the attention distributions are concentrated on $\mathrm{A}_{4}$ rather than on other utterances, because $\mathrm{A}_{4}$ is the main cause of the negative emotion and dissatisfaction. This again demonstrates that our model can capture the mutual influence between local satisfaction and handoff, which is useful for prediction. In terms of the final satisfaction rating, although CAMIL correctly predicts the sentiments of the customer utterances, it gives a wrong prediction of the satisfaction rating. Our model correctly predicts the satisfaction rating as Unsatisfied by considering the negative emotions and their cause, the unsatisfactory response.
+
+# 4.6 Results on Sentiment Classification
+
+Song et al. (2019) utilize multiple instance learning to predict the satisfaction rating and the sentiment of customer utterances with the supervision
+
+| Models | Clothes | Makeup |
+| | PO F1 | NE F1 | NG F1 | Mac. F1 | Acc. | PO F1 | NE F1 | NG F1 | Mac. F1 | Acc. |
+| MILNET | 44.1 | 81.4 | 40.4 | 55.3 | 71.3 | 44.7 | 38.7 | 41.6 | 41.7 | 41.0 |
+| CAMIL | 48.4 | 89.3 | 55.5 | 64.4 | 82.4 | 54.4 | 72.5 | 51.6 | 59.5 | 64.7 |
+| RSSN | 63.5 | 90.1 | 58.4 | 70.7 | 83.8 | 51.3 | 67.8 | 54.6 | 57.9 | 61.6 |
+
+Table 4: Results of sentiment classification by different models on Clothes and Makeup test datasets.
+
+of the dialogue's satisfaction labels only during the training process. Similarly, our satisfaction prediction is based on the estimation of local satisfaction distributions, while the utterance-level sentiment or satisfaction labels are unobserved. To compare and analyze the performance of utterance-level sentiment classification, we map these distributions to utterance sentiments according to their polarities, i.e., unsatisfied $\rightarrow$ negative (NG), met $\rightarrow$ neutral (NE), well-satisfied $\rightarrow$ positive (PO).
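This polarity mapping can be sketched as follows; note that taking the argmax class of each local distribution is our assumption of how "distribution polarities" are resolved:

```python
import numpy as np

# Polarity mapping from the text: unsatisfied -> NG, met -> NE, well-satisfied -> PO.
POLARITY = {0: "NG", 1: "NE", 2: "PO"}   # indices: unsatisfied, met, well-satisfied

def to_sentiment(z_t):
    """Map a local satisfaction distribution z_t to an utterance sentiment (argmax assumption)."""
    return POLARITY[int(np.argmax(z_t))]

assert to_sentiment([0.7, 0.2, 0.1]) == "NG"
assert to_sentiment([0.1, 0.3, 0.6]) == "PO"
```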
+
+In Table 4, we compare the sentiment prediction results of MILNET, CAMIL, and our model. On the Clothes dataset, our RSSN performs better than the other baselines, while it performs worse than CAMIL on the Makeup dataset. It is worth noting that our model achieves the best performance on both datasets in terms of the NG F1 metric. This indicates that the MHCH task is sensitive to negative emotion and contributes more to negative emotion recognition than separate SSA models do. From Table 2, we can also see that our model performs better than the separate SSA models in terms of US F1, which is consistent with the findings on sentiment classification.
+
+# 5 Conclusions and Future Work
+
+In this paper, we propose a novel multi-task framework for service satisfaction analysis and machine-human chatting handoff that deliberately establishes the mutual interrelation between the two tasks. Specifically, we propose a Role-Selected Sharing Network for joint handoff prediction and satisfaction estimation, which utilizes role and positional information to control knowledge transfer between the tasks. Extensive experiments and analyses reveal that explicitly modeling the interrelation between the two tasks boosts the performance of both.
+
+However, our model has not been calibrated to account for user preferences and biases, which we plan to address in future work. Moreover, we will further explore how to adjust the handoff priority with the assistance of personalized information.
+
+# Acknowledgments
+
+We thank the anonymous reviewers for their valuable comments and suggestions. This work is supported by the National Natural Science Foundation of China (61876003, 62106039), the National Key R&D Program of China (2020YFC0832505), the Fundamental Research Funds for the Central Universities, and Alibaba Group through Alibaba Research Intern Program and Alibaba Research Fellowship Program.
+
+# References
+
+Stefanos Angelidis and Mirella Lapata. 2018. Multiple instance learning networks for fine-grained sentiment analysis. TACL, 6:17-31.
+Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450.
+Praveen Kumar Bodigutla, Lazaros Polymenakos, and Spyros Matsoukas. 2019a. Multi-domain conversation quality evaluation via user satisfaction estimation. arXiv preprint arXiv:1911.08567.
+Praveen Kumar Bodigutla, Aditya Tiwari, Spyros Matsoukas, Josep Valls-Vargas, and Lazaros Polymenakos. 2020. Joint turn and dialogue level user satisfaction estimation on multi-domain conversations. In Proc. of EMNLP: Findings, pages 3897-3909.
+Praveen Kumar Bodigutla, Longshaokan Wang, Kate Ridgeway, Joshua Levy, Swanand Joshi, Alborz Geramifard, and Spyros Matsoukas. 2019b. Domain-independent turn-level dialogue quality evaluation via user satisfaction estimation. arXiv preprint arXiv:1908.07064.
+Petter Bae Brandtzaeg and Asbjørn Følstad. 2018. Chatbots: Changing user needs and motivations. *Interactions*, 25(5):38-43.
+Ana Paula Chaves and Marco Aurelio Gerosa. 2020. How should my chatbot interact? A survey on social characteristics in human-chatbot interaction design. *IJHCI*, pages 1-30.
+Kyungyong Chung and Roy C Park. 2019. Chatbot-based healthcare service with a knowledge base for cloud computing. Cluster Computing, 22(1):1925-1937.
+Zhigang Dai, Jinhua Fu, Qile Zhu, Hengbin Cui, Yuan Qi, et al. 2020. Local contextual attention with hierarchical structure for dialogue act recognition. arXiv preprint arXiv:2003.06044.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proc. of NAACL, pages 4171-4186.
+
+Layla El Asri, Hatim Khouzaimi, Romain Laroche, and Olivier Pietquin. 2014. Ordinal regression for interaction quality prediction. In Proc. of ICASSP, pages 3221-3225.
+Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In Proc of AISTATS, pages 249-256.
+Ryuichiro Higashinaka, Kotaro Funakoshi, Masahiro Araki, Hiroshi Tsukahara, Yuka Kobayashi, and Masahiro Mizukami. 2015a. Towards taxonomy of errors in chat-oriented dialogue systems. In Proc. of SIGDIAL, pages 87-95.
+Ryuichiro Higashinaka, Masahiro Mizukami, Kotaro Funakoshi, Masahiro Araki, Hiroshi Tsukahara, and Yuka Kobayashi. 2015b. Fatal or not? Finding errors that lead to dialogue breakdowns in chat-oriented dialogue systems. In Proc. of EMNLP, pages 2243-2248.
+Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.
+Ting-Hao Kenneth Huang, Joseph Chee Chang, and Jeffrey P. Bigham. 2018. Evorus: A crowd-powered conversational assistant built to automate itself over time. In Proc. of CHI, pages 1-13.
+Mohit Jain, Pratyush Kumar, Ramachandra Kota, and Shwetak N Patel. 2018. Evaluating and informing the design of chatbots. In Proc. of DIS, pages 895-906.
+Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proc. of ICLR (Poster).
+Harshit Kumar, Arvind Agarwal, Riddhiman Dasgupta, and Sachindra Joshi. 2018. Dialogue act sequence labeling using hierarchical encoder with CRF. In Proc. of AAAI, pages 3440-3447.
+Jiawei Liu, Zhe Gao, Yangyang Kang, Zhuoren Jiang, Guoxiu He, Changlong Sun, Xiaozhong Liu, and Wei Lu. 2021. Time to transfer: Predicting and evaluating machine-human chatting handoff. In Proc. of AAAI, pages 5841-5849.
+Jing Ma, Wei Gao, and Kam-Fai Wong. 2018. Detect rumor and stance jointly by neural multi-task learning. In Proc. of WWW, pages 585-593.
+Navonil Majumder, Soujanya Poria, Devamanyu Hazarika, Rada Mihalcea, Alexander F. Gelbukh, and Erik Cambria. 2019. DialogueRNN: An attentive RNN for emotion detection in conversations. In Proc. of AAAI, pages 6818-6825.
+Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.
+
+Libo Qin, Wanxiang Che, Yangming Li, Minheng Ni, and Ting Liu. 2020. DCR-Net: A deep co-interactive relation network for joint dialog act recognition and sentiment classification. In Proc. of AAAI, pages 8665–8672.
+Minghui Qiu, Feng-Lin Li, Siyu Wang, Xing Gao, Yan Chen, Weipeng Zhao, Haiqing Chen, Jun Huang, and Wei Chu. 2017. AliMe chat: A sequence to sequence and rerank based chatbot engine. In Proc. of ACL, pages 498-503.
+Nicole Radziwill and Morgan Benton. 2017. Evaluating quality of chatbots and intelligent conversational agents. Software Quality Professional, 19(3):25.
+Vipul Raheja and Joel Tetreault. 2019. Dialogue Act Classification with Context-Aware Self-Attention. In Proc. of NAACL, pages 3727-3733.
+Janarthanan Rajendran, Jatin Ganhotra, and Lazaros C. Polymenakos. 2019. Learning end-to-end goal-oriented dialog with maximal user task success and minimal human agent use. TACL, 7:375-386.
+Ashwin Ram, Rohit Prasad, Chandra Khatri, Anu Venkatesh, Raefer Gabriel, Qing Liu, Jeff Nunn, Behnam Hedayatnia, Ming Cheng, Ashish Nagar, et al. 2018. Conversational AI: The science behind the Alexa Prize. arXiv preprint arXiv:1801.03604.
+Alexander Schmitt, Stefan Ultes, and Wolfgang Minker. 2012. A parameterized and annotated spoken dialog corpus of the CMU let's go bus information system. In Proc. of LREC, pages 3369-3373.
+Chenlin Shen, Changlong Sun, Jingjing Wang, Yangyang Kang, Shoushan Li, Xiaozhong Liu, Luo Si, Min Zhang, and Guodong Zhou. 2018. Sentiment classification towards question-answering with hierarchical matching network. In Proc. of EMNLP, pages 3654-3663.
+Rion Snow, Brendan O'Connor, Daniel Jurafsky, and Andrew Ng. 2008. Cheap and fast - but is it good? evaluating non-expert annotations for natural language tasks. In Proc. of EMNLP, pages 254-263.
+Kaisong Song, Lidong Bing, Wei Gao, Jun Lin, Lujun Zhao, Jiancheng Wang, Changlong Sun, Xiaozhong Liu, and Qiong Zhang. 2019. Using customer service dialogues for satisfaction analysis with context-assisted multiple instance learning. In Proc. of EMNLP, pages 198-207.
+Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. JMLR, 15(1):1929-1958.
+Stefan Ultes. 2019. Improving interaction quality estimation with BiLSTMs and the impact on dialogue policy learning. In Proc. of SIGDIAL, pages 11-20.
+
+Stefan Ultes, Robert ElChab, and Wolfgang Minker. 2014. Application and evaluation of a conditioned hidden markov model for estimating interaction quality of spoken dialogue systems. *Natural Interaction with Robots, Knowbots and Smartphones*, pages 303-312.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proc. of NIPS, pages 5998-6008.
+Tianyi Wang, Yating Zhang, Xiaozhong Liu, Changlong Sun, and Qiong Zhang. 2020a. Masking orchestration: Multi-task pretraining for multi-role dialogue representation learning. In Proc. of AAAI, pages 9217-9224.
+Yan Wang, Jiayu Zhang, Jun Ma, Shaojun Wang, and Jing Xiao. 2020b. Contextualized emotion recognition in conversation as sequence tagging. In Proc. of SIGDIAL, pages 186-195.
+Caiming Xiong, Stephen Merity, and Richard Socher. 2016. Dynamic memory networks for visual and textual question answering. In Proc. of ICML, pages 2397-2406.
+Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proc. of NAACL, pages 1480-1489.
+Li Zhou, Jianfeng Gao, Di Li, and Heung-Yeung Shum. 2020. The design and implementation of XiaoIce, an empathetic social chatbot. Computational Linguistics, 46(1):53-93.
\ No newline at end of file
diff --git a/aroleselectedsharingnetworkforjointmachinehumanchattinghandoffandservicesatisfactionanalysis/images.zip b/aroleselectedsharingnetworkforjointmachinehumanchattinghandoffandservicesatisfactionanalysis/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..a0403dfca18082c3e0e59088e935bef5f4b2b83d
--- /dev/null
+++ b/aroleselectedsharingnetworkforjointmachinehumanchattinghandoffandservicesatisfactionanalysis/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ac8ed4320818ead25b1056434f62684c2ef65e8e5e29e1c16bf8ecf1cd776a36
+size 532643
diff --git a/aroleselectedsharingnetworkforjointmachinehumanchattinghandoffandservicesatisfactionanalysis/layout.json b/aroleselectedsharingnetworkforjointmachinehumanchattinghandoffandservicesatisfactionanalysis/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..8ff1e9e5f71b243d6bac77c738d6100d98a309e7
--- /dev/null
+++ b/aroleselectedsharingnetworkforjointmachinehumanchattinghandoffandservicesatisfactionanalysis/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8d5c8437710b6109e2b678dec2f1cfaf759f5ddc76f99ae7316b123f3b796a82
+size 392363
diff --git a/arootofaproblemoptimizingsinglerootdependencyparsing/6b0d7c6b-6314-41c1-9e08-3a1305a6df76_content_list.json b/arootofaproblemoptimizingsinglerootdependencyparsing/6b0d7c6b-6314-41c1-9e08-3a1305a6df76_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..bc1069a64a08666ce9b8c982ec9c37cccd8626f6
--- /dev/null
+++ b/arootofaproblemoptimizingsinglerootdependencyparsing/6b0d7c6b-6314-41c1-9e08-3a1305a6df76_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:df0ed78fec9a6dce22bf663ff6ac3d8a0362d11475d763d8d33f7f572c621c0e
+size 97950
diff --git a/arootofaproblemoptimizingsinglerootdependencyparsing/6b0d7c6b-6314-41c1-9e08-3a1305a6df76_model.json b/arootofaproblemoptimizingsinglerootdependencyparsing/6b0d7c6b-6314-41c1-9e08-3a1305a6df76_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..e37527ca01a9803c1488ef6a55eda6d61f95c905
--- /dev/null
+++ b/arootofaproblemoptimizingsinglerootdependencyparsing/6b0d7c6b-6314-41c1-9e08-3a1305a6df76_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2ea65c965e46050ee0744d27bc3997add1cbd658e1455e71cb38a4190dd3daf1
+size 115899
diff --git a/arootofaproblemoptimizingsinglerootdependencyparsing/6b0d7c6b-6314-41c1-9e08-3a1305a6df76_origin.pdf b/arootofaproblemoptimizingsinglerootdependencyparsing/6b0d7c6b-6314-41c1-9e08-3a1305a6df76_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..959b8a87ef5337f7189f5eb69a5e6928988a6a66
--- /dev/null
+++ b/arootofaproblemoptimizingsinglerootdependencyparsing/6b0d7c6b-6314-41c1-9e08-3a1305a6df76_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ee2ad89d249b64e8b3f7625a99bbb1919fe4f212c6041b7c553ce66e87e0b6fa
+size 1086472
diff --git a/arootofaproblemoptimizingsinglerootdependencyparsing/full.md b/arootofaproblemoptimizingsinglerootdependencyparsing/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..5a4c547f3b96b6508aad48f0f2c463c2ef3d0852
--- /dev/null
+++ b/arootofaproblemoptimizingsinglerootdependencyparsing/full.md
@@ -0,0 +1,499 @@
+# A Root of a Problem: Optimizing Single-Root Dependency Parsing
+
+Miloš Stanojević
+School of Informatics
+University of Edinburgh
+m.stanojevic@ed.ac.uk
+
+Shay B. Cohen
+School of Informatics
+University of Edinburgh
+scohen@inf.ed.ac.uk
+
+# Abstract
+
+We describe two approaches to single-root dependency parsing that yield significant speedups. One approach has previously been used in practice by dependency parsers, but remains undocumented in the parsing literature and is considered a heuristic. We show that this approach actually finds the optimal dependency tree. The second approach relies on a simple reweighting of the inference graph given as input to the dependency parser and has an optimal running time. Here, we again show that this approach is fully correct and identifies the highest-scoring parse tree. Our experiments demonstrate a manyfold speedup over a previous graph-based state-of-the-art parser without any loss in accuracy or optimality.
+
+# 1 Introduction
+
+Dependency parsing is one of the core steps in many Natural Language Processing pipelines. Given its wide and large-scale use, both in academic and commercial settings, even moderate improvements in the speed and accuracy of a dependency parser may significantly impact its utility. In this paper, we show how to improve the speed of graph-based dependency parsers (McDonald et al., 2005; Qi et al., 2020) without compromising at all on accuracy.
+
+Figure 1: Time proportion used by the neural and MST components when parsing with Stanza on GPU.
+
+Graph-based dependency parsers work in two steps. The first step forms a complete weighted directed graph over the words and a special $ROOT$ token by computing the edge weights with a trained statistical model. The second step then executes the main inference procedure: it identifies a directed spanning tree (often referred to as an arborescence) in this graph, aiming to maximize its weight, with $ROOT$ as the root node of the arborescence.
+
+While some of the previous work on optimizing the speed of graph-based parsers focused on the first step (Anderson and Gómez-Rodríguez, 2020), we demonstrate in Figure 1 that most of the parsing time is actually spent on the spanning tree inference routine. As sentence length increases, the gap between the spanning tree inference time and the time spent on constructing the weighted graph widens significantly.
+
+MST search is often done using the Chu-Liu-Edmonds (CLE) algorithm (Chu and Liu, 1965; Edmonds, 1967), which runs in $\mathcal{O}(n^3)$ time, where $n$ is the sentence length. Tarjan (1977) presents a relatively complicated way of implementing the CLE algorithm in $\mathcal{O}(n^2)$. Tarjan's algorithm is often cited in the NLP literature but, to the best of our knowledge, has never been implemented for dependency parsing. This is due to the common belief that the original CLE often works well in practice (see footnote 2 in Zmigrod et al. 2020 or the end of §4.2.2 in Kübler et al. 2009). We test this claim and show that significant improvements can be made over CLE.
+
+| algorithm | appeared in | current implementation worst-case | claimed worst-case, dense graph | average-case, dense graph | claimed worst-case, sparse graph |
+| --- | --- | --- | --- | --- | --- |
+| Gabow-Tarjan | Gabow and Tarjan (1984); Zmigrod et al. (2020) | $\mathcal{O}(n^2 \log n)$ | $\mathcal{O}(n^2)$ | $\mathcal{O}(n^2)$ | $\mathcal{O}(m \log n)$ |
+| Naïve | mentioned in Zmigrod et al. (2020) and in Section 3 | n/a | $\mathcal{O}(n^3)$ | $\mathcal{O}(n^3)$ | $\mathcal{O}(mn + n^2 \log n)$ |
+| Root Preselection | code of some parsers (undocumented); thoroughly discussed in Section 3 | $\mathcal{O}(n^3)$ | $\mathcal{O}(n^3)$ | $\mathcal{O}(n^2)$ | $\mathcal{O}(mn + n^2 \log n)$ |
+| Reweighting | introduced in Section 4 | $\mathcal{O}(n^2)$ | $\mathcal{O}(n^2)$ | $\mathcal{O}(n^2)$ | $\mathcal{O}(m + n \log n)$ |
+
+Table 1: Algorithms for single-root dependency parsing. The sentence length is denoted by $n$, and the number of edges in the input graph by $m$.
+
+An unconstrained MST algorithm such as CLE produces a tree with one root node, namely the special token $ROOT$, but that root node may have multiple edges coming out of it. Yet, in some widely used dependency treebanks, such as Universal Dependencies (Nivre et al., 2018), only one edge is permitted to come out of $ROOT$. We will refer to the task of finding an MST that contains only one outgoing edge out of $ROOT$ as single-root or constrained MST parsing.
+
+Zmigrod et al. (2020) provide an implementation of the non-trivial Gabow and Tarjan algorithm to compute a constrained MST with only one dependency edge coming out of $ROOT$ . While both Gabow and Tarjan and Zmigrod et al. argue that this algorithm could be implemented in $\mathcal{O}(n^2)$ , they do not describe or follow such an implementation. The only existing implementation of this algorithm runs in $\mathcal{O}(n^2 \log n)$ , which is the best worst-case asymptotic running time tested in the literature for single-root dependency parsing.
+
+In this paper, we provide two alternative approaches to computing the constrained MST using an unconstrained MST algorithm as a subroutine. Both algorithms are very simple to implement and understand. We prove that the first of them has, on average, the same asymptotic running time as the unconstrained algorithm used as a subroutine. The second algorithm has the same worst-case asymptotic runtime as the unconstrained algorithm, which is optimal for complete graphs.
+
+Worst-case complexity does not guarantee that an algorithm will be fast in practice (Roughgarden, 2019): the actual speed may be influenced by constant factors, memory access patterns, and the difficulty of the typical input instances (Moret, 2002). This is why we test all our algorithms in the typical settings encountered in dependency parsing. Additionally, we propose a simple heuristic that recognizes whether the input instance is "easy" and, if so, returns the correct solution without even running the full algorithm.
+
+As a guide to this paper, the algorithms for single-root dependency parsing, both previously published ones and those presented here, are shown in Table 1 together with their associated computational complexity. In the next section we introduce the basic concepts from the Gabow and Tarjan algorithm that Zmigrod et al. (2020) have put into practice for single-root dependency parsing; this is the only previously published work on single-root dependency parsing. Section 3 presents the Root Preselection algorithm for single-root MST parsing and proves its correctness and average-case runtime complexity. Section 4 presents the even better Reweighting algorithm, which performs well not only on average but also in the worst case. Section 5 introduces the ArcMax trick, which improves the practical speed of any MST parser by recognizing the "easy" cases mentioned above. Section 6 experimentally tests and verifies all of these findings.
+
+# 2 The Gabow-Tarjan Algorithm
+
+Gabow and Tarjan (1984) present an algorithm that solves a much more general combinatorial optimization problem than single-root MST parsing. Concretely, they abstract a family of optimization problems as the optimization of a minimum-weight base of a matroid. We will not describe here the full theory and workings of this algorithm, but only present a few points that are important for MST parsing. For a good introduction to the use of matroids in combinatorial optimization, see Cormen et al. (2009, §16.4).
+
+Many combinatorial optimization problems can be framed as a search for the minimum-weight base of a matroid, a structure that consists of a set of "independent subsets" of a ground set, generalizing the notion of linear independence in vector spaces. Consider the minimum spanning tree problem over undirected graphs. It can be solved with a graphic matroid: the ground set contains all the edges of the graph, while the independent sets are all forests (sets of edges that do not form a cycle). A base of this matroid is a spanning tree, so finding a minimum-weight base of the graphic matroid is equivalent to finding a minimum spanning tree.
+
+Gabow and Tarjan extend the definition of the problem by introducing a coloring of the elements of the matroid's ground set: every element can be marked as green or red. In the case of the graphic matroid, the coloring would be applied to the edges of the graph. Gabow and Tarjan describe a matroid optimization method that finds a minimum-weight base containing exactly $q$ red elements, given $q \in \mathbb{N}$. Let $\beta_{i}$ stand for the set of all optimal bases with $i$ red elements. A swap $(e,f)$ for a base $B$ is a pair of ground elements such that $B / \{e\} \cup \{f\}$ is also a base, $e$ is green, and $f$ is red. Swaps can be ranked from smallest to largest by $weight(f) - weight(e)$. Gabow and Tarjan prove the following theorem:
+
+Theorem 1 (Augmentation Theorem; Gabow and Tarjan 1984). Suppose $B$ is a base in $\beta_{i-1}$ and $\beta_i \neq \emptyset$ . If $(e, f)$ is a smallest swap for $B$ , then $B / \{e\} \cup \{f\} \in \beta_i$ .
+
+The Augmentation Theorem specifies the general approach of the Gabow and Tarjan algorithm: start by finding the optimal base for the smallest possible number of red elements (this number is matroid/task dependent) and then increase the number of red elements by incrementally finding the smallest swap that introduces more red elements. Stop when the base contains the desired number of red elements.
+
+While this general algorithm applies to undirected spanning trees (they form a matroid), it does not straightforwardly apply to directed spanning trees, because they do not form a matroid. To accommodate this, Gabow and Tarjan extend their definition of a swap so that, instead of a single swap, multiple swaps lead from one optimal base to another of a lower order.
+
+So how does this relate to single-root dependency parsing? If we color all edges red, except for those that are connected to the artificial $ROOT$ node which will be colored green, we can look for a directed MST with only one green edge (or equivalently with $n - 1$ red edges). This is a special case of the general Gabow and Tarjan (1984) algorithm. An adaptation of that algorithm to dependency parsing was presented by Zmigrod et al. (2020).
+
+While it is stated by both Gabow and Tarjan (1984) and Zmigrod et al. (2020) that this algorithm can be implemented in $\mathcal{O}(n^2)$ for dense graphs by using data structures from Tarjan (1977), it is not trivial to see how to do that. Indeed, to the best of our knowledge, the only implementation of this algorithm for dependency parsing runs in $\mathcal{O}(n^2\log n)$ . Even implementing the original unconstrained Tarjan (1977) algorithm is non-trivial, and its presentations with this level of efficiency in the literature historically include errors. The correct efficient $\mathcal{O}(n^2)$ algorithm is distilled and described in our Appendix A, and in our experiments we contrast its implementation against the less efficient ones.
+
+# 3 The Root Preselection Algorithm
+
+There is a simple meta-algorithm for single-root (constrained) dependency parsing, given access to an unconstrained solver as a subroutine. Imagine we want to find the best single-root dependency tree that contains an arc from $ROOT$ to one particular word in the sentence. We can accomplish this by disconnecting all other words from $ROOT$ (equivalently, giving the relevant edges a weight of $-\infty$) and running the unconstrained MST parser. Now, we can repeat this process for all the words and compare the weights of the single-root dependency trees found for each word. The best tree in this comparison is the globally best single-root dependency tree. If the runtime complexity of the underlying unconstrained MST parser is $\mathcal{O}(T(n))$ for a sentence of length $n$, the asymptotic runtime of this meta-algorithm is $\mathcal{O}(nT(n))$. We refer to this algorithm as the Naive algorithm.
+
+In practice, a simple heuristic is applied in several dependency parsers on top of the Naive algorithm (Parser-v3, Stanza, SuPar). The adapted algorithm first runs the usual unconstrained MST parsing. If the resulting tree contains only one word connected to the root, the algorithm returns it as the answer. Otherwise, the parser applies the Naive algorithm, but only over the words connected to the root in the unconstrained parse. Since this adapted algorithm preselects the nodes to which the Naive algorithm is applied, we refer to it as the Root Preselection algorithm.
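As an illustration, the two algorithms above can be sketched in a few lines of Python. This is our own minimal sketch, not code from any of the parsers mentioned: the exhaustive `brute_force_mst` solver stands in for a real unconstrained MST algorithm such as CLE or Tarjan's, and is only practical for tiny graphs.

```python
import itertools
import math

def brute_force_mst(w):
    """Exhaustive unconstrained MST over a dense weight matrix w, where
    w[u][v] is the score of edge u -> v and node 0 is ROOT. A stand-in
    for CLE/Tarjan used only for illustration on tiny graphs."""
    n = len(w) - 1
    best, best_score = None, -math.inf
    # heads[v - 1] is the chosen head of word v
    for heads in itertools.product(range(n + 1), repeat=n):
        # keep only head assignments where every word reaches ROOT (no cycles)
        valid = True
        for v in range(1, n + 1):
            seen, u = set(), v
            while u != 0 and valid:
                if u in seen:
                    valid = False
                seen.add(u)
                u = heads[u - 1]
            if not valid:
                break
        if not valid:
            continue
        score = sum(w[heads[v - 1]][v] for v in range(1, n + 1))
        if score > best_score:
            best, best_score = heads, score
    return best, best_score

def preselect_single_root(w, mst=brute_force_mst):
    """Root Preselection: run the unconstrained MST once; if more than one
    word attaches to ROOT, rerun with ROOT restricted to each candidate
    (the Naive algorithm applied only to the preselected root children)."""
    n = len(w) - 1
    heads, score = mst(w)
    root_children = [v for v in range(1, n + 1) if heads[v - 1] == 0]
    if len(root_children) == 1:
        return heads, score
    best, best_score = None, -math.inf
    for rc in root_children:
        w2 = [row[:] for row in w]
        for v in range(1, n + 1):
            if v != rc:
                w2[0][v] = -math.inf  # disconnect all other root edges
        t, s = mst(w2)
        if s > best_score:
            best, best_score = t, s
    return best, best_score
```

On a two-word graph where both words prefer $ROOT$ as their head, the unconstrained solver returns a two-root tree, and the preselection loop then explores only those two candidate root edges.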
+
+We now show that this undocumented heuristic is actually correct and always returns the best single-rooted tree. In essence, we show that the root edge in the constrained case has to be one of the root edges in the unconstrained spanning tree.
+
+The reason for this stems from the extension of the Augmentation Theorem to directed graphs by Gabow and Tarjan. This theorem establishes the connection between the optimal solution with $i - 1$ red elements and an optimal solution with $i$ red elements. It relates them through the optimal swap (in the extended version for directed graphs, through multiple swaps), where each swap removes a green element and replaces it with a red element. In the context of dependency parsing, this means that an optimal solution with $i$ edges connected to $ROOT$ contains all the edges connected to $ROOT$ from the optimal solution with $i - 1$ such edges. This recurrence implies that the root edge of the constrained single-root dependency parse is also present in the unconstrained case, so it is valid for the algorithm above to search for the optimal root edge only among the root edges returned by the unconstrained algorithm.
+
+The runtime of this algorithm depends on the number of words connected to $ROOT$ in the unconstrained MST. If there is only one edge to $ROOT$ in the unconstrained MST, the complexity is $\mathcal{O}(T(n))$. If there are $r > 1$ edges from $ROOT$, the complexity is $\mathcal{O}((r + 1)T(n))$. We can write this complexity for any number $r$ of edges connected to $ROOT$ as $\mathcal{O}\left((r + 1 - I_1(r))T(n)\right)$, where $I_1(\cdot)$ is an indicator function that returns 1 if its input is 1 and 0 otherwise. Clearly, the worst case of this algorithm is the same as that of the Naive algorithm, because $r$ can be as large as $n$, but it is interesting to see what the average computational complexity of this algorithm is.
+
+To study the average time complexity of the Preselection algorithm, we need to compute the expected runtime under some probability distribution over the number of edges connected to $ROOT$ in the unconstrained MST:
+
+$$
+\mathbb{E}\left[\mathcal{O}(\mathrm{preselect}(n))\right] = \mathbb{E}\left[r + 1 - I_1(r)\right] T(n) = \left(\mathbb{E}[r] + 1 - P(r = 1)\right) T(n). \tag{1}
+$$
+
+This average complexity expresses the intuition that if the graph weights are likely to produce an unconstrained MST with a small number of root edges, the algorithm will be fast. So what can we say about the distribution over the number of root edges? In practice there are two extreme cases: the graph weights in the initial stages of training, and those in the final stage after training. We analyze them both in turn.
+
+For the initial stage of training, when the parsing model is only just initialized, it is reasonable to assume that the distribution over possible spanning trees is uniform. We can compute the probability of having $r$ root edges as the ratio between the number of spanning trees rooted in $ROOT$ that contain $r$ root edges and the total number of spanning trees rooted in $ROOT$. The total number of spanning trees is given by Cayley's formula as $(n + 1)^{n - 1}$ (Cayley, 1889). The number of spanning trees whose root edges go through $r$ particular nodes can be computed using the Matrix-Tree Theorem (Tutte, 1984). To count the spanning trees with any $r$ root edges, we multiply by the number of ways of choosing those $r$ nodes. This gives us the following distribution over the number of root edges:
+
+$$
+P(r; n) = \frac{\binom{n}{r}\, r\, n^{n - r - 1}}{(n + 1)^{n - 1}}. \tag{2}
+$$
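The distribution in Equation 2 is easy to sanity-check numerically. The sketch below (our own code, not from the paper's appendices) verifies that it sums to one, that its mean matches the $2n/(n+1)$ term used in the expectation below, and that the resulting expected constant stays below the $2.64$ bound.

```python
from math import comb, e

def p_root_edges(r, n):
    """P(r; n) from Equation 2: probability that a uniformly sampled
    spanning tree over n words plus ROOT has exactly r root edges."""
    return comb(n, r) * r * n ** (n - r - 1) / (n + 1) ** (n - 1)

def expected_overhead(n):
    """E[r] + 1 - P(r = 1): the constant factor of the Preselection
    algorithm relative to one unconstrained MST call (Equation 1)."""
    mean_r = sum(r * p_root_edges(r, n) for r in range(1, n + 1))
    return mean_r + 1 - p_root_edges(1, n)
```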
+
+When we put Equation 2 into Equation 1, we get the following average-case complexity under the uniform distribution over spanning trees:
+
+$$
+\mathbb{E}\left[\mathcal{O}(\mathrm{preselect}(n))\right] = \left(\frac{2n}{n + 1} + 1 - \frac{n^{n - 1}}{(n + 1)^{n - 1}}\right) T(n).
+$$
+
+This expectation is monotonically increasing in $n$; its upper bound is given by the limit:
+
+$$
+\lim_{n \rightarrow \infty} \mathbb{E}\left[\mathcal{O}(\mathrm{preselect}(n))\right] = \left(3 - \frac{1}{e}\right) T(n) < 2.64\, T(n).
+$$
+
+This shows that, under the assumption of a uniform distribution over trees, the Preselection algorithm for constrained MST parsing performs on average just as well as any unconstrained MST algorithm, with only a small constant overhead. The number of roots that need to be explored depends only mildly on the number of words: the larger $n$, the larger the probability of having multiple root edges, but for any $n$ it converges to a small value. The probability of having more than $r$ root edges drops rapidly for any $n$: $P(r > 4) < 0.02$, $P(r > 5) < 0.004$, $P(r > 6) < 0.0005$. In other words, it is very unlikely that this algorithm will need to explore more than a few different root edges.
+
+What about the distribution of root edges in the unconstrained MST after training? In that case we can expect the distribution to be even more peaked towards few root edges, because the training data often has only a few root edges per tree (or exactly one, in the case of Universal Dependencies). To test this, we collected 10 sentences for each sentence length from the English portion of the News Commentary v16 corpus. We ran the English bi-affine model of Stanza (Qi et al., 2020) and computed the average number of root edges for each sentence length. These counts are shown in Figure 2 as the trained weights line. The plot also shows a random weights line, which represents the uniform spanning-tree distribution. To simulate this distribution, we sample the weight of each edge of the graph from the uniform distribution; it is easy to see that in expectation all spanning trees will then have the same weight.
+
+Zmigrod et al. mention that the distribution of the number of root edges in a trained model depends on the amount of training data. The trained English model in this plot should represent the distribution with the smallest number of root edges, since this language has the largest amount of training data. The random weights on this plot should be approximately a lower bound on the number of root edges of a model trained with a small amount of training data.
+
+
+Figure 2: The number of root edges in unconstrained MST for two different types of graph weights.
+
+This plot shows that, with the weights produced by a trained English model, the number of unconstrained MSTs with multiple roots is small. This means that the Root Preselection algorithm will perform even better than in the random weights setting. The plot also confirms that the expected number of root edges for randomly initialized weights is smaller than 2 for any sentence length. Clearly, the variance in the number of roots is much higher with random weights than with trained weights.
+
+While the Preselection algorithm is used in practice by several implementations, to the best of our knowledge, the proof of its correctness and the average-case complexity analysis presented in this section are new.
+
+# 4 The Root Reweighting Algorithm
+
+We turn to present a new algorithm for single-root dependency parsing that is as fast as the best unconstrained dependency parsing algorithm even in the worst case. It is based on a very simple observation that subtracting a constant value $c$ from the weights of all edges coming out of $ROOT$ :
+
+- decreases the weight of any tree with $k$ root edges by $k \cdot c$ ,
+- does not change the ranking among the trees with the same number of root edges, and
+- does potentially change the ranking among the trees with different numbers of root edges.
+
+By choosing the right constant $c \in \mathbb{R}$, we can arrange for all trees with more than one root edge to have a lower weight than any tree with only one root edge. Let $w(\cdot)$ denote the weight of an edge in the original graph, and let $n$ stand for the number of words. In a complete graph we have $n + 1$ nodes due to the artificial $ROOT$ token, and any spanning tree in this graph has $n$ edges.
+
+In the original graph, before the constant is subtracted, we know for certain that the score of any spanning tree is at least $n \min_e w(e)$ and at most $n \max_e w(e)$. After the constant $c$ is subtracted from all edges coming out of $ROOT$, every tree with $k$ root edges has its score decreased by $k \cdot c$. In this modified graph, any spanning tree with $k$ root edges has a score upper-bounded by $n \max_e w(e) - kc$ and lower-bounded by $n \min_e w(e) - kc$. We want the lowest-scoring single-root tree to have a higher score than any $k$-root tree for $k \geq 2$. More formally, we want the following inequality to hold:
+
+$$
+n \min_{e} w(e) - c > n \max_{e} w(e) - kc \tag{3}
+$$
+
+for all $2 < k < n$ .
+
+This implies that $c$ should satisfy:
+
+$$
+c > n \left(\max_{e} w(e) - \min_{e} w(e)\right) \tag{4}
+$$
+
+A value of $c$ that satisfies this constraint is:
+
+$$
+c = 1 + n \left(\max_{e} w(e) - \min_{e} w(e)\right) \tag{5}
+$$
+
+So by applying unconstrained MST over a graph with the following weight function we in fact obtain the best single-root solution:
+
+$$
+w'(e) = \begin{cases} w(e) - c & \text{if } \mathrm{src}(e) = ROOT \\ w(e) & \text{otherwise} \end{cases} \tag{6}
+$$
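A minimal sketch of this reweighting step (Equations 5 and 6) in Python; the full implementation is given in Appendix B.1. The function name and the dense-matrix convention (node 0 as $ROOT$, $-\infty$ marking missing edges) are our own assumptions for illustration.

```python
import math

def reweight_root_edges(weights):
    """Return a copy of the dense weight matrix with c = 1 + n * (max - min)
    subtracted from every edge out of ROOT (node 0), per Equations 5 and 6.
    An unconstrained MST on the result then uses exactly one root edge."""
    n = len(weights) - 1
    # min/max over actual edges; -inf entries mark missing edges
    finite = [w for row in weights for w in row if math.isfinite(w)]
    c = 1 + n * (max(finite) - min(finite))
    reweighted = [row[:] for row in weights]
    reweighted[0] = [w - c for w in weights[0]]
    return reweighted
```

On a tiny two-word graph one can check exhaustively that, after reweighting, the highest-scoring spanning tree always has exactly one root edge, even when the unconstrained optimum of the original graph had two.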
+
+There are multiple advantages to this algorithm. First, it is simple to understand and implement. Given an existing implementation of any unconstrained MST algorithm, this algorithm can be implemented very easily, without any further cost to the asymptotic complexity. A full implementation (in Python) is described in Appendix B.1. It is simpler to implement even than the Root Preselection algorithm described in Section 3.
+
+The second advantage is that we can use any implementation of an unconstrained MST as a subroutine. As mentioned before, there is no precise description or implementation of the Gabow and Tarjan algorithm that runs in $\mathcal{O}(n^2)$; the fastest implementation, by Zmigrod et al., runs in $\mathcal{O}(n^2\log n)$. The Root Reweighting algorithm can easily be implemented in $\mathcal{O}(n^2)$ by simply using the unconstrained MST algorithm of Tarjan (1977) as a subroutine.
+
+The third advantage is that, unlike the Preselection algorithm, the Reweighting algorithm always runs the unconstrained MST algorithm exactly once per sentence. This means it will be asymptotically fast for any distribution of spanning trees.
+
+Finally, in comparison to Zmigrod et al. (2020), the Reweighting algorithm provides great flexibility in the choice of the underlying unconstrained MST algorithm used as a subroutine. In our experiments we use the MST algorithm of Tarjan for dense graphs, which runs in $\mathcal{O}(n^2)$. If the graph were sparse, for example due to pruning of unlikely or forbidden edges, we could use the unconstrained MST algorithm of Gabow et al. (1986) as a subroutine, which runs in $\mathcal{O}(m + n\log n)$, where $m$ is the number of edges in the input graph. In addition, if we want to perform single-root projective MST parsing, we could use the algorithm of Eisner (1996) as a subroutine. Our algorithm also applies to $k$-best parsing: assuming any existing unconstrained $k$-best parsing algorithm (such as Camerini et al. 1980; Hall 2007; Zmigrod et al. 2021), the Reweighting algorithm can easily incorporate the constraint that all returned $k$-best trees have a single root edge by just changing the weights of the input graph before calling the unconstrained $k$-best algorithm.
+
+Figure 3: General MST algorithms performance. (a) Trained English input. (b) Random input.
+
+In short, this simple algorithm has all the advantages of the previous single-root algorithms and none of their disadvantages.
+
+# 5 The ArcMax Trick
+
+The Reweighting algorithm from Section 4, using Tarjan's algorithm as a subroutine, is the best possible algorithm we could hope for in the worst case with respect to asymptotic complexity: no algorithm can be asymptotically faster than $\mathcal{O}(n^2)$ on complete graphs.
+
+Tarjan's algorithm works in two phases. The first recursively contracts the cycles that result from picking the best edge entering each node. The second phase then reverses the recursion by expanding each contraction. To do all of this, the algorithm needs to keep track of all the contracted cycles and of the modifications to the weights of edges entering the cycles. All of these operations are asymptotically optimal, but they do incur some constant overhead. Some input instances are structured such that we can avoid this overhead and skip running the full Tarjan's algorithm altogether. Zhang et al. (2017) show that neural models are often learned so accurately that just picking, for each word, the incoming arc with the highest weight often gives a valid tree.
+
+If for each node we simply pick the incoming arc with the highest weight and check whether these arcs form a tree, we can avoid running the whole MST algorithm. We call this the ArcMax trick. In principle it could be applied to any MST algorithm, but it would not benefit all of them equally. Zhang et al. apply it on top of the CLE algorithm, but in that case it is redundant: CLE performs the same step as ArcMax as its first step. Zhang et al. do not report any speed improvements.
+
+We show that a speedup can be achieved if this trick is used as a preprocessing step before Tarjan's algorithm. Tarjan's algorithm requires the graph to be strongly connected; to achieve this, we add edges entering $ROOT$ and set their weight to $-\infty$. This means that Tarjan's algorithm will always find cycles to contract, even when the problem is simple and could be solved by picking the maximum-weight edge entering each word. To address this, we add the ArcMax check before Tarjan's algorithm.
+
+Checking whether the ArcMax edges form a non-projective tree can be done in linear time: run a depth-first search from the $ROOT$ node and check at the end whether all words were visited. Checking for a projective tree can also be done in linear time by constructing a shift-reduce oracle (linear time), running it over the sentence (linear time), and checking whether the only token left on the stack at the end is $ROOT$ (constant time). The code for both the projective and non-projective tree checks is in Appendix B.2.
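The non-projective check can be sketched as follows. This is our own Python, analogous in spirit to the routine in Appendix B.2, with the same dense-matrix convention as before (node 0 as $ROOT$): pick the best incoming arc per word and verify with a DFS from $ROOT$ that the result is a spanning tree.

```python
def is_spanning_tree(heads):
    """Linear-time check: depth-first search from ROOT (node 0) and verify
    that every word is reached. heads[v - 1] is the chosen head of word v."""
    n = len(heads)
    children = [[] for _ in range(n + 1)]
    for v, h in enumerate(heads, start=1):
        children[h].append(v)
    stack, visited = [0], {0}
    while stack:
        u = stack.pop()
        for v in children[u]:
            if v not in visited:
                visited.add(v)
                stack.append(v)
    return len(visited) == n + 1

def arcmax(weights):
    """Pick the highest-weight incoming arc for each word; return the head
    list if those arcs already form a tree, else None (run the full MST)."""
    n = len(weights) - 1
    heads = [max((u for u in range(n + 1) if u != v),
                 key=lambda u: weights[u][v])
             for v in range(1, n + 1)]
    return heads if is_spanning_tree(heads) else None
```

For the single-root setting described next, one would additionally require `heads.count(0) == 1` before accepting the ArcMax tree.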
+
+Figure 4: Single-Root MST algorithms performance without ArcMax. (a) Trained English input. (b) Random input.
+
+Figure 5: Single-Root MST algorithms performance with ArcMax in the trained weights setting.
+
+For the single-root constraint we need to extend this trick to also verify that the extracted tree has only one edge coming out of $ROOT$. For the Root Preselection and Reweighting algorithms we have a choice of when to apply the ArcMax trick: before the single-root algorithm, or inside it, just before it calls the unconstrained MST subroutine. For Root Preselection there is, in practice, no difference in performance. For Reweighting, however, the choice is crucial: after reweighting is applied, the edges coming out of $ROOT$ are no longer the best edges entering any word, so ArcMax will never produce a complete tree and thus never be useful. This is why the ArcMax trick can only help the Reweighting algorithm if it is applied before the reweighting step.
+
+# 6 Experiments
+
+In this section we experimentally answer some questions about the performance (speed) of different variations of the algorithms we described. We test the algorithms in two settings. The first setting takes graph weights from a trained English dependency parsing model from the state-of-the-art Stanza parser (Qi et al., 2020). The parser is applied to sentences of different lengths selected from the News Commentary v16 corpus. For each length we select exactly ten sentences. The second setting uses graphs with weights sampled from a uniform distribution. This setting should be similar to the initial stages of training of most models. The number of generated random graphs is the same as the number of sentences in the trained setting. We will refer to the first setting as trained weights and to the latter as random weights. We stress that we test for speed, not accuracy: the accuracy of all the tested algorithms remains unchanged.
+
+Which unconstrained algorithm is the fastest? Figure 3 shows the plots for the two settings for CLE, Tarjan and ArcMax+Tarjan. In the trained setting it is visible that worst-case complexity analysis is in fact not informative about the actual performance of the algorithms. CLE outperforms Tarjan's algorithm precisely because it can stop early if the problem is easy, as described in Section 5. In the random setting, Tarjan's algorithm works better than CLE. When we add the ArcMax trick to Tarjan we get an algorithm that works best in both settings: it exploits the easy instances of the trained setting while retaining the robustness of Tarjan's algorithm in the random setting, without slowing it down.
+
+Use ArcMax before or after single-root step? As mentioned in Section 5 there are two places where the ArcMax trick could be used. We argued that using it before the single-root step is preferable. The results in Figure 6 confirm that.
+
+
+Figure 6: Comparison on the timing of using the ArcMax trick in the trained weights setting.
+
+Which single-root algorithm is the fastest? First we compare the algorithms without using the ArcMax trick. Figure 4 shows this comparison. Both Preselection and Reweighting significantly outperform the algorithm of Zmigrod et al.. Reweighting outperforms Zmigrod et al. on average $2.6\mathrm{x}$ on trained weights and $3.6\mathrm{x}$ on random weights. The difference is most extreme on the longest sentences. The performance curve of Zmigrod et al. also seems much more volatile.
+
+For Preselection, we can see that the average-case analysis from Section 3 is much more informative about its performance than the worst-case analysis. Comparing Preselection and Reweighting, we see that in the random setting the performance of Reweighting is much more stable, with very low variance, and that it consistently outperforms Preselection.
+
+If we apply the ArcMax trick to all of these algorithms, they all get much faster but the relative speed between them stays the same. To see that, compare the results in Figures 4a and 5. We do not show the results for the random setting because they are equivalent to those without ArcMax in Figure 4b.
+
+When using all of the techniques in our paper together, namely ArcMax+Reweighting+Tarjan, we get an algorithm that is on average 11x faster than the algorithm of Zmigrod et al. when applied to the output of a trained parser. A better implementation of Zmigrod et al. could possibly make this algorithm more competitive but it is unlikely that it would compensate for this large performance gap.
+
+Is the Reweighting algorithm always the fastest? While the Reweighting algorithm improves over the Preselection algorithm both theoretically and practically, its performance in practice depends on the implementation of Tarjan's algorithm as the underlying unconstrained MST algorithm. If instead of Tarjan we use CLE, the Preselection algorithm works better on trained input (see Figure 11 in the Appendix). The main reason is that the CLE algorithm, unlike Tarjan's, has a computational complexity that varies with the input between $\mathcal{O}(n^2)$ and $\mathcal{O}(n^3)$ . On trained input CLE tends to be closer to its best-case complexity because there are not many cycles to contract. The Reweighting algorithm, however, changes the weights of the graph in such a way that there are always cycles to contract, and thereby pushes CLE closer to its worst-case complexity. This problem does not exist with Tarjan's algorithm, which runs in $\mathcal{O}(n^2)$ in both the best and worst case. Our recommendation is to use ArcMax+Reweighting+Tarjan as the fastest and most stable algorithm; if some other unconstrained algorithm is used in place of Tarjan's, one should test whether the Reweighting algorithm runs faster than Preselection.
+
+# 7 Conclusion
+
+We demonstrated how to obtain significant speed-ups in single-root dependency parsing. The two proposed algorithms are fast, flexible, easy to understand and simple to implement in comparison to previously published ones.
+
+# Acknowledgments
+
+We thank the anonymous reviewers for their comments and useful feedback. We also thank the developers of the Stanza parser for enabling our work and analysis. This work was supported by ERC H2020 Advanced Fellowship GA 742137 SEMANTAX grant and Bloomberg. Milos Stanojevic is especially grateful to Xin Zhan and the staff of NHS Western General Hospital. During the last year they have helped significantly in making this paper possible, but the importance of their dedicated work goes far beyond that.
+
+# References
+
+Mark Anderson and Carlos Gómez-Rodríguez. 2020. Distilling neural networks for greener and faster dependency parsing. In Proceedings of the 16th International Conference on Parsing Technologies and the IWPT 2020 Shared Task on Parsing into Enhanced Universal Dependencies, pages 2-13, Online. Association for Computational Linguistics.
+P. M. Camerini, L. Fratta, and F. Maffioli. 1979. A note on finding optimum branchings. Networks, 9(4):309-312.
+Paolo M. Camerini, Luigi Fratta, and Francesco Maffioli. 1980. Ranking arborescences in O(km log n) time. European Journal of Operational Research, 4(4):235-242. Combinatorial Optimization.
+Arthur Cayley. 1889. A theorem on trees. Quarterly Journal of Mathematics, 23:376-378.
+Y. Chu and T. Liu. 1965. On the shortest arborescence of a directed graph. Scientia Sinica, 14:1396-1400.
+Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. 2009. Introduction to Algorithms, Third Edition, 3rd edition. The MIT Press.
+J. Edmonds. 1967. Optimum branchings. J. Res. Nat. Bur. Standards, pages 233-240.
+Jason M. Eisner. 1996. Three new probabilistic models for dependency parsing: An exploration. In COLING 1996 Volume 1: The 16th International Conference on Computational Linguistics.
+Harold N Gabow, Zvi Galil, Thomas Spencer, and Robert E Tarjan. 1986. Efficient algorithms for finding minimum spanning trees in undirected and directed graphs. Combinatorica, 6(2):109-122.
+
+Harold N Gabow and Robert E Tarjan. 1984. Efficient algorithms for a family of matroid intersection problems. Journal of Algorithms, 5(1):80-131.
+Keith Hall. 2007. K-best spanning tree parsing. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 392-399, Prague, Czech Republic. Association for Computational Linguistics.
+Sandra Kübler, Ryan McDonald, and Joakim Nivre. 2009. Dependency parsing. Synthesis lectures on human language technologies, 1(1):1-127.
+Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Hajic. 2005. Non-projective dependency parsing using spanning tree algorithms. In Proceedings of human language technology conference and conference on empirical methods in natural language processing, pages 523-530.
+B. M. E. Moret. 2002. Towards a discipline of experimental algorithms. In M. H. Goldwasser, D. S. Johnson, and C. C. McGeoch, editors, Data Structures, Near Neighbor Searches, and Methodology: Fifth and Sixth DIMACS Implementation Challenges, pages 197-214.
+Joakim Nivre, Mitchell Abrams, Željko Agić, et al. 2018. Universal Dependencies 2.3. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics (ÚFAL), Faculty of Mathematics and Physics, Charles University.
+
+Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020. Stanza: A Python natural language processing toolkit for many human languages. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations.
+
+Tim Roughgarden. 2019. Beyond worst-case analysis. Communications of the ACM, 62(3):88-96.
+
+R. E. Tarjan. 1977. Finding optimum branchings. Networks, 7(1):25-35.
+
+W. T. Tutte. 1984. Graph Theory, volume 21 of Encyclopedia of Mathematics and Its Applications. Addison-Wesley, Menlo Park, CA.
+
+Xingxing Zhang, Jianpeng Cheng, and Mirella Lapata. 2017. Dependency parsing as head selection. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 665-676, Valencia, Spain. Association for Computational Linguistics.
+
+Ran Zmigrod, Tim Vieira, and Ryan Cotterell. 2020. Please mind the root: Decoding arborescences for dependency parsing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4809-4819, Online. Association for Computational Linguistics.
+
+Ran Zmigrod, Tim Vieira, and Ryan Cotterell. 2021. On finding the k-best non-projective dependency trees. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1324-1337, Online. Association for Computational Linguistics.
+
+Uri Zwick. 2013. Lecture notes on "Analysis of Algorithms": Directed Minimum Spanning Trees (More complete but still unfinished).
+
+# A Tarjan's Unconstrained MST Algorithm
+
+The original Chu-Liu-Edmonds algorithm (CLE) runs in $\mathcal{O}(n^3)$ . Tarjan (1977) improves this by using advanced data structures. Tarjan proposes two variations of the algorithm. The first runs in $\mathcal{O}(m\log n)$ where $m$ is the number of edges and $n$ is the number of nodes. For dense input graphs, such as those in dependency parsing, where the number of edges is $n^2$ , the complexity of this algorithm is $\mathcal{O}\left(n^{2}\log n\right)$ . Tarjan's second version has complexity $\mathcal{O}\left(n^{2}\right)$ on dense graphs. The two versions differ only in the type of priority queue they use.
+
+The description of the algorithm in the original paper is not very accessible and contains a small error. Camerini et al. (1979) fix this error and introduce some simplifications. Zwick (2013) provides a very accessible introduction to this algorithm, but unfortunately it also contains some errors and does not cover the optimization for dense graphs. Our presentation here is a synthesis of these previous presentations. We assume that the reader is familiar with the standard CLE algorithm, Union-Find (disjoint sets) and meldable heaps (such as Fibonacci Heaps). For an introduction to CLE, see Kübler et al. (2009, §4.3.3). For an introduction to Union-Find and Fibonacci Heaps, see Cormen et al. (2009, §19 and §21).
+
+Just like the CLE algorithm, Tarjan's algorithm works in two phases. The first phase performs all the detection and contraction of cycles. Phase two expands those contractions to recover the optimal spanning tree. Phase I is shown in Algorithm 1. This is the first version of the algorithm, which runs in $\mathcal{O}\left(n^{2} \log n\right)$ on dense graphs; we explain later how to modify it to get an $\mathcal{O}(n^{2})$ runtime.
+
+The algorithm uses the following data structures:
+
+- $P[i]$ is a priority queue that contains all edges that enter (super-)node $i$ ,
+- $in[i]$ stores the best edge that enters the (super-)node $i$ ,
+- $prev[i]$ stores the (super-)node that precedes (super-)node $i$ on the path that is currently being formed,
+
+- parent $[i]$ stores the super-node (cycle) in which (super-)node $i$ takes part,
+
+- children $[i]$ stores all the (super-)nodes that are part of the cycle represented by supernode $i$ (inverse of parent).
+
+One of the main insights of Tarjan is that when we contract cycles, we do not need to explicitly change the edges that enter and leave the cycle. Instead, we keep the edges as they are and maintain a separate disjoint-set data structure that tells us, for any edge, to which cycle its source and target belong. This disjoint-set is represented by the parent array. To make disjoint-set operations efficient, two heuristics are often applied in combination: union-by-rank and path compression. Union-by-rank complicates the implementation slightly and is not very important here: even without it Tarjan's algorithm has the same runtime, since the disjoint-set is not a bottleneck. Path compression is sufficient for a fast runtime, but it destroys the tree (it maintains only the information of which node is the root of the tree). Since Phase II of Tarjan's algorithm needs the whole tree, we keep a separate array that works like parent but, unlike parent, is used only for the destructive find operation of the disjoint-set. We do not put this in the pseudo-code since it would complicate the presentation.
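
As a sketch of the destructive find described above (using the self-parent convention for roots, which is our assumption for this illustration; the pseudo-code below uses null instead):

```python
def find(parent, i):
    # destructive find with path compression: after the call, every
    # node on the path from i points directly at the root, so the
    # original tree shape is lost -- which is why a separate,
    # untouched copy of the parent array must be kept for Phase II
    root = i
    while parent[root] != root:
        root = parent[root]
    while parent[i] != root:    # compress the path
        parent[i], i = root, parent[i]
    return root
```

The second, non-destructive parent array then answers Phase II's structural queries while this one keeps find cheap.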
+
+Tarjan's algorithm requires that the graph is strongly connected. We can easily ensure this in $\mathcal{O}(n)$ time by adding edges with weight $-\infty$ between every node $i$ and $i + 1$ in both directions (assuming any arbitrary ordering between nodes).
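
A minimal sketch of this preprocessing step, under the assumption that absent arcs are encoded as $NaN$ in the score matrix (the function name and the encoding are ours, not from the paper):

```python
import numpy as np

def add_connectivity_edges(scores):
    # connect every node i with i+1 in both directions using -inf
    # edges, so the graph becomes strongly connected without a -inf
    # edge ever being preferable to a real one
    n = scores.shape[0]
    for i in range(n - 1):
        for u, v in ((i, i + 1), (i + 1, i)):
            if np.isnan(scores[v, u]):  # scores[v, u] is the arc u -> v
                scores[v, u] = -np.inf
    return scores
```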
+
+The algorithm starts at an arbitrary node $a$ . It takes the highest scoring edge entering $a$ (line 9) and finds the cycle (super-node) to which the source of the edge belongs. There are three cases to be explored for this edge.
+
+1. this is a self-loop, i.e. both source and target belong to an already collapsed cycle, in that case just move to the next best edge of the current node,
+2. this is an extension of the path, in that case we move to the source of the path,
+3. this edge closes a cycle, in that case we collapse the cycle into a super-node.
+
+
+(a) Graph
+(b) Cycle tree
+Figure 7: Example run of Tarjan's algorithm.
+
+
+
+When we collapse the cycle in case 3, we meld the priority queues with the edges of all the nodes that participate in the cycle. This is why case 1 is possible: after collapsing done by case 3 we do not remove the edges that are within the elements that are inside the cycle. This is the key point that differentiates the two versions of Tarjan's algorithm.
+
+The first version can use any implementation of a priority queue that has an efficient meld operation, for example Fibonacci Heaps can do it in constant time. With that heap implementation the algorithm needs to do $m$ extract_max operations and $n$ meld operations that gives complexity $\mathcal{O}(m\log n)$ .
+
+The second version of the algorithm, which is optimized for dense graphs, has a very different implementation of a priority queue. In this version, a queue is only a simple array of length $n$ (the number of nodes) where each element is the weight of the edge from that source node entering the current node, or a $NaN$ value if that edge has already been extracted. Extracting the maximum in this representation is done by a linear scan through the array. Melding is an interesting operation here because it is lossy. Imagine that we need to meld two queues of this type named $a$ and $b$ . If both queues have entering edges from some node $i$ , the melded queue needs to store only the highest scoring one (we care only about the best edge that enters the cycle from some outside node), so $c[i] = \max(a[i], b[i])$ . If either $a[i]$ or $b[i]$ is $NaN$ then $c[i]$ will be $NaN$ too. This removes self-loops and therefore eliminates the need for case 1 of the first version of the algorithm. A Python implementation of this priority queue is shown in Figure 12.
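
The lossy meld has a direct NumPy transcription (a sketch; `np.maximum` already propagates $NaN$ , so the explicit masking line only documents the intent):

```python
import numpy as np

def meld_lossy(a, b):
    # keep the best incoming edge per source node; a NaN in either
    # queue means that source was already extracted, and stays NaN
    c = np.maximum(a, b)            # NaN-propagating elementwise max
    c[np.isnan(a) | np.isnan(b)] = np.nan
    return c
```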
+
+As a note, some papers state that Radix sort is needed for implementing this efficient queue. This stems from Tarjan's original paper, which mentions Radix sort for the initialization of the queue. However, Radix sort is not needed for a complete graph. The reason Tarjan proposes Radix sort is to avoid the worst-case complexity when the graph is neither sparse nor fully complete. If we want to stretch this analogy, our implementation can be seen as one that uses Counting sort instead of Radix sort.
+
+Since this version of the queue has a slower extract_max and a slower meld, what is its purpose? Its main advantage is that it removes the self-loop edges that appear with contraction, so case 1 described above never arises. That means this version of the algorithm performs only $n$ extract_max operations and $n$ meld operations, which gives a total runtime of $\mathcal{O}(n^2)$ .
+
+Algorithm 2 presents the second phase of Tarjan's algorithm, which decomposes the cycle tree constructed by the first phase. For a more detailed description of this phase, see Zwick (2013).
+
+As an illustration of the first phase of Tarjan's algorithm, consider the graph in Figure 7a. Imagine that the original graph contained only black edges. In order to apply Tarjan's algorithm we first need to add the gray edges with cost $-\infty$ (we did not add similar edges for $3 \rightarrow 2$ and $4 \rightarrow 3$ to keep things simple). This graph is now strongly connected. We can start parsing from any node in the graph. Let us assume we started from node 2. We take the best non-visited edge entering the current node (extract_max $(P[a])$ ). That gives us the edge from node 4. We go to node 4 and repeat the same process, which leads us to node 3 and then node 2. We have formed a cycle by building the path backwards. This cycle is contracted, forming a super-node 5. This is recorded in Figure 7b, which represents the non-compressed version of the disjoint-set structure. We continue by choosing the best edge entering node 5 and get to node 1 and then node 0. When we take the best edge entering node 0 we form another cycle. Notice that this edge has weight $-\infty$ but it is still the best edge that enters node 0. Finally we form the cycle that covers the whole graph. Notice that for this phase of the algorithm it does not matter which node is the designated root node. The choice of the root plays a part only in the second phase.
+
+# B Python Implementation
+
+# B.1 Reweighting Implementation
+
+The implementation of the Reweighting algorithm is shown in Figure 8. As input it accepts two arguments. One of them is a function that does unconstrained MST search. This can be an implementation of Chu-Liu-Edmonds or Tarjan's algorithm.
+
+The scores parameter is a NumPy square matrix (np.array) with shape $(n + 1, n + 1)$ . Every entry scores[i, j] represents the weight of the arc that leaves node $j$ and enters node $i$ (i.e. $j \rightarrow i$ ). Node 0 is the ROOT node by convention. All edges entering ROOT (i.e. scores[0, :]) are in most implementations set to $-\infty$ to force the MST solution to have ROOT as root. All self-loops (diagonal entries) are also set to $-\infty$ .
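
As a concrete sketch of this convention (the toy size and random weights are our own choice):

```python
import numpy as np

n = 2  # two words, plus ROOT as node 0
rng = np.random.default_rng(0)
scores = rng.uniform(size=(n + 1, n + 1))
scores[0, :] = -np.inf              # no arc may enter ROOT
np.fill_diagonal(scores, -np.inf)   # no self-loops
# scores[i, j] is now the weight of the arc j -> i
```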
+
+While it is in general fine to use $-\infty$ to signify disconnected edges, it would make the Reweighting Equation 6 misbehave and give every spanning tree weight $-\infty$ . That is why the first line replaces all infinite values with a $NaN$ value. The remaining lines just apply Equation 6 before calling the unconstrained MST function.
+
+Algorithm 1 Tarjan Phase I - Collapsing
+1: for $i\in V$ do $\triangleright$ Initialization
+2: $P[i]\gets$ priority_queue($\{(j,i)\in G\}$)
+3: in[i] $\leftarrow$ null
+4: prev[i] $\leftarrow$ null
+5: parent[i] $\leftarrow$ null
+6: children[i] $\leftarrow$ null
+7: $a\gets$ arbitrary vertex
+8: while $P[a]\neq \emptyset$ do
+9: $(u,v)\gets$ extract_max(P[a])
+10: $b\gets$ find(u)
+11: if $a = b$ then
+12: continue $\triangleright$ This is a self-loop
+13: else
+14: in[a] $\leftarrow$ (u,v)
+15: prev[a] $\leftarrow$ b
+16: if in[u] $\equiv$ null then
+17: $a\gets b$ $\triangleright$ Path extension
+18: else
+19: $c\gets$ newvertex() $\triangleright$ New cycle
+20: $i\gets a$
+21: do $\triangleright$ Collect nodes in the cycle
+22: insert(children[c],find(i))
+23: $i\gets$ prev[i]
+24: while $i\neq a$
+25: for $i\in$ children[c] do
+26: parent[i] $\leftarrow$ c
+27: add_const(P[i],-w(in[i]))
+28: $P[c]\gets$ meld(P[c],P[i])
+29: $a\gets c$
+
+Algorithm 2 Tarjan Phase II - Expanding
+1: $R\gets \emptyset$
+2: procedure DISMANTLE(u)
+3: while parent[u] $\neq$ null do
+4: for $v\in$ children[parent[u]]$\backslash \{u\}$ do
+5: parent[v] $\leftarrow$ null
+6: if children[v] $\neq$ null then
+7: insert(R,v)
+8: DISMANTLE(r)
+9: while $R\neq \emptyset$ do
+10: $c\gets$ extract(R)
+11: $(u,v)\gets in[c]$
+12: in[v] $\leftarrow$ (u,v)
+13: DISMANTLE(v)
+14: return $\{$ in[u] $\mid$ u $\in V\backslash \{r\}\}$
+
+# B.2 ArcMax Implementation
+
+Figure 9 shows the implementation of all functions needed for the ArcMax optimization. The arcmax function takes three arguments. scores and MST_func are the same as in the previous case. The one_root argument is a Boolean flag defining whether we want to enforce the single-root constraint.
+
+This function has three main parts: the part that computes the highest scoring edge entering every node (scores.argmax), the part that checks whether the resulting sub-graph is a tree, and an optional third part that is executed only if the sub-graph is not a valid tree. The scores.argmax part runs in $\mathcal{O}(n^2)$ but in practice it is extremely fast because it performs a very simple operation that is implemented in C under the hood. Checking whether the sub-graph is a tree (is_tree) is done in linear time. The full MST parsing, which takes $\mathcal{O}(n^2)$ (or $\mathcal{O}(n^3)$ if we use CLE), is performed only if the previous fast checks fail.
+
+Function fast_single_root_mst shows how to combine ArcMax and Reweighting, assuming that there is an existing implementation of some unconstrained MST parsing algorithm such as Tarjan's.
+
+For the projective case we would need to replace the function is_tree with the function is_projective_tree from Figure 10 and to replace tarjan with eisner. The algorithm in Figure 10 for checking whether the tree is projective runs in linear time because it visits every arc in the sub-graph only once.
+
+Figure 8: Python implementation of Reweighting algorithm
+```python
+def reweighting(scores, MST_func):
+ scores2 = np.where(np.isinf(scores), np.nan, scores)
+ n = scores.shape[0] - 1 # number of words
+ scores[:, 0] -= 1 + n*(np.nanmax(scores2) - np.nanmin(scores2))
+ return MST_func(scores)
+```
+
+Figure 9: Python implementation of ArcMax optimization
+```python
+def is_tree(proposal):
+    # proposal[i] is the parent of node i
+    n = proposal.shape[0]  # number of words + 1 for ROOT
+    # convert child-parent pointers to parent-children lists
+    children = [[] for _ in range(n)]
+    for i in range(1, n):
+        children[proposal[i]].append(i)
+    # do depth-first search iteratively
+    is_visited = np.zeros(n, dtype=bool)
+    stack = [0]
+    while len(stack) != 0:
+        i = stack.pop()
+        is_visited[i] = True
+        stack.extend(children[i])
+    return is_visited.all()  # true if all nodes were visited
+
+def arcmax(scores, one_root, MST_func):
+    proposal = scores.argmax(axis=1)  # find the best arc for each node
+    root_count = (proposal[1:] == 0).sum()
+    if is_tree(proposal) and (root_count == 1 or not one_root):
+        return proposal
+    else:
+        return MST_func(scores)
+
+def fast_unconstrained_mst(scores):
+    return arcmax(scores, False, tarjan)
+
+def fast_single_root_mst(scores):
+    return arcmax(scores, True, lambda x: reweighting(x, tarjan))
+```
+
+```python
+def is_projective_tree(proposal):
+    n = proposal.shape[0]
+    # deps_count[i] = number of dependents node i still has to collect
+    deps_count = np.zeros(n, dtype=int)
+    for i in range(1, n):
+        deps_count[proposal[i]] += 1
+    stack = [0]
+    for i in range(1, n):
+        stack.append(i)
+        while len(stack) > 1:
+            right = stack.pop()
+            left = stack.pop()
+            if proposal[left] == right and deps_count[left] == 0:
+                # exists left arc
+                stack.append(right)
+                deps_count[right] -= 1
+            elif proposal[right] == left and deps_count[right] == 0:
+                # exists right arc
+                stack.append(left)
+                deps_count[left] -= 1
+            else:
+                # no attachments possible:
+                # restore stack and move to the next word
+                stack.append(left)
+                stack.append(right)
+                break
+    return stack == [0]
+```
+
+
+Figure 10: Python implementation of a linear-time check for whether a sub-graph is a projective tree.
+(a) Trained English input
+
+
+(b) Random input
+Figure 11: Comparison of combinations of Meta-Algorithm (Preselection, Reweighting) and MST algorithm (Tarjan, CLE).
+
+```python
+class EdgePriorityQueue:
+
+    def __init__(self, node_id: int, edge_weights: np.ndarray):
+        self.target = np.full(edge_weights.shape, node_id)
+        self.weights = edge_weights
+        self.weights[node_id] = np.nan
+
+    def __len__(self) -> int:
+        return np.count_nonzero(~np.isnan(self.weights))
+
+    def extract_max(self) -> (int, int, float):
+        i = np.nanargmax(self.weights)
+        if np.isnan(self.weights[i]):  # nanargmax bug with -inf
+            i = np.argmax(np.isinf(self.weights))
+        w = self.weights[i]
+        self.weights[i] = np.nan
+        return i, self.target[i], w
+
+    def meld_inplace(self, other) -> None:
+        to_replace = (self.weights < other.weights)
+        self.target[to_replace] = other.target[to_replace]
+        self.weights[to_replace] = other.weights[to_replace]
+        self.weights[np.isnan(other.weights)] = np.nan
+
+    def add_const(self, const: float) -> None:
+        self.weights[~np.isinf(self.weights)] += const
+```
+
+
+Figure 12: Python implementation of the priority queue needed for the dense graph version of Tarjan's algorithm.
+
+Figure 13: Comparison against Stanza's implementation of single-root dependency parsing. (a) Trained English input. (b) Random input.
\ No newline at end of file
diff --git a/arootofaproblemoptimizingsinglerootdependencyparsing/images.zip b/arootofaproblemoptimizingsinglerootdependencyparsing/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..58a0a56bf5fade89b3da17697189effaff8a6ea4
--- /dev/null
+++ b/arootofaproblemoptimizingsinglerootdependencyparsing/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6d69a12475bd9c00eb991568d7932f383da67fb0a395ad6708e5f72c5d53ba57
+size 511598
diff --git a/arootofaproblemoptimizingsinglerootdependencyparsing/layout.json b/arootofaproblemoptimizingsinglerootdependencyparsing/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..60693e51d2ee7ee20864df97b5940d6133e3dcb2
--- /dev/null
+++ b/arootofaproblemoptimizingsinglerootdependencyparsing/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:51aa66e682f1f12959530de9dbf3fa514787b29da98c18d51d9628c302075c94
+size 565447
diff --git a/ascalableframeworkforlearningfromimplicituserfeedbacktoimprovenaturallanguageunderstandinginlargescaleconversationalaisystems/49f37105-ea2c-41ce-853f-91110ffb6960_content_list.json b/ascalableframeworkforlearningfromimplicituserfeedbacktoimprovenaturallanguageunderstandinginlargescaleconversationalaisystems/49f37105-ea2c-41ce-853f-91110ffb6960_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..64dca766a29684f3ecdac0b18bb50400fbfd0139
--- /dev/null
+++ b/ascalableframeworkforlearningfromimplicituserfeedbacktoimprovenaturallanguageunderstandinginlargescaleconversationalaisystems/49f37105-ea2c-41ce-853f-91110ffb6960_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:51b0a15e2c3c213f77d6ef6ca09f7e7dcd1d1070e46ebf1d3fa5927de8a8813e
+size 72985
diff --git a/ascalableframeworkforlearningfromimplicituserfeedbacktoimprovenaturallanguageunderstandinginlargescaleconversationalaisystems/49f37105-ea2c-41ce-853f-91110ffb6960_model.json b/ascalableframeworkforlearningfromimplicituserfeedbacktoimprovenaturallanguageunderstandinginlargescaleconversationalaisystems/49f37105-ea2c-41ce-853f-91110ffb6960_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..cce6ac5ad4c532d2d8c4a63ad639599b45c52469
--- /dev/null
+++ b/ascalableframeworkforlearningfromimplicituserfeedbacktoimprovenaturallanguageunderstandinginlargescaleconversationalaisystems/49f37105-ea2c-41ce-853f-91110ffb6960_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a91a60938a2706975381d465006df947142794f8c48e86f675a1aae55a4a9bcd
+size 88197
diff --git a/ascalableframeworkforlearningfromimplicituserfeedbacktoimprovenaturallanguageunderstandinginlargescaleconversationalaisystems/49f37105-ea2c-41ce-853f-91110ffb6960_origin.pdf b/ascalableframeworkforlearningfromimplicituserfeedbacktoimprovenaturallanguageunderstandinginlargescaleconversationalaisystems/49f37105-ea2c-41ce-853f-91110ffb6960_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..fab775bb7ca6bf1c56eea7a437c5b032e3850adb
--- /dev/null
+++ b/ascalableframeworkforlearningfromimplicituserfeedbacktoimprovenaturallanguageunderstandinginlargescaleconversationalaisystems/49f37105-ea2c-41ce-853f-91110ffb6960_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:35705839603fcc8c5d449e5fa8feb982b9ec06e9ff5d65afba70c24caa406c95
+size 807745
diff --git a/ascalableframeworkforlearningfromimplicituserfeedbacktoimprovenaturallanguageunderstandinginlargescaleconversationalaisystems/full.md b/ascalableframeworkforlearningfromimplicituserfeedbacktoimprovenaturallanguageunderstandinginlargescaleconversationalaisystems/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..80dd23687e2a64ffde30b843723b4637d771cf76
--- /dev/null
+++ b/ascalableframeworkforlearningfromimplicituserfeedbacktoimprovenaturallanguageunderstandinginlargescaleconversationalaisystems/full.md
@@ -0,0 +1,238 @@
+# A Scalable Framework for Learning From Implicit User Feedback to Improve Natural Language Understanding in Large-Scale Conversational AI Systems
+
+Sunghyun Park*, Han Li*, Ameen Patel, Sidharth Mudgal, Sungjin Lee, Young-Bum Kim, Spyros Matsoukas, Ruhi Sarikaya
+
+Amazon Alexa AI
+
+{sunghyu, lahl, paameen, sidmsk, sungjinl, youngbum, matsouka, rsarikay}@amazon.com
+
+# Abstract
+
+Natural Language Understanding (NLU) is an established component within a conversational AI or digital assistant system, responsible for producing a semantic understanding of a user request. We propose a scalable and automatic approach for improving NLU in a large-scale conversational AI system by leveraging implicit user feedback, with the insight that user interaction data and dialog context embed rich information from which user satisfaction and intention can be inferred. In particular, we propose a domain-agnostic framework for curating new supervision data for improving NLU from live production traffic. With an extensive set of experiments, we show the results of applying the framework and improving NLU for a large-scale production system across 10 domains.
+
+# 1 Introduction
+
+For a conversational AI or digital assistant system (Kepuska and Bohouta, 2018), Natural Language Understanding (NLU) is an established component that produces semantic interpretations of a user request, which typically involves analysis in terms of domain, intent, and slot (El-Kahky et al., 2014). For instance, the request "Play a song by Taylor Swift" can be interpreted as falling within the scope of Music domain with Play Song intent and Taylor Swift identified for Artist slot.
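The domain-intent-slots decomposition can be made concrete with a small record type; a sketch (the names and types are illustrative, not the production system's):

```python
from dataclasses import dataclass, field

@dataclass
class Interpretation:
    """One NLU hypothesis for a request: domain, intent, and named slots."""
    domain: str
    intent: str
    slots: dict = field(default_factory=dict)

# The example request from the text above.
hyp = Interpretation(domain="Music", intent="PlaySong",
                     slots={"Artist": "Taylor Swift"})
```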
+
+Without an accurate semantic understanding of the user request, a conversational AI system cannot fulfill the request with a satisfactory response or action. As one of the most upstream components in the runtime workflow (Sarikaya, 2017), NLU also has a wide blast radius: its errors propagate to all subsequent downstream components, such as dialog management, routing logic to back-end applications, and language generation.
+
+
+Figure 1: An example of implicit user feedback, specifically an indication of user dissatisfaction and user rephrase behavior, that can be used to create new supervision data to correct NLU errors. The left side shows the dialog history and the right side shows the ranked NLU interpretations for each user request.
+
+A straightforward way to improve NLU is through human annotation, but it is labor-intensive and expensive. It requires multiple tiers of annotations (e.g., end-user experience, error attribution, and semantic interpretation), and it is hard for annotators to consider all relevant contextual conditions. Annotations are also limited by existing guidelines, which may be outdated or may not accurately reflect user expectations. Due to these limitations, leveraging user feedback, both implicit and explicit, from real production systems is emerging as a new area of research.
+
+Our work makes three main contributions. First, this work is the first in the literature to introduce a scalable, automatic and domain-agnostic approach for leveraging implicit user feedback to continuously and directly improve the NLU component of a large-scale conversational AI system in production. This approach can be applied week over week to continuously and automatically improve NLU towards a better end-to-end user experience, and given that no human annotation is required, the approach also raises minimal user privacy concerns. Our approach of using implicit feedback is based on our insight that user interaction data and dialog context have rich information embedded from which user satisfaction and intention can be inferred (see Figure 1). Second, we propose a general framework for curating supervision data for improving NLU from live traffic that can be leveraged for various subtasks within NLU (e.g., domain/intent classification, slot tagging, or cross-domain ranking). Last, we show with an extensive set of experiments on live traffic the impact of the proposed framework on improving NLU in the production system across 10 widely used domains.
+
+# 2 Background and Problem Definition
+
+The NLU component typically has three main types of underlying models - domain classifiers, intent classifiers, and slot taggers (El-Kahky et al., 2014). The three modeling tasks can be treated independently (Gao et al., 2018) or as a joint optimization task (Liu and Lane, 2016; Hakkani-Tur et al., 2016), and some systems have a model to rank across all domains, intents and slots on a certain unit of semantic interpretation (Su et al., 2018).
+
+Leveraging implicit feedback from users has been widely studied in the context of recommendation systems (Hu et al., 2008; He et al., 2016; Liu et al., 2010; Loni et al., 2018; Rendle et al., 2012; He and McAuley, 2016; Wang et al., 2019) and search engines (Joachims, 2002; Sugiyama et al., 2004; Shen et al., 2005; Bi et al., 2019). In such systems, common types of implicit user feedback explored include a history of browsing, purchase, and click-through behavior, as well as negative feedback. Leveraging implicit feedback in the context of conversational AI systems is relatively unexplored, but it has been applied for rewriting the request text within or after the Automatic Speech Recognition (ASR) component (Ponnusamy et al., 2019), improving the Natural Language Generation component (Zhang et al., 2018), and using user engagement signals for improving the entity labeling task in the Music domain (Muralidharan et al., 2019). We note that compared to explicit feedback (Petrushkov et al., 2018; Iyer et al., 2017), using implicit feedback is more scalable and does not introduce friction into the user experience. However, implicit feedback is inherently noisy, and leveraging it is more difficult when sufficient data is lacking, such as for improving tail cases (Wang et al., 2021a,b).
+
+In this paper, we specifically focus on two types of implicit user feedback - dissatisfaction of experience (to understand what to fix, e.g., users prematurely interrupting a system's response) and clarification of intention through rephrase (to understand how to fix, e.g., users clarifying their requests by rephrasing the previous request in simpler terms). In this work, we assume that there are mechanisms already in place to automatically (1) infer user dissatisfaction ( $f_{\text{defect}}$ in Section 2.3) and (2) detect whether a given request is a rephrase of a previous request ( $f_{\text{rephrase}}$ in Section 3). There are many ways to build these two mechanisms, either rule-based or model-based. Due to space limitations, we leave the details of the two mechanisms outside the scope of this paper. For completeness and better context for the reader, however, we briefly describe various ways to build them, which are straightforward to adapt and implement.
+
+# 2.1 User Dissatisfaction Detection
+
+Unless we specifically solicit users' feedback on satisfaction after an experience, user feedback is mostly implicit. There are many implicit user behavior signals that can help with detecting user dissatisfaction while interacting with a conversational AI system. They include termination (stopping or cancelling a conversation or experience), interruption (barging in while the system is still giving its response), abandonment (leaving a conversation without completing it), error-correcting language (preceding the follow-up turn with "no, ..." or "I said, ..."), negative sentiment language showing frustration, rephrase or request reformulation, and confirmation to execute on an action (Beaver and Mueen, 2020; Sarikaya, 2017).
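To make the signal list concrete, here is a minimal rule-based sketch of a dissatisfaction detector over such behavior signals (the signal names, prefixes, and decision rule are illustrative assumptions, not the production $f_{\text{defect}}$):

```python
# Minimal rule-based defect detector over behavior signals (illustrative).
STRONG_SIGNALS = {"termination", "interruption", "abandonment"}
ERROR_PREFIXES = ("no, ", "i said, ")

def is_likely_defect(behavior_signals, followup_utterance=""):
    """Flag a turn as a likely defect when a strong dissatisfaction
    signal fired, or when the follow-up turn opens with
    error-correcting language."""
    if STRONG_SIGNALS & set(behavior_signals):
        return True
    return followup_utterance.lower().startswith(ERROR_PREFIXES)

is_likely_defect({"interruption"})                        # True
is_likely_defect(set(), "No, I said play Old Town Road")  # True
is_likely_defect(set(), "thanks, that was it")            # False
```

A production detector would combine many more of the signals listed above, weighted or learned rather than hard-coded.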
+
+Although not strictly from the user behavior, there are other signals from the system action and response that are also useful. They include generic error-handling system responses ("I don't know that one."), the templates executed for generating natural language error-handling responses (e.g., when the song entity is not found for playing music), and the absence of a response (Beaver and Mueen, 2020; Sarikaya, 2017). There are also component-level signals such as latency or low confidence scores of the underlying models within each component, such as ASR or NLU.
+
+For more advanced approaches, we can combine the signals from the user behavior and the system together, try to model user interaction patterns, and use additional context from past interaction history beyond immediate turns (Jiang et al., 2015; Ultes and Minker, 2014; Bodigutla et al., 2020). Furthermore, user satisfaction can depend on usage scenarios (Kiseleva et al., 2016), and for specific experiences like listening to music, we can adapt related concepts such as dwell time from the search and information retrieval fields to further fine-tune detection.
+
+# 2.2 User Rephrase Detection
+
+There are many lines of work in the literature that are closely related to this task under the topics of text/sentence semantic similarity detection and paraphrase detection. The approaches generally fall into lexical matching methods (Manning and Schutze, 1999), methods leveraging word meanings or concepts from a knowledge base such as WordNet (Mihalcea et al., 2006), latent semantic analysis methods (Landauer et al., 1998), and those based on word embeddings (Camacho-Collados and Pilehvar, 2018) and sentence embeddings (Reimers and Gurevych, 2019). In terms of modeling architecture, the Siamese network is common and has been applied with CNNs (Hu et al., 2014), LSTMs (Mueller and Thyagarajan, 2016), and BERT (Reimers and Gurevych, 2019). The task is also related to problems in community question-answering systems, such as finding semantically similar questions and answers (Srba and Bielikova, 2016).
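As a toy instance of the lexical-matching family, a rephrase detector can be as simple as token-set Jaccard overlap (the threshold and tokenization are illustrative assumptions; a production system would use the embedding- or BERT-based methods cited above):

```python
def jaccard_rephrase(utt_a: str, utt_b: str, threshold: float = 0.5) -> bool:
    """Toy lexical-matching rephrase detector: flags utt_b as a rephrase
    of utt_a when their token sets overlap heavily."""
    a, b = set(utt_a.lower().split()), set(utt_b.lower().split())
    if not a or not b:
        return False
    jaccard = len(a & b) / len(a | b)
    return jaccard >= threshold

jaccard_rephrase("play old town road", "play the song old town road")  # True
```

Lexical overlap misses paraphrases with different wording ("turn on the lights" vs. "it's dark in here"), which is precisely what the embedding-based approaches address.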
+
+# 2.3 Problem Definition
+
+Denote $\mathcal{T} = (\Sigma, \Pi, N, A)$ to be the space of all user interactions with a conversational AI system with each request or turn $t_i = (u_i, p_i, c_i, a_i) \in \mathcal{T}$ consisting of four parts: $u_i \in \Sigma$ is the user request utterance, $p_i \in \Pi$ is the semantic interpretation for $u_i$ from NLU, $c_i \in N$ is the contextual metadata (e.g., whether the device has a screen), and $a_i \in A$ is the system action or response. Here, we are proposing a general framework that allows a scalable and automatic curation of supervision data to improve NLU, and we keep the unit of the semantic interpretation abstract for generalizability, which can be for one or a combination of NLU subtasks of domain classification, intent classification, and slot tagging. For instance, one possible interpretation unit would be domain-intent-slots tuple, which is what we use in our experiments described in Section 4. Although we only focus on NLU in this paper, the approach here can be extended to improve other components in a conversational AI system such as skill routing (Li et al., 2021).
+
+We define a session of user interaction $s = \{t_1, t_2, \ldots, t_q\} \subseteq \mathcal{T}$, which is a list of time-consecutive turns by the same user. Denote $m_{t}$ to be the NLU component at timestamp $t$. We collect the interaction session data $S_{live} = \{s_{1}, s_{2}, \ldots, s_{n}\}$ from live traffic for a certain period of time $\Delta$ (e.g., one week) starting at time $t$, from which we curate new supervision data to produce $m_{t + \Delta}$ with improved performance. Specifically, given a tool $f_{\text{defect}}$ for automatic analysis of user dissatisfaction for each turn, we process $S_{live}$ to identify all turns that indicate user dissatisfaction, $t_{i} \in \mathcal{D}_{\text{defect}}$, which we call defective turns or simply defects. The key challenges then are how to (1) identify target defects, i.e., high-confidence defects that can be targeted by NLU (there is sufficient disambiguation power within NLU that it can learn to produce different results given specific supervision) and that are likely causing repeated and systematic dissatisfaction of user experience, and (2) find a likely better interpretation for the target defects that changes the system action or response in a way that leads to user satisfaction.
+
+# 3 Solution Framework
+
+The framework involves two deep learning models - Defect Identification Model (DIM) for addressing the first challenge of identifying target defects and Defect Correction Model (DCM) for the second challenge of correcting them by automatically labeling them with a likely better semantic interpretation (see Figure 2). It is straightforward to apply DIM and DCM to the production traffic log to curate new supervision data for improving NLU.
+
+Data Preparation: We collect the user interaction session data $S_{live}$ from the production log for an arbitrary period of time (e.g., the past week). Given a user dissatisfaction analysis tool $f_{defect}$ and a rephrase analysis tool $f_{rephrase}$, we tag $t_j \in s_i$ as a defect if $f_{defect}$ detects user dissatisfaction for the turn, and we tag $t_j \in s_i$ as a rephrase if there exists $t_i \in s_i$ with $j > i$ (i.e., temporally $t_j$ occurred after $t_i$ ) such that $f_{rephrase}$ detects $t_j$ to be a rephrase of $t_i$. We then extract each turn in $S_{live}$ to create the turn-level data $D_{live} = \{t_j \in s_i \mid s_i \in S_{live}\}$, with each $t_j$ carrying two binary labels: defect $e_d$ and rephrase $e_r$.
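The data-preparation step can be sketched end to end; the `Turn` fields and the toy detectors below are stand-ins for the paper's $f_{defect}$ and $f_{rephrase}$, purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Turn:
    utterance: str          # u_i
    defect: bool = False    # e_d, to be set via f_defect
    rephrase: bool = False  # e_r, to be set via f_rephrase

def tag_session(session, f_defect, f_rephrase):
    """Tag each turn: t_j is a rephrase if it rephrases some earlier
    turn t_i (i < j) in the same session."""
    for j, t_j in enumerate(session):
        t_j.defect = f_defect(t_j)
        t_j.rephrase = any(f_rephrase(t_i.utterance, t_j.utterance)
                           for t_i in session[:j])
    return session

def toy_defect(turn):  # stand-in: error-correcting language in the turn
    return turn.utterance.lower().startswith("no, ")

def toy_rephrase(a, b):  # stand-in: crude lexical overlap
    return len(set(a.split()) & set(b.split())) >= 3

session = [Turn("play old town road"),
           Turn("no, play the song old town road")]
tag_session(session, toy_defect, toy_rephrase)
```

Flattening the tagged sessions into turn-level records then yields $D_{live}$.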
+
+# 3.1 Defect Identification Model (DIM)
+
+We define DIM as $f_{dim} : \mathcal{T} \to \{0,1\}$ , which takes as input each turn $t_i \in \mathcal{D}_{live}$ and outputs whether $t_i$ is a target defect or not. It uses the same contextual
+
+
+Figure 2: Our framework for leveraging implicit user feedback to automatically curate supervision data for improving NLU, consisting of Defect Identification Model (DIM) and Defect Correction Model (DCM).
+
+
+Figure 3: The model architectures for Defect Identification Model (DIM) and Defect Correction Model (DCM). For DIM, the prediction is for target defect probability, and for DCM, it is for correction probability (i.e., whether the alternate domain and intent is a good alternate ground-truth label).
+
+features (and architecture) as the underlying individual NLU model we wish to improve and uses the results of $f_{\text{defect}}$ , or $e_d$ , as the ground-truth labels for training. This allows us to filter down the defects into those that can be targeted by the NLU model of interest (since the same features could predict the defects, suggesting enough disambiguation capacity). By tuning the probability threshold used for binary model prediction, we can further reduce noise in defects and focus on more high-confidence defects that are repeated and systematic failures impacting the general user population.
+
+Figure 3 shows an example DIM architecture for a cross-domain interpretation re-ranking model (more detail in Section 4.1). The model architecture consists of three main modules: embedding, aggregation, and classification. Given each feature $f_{j}$ extracted from $t_i$, the embedding module $H_{emb}$ converts $f_{j}$ into an embedding. For each sequential or categorical feature $f_{j}$, denoting $\mathbf{w}_{f_j,t_i}$ as the value of $f_{j}$ with $m$ tokens (where $m = 1$ for categorical), we generate $\mathbf{v}_{f_j,t_i} = H_{emb}(\mathbf{w}_{f_j,t_i}) \in \mathbb{R}^{m\times d_{f_j}}$ with each token converted into the $d_{f_j}$-dimensional
+
+# Algorithm 1 DIM threshold determination.
+
+procedure THRESSEARCH $(f_{dim},\mathcal{D}_{valid},\lambda ,\epsilon)$
+$\quad$ low, high $\leftarrow 0, 1$
+$\quad$ while $|\mathrm{low} - \mathrm{high}| > \epsilon$ do
+$\quad\quad$ $\tau \leftarrow (\mathrm{low} + \mathrm{high}) / 2$
+$\quad\quad$ $\mathcal{P}_{valid} \gets \{t_i \mid f_{dim}(t_i) > \tau, \forall t_i \in \mathcal{D}_{valid}\}$
+$\quad\quad$ $\alpha \gets$ PREDICTIONACCURACY $(\mathcal{P}_{valid})$
+$\quad\quad$ if $\alpha < \lambda$ then low $\leftarrow \tau$
+$\quad\quad$ else high $\leftarrow \tau$
+$\quad$ return $\tau$
+
+embedding. For each numerical feature, we have $\mathbf{v}_{f_j,t_i} = \mathbf{w}_{f_j,t_i}$ as each feature is already represented by numeric values. The aggregation module $H_{agg}$ then converts $\mathbf{v}_{f_j,t_i}$ of each feature $f_j$ to an aggregation vector $\mathbf{u}_{f_j,t_i}$ that summarizes the information of $\mathbf{v}_{f_j,t_i}$ . Based on the feature type, $H_{agg}$ applies different aggregation operations. For example, we apply a Bi-LSTM (Schuster and Paliwal, 1997) to the utterance text embeddings $\mathbf{v}_{f_1,t_i}$ to capture the word context information. Finally, the classification module $H_{cls}$ takes as input all aggregation vectors to make a prediction whether $t_i$ is a target defect or not. Specifically, we first concatenate all aggregation vectors to get a summarization vector $\mathbf{u}_{t_i} = \bigoplus_{f_j}\mathbf{u}_{f_j,t_i}$ . Then, a two-layer highway network (Srivastava et al., 2015) is applied to $\mathbf{u}_{t_i}$ to make a binary prediction. The model is trained using binary cross-entropy loss.
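The highway layers in the classification module can be sketched in NumPy; the dimensions and random weights below are illustrative, and only one of the two layers is shown:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def highway_layer(u, W_h, b_h, W_t, b_t):
    """One highway layer: y = g * relu(W_h @ u + b_h) + (1 - g) * u,
    with transform gate g = sigmoid(W_t @ u + b_t)."""
    h = np.maximum(0.0, W_h @ u + b_h)  # candidate transform
    g = sigmoid(W_t @ u + b_t)          # how much to transform vs. carry
    return g * h + (1.0 - g) * u

# Illustrative 8-dimensional summarization vector u_{t_i}, random weights.
rng = np.random.default_rng(0)
d = 8
u = rng.normal(size=d)
y = highway_layer(u, rng.normal(size=(d, d)), np.zeros(d),
                  rng.normal(size=(d, d)), np.zeros(d))
```

The gating is what distinguishes a highway layer from a plain dense layer: when the gate saturates toward zero, the layer passes its input through unchanged, which eases training of stacked layers.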
+
+When developing DIM, we split $\mathcal{D}_{\text{live}}$ into the training set $\mathcal{D}_{\text{train}}$ and the validation set $\mathcal{D}_{\text{valid}}$ with a ratio of 9:1. Once we have DIM trained with $\mathcal{D}_{\text{train}}$, we use $\mathcal{D}_{\text{valid}}$ to further tune the prediction probability threshold used to extract target defects from all defects tagged by $f_{\text{defect}}$. Specifically, for each turn $t_i \in \mathcal{D}_{\text{defect}}$, we pass it to $f_{\text{dim}}$ to get the confidence score $o_i = f_{\text{dim}}(t_i)$ of being a defect. Then, we generate the target defect set $\mathcal{D}_{\text{target}} = \{t_i \mid o_i > \tau\}$, i.e., we collect all turns whose defect prediction confidence is greater than a threshold $\tau$. In order to select the value for $\tau$, we perform a binary search on $\mathcal{D}_{\text{valid}}$ as shown in Algorithm 1, which takes as inputs two additional parameters $\lambda$ (to set the minimum prediction accuracy we want) and $\epsilon$ (the convergence tolerance of the search).
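Algorithm 1 translates directly into Python; the toy accuracy function below stands in for PREDICTIONACCURACY, and treating a turn as its own DIM score is purely for illustration:

```python
def thresh_search(f_dim, d_valid, prediction_accuracy, lam, eps=1e-3):
    """Binary search for the confidence threshold tau (Algorithm 1):
    raise tau while the kept set's accuracy is below lam, lower it
    otherwise, until the search interval shrinks below eps."""
    low, high = 0.0, 1.0
    tau = 0.5
    while abs(low - high) > eps:
        tau = (low + high) / 2
        kept = [t for t in d_valid if f_dim(t) > tau]
        alpha = prediction_accuracy(kept)
        if alpha < lam:
            low = tau    # too many false defects: tighten the threshold
        else:
            high = tau   # accurate enough: try keeping more turns
    return tau

# Toy instantiation: a "turn" is just its DIM score, and accuracy is the
# fraction of kept turns that are true defects.
truth = {0.2: 0, 0.6: 1, 0.9: 1}
def toy_accuracy(kept):
    return sum(truth[t] for t in kept) / len(kept) if kept else 1.0

tau = thresh_search(lambda t: t, list(truth), toy_accuracy, lam=0.9)
```

In this toy case the search converges to a threshold just around 0.2, the largest score of a non-defect, so only the two true defects remain above it.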
+
+# 3.2 Defect Correction Model (DCM)
+
+We define DCM as $f_{dcm} : \mathcal{T} \times \Pi \to \{0,1\}$, which takes as input a pair $(t_i, p_j)$ with $t_i \in \mathcal{D}_{live}$ and $p_j \in \Pi$ to make a prediction whether $p_j$ is a proper semantic interpretation for $t_i$. As the space of semantic interpretations $\Pi$ is too large, we can make the process more efficient by restricting the search for a better interpretation to the $k$-best predictions $P_{i}^{k}\subseteq \Pi$ (i.e., the $k$ interpretations with the highest prediction confidence) by the NLU model of interest. Note that it is not difficult to force more diversity into the $k$-best predictions by only allowing top predictions from each domain or intent. For training, we leverage rephrase information from the logged data to automatically assign a corrected semantic interpretation as the new ground-truth label for the defects, with the following assumption: given a pair of turns $t_i$ and $t_j$, if (a) the utterance of $t_j$ rephrases the utterance of $t_i$ in the same session and (b) $t_j$ is non-defective, then the semantic interpretation of $t_j$ is also the correct interpretation for $t_i$.
+
+Following the example DIM architecture for the cross-domain interpretation re-ranking model in Figure 3, the DCM architecture extends that of DIM, with the main difference that we can generate additional features based on the domain, intent and slot information from $p_j$. To obtain the training data, we first examine all turns in $\mathcal{D}_{live}$ to generate the high value set $\mathcal{D}_h \subseteq \mathcal{T} \times \mathcal{T}$. Each instance $(t_i, r_i) \in \mathcal{D}_h$ is a pair of turns satisfying (a) $t_i \in \mathcal{D}_{live}$ is a defect and (b) $r_i \in \mathcal{D}_{live}$ is a non-defective rephrase of $t_i$ in the same session (defects and rephrases are described in Section 2.3 and Section 3: Data Preparation). We then generate the training data $\mathcal{D}_{train}$ using the high value set $\mathcal{D}_h$. Specifically, for each pair $(t_i, r_i) \in \mathcal{D}_h$, we generate $k$ training instances as follows. First, we get the $k$-best interpretations $P_{r_i}^k$ of $r_i$. Then, we pair $t_i$ with each candidate $p_j \in P_{r_i}^k$ to get a list of tuples $(t_i, p_1), (t_i, p_2), \ldots, (t_i, p_k)$. Next, we expand each tuple $(t_i, p_j)$ by assigning a label $c$ indicating whether $p_j$ can be a proper interpretation for $t_i$. Denote $p^* \in P_{r_i}^k$ as the correct interpretation for $r_i$, assumed correct since it was executed without a defect (note that the top-1 interpretation is not necessarily the executed and correct one, although it is most of the time). We generate one positive instance $(t_i, p^*, c = 1)$ and $k - 1$ negative instances $\{(t_i, p_j, c = 0) \mid p_j \in P_{r_i}^k \wedge p_j \neq p^*\}$. Only using the $k$-best interpretations from $r_i$ to generate $\mathcal{D}_{train}$ may not be sufficient, as in practice the value $k$ is small and many interpretations observed in real traffic do not appear in the training data. To make the model generalize better, we augment the training data by injecting random noise.
+For each pair $(t_i, r_i) \in \mathcal{D}_h$, in addition to the $k - 1$ generated negative instances, we randomly draw
+
+| Domain | Total | W | L | T | O | Δ1 | Δ2(%) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| Overall* | 2,000 | 367 | 196 | 1,412 | 25 | 171 | 8.5 |
+| Knowledge* | 200 | 77 | 25 | 98 | 0 | 52 | 26.0 |
+| MyTasks* | 200 | 82 | 36 | 73 | 9 | 46 | 23.0 |
+| Multimedia-2* | 200 | 59 | 39 | 100 | 2 | 20 | 10.0 |
+| Help* | 200 | 42 | 22 | 134 | 2 | 20 | 10.0 |
+| Multimedia-3* | 200 | 34 | 19 | 146 | 1 | 15 | 7.5 |
+| ChitChat | 200 | 29 | 22 | 149 | 0 | 7 | 3.5 |
+| DeviceControl | 200 | 8 | 4 | 187 | 1 | 4 | 2.0 |
+| SmartHome | 200 | 14 | 10 | 172 | 4 | 4 | 2.0 |
+| Shopping | 200 | 9 | 7 | 183 | 1 | 2 | 1.0 |
+| Multimedia-1 | 200 | 13 | 12 | 170 | 5 | 1 | 0.5 |
+
+Table 1: Overall side-by-side win-loss evaluation results across 10 domains, comparing the top interpretation prediction between the baseline NLU and the updated NLU improved with our framework. "W," "L," "T" and "O" represent "Win," "Loss," "Tie" and "Others" respectively. A win means that the updated NLU produced a better top interpretation than the baseline (* denotes statistical significance at $p < .05$ ).
+
+$q$ interpretations $P_{noise}^{q} = \{p_{1}^{n}, p_{2}^{n}, \ldots, p_{q}^{n}\} \subseteq \Pi$ that are not in $P_{r_i}^k$, and we generate $q$ new negative instances $\{(t_i, p_j^n, c = 0) \mid p_j^n \in P_{noise}^q\}$. In short, DCM's role is to find the most promising alternate interpretation in $t_i$'s $k$-best interpretation list, given that $t_i$ is a defect.
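The full instance-generation recipe (one positive from the rephrase's executed interpretation, $k-1$ in-list negatives, and $q$ random noise negatives) can be sketched as follows; the interpretation strings and the list standing in for drawing from $\Pi$ are illustrative:

```python
import random

def make_dcm_instances(t_i, k_best_r, p_star, all_interps, q=2, seed=0):
    """Build (turn, interpretation, label) triples for one (t_i, r_i) pair:
    one positive for p_star, k-1 negatives from the rephrase's k-best
    list, and q noise negatives drawn from outside that list."""
    instances = [(t_i, p_star, 1)]
    instances += [(t_i, p, 0) for p in k_best_r if p != p_star]
    rng = random.Random(seed)
    pool = [p for p in all_interps if p not in k_best_r]
    for p_n in rng.sample(pool, min(q, len(pool))):
        instances.append((t_i, p_n, 0))
    return instances

inst = make_dcm_instances(
    "play old town road",
    k_best_r=["Music/PlaySong", "Knowledge/QA", "Video/PlayVideo"],
    p_star="Music/PlaySong",
    all_interps=["Music/PlaySong", "Knowledge/QA", "Video/PlayVideo",
                 "Shopping/Buy", "Help/GetHelp"],
    q=2)  # 1 positive + 2 in-list negatives + 2 noise negatives
```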
+
+New Supervision Data Curation: Once we have $f_{dcm}$ trained, the last step of the framework is to curate new supervision data by applying $f_{dcm}$ to each turn $t_i \in \mathcal{D}_{target}$ identified by $f_{dim}$ and automatically assigning a better semantic interpretation for correction. Specifically, we pair each turn $t_i \in \mathcal{D}_{target}$ with every interpretation candidate $p_j \in P_i^k$ as the input to $f_{dcm}$ . The interpretation with the highest score $p^* = \arg \max_{p_j \in P_i^k} f_{dcm}(t_i, p_j)$ is used as the corrected interpretation for $t_i$ .
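The curation step itself is a plain argmax over the defect's own $k$-best list; a sketch with a stand-in scorer in place of the trained $f_{dcm}$:

```python
def correct_defect(t_i, k_best, f_dcm):
    """Return the interpretation in t_i's k-best list with the highest
    f_dcm correction score; it becomes the new ground-truth label."""
    return max(k_best, key=lambda p_j: f_dcm(t_i, p_j))

# Stand-in scorer: a fixed score table in place of the trained model.
toy_scores = {"Knowledge/QA": 0.2, "Music/PlaySong": 0.9}
p_star = correct_defect("play old town road",
                        ["Knowledge/QA", "Music/PlaySong"],
                        lambda t, p: toy_scores[p])
```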
+
+# 4 Experiment Results and Discussion
+
+# 4.1 Experiment Methodology
+
+Dataset and Experiment Settings: Given a baseline NLU in production, $m_{base}$ , which produces a ranked list of interpretations with each interpretation comprising domain-intent-slots tuple, we inject a re-ranking subtask at the very last layer of the NLU workflow to build an improved NLU, $m_{new}$ . We call the subtask re-ranking because it takes in an already ranked list (i.e., the output of $m_{base}$ ) and makes a final adjustment. We leverage the new supervision data obtained through our framework to train the re-ranking model for improv
+
+(a)
+
+| Domain | ASR Error (DEF) | ASR Error (DIM) | NLU Error (DEF) | NLU Error (DIM) | Bad Response (DEF) | Bad Response (DIM) | Others (DEF) | Others (DIM) | NLU Error Δ |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Overall* | 29.8 | 29.1 | 14.3 | 39.0 | 24.8 | 11.3 | 31.1 | 20.6 | 24.7 |
+| SmartHome* | 29.0 | 22.0 | 0.0 | 34.0 | 47.0 | 13.0 | 24.0 | 31.0 | 34.0 |
+| DeviceControl* | 13.0 | 36.0 | 7.0 | 41.0 | 20.0 | 12.0 | 60.0 | 11.0 | 34.0 |
+| Multimedia-3* | 41.0 | 30.0 | 21.0 | 52.0 | 20.0 | 10.0 | 18.0 | 8.0 | 31.0 |
+| Knowledge* | 36.0 | 23.0 | 16.0 | 45.0 | 13.0 | 5.0 | 35.0 | 27.0 | 29.0 |
+| Multimedia-1* | 37.0 | 34.0 | 6.0 | 33.0 | 38.0 | 20.0 | 19.0 | 13.0 | 27.0 |
+| MyTasks* | 24.0 | 46.0 | 16.0 | 42.0 | 6.0 | 3.0 | 54.0 | 9.0 | 26.0 |
+| ChitChat* | 32.0 | 13.0 | 3.0 | 29.0 | 7.0 | 15.0 | 58.0 | 43.0 | 26.0 |
+| Multimedia-2* | 24.0 | 16.0 | 43.0 | 67.0 | 22.0 | 8.0 | 11.0 | 9.0 | 24.0 |
+| Shopping | 31.0 | 44.0 | 11.0 | 19.0 | 35.0 | 7.0 | 23.0 | 30.0 | 8.0 |
+| Help | 31.0 | 27.0 | 20.0 | 28.0 | 40.0 | 20.0 | 9.0 | 25.0 | 8.0 |
+
+(b)
+
+| Domain | Total | W | L | T | U | Δ1 | Δ2(%) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| Overall* | 1,000 | 399 | 77 | 365 | 159 | 322 | 32.2 |
+| Multimedia-2* | 100 | 82 | 3 | 12 | 3 | 79 | 79.0 |
+| Multimedia-3* | 100 | 61 | 0 | 31 | 8 | 61 | 61.0 |
+| Knowledge* | 100 | 56 | 7 | 23 | 14 | 49 | 49.0 |
+| Multimedia-1* | 100 | 40 | 3 | 48 | 9 | 37 | 37.0 |
+| MyTasks* | 100 | 41 | 9 | 26 | 24 | 32 | 32.0 |
+| ChitChat* | 100 | 34 | 12 | 46 | 8 | 22 | 22.0 |
+| Help* | 100 | 41 | 20 | 25 | 14 | 21 | 21.0 |
+| SmartHome* | 100 | 25 | 7 | 56 | 12 | 18 | 18.0 |
+| Shopping* | 100 | 10 | 2 | 53 | 35 | 8 | 8.0 |
+| DeviceControl | 100 | 9 | 14 | 45 | 32 | -5 | -5.0 |
+
+Table 2: (a) The analysis of DIM through error attribution annotations between the defects in the production traffic vs. the target defects identified by DIM. Numbers are percentages. (b) The analysis of DCM through win-loss annotations between the top interpretation produced by the baseline NLU and the new interpretation label assigned by DCM. Statistical significance at $p < .05$ is noted with *, specifically on the NLU errors in (a).
+
+ing the overall NLU performance. Figure 4 shows the model architecture of the re-ranker, which is a simple extension of the DIM architecture; it learns from the new supervision data when to top-rank a better interpretation that is not already at the top of the list (trained with sigmoid activation functions at the output layer and binary cross-entropy loss). We note here that the specific model architecture is less important than the new supervision data obtained through our framework, which is the key to the NLU improvements. This experimental setup is appealing in that it is straightforward and simple, especially in a production setting. First, NLU consists of many domain-specific models spread across multiple teams, making it difficult to coordinate leveraging the new supervision data for improvement across multiple domains. Second, working with the final re-ranking model allows us to improve NLU performance domain-agnostically without needing to know the implementation details of each domain. Third, it is easier to control the influence of the new supervision data since we need to manage only one re-ranking component.
+
+Given sampled and de-identified production traffic data from one time period $\mathcal{D}_{period1}$ , which have been analyzed by $f_{defect}$ and $f_{rephrase}^1$ , we first train DIM according to Section 3.1, with over 100MM training instances from $\mathcal{D}_{period1}$ and over 10MM defects identified by $f_{defect}$ . Then, we extract over 8MM high-value rephrase pairs (a defective turn and non-defective rephrase in the same session) from $\mathcal{D}_{period1}$ to train DCM according to Section 3.2. To train the re-ranker, we randomly sample over 10MM instances $\mathcal{D}_s \subseteq \mathcal{D}_{period1}$ and over 1MM defects identified by $f_{defect}$ . We apply
+
+
+Figure 4: The model architecture for the re-ranker, which is a subtask we put at the last layer of the NLU to produce a better ranked list of interpretations.
+
+the trained DIM to the sampled defects $\mathcal{F}_{def}$, filtering them down from over 1MM defects to over 300K target defects $\mathcal{F}_{dim}$ that the NLU re-ranker has sufficient features to target and produce different results for. Then, all target defects $\mathcal{F}_{dim}$ are assigned a new ground-truth interpretation label by the trained DCM (note that not all defects have corresponding non-defect rephrases, hence the value of DCM for finding the most promising alternate interpretation from the ranked list), which serves as the newly curated supervision for building $m_{new}$, while the rest of the non-defective instances keep the top-ranked interpretation as the ground-truth label. In other words, most of the instances in $\mathcal{D}_s$ are used to replicate the $m_{base}$ results (a pass-through where the input ranked list is output without any change), except for the over 300K (over $3\%$ of the total training data) that are used to revise the ranking and put a better interpretation at the top.
+
+Overall Side-by-Side Evaluation: The overall performance of $m_{base}$ and $m_{new}$ was compared on another sample of production traffic from a non-overlapping time period $\mathcal{D}_{period2}$ in a shadow evaluation setting, in which the traffic flowing through $m_{base}$ was duplicated and simultaneously sent to $m_{new}$, deployed in the same production setting as $m_{base}$ but without end-user impact. Both $m_{base}$ and $m_{new}$ produced the same ranked list of interpretations over $99\%$ of the time. Note that this is by design, since incremental improvements that do not drastically change system behavior are preferred in production, and our approach can be applied continuously, week over week (changing the proportion of the new supervision data will affect the replication rate). Furthermore, even a $1\%$ change in overall system behavior has a huge impact at the scale of tens of millions of requests per week in a large-scale production system. We performed win-loss annotations on the deltas (when $m_{base}$ and $m_{new}$ produced different results) with in-house expert annotators who follow an established NLU annotation guideline to judge, side by side, whether $m_{new}$ produced a better interpretation (i.e., a win) at the top compared to $m_{base}$ (N = 12, agreement = 80.3%, Cohen's kappa = 0.60, indicating moderate agreement; note that the annotators are trained to reach an agreement level that is practical given the high complexity of the NLU ontology). We randomly sampled 200 such requests per domain that produced different results$^2$.
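Inter-annotator agreement figures like the ones above (agreement = 80.3%, kappa = 0.60) can be computed with a short helper. This is a generic sketch of Cohen's kappa for two annotators, not the paper's tooling:

```python
def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators labeling the same items."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators label identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement: product of the two annotators' marginal label rates.
    cats = set(labels_a) | set(labels_b)
    p_e = sum((labels_a.count(c) / n) * (labels_b.count(c) / n) for c in cats)
    return (p_o - p_e) / (1 - p_e)
```

With more than two annotators (N = 12 here), kappa is typically reported as an average over annotator pairs or replaced by Fleiss' kappa.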
+
+DIM Analysis: We randomly sampled 100 defects per domain from $\mathcal{F}_{\text{def}}$ and $\mathcal{F}_{\text{dim}}$ respectively and performed error attribution annotations (i.e., ASR Error for mis-transcribing "play old town road" as "put hotel road"; NLU Error for mis-interpreting "how do I find a good Italian restaurant around here" as the Question Answering intent instead of the Find Restaurant intent; Bad Response for having a correct interpretation that still failed to deliver a satisfactory response or action; and Others for those the annotators could not determine due to lack of context or additional information; N = 12, agreement = 71.3%, Cohen's kappa = 0.63, indicating substantial agreement).
+
+DCM Analysis: We performed the same win-loss annotations as described in the overall shadow evaluation on 100 random samples per domain, specifically on the curated supervision data $\mathcal{F}_{dim}$ with the new ground truth assigned by DCM.
+
+Training Setup: All the models were implemented in PyTorch (Paszke et al., 2019) and trained and evaluated on AWS p3.8xlarge instances with Intel Xeon E5-2686 CPUs, 244GB memory, and 4 NVIDIA Tesla V100 GPUs. We used Adam (Kingma and Ba, 2014) for optimization, and all the models were trained for 10 epochs with a batch size of 4096. All three models have around 12MM trainable parameters and took around 5 hours to train.
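The training setup can be sketched as a standard PyTorch loop. The model, data, and dimensions below are hypothetical placeholders; only the optimizer (Adam), epoch count (10), and batch size (4096) follow the stated setup:

```python
import torch

# Placeholder model/data; only the optimizer, epochs, and batch size
# follow the training setup described in the text.
model = torch.nn.Linear(256, 2)
optimizer = torch.optim.Adam(model.parameters())
loss_fn = torch.nn.CrossEntropyLoss()

data = torch.randn(8192, 256)
labels = torch.randint(0, 2, (8192,))
loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(data, labels), batch_size=4096)

for epoch in range(10):          # trained for 10 epochs
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
```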
+
+# 4.2 Results and Discussions
+
+Overall Side-by-Side Evaluation: Table 1 shows the overall shadow evaluation results, comparing $m_{base}$ and $m_{new}$ at the NLU level. The column Total shows the number of requests annotated per domain. The columns Win, Loss, and Tie show the number of requests where $m_{new}$ produced better, worse, and comparable NLU interpretations relative to $m_{base}$, respectively. The column Others shows the number of requests where the annotators could not make a decision due to lack of context. The column $\Delta_1$ shows the difference between the number of win and loss cases, and $\Delta_2$ shows the relative improvement (i.e., $\Delta_1 / Total$ in percentage). First, we note that $m_{new}$ overall produced a better NLU interpretation in 367 cases while losing in 196, resulting in 171 absolute gains or an $8.5\%$ relative improvement over $m_{base}$. This indicates that applying our framework can bring a net overall improvement to an existing NLU. Second, the per-domain results show that $m_{new}$ outperforms $m_{base}$ (7.5-26.0% relative improvement) on 5 domains, while making marginal improvements (0.5-3.5%) on the other 5 domains.
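The arithmetic behind the headline numbers can be checked directly. The total of 2,000 annotated requests is our assumption, derived from "200 such requests per domain" across the 10 domains:

```python
# Headline numbers from the side-by-side evaluation (values from the text).
wins, losses = 367, 196
total = 200 * 10              # assumed: 200 annotated requests x 10 domains

delta1 = wins - losses        # absolute gain: 171
delta2 = 100 * delta1 / total # relative improvement, ~8.5% as reported
```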
+
+Analysis on DIM: Table 2.(a) summarizes the results of error attribution annotations between the defects in the production traffic (denoted as $DEF$ ) and target defects identified by DIM (denoted as $DIM$ ). The results show that the target defects identified by DIM help us focus more on the defects that are caused by ASR or NLU (the ones that can be targeted and potentially fixed, specifically NLU Error which is at $39.0\%$ of total for $DIM$ compared to $14.3\%$ for $DEF$ ) and filter out others (Bad Responses and Others). Per-domain results show that the target defects identified by DIM consistently have a higher NLU error ratio than that of original defects for all domains.
+
+Analysis on DCM: Table 2.(b) summarizes the win-loss annotation results on the new supervision data that take target defects from DIM and assign new interpretation labels for correction with DCM. The results show that overall DCM correctly assigns a better, corrected NLU interpretation in 399 cases and fails in 77 cases, resulting in 322 absolute gains or a $32.2\%$ relative improvement. Per-domain results show that DCM consistently assigns a comparable or better interpretation on the target defects by a large margin on almost all domains ($8.0\%$ to $79.0\%$ relative improvements on 9 domains).
+
+| Example | Dialog | Interpretation by \(m_{base}\) | Interpretation by \(m_{new}\) |
+| --- | --- | --- | --- |
+| Win_1 | \(U_1\): Show me d. c. universe. \(A_1\): I don't see a title named d. c. universe. \(U_2\): I want to watch d. c. universe. \(A_2\): Here is what I found ... | Domain: Multimedia-3; Intent: Play; Slots: TitleName → d. c. universe | Domain: Multimedia-2; Intent: Play; Slots: TitleName → d. c. universe |
+| Win_2 | \(U_1\): What's the weather at Mission Beach today? \(A_1\): There aren't any missions near Long Beach, California. \(U_2\): What's the weather in San Diego? \(A_2\): Currently, 69 degrees with clear skies and sun. | Domain: Knowledge; Intent: QA; Slots: Question | Domain: Weather; Intent: WeatherDetails; Slots: Location → San Diego, CA |
+| Loss_1 | \(U_1\): Play Wings of Fire. \(A_1\): I don't have a free content for that. \(U_2\): Play Me. \(A_2\): Me, by Taylor Swift. | Domain: Multimedia-3; Intent: Play; Slots: TitleName → Wings of Fire | Domain: Multimedia-1; Intent: Play; Slots: TitleName → Wings of Fire |
+| Loss_2 | \(U_1\): Is the Collard Festival going to happen today? \(A_1\): Sorry, I'm not sure. | Domain: Knowledge; Intent: QA; Slots: Question | Domain: Calendar; Intent: CheckCalendar; Slots: Event → Collard Festival |
+| Tie | \(U_1\): Can you play the Baltimore Ravens? \(A_1\): Here is what I found ... | Domain: Knowledge; Intent: QA; Slots: Question | Domain: Multimedia-1; Intent: Play; Slots: TitleName → Baltimore Ravens |
+
+Table 3: Qualitative analysis comparing $m_{base}$ and $m_{new}$ in the overall side-by-side evaluation. For each example, the user request in bold is the turn for which the evaluation was performed. We show the subsequent interaction dialog for context ($U_*$ for user requests, $A_*$ for system answers). The first two examples are "wins" (i.e., $m_{new}$ better than $m_{base}$), followed by two "losses" (i.e., $m_{new}$ worse than $m_{base}$), and a "tie" (i.e., $m_{new}$ comparable to $m_{base}$).
+
+# 4.3 Qualitative Analysis
+
+The first two examples in Table 3 are wins where $m_{new}$ produced a better top interpretation than $m_{base}$ . In Win 1, $m_{base}$ produced an interpretation related to playing a title for a specific type of multimedia, while the user wanted to play the corresponding title in another multimedia type (e.g., music, video, or audio book). The updated NLU model $m_{new}$ produced the correct interpretation, most likely having learned to favor a multimedia type depending on the context, such as device status (e.g., music or video currently playing or screen is on). Similarly in Win 2, $m_{base}$ mis-interpreted the request as a general question due to not understanding the location "Mission Beach," which is corrected by $m_{new}$ .
+
+The next two examples are losses where $m_{new}$ top-ranked incorrect interpretations and thus produced worse results than $m_{base}$. In Loss 1, the user is in the middle of trying out a free-content experience for a specific multimedia type, and we suspect $m_{new}$ produced the incorrect interpretation because live traffic contains similar requests to "Play Wings of Fire" for another multimedia type, such that the model learns to aggressively top-rank the interpretations associated with the more dominant multimedia type. In Loss 2, the request is a general event query for the area, and although the Q&A still failed to answer it correctly, it was determined that failing in the Calendar domain would be worse.
+
+The last example is a "tie" where $m_{\text{new}}$ and $m_{\text{base}}$ both produced incorrect top interpretations that are equally bad in terms of user experience. Specifically, $m_{\text{base}}$ mis-interpreted the request as a Q&A, while $m_{\text{new}}$ mis-interpreted the meaning of "play" as playing multimedia instead of sports. As in Loss 1, we suspect that many live utterances with the word "play" tend to be multimedia-related, which biases DCM towards selecting multimedia-related interpretations.
+
+From the qualitative analysis, especially the losses, we observe that we could make our framework and the new supervision data more precise by considering interaction-history context spanning a longer period of time when training DCM, and by using more signals such as personalization or subscription signals (for multimedia content types such as music or audio books). Furthermore, for truly ambiguous requests, instead of aggressively trying to correct through a new interpretation, we could offer a better experience by asking a clarifying question.
+
+# 5 Conclusion
+
+We proposed a domain-agnostic and scalable framework for leveraging implicit user feedback, particularly user dissatisfaction and rephrase behavior, to automatically curate new supervision data to continuously improve NLU in a large-scale conversational AI system. We showed how the framework can be applied to improve NLU and analyzed its performance across 10 popular domains on a real production system, with component-level and qualitative analysis of our framework for more in-depth validation of its performance.
+
+# Acknowledgments
+
+We thank Sergei Dobroshinsky, Nathan Eversole, Alex Go, Kerry Hammil, Archit Jain, Shubham Katiyar, Siddharth Mohan Misra, Joe Pemberton, and Steve Saunders for their active involvement and support for this work in the industry production system.
+
+# References
+
+Ian Beaver and Abdullah Mueen. 2020. Automated conversation review to surface virtual assistant misunderstandings: Reducing cost and increasing privacy. In AAAI Conference on Artificial Intelligence.
+Keping Bi, Choon Hui Teo, Yesh Dattatreya, et al. 2019. Leverage implicit feedback for context-aware product search. arXiv preprint arXiv:1909.02065.
+Praveen Kumar Bodigutla, Aditya Tiwari, Spyros Matsoukas, et al. 2020. Joint turn and dialogue level user satisfaction estimation on multi-domain conversations. In Conference on Empirical Methods in Natural Language Processing.
+Jose Camacho-Collados and Mohammad Taher Pilehvar. 2018. From word to sense embeddings: A survey on vector representations of meaning. Journal of Artificial Intelligence Research, 63:743-788.
+Ali El-Kahky, Xiaohu Liu, Ruhi Sarikaya, et al. 2014. Extending domain coverage of language understanding systems via intent transfer between domains using knowledge graphs and search query click logs. In International Conference on Acoustics, Speech and Signal Processing.
+
+Jianfeng Gao, Michel Galley, and Lihong Li. 2018. Neural approaches to conversational AI. In International ACM SIGIR Conference on Research and Development in Information Retrieval.
+Dilek Hakkani-Tur, Gokhan Tur, Asli Celikyilmaz, et al. 2016. Multi-domain joint semantic frame parsing using bi-directional RNN-LSTM. In Annual Conference of the International Speech Communication Association.
+Ruining He and Julian McAuley. 2016. VBPR: visual bayesian personalized ranking from implicit feedback. In AAAI Conference on Artificial Intelligence.
+Xiangnan He, Hanwang Zhang, Min-Yen Kan, et al. 2016. Fast matrix factorization for online recommendation with implicit feedback. In International ACM SIGIR Conference on Research and Development in Information Retrieval.
+Baotian Hu, Zhengdong Lu, Hang Li, et al. 2014. Convolutional neural network architectures for matching natural language sentences. In Advances in Neural Information Processing Systems.
+Yifan Hu, Yehuda Koren, and Chris Volinsky. 2008. Collaborative filtering for implicit feedback datasets. In International Conference on Data Mining.
+Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, et al. 2017. Learning a neural semantic parser from user feedback. arXiv preprint arXiv:1704.08760.
+Jiepu Jiang, Ahmed Hassan Awadallah, Rosie Jones, et al. 2015. Automatic online evaluation of intelligent assistants. In International Conference on World Wide Web.
+Thorsten Joachims. 2002. Optimizing search engines using clickthrough data. In ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
+Veton Kepuska and Gamal Bohouta. 2018. Next-generation of virtual personal assistants (Microsoft Cortana, Apple Siri, Amazon Alexa and Google Home). In Annual Computing and Communication Workshop and Conference.
+Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
+Julia Kiseleva, Kyle Williams, Jiepu Jiang, et al. 2016. Understanding user satisfaction with intelligent assistants. In ACM on Conference on Human Information Interaction and Retrieval.
+Thomas Landauer, Peter Foltz, and Darrell Laham. 1998. An introduction to latent semantic analysis. Discourse Processes, 25:259-284.
+Han Li, Sunghyun Park, Aswarth Dara, et al. 2021. Neural model robustness for skill routing in large-scale conversational AI systems: A design choice exploration. arXiv preprint arXiv:2103.03373.
+
+Bing Liu and Ian Lane. 2016. Attention-based recurrent neural network models for joint intent detection and slot filling. arXiv preprint arXiv:1609.01454.
+Jiahui Liu, Peter Dolan, and Elin Rønby Pedersen. 2010. Personalized news recommendation based on click behavior. In International Conference on Intelligent User Interfaces.
+Babak Loni, Martha Larson, and Alan Hanjalic. 2018. Factorization machines for data with implicit feedback. arXiv preprint arXiv:1812.08254.
+Christopher Manning and Hinrich Schutze. 1999. Foundations of Statistical Natural Language Processing. MIT press.
+Rada Mihalcea, Courtney Corley, Carlo Strapparava, et al. 2006. Corpus-based and knowledge-based measures of text semantic similarity. In AAAI Conference on Artificial Intelligence.
+Jonas Mueller and Aditya Thyagarajan. 2016. Siamese recurrent architectures for learning sentence similarity. In AAAI Conference on Artificial Intelligence.
+Deepak Muralidharan, Justine Kao, Xiao Yang, et al. 2019. Leveraging user engagement signals for entity labeling in a virtual assistant. arXiv preprint arXiv:1909.09143.
+Adam Paszke, Sam Gross, Francisco Massa, et al. 2019. PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems.
+Pavel Petrushkov, Shahram Khadivi, and Evgeny Matusov. 2018. Learning from chunk-based feedback in neural machine translation. arXiv preprint arXiv:1806.07169.
+Pragaash Ponnusamy, Alireza Roshan Ghias, Chenlei Guo, et al. 2019. Feedback-based self-learning in large-scale conversational AI agents. arXiv preprint arXiv:1911.02557.
+Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Conference on Empirical Methods in Natural Language Processing.
+Steffen Rendle, Christoph Freudenthaler, Zeno Gantner, et al. 2012. BPR: Bayesian personalized ranking from implicit feedback. arXiv preprint arXiv:1205.2618.
+Ruhi Sarikaya. 2017. The technology behind personal digital assistants: An overview of the system architecture and key components. IEEE Signal Processing Magazine, 34:67-81.
+Mike Schuster and Kuldip K Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673-2681.
+
+Xuehua Shen, Bin Tan, and ChengXiang Zhai. 2005. Context-sensitive information retrieval using implicit feedback. In International ACM SIGIR Conference on Research and Development in Information Retrieval.
+Ivan Srba and Maria Bielikova. 2016. A comprehensive survey and classification of approaches for community question answering. Transactions on the Web, 10:1-63.
+Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. 2015. Highway networks. arXiv preprint arXiv:1505.00387.
+Chengwei Su, Rahul Gupta, Shankar Ananthakrishnan, et al. 2018. A re-ranker scheme for integrating large scale NLU models. In Spoken Language Technology Workshop.
+Kazunari Sugiyama, Kenji Hatano, and Masatoshi Yoshikawa. 2004. Adaptive web search based on user profile constructed without any effort from users. In International Conference on World Wide Web.
+Stefan Ultes and Wolfgang Minker. 2014. Interaction quality estimation in spoken dialogue systems using hybrid-HMMs. In Annual Meeting of the Special Interest Group on Discourse and Dialogue.
+Cheng Wang, Sun Kim, Taiwoo Park, et al. 2021a. Handling long-tail queries with slice-aware conversational systems. arXiv preprint arXiv:2104.13216.
+Cheng Wang, Sungjin Lee, Sunghyun Park, et al. 2021b. Learning slice-aware representations with mixture of attentions. arXiv preprint arXiv:2106.02363.
+Haoyu Wang, Nan Shao, and Defu Lian. 2019. Adversarial binary collaborative filtering for implicit feedback. In AAAI Conference on Artificial Intelligence.
+Wei-Nan Zhang, Lingzhi Li, Dongyan Cao, et al. 2018. Exploring implicit feedback for open domain conversation generation. In AAAI Conference on Artificial Intelligence.
\ No newline at end of file
diff --git a/ascalableframeworkforlearningfromimplicituserfeedbacktoimprovenaturallanguageunderstandinginlargescaleconversationalaisystems/images.zip b/ascalableframeworkforlearningfromimplicituserfeedbacktoimprovenaturallanguageunderstandinginlargescaleconversationalaisystems/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..aefac06670384b9b4a804fe25c0486381b580700
--- /dev/null
+++ b/ascalableframeworkforlearningfromimplicituserfeedbacktoimprovenaturallanguageunderstandinginlargescaleconversationalaisystems/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:106969621eee2f7d1e30e68fe8502890ac8e2ae075db1aa855cfbf524962577d
+size 336469
diff --git a/ascalableframeworkforlearningfromimplicituserfeedbacktoimprovenaturallanguageunderstandinginlargescaleconversationalaisystems/layout.json b/ascalableframeworkforlearningfromimplicituserfeedbacktoimprovenaturallanguageunderstandinginlargescaleconversationalaisystems/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..b74bd2a52b4149df9878b6347c56ba2bcb271a85
--- /dev/null
+++ b/ascalableframeworkforlearningfromimplicituserfeedbacktoimprovenaturallanguageunderstandinginlargescaleconversationalaisystems/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9aebd506a3306eebf1c73f3fccca60eb90ac4b2bde0002dd14b78fe3ce8ed691
+size 456014
diff --git a/asecureandefficientfederatedlearningframeworkfornlp/4e2c98e9-8d23-48f1-9cc6-d8e982eb45e8_content_list.json b/asecureandefficientfederatedlearningframeworkfornlp/4e2c98e9-8d23-48f1-9cc6-d8e982eb45e8_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..3683d9acd4cb6f8c3b2e2588faef17c9cbd4752d
--- /dev/null
+++ b/asecureandefficientfederatedlearningframeworkfornlp/4e2c98e9-8d23-48f1-9cc6-d8e982eb45e8_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:11b8a19d6cb0fefca173aa3a8325f1fbd123b81dce38c802584e8f9b63053703
+size 46215
diff --git a/asecureandefficientfederatedlearningframeworkfornlp/4e2c98e9-8d23-48f1-9cc6-d8e982eb45e8_model.json b/asecureandefficientfederatedlearningframeworkfornlp/4e2c98e9-8d23-48f1-9cc6-d8e982eb45e8_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..d493ed471e3afe3ad2a2c221d1f1ba86933d4825
--- /dev/null
+++ b/asecureandefficientfederatedlearningframeworkfornlp/4e2c98e9-8d23-48f1-9cc6-d8e982eb45e8_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d96aa1757db89ccaa679c96588c300d9dfb129efaa3f7d010466820d653c52e4
+size 57378
diff --git a/asecureandefficientfederatedlearningframeworkfornlp/4e2c98e9-8d23-48f1-9cc6-d8e982eb45e8_origin.pdf b/asecureandefficientfederatedlearningframeworkfornlp/4e2c98e9-8d23-48f1-9cc6-d8e982eb45e8_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..a203be08909d36590e20a5df124ed0eab03f5acd
--- /dev/null
+++ b/asecureandefficientfederatedlearningframeworkfornlp/4e2c98e9-8d23-48f1-9cc6-d8e982eb45e8_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:87665d4ebd0586737bfda7b7fde7732a6aafdc2bae390bb2b3ba254bd27a6b2b
+size 8325670
diff --git a/asecureandefficientfederatedlearningframeworkfornlp/full.md b/asecureandefficientfederatedlearningframeworkfornlp/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..7bd531d54dac03e6086ad3649ccc6d66582b4cf8
--- /dev/null
+++ b/asecureandefficientfederatedlearningframeworkfornlp/full.md
@@ -0,0 +1,189 @@
+# A Secure and Efficient Federated Learning Framework for NLP
+
+Jieren Deng $^{1\S}$ , Chenghong Wang $^{2\S}$ , Xianrui Meng $^{3}$ , Yijue Wang $^{1}$ , Ji Li $^{4}$ , Sheng Lin $^{5}$ , Shuo Han $^{6}$ , Fei Miao $^{1}$ , Sanguthevar Rajasekaran $^{1}$ , Caiwen Ding $^{1}$
+
+1University of Connecticut, 2Duke University, 3Facebook, 4Microsoft
+
+5 Northeastern University, 6 University of Illinois at Chicago
+
+{jieren.deng,yijue.wang,fei.miao,sanguthevar.rajasekaran,caiwen.ding}@uconn.edu
+
+chenghong.wang552@duke.edu,{xianruimeng,changzhouliji}@gmail.com
+
+lin.sheng@northeastern.edu, hanshuo@uic.edu
+
+# Abstract
+
+In this work, we consider the problem of designing secure and efficient federated learning (FL) frameworks. Existing solutions either involve a trusted aggregator or require heavyweight cryptographic primitives, which degrade performance significantly. Moreover, many existing secure FL designs work only under the restrictive assumption that no client can drop out of the training protocol. To tackle these problems, we propose SEFL, a secure and efficient FL framework that (1) eliminates the need for trusted entities; (2) achieves similar and even better model accuracy compared with existing FL designs; and (3) is resilient to client dropouts. Through extensive experimental studies on natural language processing (NLP) tasks, we demonstrate that SEFL achieves comparable accuracy to existing FL solutions, and that the proposed pruning technique can improve runtime performance up to $13.7\times$.
+
+# 1 Introduction
+
+Deep Neural Networks have played a significant role in advancing many applications (Yuan et al., 2021; Ding et al., 2017). The field of Natural Language Processing (NLP) leverages Recurrent Neural Networks (RNNs) and Transformers to achieve outstanding performance on many tasks. The Transformer was first introduced in (Vaswani et al., 2017) using a self-attention mechanism, and it achieved prominent performance on various NLP tasks. The benefits of RNNs and Transformers in NLP are well-publicized, but various privacy and security problems still pose challenges to the utilization of these models by data owners, especially users with sensitive data such as location, health, and financial datasets. Federated Learning (FL) (McMahan et al., 2017a) empowers different data owners (e.g., organizations or edge devices) to collaboratively train a model without sharing their own data, thus allowing them to address key issues like data privacy. Although the data exchanged in FL contains less information about the user's raw data (Bonawitz et al., 2019), one might still be concerned about how much information remains. Recent research has shown that attackers can still infer sensitive information about the training data, or even reconstruct it solely from publicly shared model parameters (Zhu et al., 2019).
+
+Although a series of works (Bonawitz et al., 2017; Truex et al., 2019; Papernot et al., 2018; Wu et al., 2021; Lin et al., 2020) have been proposed to protect FL protocols from leaking sensitive information (Wang et al., 2021; Deng et al., 2021; Wang et al., 2020), they either involve a trusted third party (a centralized aggregator) or do not tolerate client dropouts. Therefore, the data owners either need to blindly trust the centralized aggregator or must remain online during the entire training period, which makes such designs less practical. To address the aforementioned issues, in this work, we develop a secure and efficient FL framework, SEFL. It employs two non-colluding servers, i.e., an Aggregation Server (AS) and a Cryptography Service Provider (CSP). AS collects the encrypted local updates from clients and securely aggregates them, while CSP manages the cryptographic primitives, i.e., the decryption key. The overarching goal of this framework is to support accurate and efficient RNN and Transformer training while preserving the privacy of the training data against the untrusted servers. In other words, any server's knowledge about any single training example should be bounded by differential privacy (Dwork, 2008).
+
+Figure 1: SEFL workflow
+
+Our contributions are summarized as follows: (1) We present a novel secure FL framework that eliminates the need for trusted aggregators. (2) SEFL is more resilient to clients dropping out than previous works: it is able to produce a correct global model even when $75\%$ of clients drop out of the training protocol. (3) To improve training performance, we integrate a Hankel-matrix-based local update/weight pruning method with SEFL to simultaneously reduce the volume of local updates and weight storage. The reduction in space, computational, and communication complexity is significant, from $\mathrm{O}(l^2)$ to $\mathrm{O}(2l-1)$ for weight/update representation, where $l$ is the block size. With extensive experiments, we show that SEFL achieves comparable or even better accuracy than existing secure FL solutions over complex RNN and Transformer models, and that the proposed pruning scheme improves SEFL's performance by up to $13.7\times$.
+
+# 2 Background
+
+Differential privacy. Let $\epsilon, \delta > 0$ be privacy parameters. A randomized mechanism $\mathcal{M}$ satisfies $(\epsilon, \delta)$-differential privacy ($(\epsilon, \delta)$-DP) if and only if, for any two adjacent datasets $D$ and $D'$ (differing by the addition or removal of one record) and any possible set of outputs $S$, the following holds:
+
+$$
+\Pr\left[\mathcal{M}(D) \in S\right] \leq e^{\epsilon} \Pr\left[\mathcal{M}(D') \in S\right] + \delta
+$$
+
+The Gaussian Mechanism (GM) (Dwork et al., 2014) achieves differential privacy by approximating a deterministic real-valued function $f$ with an additive noise that is proportional to the function's sensitivity $S_{f}$ , where $S_{f} = \max_{D,D^{\prime}}|f(D) - f(D^{\prime})|$ . A GM is written as $\mathcal{M}(D) = f(D) + \mathcal{N}(0,\sigma^{2}S_{f}^{2})$ , where $\mathcal{N}$ denotes a normal distribution, and $\sigma$ is the noise scale.
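A minimal sketch of the Gaussian Mechanism, with the noise scaled by $\sigma S_f$ as in the definition above (the counting-query example is ours):

```python
import random

def gaussian_mechanism(f_value, sensitivity, sigma):
    """Release f(D) with Gaussian noise of scale sigma * sensitivity."""
    return f_value + random.gauss(0.0, sigma * sensitivity)

# Example: a counting query has sensitivity 1, since adding or removing
# one record changes the count by at most 1.
noisy_count = gaussian_mechanism(f_value=42, sensitivity=1.0, sigma=2.0)
```

Note that calibrating $\sigma$ to a target $(\epsilon, \delta)$ follows the analysis in Dwork et al. (2014); the sketch only shows the noise-addition step.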
+
+Additively homomorphic encryption (AHE). AHE is a semantically secure public-key encryption scheme (Peter et al., 2012) with three algorithms Gen, Enc, and Dec, where Gen generates the public and secret key pair $(\mathsf{pk},\mathsf{sk})$, Enc encrypts a message with pk, and Dec decrypts a ciphertext with the secret key sk. In addition, AHE provides a homomorphic addition operator $\oplus$, such that $\mathsf{Dec}(\mathsf{Enc}(m_1, \mathsf{pk}) \oplus \mathsf{Enc}(m_2, \mathsf{pk}) \oplus \cdots \oplus \mathsf{Enc}(m_k, \mathsf{pk}), \mathsf{sk}) = m_1 + \cdots + m_k$.
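The additive homomorphism can be demonstrated with a toy Paillier instantiation (a standard AHE scheme; the paper does not state which AHE it uses, and the tiny primes here are purely illustrative, far too small for real security):

```python
import math
import random

# Toy Paillier cryptosystem to illustrate the additive homomorphism.
p, q = 17, 19
n = p * q                     # public modulus
n2 = n * n
g = n + 1                     # standard choice of generator
lam = math.lcm(p - 1, q - 1)  # private key component lambda

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # private key component mu

def enc(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return (L(pow(c, lam, n2)) * mu) % n

def hom_add(c1, c2):
    # The homomorphic operator "+": multiply ciphertexts modulo n^2.
    return (c1 * c2) % n2
```

Here `dec(hom_add(enc(m1), enc(m2)))` recovers `m1 + m2` (mod n), which is exactly the $\oplus$ property used for update aggregation.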
+
+Two party secure computation (2PC). 2PC allows two parties with private inputs $x_{1}$ and $x_{2}$ to jointly compute a given function $f$ . Both parties learn nothing beyond the output of $f$ . A typical 2PC design is the garbled circuit (GC) (Yao, 1986).
+
+# 3 SEFL Explained
+
+# 3.1 Workflow
+
+We design the SEFL framework based on a two non-colluding (untrusted) server setting, where an aggregation server (AS) aggregates the encrypted local model updates and another server (CSP) manages the cryptographic primitives (i.e., the decryption key). To ensure privacy, we require that any server's knowledge about any single training example is bounded by differential privacy. Figure 1 illustrates an overview of SEFL.
+
+Initially, CSP generates the key pair $(\mathsf{pk},\mathsf{sk})$, stores the secret key sk locally, and broadcasts the public key pk to all other entities (AS and all clients). In our design, CSP is tasked with managing the cryptographic primitives (i.e., the sk); thus CSP is the only entity that can decrypt messages encrypted under the secret key sk. In the meantime, we assume that all entities agree on the same initial model $\mathbf{W}^0$.
+
+Each training iteration, i.e., the $i^{th}$ training round, starts with all clients conducting local training on their respective private data $D_j$ to obtain local model updates $\Delta \mathbf{W}_j^i$. Each client then prunes its model update using weight pruning techniques and encrypts the compressed update by computing $\Delta \hat{\mathbf{W}}_j^i \gets \mathrm{Enc}(n_j \Delta \mathbf{W}_j^i / \sum_{j=1}^K n_j, \mathsf{pk})$. Clients then submit the encrypted, compressed updates to the AS.
+
+On the server side, AS homomorphically adds all encrypted (pruned) local updates in encrypted form, obtaining $\Delta \hat{\mathbf{W}}^i \gets \Delta \hat{\mathbf{W}}_1^i \oplus \Delta \hat{\mathbf{W}}_2^i \oplus \dots$. Note that $\Delta \hat{\mathbf{W}}^i$ is equal to the encryption of the weighted average of all pruned local updates, that is, $\Delta \hat{\mathbf{W}}^i = \mathrm{Enc}\left(\sum_{j=1}^{K} \frac{n_j \Delta \mathbf{W}_j^i}{\sum_{j=1}^{K} n_j}, \mathsf{pk}\right)$. To decrypt the aggregated global update, AS has to collaborate with CSP, as CSP is the only entity that manages the decryption key. However, sending $\Delta \hat{\mathbf{W}}^i$ directly to CSP for decryption would expose the exact value of $\Delta \mathbf{W}^i$ to CSP, which violates the privacy guarantee. One possible approach is to have AS homomorphically add some random noise to $\Delta \hat{\mathbf{W}}^i$ and send the distorted global update to CSP for decryption; after receiving the decryption result, AS removes the noise to obtain the true answer. This prevents CSP from learning the true value of the global model update, but AS would then know it, which is also a privacy violation. To ensure that neither server learns the exact global update, in our design, AS first sends a distorted $\Delta \hat{\mathbf{W}}^i$ (with a random mask) to CSP, and CSP decrypts the distorted global update. Then, the two servers jointly evaluate a secure 2PC protocol where AS inputs the random mask and CSP inputs the decrypted (distorted) global update; the true global update is recovered inside the secure 2PC protocol. Next, each server independently samples a DP noise and provides it as input to the secure 2PC; these DP noises are added to the recovered update inside the 2PC protocol. Finally, the protocol returns the global update distorted by DP noise to AS, with which AS updates the global model, $\mathbf{W}^i \gets \mathbf{W}^{i-1} + \Delta \hat{\mathbf{W}}^i$. Note that the choice of DP noise is quite flexible; by default, SEFL uses Gaussian noise to distort the global update.
+
+SEFL repeats the training phases until it reaches the maximum training round $T$ or the model converges. Note that it is not necessary for AS to receive all local updates from clients: according to our evaluation results, SEFL is able to train an accurate model when only $10\%$ of clients contribute their local updates. Therefore, in practice, one can set an aggregation threshold, say $L$ , so that AS starts aggregating local updates as soon as it receives more than $L$ of them.
+
+# 3.2 Block-Hankel Matrix-based Pruning
+
+Cryptographic primitives can help to provide stronger security guarantees. However, in practice, they often come at high computation and communication overhead. Adding additional cryptographic operations to an FL framework could hinder its adoption on resource-constrained edge devices such as mobile or IoT devices with limited resources (e.g., computation, memory size). Therefore, to be compatible with resource-constrained edge devices in federated learning, we aim to minimize the number of cryptographic operations required during training while maintaining the accuracy of FL. To achieve this, we develop an efficient method to train a large NLP model that simultaneously reduces the volume of local updates and the weight storage, thereby reducing the number of required cryptographic operations.
+
+
+(a) The index-required compression (CSR as an example)
+
+
+(b) The BHM format compression
+Figure 2: CSR format vs. BHM format
+
+Pitfall of sparsity formats in AHE. Typical weight pruning approaches require storing the indices of nonzero entries (Gurevin et al., 2021; Gui et al., 2019; Wen et al., 2016; Ren et al., 2020; Ma et al., 2020). However, the differing positions of nonzero values across clients can lead to significant inefficiency in the subsequent model update aggregation. As shown in Fig. 2 (a), assume AS aggregates two local updates with the same sparsity from $\mathcal{C}_1$ and $\mathcal{C}_2$ . We apply the compressed sparse row (CSR) format to represent the updates ( $\Delta \mathbf{W}_1$ and $\Delta \mathbf{W}_2$ in Fig. 2 (a)), where the nonzero elements of $\Delta \mathbf{W}_1$ and $\Delta \mathbf{W}_2$ are not located in the same positions. As the AHE-based update aggregation process is a black-box homomorphic addition, we cannot reconstruct the original sparse matrix from CSR since the indices are encrypted, and therefore we cannot correctly produce the aggregated update.
+
+Crypto-friendly Block-Hankel matrix based pruning. We divide the local update into multiple modules with identical shapes. Within each module, a special class of structured matrix is applied to approximate the original matrix without indices. In our framework, we investigate the use of blocks of Hankel matrices (BHM) to approximate blocks of the local update. As shown in Fig. 2 (b), we can perform aggregation directly on the encrypted value vectors, since the positions of these defining sequence vectors are identical across clients. In addition, the resulting global model has the same size, so the download and upload communication is symmetric and balanced.
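To make the index-free aggregation concrete, the following sketch (illustrative, not the paper's implementation) expands a defining vector of length $2l-1$ into an $l \times l$ Hankel block and checks that adding two defining vectors is equivalent to adding the full blocks:

```python
import numpy as np

def hankel_block(v):
    """Expand a defining vector of length 2l-1 into an l x l Hankel block,
    where entry (i, j) is v[i + j] (constant along anti-diagonals)."""
    l = (len(v) + 1) // 2
    return np.array([[v[i + j] for j in range(l)] for i in range(l)])

l = 4
v1 = np.arange(2 * l - 1, dtype=float)   # client 1's block, stored as a vector
v2 = np.ones(2 * l - 1)                  # client 2's block

# Index-free aggregation: the server only adds the (in SEFL, encrypted)
# defining vectors; no nonzero-index bookkeeping is needed, unlike CSR.
v_sum = v1 + v2
assert np.array_equal(hankel_block(v_sum),
                      hankel_block(v1) + hankel_block(v2))
```

Since every client's block is parameterized the same way, homomorphic addition of the vectors directly yields the aggregated block.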
+
+In what follows, we discuss the convergence analysis for pruned sub-networks.
+
+Theorem 1. Let $f$ be a network of depth $n$ , and let $g$ be a randomly initialized neural network with $2n$ layers and width $poly(d,n,m)$ , where $d$ is the input size, $n$ is the number of layers in $f$ , and $m$ is the maximum number of neurons in a layer. The weights are initialized from the uniform distribution on $[-1,1]$ . Then with probability at least $1 - \beta$ there is a weight-pruned subnetwork $\hat{g}$ such that:
+
+$$
+\sup _ {x \in \chi , \| W \| \leq 1} \| f (x) - \hat {g} (x) \| \leq \alpha \tag {1}
+$$
+
+Proof 1 We start with the analysis of a simple ReLU network, where $f(x) = w \cdot x$ and $g(x) = \mathbf{u}\sigma(\mathbf{w}^g x)$ . Since $\sigma$ is the ReLU activation function, $w = \sigma(w) - \sigma(-w)$ , so the map $x \mapsto wx$ can be written as $x \mapsto \sigma(wx) - \sigma(-wx)$ . On the other hand, this neuron can be represented as $x \mapsto \mathbf{u}\sigma(\mathbf{p} \odot \mathbf{w}^g x)$ . Let $\mathbf{w}^+ = \max\{\mathbf{0}, \mathbf{w}\}$ and $\mathbf{w}^- = \min\{\mathbf{0}, \mathbf{w}\}$ , with $\mathbf{w}^+ + \mathbf{w}^- = \mathbf{w}^g$ . Then
+
+$$
+x ^ {*} \mapsto \mathbf {u} \sigma (\sigma (\mathbf {p} \odot \mathbf {w} ^ {+} x) - \sigma (\mathbf {p} \odot - \mathbf {w} ^ {-} x)) \tag {2}
+$$
+
+Based on (Lueker, 1998), when $n \geq C \log \frac{4}{\alpha}$ , for all $w^f \in [0,1]$ there exists a pattern $\mathbf{w}$ and $p \in \{0,1\}^n$ such that:
+
+$$
+\Pr \left[ \left| w ^ {f} - \mathbf {u} \sigma (\mathbf {p} \odot \mathbf {w} ^ {+}) \right| < \frac {\alpha}{2} \right] \geq 1 - \frac {\beta}{2} \tag {3}
+$$
+
+By symmetry, Eq. 3 holds for $\mathbf{w}^{-}$ as well. Therefore, we obtain $\sup \left|w^{f}x - \mathbf{u}\sigma (\mathbf{p}\odot \mathbf{w}x)\right|\leq \alpha$ . To extend this to a single network layer, we compute
+
+$$
+\begin{array}{l} \sup \left| \mathbf {W} ^ {f} \mathbf {x} - \mathbf {u} \sigma (\mathbf {p} \odot \mathbf {W} ^ {g} \mathbf {x}) \right| \\ \leq \sum_ {j = 1} ^ {k} \sum_ {i = 1} ^ {m} \sup \left| w _ {j, i} ^ {f} x _ {i} - \mathbf {u} _ {i} \sigma \left(\mathbf {p} _ {j, i} \odot \mathbf {w} _ {j, i} x _ {i}\right) \right| \leq \alpha \tag {4} \\ \end{array}
+$$
+
+We now provide the general-case analysis. With probability at least $1 - \beta$ , we obtain:
+
+$$
+\begin{array}{l} \| f (x) - \hat {g} (x) \| \\ = \left\| \mathbf {W} _ {n} \mathbf {x} _ {n} - \mathbf {P} _ {2 n} \odot \mathbf {W} _ {2 n} ^ {g} \mathbf {x} _ {n} ^ {g} \sigma \left(\mathbf {P} _ {2 n - 1} \odot \mathbf {x} _ {2 n - 1} ^ {g}\right) \right\| \tag {5} \\ \leq \alpha / 2 + \alpha / 2 = \alpha \\ \end{array}
+$$
+
+Putting it all together. Our objective is to compress the weights and updates using the BHM formats. Thus we minimize the loss function subject to constraints of BHM. More specifically, we set constraints as $\mathbf{S}_i^{(t)} = \{\mathbf{W}_i^{(t)}\mid \mathbf{W}_i^{(t)}\in \mathrm{BHM}\}$ . The backward propagation process of the training phase can also be implemented using the BHM format, since pruning based on the block Hankel matrix has the same "effectiveness" as unpruned DNNs, as shown in (Zhao et al., 2017).
+
+Compared to other index-required pruning methods, BHM pruning has the following advantages. First, it always guarantees the strong structure of the trained network, thereby avoiding the storage, computation, and communication time overhead incurred by a complicated indexing process. Second, during training, the BHM-based approach directly trains weight matrices in the BHM format by updating only one vector for each block (i.e., $2l - 1$ vs. $l^2$ parameters). Third, the reduction in space, computational, and communication complexity from using BHM is significant: the weight tensor $\mathbf{W}_i^{(t)}$ and updates $\Delta \mathbf{W}_i^{(t)}$ have their storage and communication complexity reduced from $\mathrm{O}(l^2)$ to $\mathrm{O}(2l - 1)$ .
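As a quick sanity check of the stated complexity reduction, the per-block saving for the block sizes used in our experiments (4 to 32) can be computed directly (illustrative arithmetic only):

```python
# Dense l x l block: l**2 values; BHM block: one defining vector of 2l - 1
# values. The ratio below is the fraction of the dense storage/communication
# volume that the BHM format actually needs.
for l in (4, 8, 16, 32):
    ratio = (2 * l - 1) / l ** 2
    print(f"block size {l:2d}: {2 * l - 1:3d} of {l * l:4d} values "
          f"({ratio:.1%} of dense)")
```

For block size 32, a block shrinks to 63 of 1024 values, i.e., about 6% of the dense volume.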
+
+# 4 Experiments
+
+We implement the SEFL system using PyTorch 1.4.0 and CUDA 10.1. All experiments are performed on an AWS EC2 cloud instance with a $2.30\mathrm{GHz}$ Intel Xeon Gold 5218 Scalable Processor and 8 NVIDIA Quadro RTX 6000 GPUs. We evaluate SEFL by conducting experiments using LSTM and Transformer models on the WikiText-2 (Merity et al., 2016) dataset. The LSTM model is adopted from (Hochreiter and Schmidhuber, 1997). The Transformer model (Vaswani et al., 2017) contains two layers with an embedding dimension of 200, two attention heads, and 200 hidden units. We use perplexity to measure the quality of the predictions for both the Transformer and the LSTM.
+
+# 4.1 Result Analysis
+
+Comparisons with existing private FL. In Figure 3, we compare SEFL with the state-of-the-art private FL design, CDP-FL (Geyer et al., 2017). First,
+
+
+(a) LSTM
+
+(b) Transformer
+
+Figure 3: Comparison of the impact of different federated learning approaches on accuracy.
+
+(a) Runtime Performance
+
+(b) Accuracy
+
+Figure 4: Comparison of the impact of different block sizes on accuracy and performance.
+
+the unpruned SEFL achieves accuracy similar to CDP-FL, because the logical training processes of the two approaches are similar: both first use FedAvg (McMahan et al., 2017b) to obtain a globally aggregated model and then distort it with DP noise. The optimized (with pruning) SEFL improves accuracy by up to $11\%$ and $15\%$ over CDP-FL with the LSTM and Transformer models, respectively. One possible explanation is that pruning reduces the total DP noise added to the aggregated model: to distort the model, Gaussian noise must be injected into each model element independently, so the smaller the model, the fewer times Gaussian noise is injected.
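This intuition can be checked numerically. With independent $N(0, \sigma^2)$ noise per element, the expected squared noise norm grows linearly in the number of elements, so shrinking a block from $l^2$ to $2l-1$ elements shrinks the total distortion proportionally (a back-of-the-envelope illustration, not the paper's analysis):

```python
import random

random.seed(0)
sigma = 0.1

def avg_noise_norm_sq(d, trials=500):
    """Empirical E[||z||^2] for z ~ N(0, sigma^2 I_d); close to d * sigma**2."""
    return sum(sum(random.gauss(0.0, sigma) ** 2 for _ in range(d))
               for _ in range(trials)) / trials

# e.g. one dense 32x32 block (1024 elements) vs. its BHM vector (63 elements):
# the pruned representation receives roughly 16x less total squared distortion.
dense_elems, pruned_elems = 1024, 63
```

The empirical averages track $d\sigma^2$, so the pruned model is perturbed far less for the same per-element privacy noise.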
+
+Evaluating optimization. We compare SEFL (with pruning optimization) against unoptimized SEFL in Figure 4. We report the average elapsed time in seconds over 10 replicated runs as the runtime performance. We report the accuracy and runtime performance for unpruned SEFL and for SEFL with BHM block sizes from 4 to 32. Note that the larger the block size, the smaller the compressed model. SEFL achieves a performance improvement of up to $13.7 \times$ over unpruned SEFL under both the LSTM and Transformer models. Additionally, SEFL shows better accuracy (smaller perplexity) than unpruned SEFL in almost all test groups. In the best cases, SEFL achieves $19.3\%$ and $12.8\%$ accuracy improvements, respectively, over the unoptimized SEFL implementation. BHM-based pruning optimization thus not only brings significant performance improvements, but also improves the accuracy guarantees.
+
+| Dropout Rate | BHM (75%) | BHM (50%) | BHM (25%) | BHM (0%) |
+| --- | --- | --- | --- | --- |
+| LSTM | 194.82 | 187.64 | 178.45 | 177.41 |
+| Transformer | 310.12 | 301.04 | 279.81 | 263.78 |
+
+Table 1: Comparison of the impact of the dropout rate (perplexity).
+
+SEFL with client dropouts. We evaluate whether SEFL is able to handle client dropouts in Table 1, where we report accuracy when $25\%$ , $50\%$ , and $75\%$ of clients drop out of the protocol. As shown in Table 1, when the dropout rate is relatively small, i.e., $25\%$ , SEFL achieves almost the same accuracy guarantee as in the no-dropout case ($0.5\%$ and $5\%$ accuracy degradation for LSTM and Transformer, respectively). Even when the majority of clients are dropped out, i.e., a $75\%$ dropout rate, SEFL still produces accurate models with only 17.41 and 46.34 higher perplexity. In summary, SEFL can handle a large number of client dropouts with relatively small degradation in accuracy, which shows that our proposed approach is applicable to practical scenarios.
+
+# 5 Conclusion
+
+In this paper, we introduced a new secure and efficient FL framework, SEFL, that (i) eliminates the need for trusted entities, (ii) achieves model accuracy similar to existing FL approaches, and (iii) is resilient to client dropouts. We also proposed optimizations that mitigate the high computation and communication overhead caused by cryptographic primitives, achieved by applying a local weight pruning technique based on the block Hankel matrix. Through extensive experimental studies on NLP tasks, we demonstrate that SEFL achieves accuracy comparable to existing FL solutions and can significantly improve runtime performance.
+
+# 6 Acknowledgements
+
+This research was supported in part by UConn REP award (KFS: 4648460), the National Science Foundation (NSF) Grants 1743418, NSF 1843025, NSF 1849246 and NSF 1952096. This research is based upon work supported by the U.S. Department of Energy, Office of Energy Efficiency and Renewable Energy (EERE) under the Advanced Manufacturing Office Award Number DE-EE0007613.
+
+# References
+
+Keith Bonawitz, Hubert Eichner, Wolfgang Grieskamp, Dzmitry Huba, Alex Ingerman, Vladimir Ivanov, Chloe Kiddon, Jakub Konecny, Stefano Mazzocchi, H Brendan McMahan, et al. 2019. Towards federated learning at scale: System design. arXiv preprint arXiv:1902.01046.
+Keith Bonawitz, Vladimir Ivanov, Ben Kreuter, Antonio Marcedone, H Brendan McMahan, Sarvar Patel, Daniel Ramage, Aaron Segal, and Karn Seth. 2017. Practical secure aggregation for privacy-preserving machine learning. In ACM CCS, pages 1175-1191.
+Jieren Deng, Yijue Wang, Ji Li, Chao Shang, Hang Liu, Sanguthevar Rajasekaran, and Caiwen Ding, editors. 2021. Findings of the Association for Computational Linguistics: EMNLP 2021. Association for Computational Linguistics.
+Caiwen Ding, Siyu Liao, Yanzhi Wang, Zhe Li, Ning Liu, Youwei Zhuo, Chao Wang, Xuehai Qian, Yu Bai, Geng Yuan, Xiaolong Ma, Yipeng Zhang, Jian Tang, Qinru Qiu, Xue Lin, and Bo Yuan. 2017. Circnn: Accelerating and compressing deep neural networks using block-circulant weight matrices. In Proceedings of the 50th Annual IEEE/ACM International Symposium on Microarchitecture, MICRO-50 '17, page 395-408, New York, NY, USA. Association for Computing Machinery.
+Cynthia Dwork. 2008. Differential privacy: A survey of results. In Proceedings of the 5th International Conference on Theory and Applications of Models of Computation, TAMC'08, page 1-19, Berlin, Heidelberg. Springer-Verlag.
+Cynthia Dwork, Aaron Roth, et al. 2014. The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science, 9(3-4):211-407.
+Robin C Geyer, Tassilo Klein, and Moin Nabi. 2017. Differentially private federated learning: A client level perspective. arXiv preprint arXiv:1712.07557.
+Shupeng Gui, Haotao N Wang, Haichuan Yang, Chen Yu, Zhangyang Wang, and Ji Liu. 2019. Model compression with adversarial robustness: A unified optimization framework. In NeurIPS, pages 1285-1296.
+Deniz Gurevin, Mikhail Bragin, Caiwen Ding, Shanglin Zhou, Lynn Pepin, Bingbing Li, and Fei Miao. 2021. Enabling retrain-free deep neural network pruning using surrogate lagrangian relaxation. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI-21, pages 2497-2504. International Joint Conferences on Artificial Intelligence Organization. Main Track.
+Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.
+
+Sheng Lin, Chenghong Wang, Hongjia Li, Jieren Deng, Yanzhi Wang, and Caiwen Ding. 2020. Esmfl: Efficient and secure models for federated learning. arXiv preprint arXiv:2009.01867.
+George S Lueker. 1998. Exponentially small bounds on the expected optimum of the partition and subset sum problems. Random Structures & Algorithms, 12(1):51-62.
+Xiaolong Ma, Fu-Ming Guo, Wei Niu, Xue Lin, Jian Tang, Kaisheng Ma, Bin Ren, and Yanzhi Wang. 2020. Pconv: The missing but desirable sparsity in dnn weight pruning for real-time execution on mobile devices. In AAAI, pages 5117-5124.
+H. Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. 2017a. Communication-efficient learning of deep networks from decentralized data. In (AISTATS).
+H. Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Agüera y Arcas. 2017b. Communication-efficient learning of deep networks from decentralized data.
+Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016. Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843.
+Nicolas Papernot, Shuang Song, Ilya Mironov, Ananth Raghunathan, Kunal Talwar, and Úlfar Erlingsson. 2018. Scalable private learning with PATE. In ICLR.
+Andreas Peter, Max Kronberg, Wilke Trei, and Stefan Katzenbeisser. 2012. Additively homomorphic encryption with a double decryption mechanism, revisited. In ISC.
+Ao Ren, Tao Zhang, Yuhao Wang, Sheng Lin, Peiyan Dong, Yen-Kuang Chen, Yuan Xie, and Yanzhi Wang. 2020. Darb: A density-adaptive regular-block pruning for deep neural networks. In AAAI, pages 5495-5502.
+Stacey Truex, Nathalie Baracaldo, Ali Anwar, Thomas Steinke, Heiko Ludwig, Rui Zhang, and Yi Zhou. 2019. A hybrid approach to privacy-preserving federated learning. In Proceedings of the 12th ACM Workshop on Artificial Intelligence and Security, pages 1-11.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008.
+Yijue Wang, Jieren Deng, Dan Guo, Chenghong Wang, Xianrui Meng, Hang Liu, Caiwen Ding, and Sanguthevar Rajasekaran. 2020. Sapag: a self-adaptive privacy attack from gradients. arXiv preprint arXiv:2009.06228.
+
+Yijue Wang, Chenghong Wang, Zigeng Wang, Shanglin Zhou, Hang Liu, Jinbo Bi, Caiwen Ding, and Sanguthevar Rajasekaran. 2021. Against membership inference attack: Pruning is all you need. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI-21, pages 3141-3147. International Joint Conferences on Artificial Intelligence Organization. Main Track.
+Wei Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, and Hai Li. 2016. Learning structured sparsity in deep neural networks. In Advances in Neural Information Processing Systems, pages 2074-2082.
+Xin Wu, Hao Zheng, Zuochao Dou, Feng Chen, Jieren Deng, Xiang Chen, Shengqian Xu, Guanmin Gao, Mengmeng Li, Zhen Wang, et al. 2021. A novel privacy-preserving federated genome-wide association study framework and its application in identifying potential risk variants in ankylosing spondylitis. Briefings in Bioinformatics, 22(3):bbaa090.
+A. C. Yao. 1986. How to generate and exchange secrets. In 27th Annual Symposium on Foundations of Computer Science (sfcs 1986), pages 162-167.
+Geng Yuan, Payman Behnam, Yuxuan Cai, Ali Shafiee, Jingyan Fu, Zhiheng Liao, Zhengang Li, Xiaolong Ma, Jieren Deng, Jinhui Wang, Mahdi Bojnordi, Yanzhi Wang, and Caiwen Ding. 2021. Tinyadc: Peripheral circuit-aware weight pruning framework for mixed-signal dnn accelerators. In 2021 Design, Automation Test in Europe Conference Exhibition (DATE), pages 926-931.
+Liang Zhao, Siyu Liao, Yanzhi Wang, Zhe Li, Jian Tang, and Bo Yuan. 2017. Theoretical properties for neural networks with weight matrices of low displacement rank. In International Conference on Machine Learning, pages 4082-4090.
+Ligeng Zhu, Zhijian Liu, and Song Han. 2019. Deep leakage from gradients. In NeurIPS, pages 14774-14784.
\ No newline at end of file
diff --git a/asecureandefficientfederatedlearningframeworkfornlp/images.zip b/asecureandefficientfederatedlearningframeworkfornlp/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..215b900a3e2956411e0f8e0abe405c72162c1016
--- /dev/null
+++ b/asecureandefficientfederatedlearningframeworkfornlp/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9b5738ee375111d5e31fc5832820ee3bbcb103d879ba70d416c1c0dc8ce66b3e
+size 236343
diff --git a/asecureandefficientfederatedlearningframeworkfornlp/layout.json b/asecureandefficientfederatedlearningframeworkfornlp/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..30bcc9ecdcfbb893747d6fe6b394fe1bcc364823
--- /dev/null
+++ b/asecureandefficientfederatedlearningframeworkfornlp/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:91f957f0b7e9e84bef0c1b87af2484a51d9bff78991df8df0aeedce84269ebf4
+size 274663
diff --git a/asemanticfeaturewisetransformationrelationnetworkforautomaticshortanswergrading/f97fba0d-62ae-4fee-acfc-0a9ef8ad60cf_content_list.json b/asemanticfeaturewisetransformationrelationnetworkforautomaticshortanswergrading/f97fba0d-62ae-4fee-acfc-0a9ef8ad60cf_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..5b1ed0afe81b1bb6194730d59c5978a5fb0ff447
--- /dev/null
+++ b/asemanticfeaturewisetransformationrelationnetworkforautomaticshortanswergrading/f97fba0d-62ae-4fee-acfc-0a9ef8ad60cf_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8ec5028eb41d10795b404f3eb50a89d715ae4930c75ef025821dde806359967f
+size 79720
diff --git a/asemanticfeaturewisetransformationrelationnetworkforautomaticshortanswergrading/f97fba0d-62ae-4fee-acfc-0a9ef8ad60cf_model.json b/asemanticfeaturewisetransformationrelationnetworkforautomaticshortanswergrading/f97fba0d-62ae-4fee-acfc-0a9ef8ad60cf_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..208330ff27b33596bff681ad99586ae3e87311c1
--- /dev/null
+++ b/asemanticfeaturewisetransformationrelationnetworkforautomaticshortanswergrading/f97fba0d-62ae-4fee-acfc-0a9ef8ad60cf_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9b9120971b14436e26f5d3dd080eff1c4e1b3b2b84ee82e212f3875c39800581
+size 94197
diff --git a/asemanticfeaturewisetransformationrelationnetworkforautomaticshortanswergrading/f97fba0d-62ae-4fee-acfc-0a9ef8ad60cf_origin.pdf b/asemanticfeaturewisetransformationrelationnetworkforautomaticshortanswergrading/f97fba0d-62ae-4fee-acfc-0a9ef8ad60cf_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..00c7b29d65065f87b9b1e3b9c6489836d34c2930
--- /dev/null
+++ b/asemanticfeaturewisetransformationrelationnetworkforautomaticshortanswergrading/f97fba0d-62ae-4fee-acfc-0a9ef8ad60cf_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:208bac41b275a11a17cac75bc8d58ee15f1221b85c290b7aa9572ef63a59b905
+size 503840
diff --git a/asemanticfeaturewisetransformationrelationnetworkforautomaticshortanswergrading/full.md b/asemanticfeaturewisetransformationrelationnetworkforautomaticshortanswergrading/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..094506252de94accb805f876c2384c897637c39b
--- /dev/null
+++ b/asemanticfeaturewisetransformationrelationnetworkforautomaticshortanswergrading/full.md
@@ -0,0 +1,266 @@
+# A Semantic Feature-Wise Transformation Relation Network for Automatic Short Answer Grading
+
+Zhaohui Li and Yajur Tomar and Rebecca J. Passonneau
+
+Department of Computer Science and Engineering
+
+Pennsylvania State University
+
+{zjl5282,YST5012,rjp49}@psu.edu
+
+# Abstract
+
+Automatic short answer grading (ASAG) is the task of assessing students' short natural language responses to objective questions. It is a crucial component of new education platforms, and could support more widespread use of constructed response questions to replace cognitively less challenging multiple choice questions. We propose a Semantic Feature-wise transformation Relation Network (SFRN) that exploits the multiple components of ASAG datasets more effectively. SFRN captures relational knowledge among the questions (Q), reference answers or rubrics (R), and labeled student answers (A). A relation network learns vector representations for the elements of QRA triples, then combines the learned representations using learned semantic feature-wise transformations. We apply translation-based data augmentation to address the two problems of limited training data and high data skew for multi-class ASAG tasks. Our model has up to $11\%$ performance improvement over state-of-the-art results on the benchmark SemEval-2013 datasets, and surpasses custom approaches designed for a Kaggle challenge, demonstrating its generality.
+
+# 1 Introduction
+
+Educators at every level rely on classroom assessments to evaluate students' knowledge, often through quizzes. Multiple choice questions can be graded automatically, but many studies have shown that short answer, constructed response questions provide greater benefit to students (Lee et al., 2011; McDaniel et al., 2007; Butler and Roediger, 2007; Clariana, 2003). Manual assessment of short answer questions is time-consuming, and has bias and errors (Galhardi et al., 2020; Bejar, 2012). Automatic Short Answer Grading (ASAG) applies NLP methods to reduce the assessment burden for short answer questions (Burrows et al., 2015), with potential to assist educators in providing more timely feedback to students on a preferred assessment method. With increased reliance on virtual learning environments and educational technology, the potential impact of ASAG has grown.
+
+The most common ASAG approach classifies students' answers into two or more categories. The key challenge is that the classification problem is inherently relational, involving the relation of the student's answer to the question, as well as to one or more reference answers. Another challenge is that existing datasets are relatively small, especially for the kinds of neural network models that perform best on other NLP tasks. Much of the ASAG work is conducted in industry labs with proprietary methods and datasets (Wang et al., 2019; Liu et al., 2019). Creation of benchmark datasets, however, has fostered broader interest in the problem from the NLP community. The main benchmark dataset, SemEval-2013 (Dzikovska et al., 2013), covers multiple STEM domains, and has multiple classification tasks regarding both the number of classes, and the level of generalization required. For example, one task addresses unseen answers to known questions (UA), another addresses unseen questions (UQ) within the same subject domain, and a third is unseen domains (UD) for student answers to topics and questions not seen in the training data. While this is a rich dataset, it is not large enough to support complex models for all the classification tasks. A somewhat larger dataset, ASAP-SAS1 is less well structured, and contains a range of supporting information other than reference answers, such as rubrics. While several challenge models performed well, they were not described in publications. We use both datasets.
+
+Besides the potential benefits for educational technology, we believe ASAG can push NLP research in new directions due to the relational nature of the task, and the graded difficulty of the different classification tasks that ASAG presents. Figure 1 shows a question (Q), reference answer (R), and student answer (A) from SemEval-2013. The color coding of words simulates a Venn diagram of conceptual overlap of all three components (green), versus only Q and R (blue), only R and A (yellow), and only Q and A (red). We hypothesize that a relation network can better exploit these different similarity spaces. The input to our neural network consists of QRA triples, which are then modeled as a 3-way relation.
+
+Q: Susan has samples of 5 different foods. Using only the results of her experiment, how will Susan know which food contains the most sugar? (Gas volume is evaluated by tube)
+
+R: Susan should compare the amount of gas in each bag. The bag with the most gas contains the food with the most sugar.
+
+A: Susan will know how much sugar is in the foods by putting each bag in a volume tube. When her finder stops after pushing the top, the bottom of the part she pushes down will be on a number. That number is the milliliters of sugar in the food. Whichever number is the highest, that means that food has the most sugar.
+
+Figure 1: A [Q, R, A] triple example with a question (Q), a reference answer (R), and a student answer (A) from SemEval-2013 SciEntsBank.
+
+We present a Semantic Feature-Wise transformation Relation Network (SFRN). It is inspired by vision research using relation networks (Santoro et al., 2017) and feature mapping (Perez et al., 2018) functions, which were both motivated by a desire for greater generalization ability (referred to as reasoning), combined with computational efficiency and low model complexity. SFRN is an end-to-end model with three components. An encoder first encodes each component of a QRA triple, producing vectors for the question, one of a set of reference answers, and the student answer. When there are multiple reference answers, a relation network converts each triple of vectors for a given student answer into a single relation vector, and a learned feature-wise transformation function merges all the relation vectors for the student answer by leveraging the attentions calculated by a QRA triple. Finally, a classifier determines which class the student answer belongs to.
+
+To address data insufficiency and class imbalance, we adopt a simple data augmentation method, back-translation, which was studied for augmentation of paraphrase data (Wieting et al., 2017) and has been used in other NLP problems. Most ASAG datasets are relatively small, especially compared to the typical training sets for large neural networks, and the multiway classification problems have extreme class imbalance. For example, the SemEval-2013 Beetle subset for the 5-way classification task has only 4,146 training samples, and one of the classes has only 195 examples. Here, as in (Xie et al., 2020), we apply back-translation to generate examples from existing ones without changing the label. We find that data augmentation is beneficial for simple models, like logistic regression, and for complex models, including our most complex model, SFRN+BERT. SFRN+BERT combined with data augmentation achieves up to $11\%$ performance improvement over the state-of-the-art.
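A minimal sketch of label-preserving back-translation augmentation is shown below. The `translate` callable is a placeholder for a real MT system, and the helper names and the pivot-language choice are assumptions for illustration, not details from the paper:

```python
def back_translate(text, translate, pivot="fr"):
    """Round-trip a text through a pivot language; the label is unchanged."""
    return translate(translate(text, "en", pivot), pivot, "en")

def augment(dataset, translate, minority_labels):
    """Append paraphrased copies of minority-class (answer, label) pairs,
    mitigating class imbalance without hand-labeling new data."""
    extra = [(back_translate(answer, translate), label)
             for answer, label in dataset if label in minority_labels]
    return dataset + extra

# Usage with a stub translator (a real MT system would paraphrase the text):
stub = lambda text, src, tgt: text
data = [("the bag with most gas has most sugar", "correct"),
        ("susan uses a tube", "partially_correct")]
augmented = augment(data, stub, {"partially_correct"})
```

Only examples of the scarce classes are round-tripped, so the augmentation both enlarges the training set and flattens the label distribution.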
+
+Our contributions are: SFRN+, a novel relation network that outperforms the state-of-the-art, use of data augmentation to address data insufficiency and imbalance, and ability to learn ASAG relations from either reference answers or rubrics. The next section presents related work on ASAG, relation networks, and data augmentation. Section 3 presents our SFRN and SFRN+ models. Section 4 describes the datasets and classification tasks. Section 5 presents our data augmentation method. Section 6 presents our experiments and results.
+
+# 2 Related Work
+
+ASAG is generally modeled as a classification problem with two or more classes. Burrows et al. (2015) gives a thorough overview of benchmark datasets and ASAG systems. Early work (Mohler and Mihalcea, 2009) formulated ASAG as a comparison of semantic text similarity between student and reference answers. A wide range of handcrafted features have been used: POS tag, word and character n-gram features (Heilman and Madnani, 2013), context overlap features (Ott et al., 2013), and graph alignment and lexical semantic similarity features (Mohler et al., 2011; Sultan et al., 2016), for input to SVM or other kinds of classifiers. Recent work applies a combination of deep neural networks with data mining. Attention networks have been used on large proprietary datasets (Wang et al., 2019; Liu et al., 2019; Ha et al., 2020). Suzen et al. (2020) used text mining to improve similarity results. Contextualized semantic representations like BERT have also been used (Hassan et al., 2018; Sung et al., 2019; Camus and Filighera, 2020). Saha et al. (2018) leveraged both hand-crafted features and sentence embeddings to achieve high performance on many tasks.
+
+Relation Networks (RN) originated as an alternative to other kinds of graph-based neural models to develop relation-based representations, and were designed to overcome the limitations of CNNs and MLPs for reasoning problems in vision, NLP and symbolic domains such as physics (Santoro et al., 2017). RN performance on a visual question answering dataset CLEVR (Johnson et al., 2017) surpassed human performance. RNs also prove effective at improving object detection models (Hu et al., 2018). Moreover, RNs have become a general framework for few-shot learning (Sung et al., 2018). We adapt RNs for the three-way relation represented by the questions, reference answers, and student answers in ASAG datasets.
+
+RNs typically combine learned representations of relational vectors using vector addition. We combine the relation vectors that represent multiple reference answers for the same question and student answer using a learned relation fusion function. For the fusion function, we use feature-wise transformation, based on its success with multi-modal data, e.g., images and text. Perez et al. (2018) developed FiLM, an approach to merge information from language and visual input. A language vector serves as conditioning input, to control the scaling and shifting of the visual feature map in a feature-wise fashion. Similarly, we train a function with learnable parameters to combine QRA triples for a given student answer. We have not seen feature wise transformation used for vector combination in ASAG research, which typically relies instead on fixed arithmetic operations or concatenation.
+
+Data augmentation is less common in NLP than in computer vision, where image data augmentation is standard: operations on images, such as rotating an image a few degrees or converting it to grayscale, do not change its essential meaning. In general, data augmentation is used both to increase the size of the training data and to add irrelevant noise to examples to improve the robustness of learned models. Recently, data augmentation has been found to significantly improve performance on NLP tasks as varied as paraphrasing (Wieting et al., 2017), natural language generation (Kedzie and McKeown, 2019), semantic parsing (Cao et al., 2019), and various sentiment and opinion classification tasks (Kobayashi, 2018). One method is to substitute a random word with a synonym drawn from a lexical database like WordNet (Mueller and Thyagarajan, 2016; Zhang et al., 2015; Wei and Zou, 2019), or to use word embeddings to find synonyms (Wang and Yang, 2015; Jiao et al., 2020). Back-translation leverages machine translation to paraphrase a text while retaining its meaning (Wieting et al., 2017; Edunov et al., 2018; Xie et al., 2020). We adopt back-translation for its ease of use, given that machine translation methods have achieved very high performance.
+
+# 3 SFRN
+
+A Relation Network (RN) (Santoro et al., 2017) is particularly suitable for ASAG because it is designed to infer higher order generalizations, meaning generalizations that hold across tuples of examples, in a data efficient manner. RNs have been used in vision to efficiently learn generalizations across pairs of objects without having to learn individual weights for all possible object pairs in the data. We extend the RN framework to handle relations across vectorized text triples using a data fusion function.
+
+In its simplest form, an RN is a composite function:
+
+$$
+\mathrm{RN}(O) = f_{\phi}\left(\sum_{i,j} g_{\theta}\left(o_{i}, o_{j}\right)\right) \tag{1}
+$$
+
+where $O$ is a set of input objects $\{o_1,o_2,\dots ,o_n\}$, $o_i\in \mathbb{R}^m$, and $f_{\phi}$ and $g_{\theta}$ are functions with trainable parameters. The inner function $g$ learns a relation over tuples, which feeds the outer classifier $f$ an abstract representation of the tuple objects.
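
For intuition, equation (1) can be sketched in a few lines of NumPy. Everything here is a placeholder: the MLPs are random, untrained two-layer networks, and the dimensions are arbitrary; the point is only the composition of $f_\phi$ over the sum of $g_\theta$ applied to object pairs.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(weights):
    # A tiny MLP with ReLU hidden activations; weights is a list of matrices.
    def forward(x):
        for w in weights[:-1]:
            x = np.maximum(x @ w, 0.0)
        return x @ weights[-1]
    return forward

m, h, c = 8, 16, 3                    # object dim, hidden dim, num classes
g = mlp([rng.normal(size=(2 * m, h)), rng.normal(size=(h, h))])  # g_theta
f = mlp([rng.normal(size=(h, h)), rng.normal(size=(h, c))])      # f_phi

def relation_network(objects):
    # RN(O) = f_phi( sum over i,j of g_theta(o_i, o_j) )  -- equation (1)
    pooled = sum(g(np.concatenate([o_i, o_j]))
                 for o_i in objects for o_j in objects)
    return f(pooled)

objects = rng.normal(size=(5, m))     # five m-dimensional objects
logits = relation_network(objects)    # one score per class
```

Because $g_\theta$ is shared across all pairs, the number of parameters does not grow with the number of objects, which is the source of the data efficiency noted above.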
+
+In this section, we first present the SFRN model to convert the three vectors for each Q, R and A into a relational vector, then we introduce the Semantic Feature-wise Transformation unit that fuses relation vectors. Although encoding the textual Q, R and A inputs into vectors is the first step in the model, we postpone discussion of the different encoders we try out until the last subsection.
+
+# 3.1 Creating the QRA Relation Vectors
+
+The encoded vectors for a given triple are $(q, r_j, a)$, $j \in \{1, \dots, n\}$, where $n$ is the number of reference answers for a given question $q$, and $a$ is a student answer to that question. Corresponding to equation (1) above, the relation vectors $l_j$ are inferred by $g_\theta$ , using the concatenation of QRA vectors from the encoding step as the input (left hashed box in Figure 2):
+
+$$
+l_{j} = g_{\theta}\left([q, r_{j}, a]\right) \tag{2}
+$$
+
+
+Figure 2: The structure of SFRN. The $g_{\theta}$ -MLP function computes the relation vector for each [Q,R,A] triple. A set of relation vectors is combined (+) using SFT. The $f_{\phi}$ -MLP function is the assessment classifier.
+
+where $g$ is a multilayer perceptron (MLP) with learnable parameters $\theta$ , and $g$ produces the relation vectors $l_{j} \in L$ for $j \in \{1,\dots,n\}$ (one per reference answer; see center hashed box in Figure 2).
+
+# 3.2 Relation Fusion
+
+After learning the relation vectors, the RN in Santoro et al. (2017) summed them together to feed a single merged relation vector to the classifier $f$ , where the relations were binary. For our purposes, this approach does not have enough capacity to model the subtle relational information in ASAG datasets, where the relations are ternary. Inspired by Perez et al. (2018), we adopt a relation fusion unit, Semantic Feature-wise Transformation (SFT), to learn different weights for fusing each of the $n$ learned relation vectors $l$ for one of the $k$ answers $a_{ik}$ to a question $q_i$ . The concatenation of the three vectors in the QRA triples serves as a conditioning context to learn how to incorporate each of the $n$ relation vectors with its own weights, before classifying the student's answer $a_{ik}$ to a question $q_i$ . For a given $q_i$ and one answer $a_{ik}$ , the input to SFT is thus the set of triples $C$ , the set of learned relations $L$ , and for clarity the size $n$ of these sets:
+
+$$
+\operatorname{SFT}(C, L, n) = \sum_{j=1}^{n} \left(\alpha(c_{j}) \odot l_{j} + \beta(c_{j})\right) \tag{3}
+$$
+
+where $C$ is the set of $n$ QRA triples $[q_i, r_{ij}, a_{ik}]$ , $L$ is the set of $n$ relation vectors from equation (2) for the $n$ reference answers, $c_j$ is the concatenation of a QRA triple consisting of $[q_i, r_{ij}, a_{ik}]$ , $l_j$ is the corresponding learned relation, and $\alpha$ and $\beta$ are MLPs. The output of SFT is a fused relation vector that represents all the relational information in the input QRA triples for a given question $q_i$ and student answer $a_{ik}$ , relative to the reference answers $r_{ij}$ .
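
A minimal sketch of equations (2) and (3), assuming toy dimensions and random, untrained MLPs standing in for $g_\theta$, $\alpha$ and $\beta$: each reference answer contributes a relation vector, which is feature-wise scaled and shifted under its conditioning context $c_j$ before summation.

```python
import numpy as np

rng = np.random.default_rng(1)
d, h = 32, 64            # per-text encoder dim, relation dim (placeholders)

def mlp(w_in, w_out):
    # One-hidden-layer MLP with ReLU; random untrained weights.
    def forward(x):
        return np.maximum(x @ w_in, 0.0) @ w_out
    return forward

g     = mlp(rng.normal(size=(3 * d, h)), rng.normal(size=(h, h)))  # g_theta
alpha = mlp(rng.normal(size=(3 * d, h)), rng.normal(size=(h, h)))  # scaling
beta  = mlp(rng.normal(size=(3 * d, h)), rng.normal(size=(h, h)))  # shifting

def sft(q, refs, a):
    # Equation (2): l_j = g([q, r_j, a]).
    # Equation (3): SFT = sum_j alpha(c_j) * l_j + beta(c_j), c_j = [q, r_j, a].
    fused = np.zeros(h)
    for r_j in refs:
        c_j = np.concatenate([q, r_j, a])
        l_j = g(c_j)
        fused += alpha(c_j) * l_j + beta(c_j)
    return fused

q, a = rng.normal(size=d), rng.normal(size=d)
refs = rng.normal(size=(4, d))       # four reference answers for this question
fused = sft(q, refs, a)              # one fused relation vector, fed to f_phi
```

Note that unlike plain summation, each reference answer's contribution gets its own learned scale and shift, conditioned on the triple itself.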
+
+Finally, an $f_{\phi}$ MLP function classifies the output of SFT into one or more classes, depending on the ground truth labeling scheme. Combining equations (2) and (3), the composite function becomes:
+
+$$
+\mathrm{SFRN}([\mathrm{Q}, \mathrm{R}, \mathrm{A}]) = f_{\phi}\left(\operatorname{SFT}(g_{\theta})\right) \tag{4}
+$$
+
+where $g_{\theta} = MLP([q,r,a])$ . Overall, SFRN is a relation-based classifier that takes the QRA triple as input. Since the functions are all MLPs, the whole architecture is simple and end-to-end differentiable.
+
+# 3.3 SFRN Encoder
+
+We experiment with different SFRN encoders. Our baseline SFRN model uses LSTM (Hochreiter and Schmidhuber, 1997), which is relatively easy to train, but prone to overfitting and information loss. We compare LSTM with a BERT-based encoder. BERT is a deep, pre-trained, transformer-based model that has proven to be extremely powerful when fine-tuned for a wide range of NLP tasks (Devlin et al., 2019). We use the last output layer of the BERT base model, and fine-tune it on the ASAG data. We also use the pre-trained BERT base as an encoder in a logistic regression baseline.
+
+We use the bert-base-uncased model pre-trained on the BooksCorpus and Wikipedia, with a 30,000 token vocabulary, and 110 million parameters. For fine-tuning, we pre-process the sentences in our QRA triples by prefixing [CLS] and postfixing [SEP] to the word token lists for each question text, or reference or student answer. Then we take the last layer output (of 12 layers in total) as the vector encodings of the elements of each QRA triple.
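
The preprocessing step can be sketched as follows; whitespace tokenization and the made-up Beetle-style texts stand in for BERT's actual WordPiece tokenizer and the real data.

```python
def to_bert_input(question, reference, answer):
    # Wrap each element of a QRA triple with [CLS] ... [SEP], as described
    # above. Whitespace splitting stands in for BERT's WordPiece tokenizer.
    wrap = lambda text: ["[CLS]"] + text.split() + ["[SEP]"]
    return [wrap(question), wrap(reference), wrap(answer)]

# Illustrative (invented) electricity-domain QRA triple:
q, r, a = to_bert_input(
    "Why does bulb A stay lit?",
    "Bulb A is still in a closed path with the battery",
    "It is still connected to the battery",
)
```

Each of the three wrapped sequences is encoded separately, and the last-layer outputs serve as the $q$, $r_j$ and $a$ vectors of the triple.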
+
+# 4 Datasets
+
+The ASAG datasets we use are SemEval-2013, and the ASAP-SAS Kaggle competition dataset. The two were created for different purposes, and have distinct structures. With ASAP, we use only the components that correspond roughly to the SemEval format, as explained further below.
+
+SemEval-2013 (Dzikovska et al., 2013) provides two training datasets, Beetle and SciEntsBank, that have 2-way, 3-way (Correct, Contradictory, Incorrect) and 5-way (Correct, Partially correct incomplete, Contradictory, Irrelevant, Non domain) class labels. SciEntsBank has three classification tasks comprising unseen answers (UA), unseen questions (UQ) or unseen domains (UD), and Beetle has the first two of these. The 5-way labels were chosen to potentially provide tutoring feedback. The Beetle dataset consists of 56 questions on basic electricity and electronics, with approximately 3,000 student answers. SciEntsBank contains approximately 10,000 answers to 197 assessment questions across 15 different science domains. Each question has from 1 to 14 reference answers.
+
+To test the generality of our method, we also apply it to a dataset that lacks reference answers, Automated Student Assessment Prize Short Answer Scoring (ASAP-SAS), used in a Kaggle competition (Shermis, 2014). It has 10 prompts from science, biology and English Language Arts (ELA), with 17,207 training examples and 5,224 test examples. Responses are rated by two annotators: four prompts are scored in $\{0,1,2,3\}$ , and the others in $\{0,1,2\}$ . This dataset has a wide array of other information, including rubrics. Therefore we test SFRN on triples that use rubrics in place of reference answers. We exclude all information other than the questions, rubrics, and student answers.
+
+Many secondary and post-secondary STEM courses that use short answer questions rely on rubrics rather than reference answers. We find that SFRN performs as well as models that use the full resources in ASAP, which demonstrates the generalization ability of SFRN and makes it potentially more useful to educators. Following the Kaggle competition guidelines, we use the score assigned by the first annotator as the label and evaluate with Quadratic Weighted Kappa (QWK), a chance-corrected measure of inter-rater agreement that penalizes larger scoring disagreements more heavily.
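
QWK itself is well defined and can be computed directly; below is a stdlib-only sketch. The quadratic weights' normalizing constant $(k-1)^2$ cancels in the ratio, so it is omitted.

```python
from collections import Counter

def quadratic_weighted_kappa(rater_a, rater_b, min_rating, max_rating):
    # QWK = 1 - sum(w * O) / sum(w * E), with quadratic disagreement
    # weights w_ij proportional to (i - j)^2.
    k = max_rating - min_rating + 1
    n = len(rater_a)
    # Observed agreement matrix, normalized to sum to 1.
    O = [[0.0] * k for _ in range(k)]
    for a, b in zip(rater_a, rater_b):
        O[a - min_rating][b - min_rating] += 1.0 / n
    # Expected matrix from the marginal rating histograms.
    ha, hb = Counter(rater_a), Counter(rater_b)
    E = [[ha[i + min_rating] * hb[j + min_rating] / (n * n)
          for j in range(k)] for i in range(k)]
    num = sum(((i - j) ** 2) * O[i][j] for i in range(k) for j in range(k))
    den = sum(((i - j) ** 2) * E[i][j] for i in range(k) for j in range(k))
    return 1.0 - num / den
```

Perfect agreement yields 1.0, and systematic maximal disagreement yields negative values, which is why QWK is more informative than raw accuracy for ordinal scores.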
+
+# 5 Data Augmentation
+
+Since the SemEval ASAG dataset has limited training data, and high data skew for the 3-way and 5-way tasks, we utilize back-translation to efficiently generate more examples for any given class (Edunov et al., 2018). Back-translation refers to translation from a source language to one or more pivot languages, followed by translation from the pivot(s) back to the source. We found good performance using two one-step pivot languages, French and Chinese. We interleaved use of a state-of-the-art neural machine translation (NMT) system, EasyNMT, with the Google Translation API. This provides us with greater control over the scale of new examples (Google limits calls to Google Translate), as well as more noise injection for robustness. We randomly select sentence-label pairs to generate variant sentences with the same label. If the EasyNMT back-translation is not different from the source, we call Google Translate.
+
+| Step | Sentence |
+| --- | --- |
+| Old | En: positive battery terminal is separated by a gap from terminal 1 |
+| Pivot | Ch: 正极电池端子与端子1隔开一定距离 |
+| New | En: The positive battery terminal is separated from terminal 1 by a certain distance |
+| Old | En: positive battery terminal is separated by a gap from terminal 1 |
+| Pivot | Fr: la borne positive de la batterie est séparée par un espace de la borne 1 |
+| New | En: the positive battery terminal is separated by a space from terminal 1 |
+
+Figure 3: An example of a Chinese and French back-translation for a sentence from Beetle.
+
+Figure 3 shows two examples of back-translation, one for each pivot language we use. With Chinese as the pivot, the original word gap was converted to distance, and the two prepositional phrase arguments of separated were swapped. With French as the pivot, space replaces gap, and word order is preserved.
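
The fallback logic can be sketched as below; `back_translate` and the two toy translators are hypothetical stand-ins for wrappers around EasyNMT and the Google Translation API, which we do not reproduce here.

```python
def back_translate(sentence, primary, fallback, pivot="fr"):
    # Round-trip source -> pivot -> source. If the primary NMT system
    # returns the input unchanged, retry with the fallback system,
    # mirroring the EasyNMT / Google Translate interleaving described
    # above. `primary` and `fallback` are translate(text, src, tgt)
    # callables (stubs in this sketch).
    for translate in (primary, fallback):
        paraphrase = translate(translate(sentence, "en", pivot), pivot, "en")
        if paraphrase.strip().lower() != sentence.strip().lower():
            return paraphrase
    return None  # neither system produced a usable paraphrase

# Toy stand-ins: the primary echoes its input; the fallback "paraphrases"
# by a single word substitution, as in the French example of Figure 3.
echo = lambda text, src, tgt: text
toy  = lambda text, src, tgt: text.replace("gap", "space")

new = back_translate("separated by a gap from terminal 1", echo, toy)
```

The paraphrase keeps the original label, so each successful round trip yields one new labeled training example.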
+
+Through trial and error, we found that data balancing was not effective unless there was a large enough gap between the original size of the rebalanced class and that of the largest class. There were also limits to the maximum augmentation that worked well. If there was at least a five-fold difference between the size of a class and the largest one, we doubled the size of the small class. Otherwise, data augmentation resulted in either little improvement or degraded performance. We speculate that increasing the diversity of linguistic form along with the number of examples might allow for greater increases in class size.
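
The resulting rebalancing heuristic amounts to a one-line decision per class; the function name is ours, and the illustrative sizes are the Beetle 5-way class counts reported in section 6.4.

```python
def augmentation_factor(class_size, largest_size):
    # Heuristic from our experiments: only rebalance a class when it is
    # at least five-fold smaller than the largest class, and then only
    # double it; larger boosts tended to degrade performance.
    return 2 if largest_size >= 5 * class_size else 1

sizes = {"correct": 5610, "contradictory": 3147, "non_domain": 1017}
largest = max(sizes.values())
plan = {label: n * augmentation_factor(n, largest)
        for label, n in sizes.items()}
# Only non_domain (1017 * 5 <= 5610) qualifies for doubling.
```
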
+
+# 5.1 Back-translation Experiment
+
+Here we test our back-translation data augmentation on a logistic regression baseline ASAG model with an LSTM encoder (also used in the experiments in section 6). We compare all pairs among five conditions combined with the original Beetle dataset on the 3-way UA task: doubling the data, tripling the data, using French as the pivot language, using Chinese as the pivot language, and combining examples from the French and Chinese back-translations. We find statistically significant improvements of the LR baseline, especially when comparing the original dataset with an augmentation that uses both Chinese and French back-translations.
+
+| | org | double | triple | org+fr | org+ch | org+ch+fr |
+| --- | --- | --- | --- | --- | --- | --- |
+| mean | 68.40 | 69.70 | 71.10 | 70.90 | 70.70 | 72.70 |
+| std | 1.02 | 1.01 | 0.94 | 1.22 | 1.35 | 1.27 |
+| org | x | -2.724, p=0.014 | -5.831, p=0.000 | -4.715, p=0.000 | -4.087, p=0.001 | -7.924, p=0.000 |
+| double | x | x | -3.047, p=0.007 | 2.277, p=0.035 | -1.786, p=0.091 | -5.560, p=0.000 |
+| triple | x | x | x | 0.389, p=0.702 | 0.730, p=0.475 | -3.036, p=0.007 |
+| org+fr | x | x | x | x | 0.330, p=0.745 | -3.067, p=0.007 |
+| org+ch | x | x | x | x | x | -3.244, p=0.005 |
+| org+ch+fr | x | x | x | x | x | x |
+
+Table 1: T-test results of training the LR baseline 10 times on org, double, triple, org+fr, org+ch, and org+ch+fr.
+
+We double and triple the original data to use as controls for comparison with augmentation by back-translation, to verify that it is not size alone that matters. Thus the original and two controls are org, double, triple. The augmented datasets $\text{org} + \text{fr}$ , $\text{org} + \text{ch}$ are the same size as double, and $\text{org} + \text{ch} + \text{fr}$ is the same size as triple.
+
+To get average performance results, we repeated ten iterations of training and testing on the Beetle 3-way UA classification. We trained a logistic regression baseline (LR) on the 6 training sets (see section 6.1 for the LR training details), then applied t-tests to compare mean accuracy for all pairs of conditions. The results are shown in Table 1; we use a significance level of $\alpha = 0.05$ as the threshold to reject the null hypothesis that the means of two conditions do not differ. The first two rows of Table 1 show the means and standard deviations over the ten trials for each condition. The remaining rows show the t-statistic and p-value for each pair of conditions, comparing rows and columns. Table 1 shows that all the data augmentation conditions are significantly better than org (p-values $\leq 0.05$ ). The condition org+ch+fr has the strongest effect relative to org ($t = -7.92$), with a difference in mean accuracy of 4.3 points. We conclude that back-translation is useful for data augmentation and re-balancing.
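
Each cell of Table 1 can be reproduced with a standard two-sample t-test; below is a stdlib-only sketch (p-values require the t-distribution CDF, e.g. from scipy.stats, and are omitted here).

```python
from statistics import mean, variance

def two_sample_t(xs, ys):
    # Student's two-sample t-statistic with pooled variance, as used for
    # the pairwise comparisons in Table 1, where each sample would be
    # the 10 accuracies from one training condition.
    nx, ny = len(xs), len(ys)
    pooled = ((nx - 1) * variance(xs) + (ny - 1) * variance(ys)) / (nx + ny - 2)
    return (mean(xs) - mean(ys)) / (pooled * (1 / nx + 1 / ny)) ** 0.5

# Toy samples: the lower-mean sample comes first, so t is negative.
t = two_sample_t([1, 2, 3], [4, 5, 6])
```
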
+
+Using the best performing augmentation $(org + ch + fr)$ , we created new datasets, Beetle+ and SciEntsBank+. Beetle+ (N=12,438) is thrice the size of the original $(N = 4,146)$ . SciEntsBank+ $(N = 15,450)$ is just over thrice the size of the original SciEntsBank $(N = 5,104)$ .
+
+# 6 Experiments
+
+The research questions our experiments address are: 1) How well does SFRN perform compared to the state-of-the-art? 2) Do data augmentation and rebalancing improve SFRN performance? For both the SemEval-2013 and ASAP-SAS datasets, we compare the two SFRN variants, with the LSTM encoder (SFRN) or the BERT encoder $(\mathrm{SFRN}+)$ , against multiple baselines. On SemEval, we also compare performance after training on the augmented SemEval-2013+. Performance metrics are accuracy and macro-averaged F1 (M-F1). SFRN+ performs competitively on most SemEval-2013 tasks without data augmentation. With data augmentation, SFRN+ outperforms the state-of-the-art on all Beetle tasks, and on most SciEntsBank tasks. On ASAP-SAS, performance is measured using quadratic weighted kappa (QWK). SFRN+ outperforms all baselines, including three Kaggle models that use the full datasets, while SFRN+ uses a subset of the data that is more analogous to the SemEval datasets. We calculated $95\%$ confidence intervals for all results for our models in Tables 2-5: all margins of error were at most $2\%$ . To save space, however, only the point estimates are shown in the tables. For ASAP-SAS, we report the best results over all runs, following the practice of the benchmark models in their published reports.
+
+# 6.1 SemEval-2013 Experiments
+
+On SemEval-2013, we compare SFRN and SFRN+ with eight baselines: 1) LO (Dzikovska et al., 2013), a model based on lexical overlap; 2) ETS (Heilman and Madnani, 2013) and 3) CoMeT (Ott et al., 2013), both of which use handcrafted features; 4) TF+SF (Saha et al., 2018), which combines handcrafted features with deep learning; 5) LR, a logistic regression baseline we developed that uses the same LSTM encoder as SFRN; 6) $\mathrm{LR + }$ (Sung et al., 2019), a logistic regression model that uses the pre-trained BERT-base model with fine-tuning as the encoder. Since Sung et al. (2019) report results only for the 3-way SciEntsBank tasks, we re-implemented $\mathrm{LR + }$ . 7) RN, a relation network baseline without the relation fusion module, using the same LSTM encoder as SFRN; 8) $\mathrm{RN + }$ , a relation network baseline without the relation fusion module, which uses the pre-trained BERT-base model with fine-tuning as the encoder.
+
+We trained the LR, RN and SFRN models that use an LSTM encoder with batches of size 32 and hidden size 256, using cross entropy loss, the Adam optimizer, a step learning rate from 5e-6 to 5e-4, and dropout of $50\%$ on every function. Word lookup used 300D GloVe embeddings (Wikipedia/Gigaword) (Pennington et al., 2014) as input. For $\mathrm{LR + }$ , $\mathrm{RN + }$ and SFRN+ with BERT as the encoder, we also used cross entropy loss with the Adam optimizer, but smaller learning rates: 1e-5 for BERT, and 3e-4 for the $g$ and $f$ functions. We used $20\%$ of the training samples as a dev set, and varied the number of fine-tuning epochs from 5 to 10, depending on performance.
+
+| Task | Method | UA Acc | UA M-F1 | UQ Acc | UQ M-F1 |
+| --- | --- | --- | --- | --- | --- |
+| 2 Way | LO | 79 | 78 | 75 | 72 |
+| 2 Way | ETS | 81 | 80 | 74 | 72 |
+| 2 Way | CoMeT | 83 | 83 | 70 | 69 |
+| 2 Way | LR | 73 | 71 | 63 | 62 |
+| 2 Way | RN | 75 | 74 | 64 | 64 |
+| 2 Way | SFRN | 81 | 80 | 66 | 65 |
+| 2 Way | LR+ | 82 | 82 | 67 | 65 |
+| 2 Way | RN+ | 84 | 84 | 68 | 68 |
+| 2 Way | SFRN+ | 89 | 89 | 70 | 70 |
+| 3 Way | LO | 60 | 55 | 51 | 47 |
+| 3 Way | ETS | 63 | 59 | 55 | 52 |
+| 3 Way | CoMeT | 73 | 71 | 51 | 46 |
+| 3 Way | LR | 67 | 60 | 51 | 46 |
+| 3 Way | RN | 69 | 61 | 55 | 48 |
+| 3 Way | SFRN | 70 | 66 | 58 | 50 |
+| 3 Way | LR+ | 72 | 63 | 62 | 52 |
+| 3 Way | RN+ | 73 | 64 | 60 | 52 |
+| 3 Way | SFRN+ | 78 | 67 | 63 | 55 |
+| 5 Way | LO | 51 | 42 | 48 | 41 |
+| 5 Way | ETS | 71 | 61 | 62 | 55 |
+| 5 Way | CoMeT | 68 | 56 | 48 | 30 |
+| 5 Way | LR | 60 | 50 | 55 | 50 |
+| 5 Way | RN | 55 | 52 | 57 | 51 |
+| 5 Way | SFRN | 65 | 55 | 55 | 51 |
+| 5 Way | LR+ | 67 | 56 | 58 | 52 |
+| 5 Way | RN+ | 69 | 55 | 57 | 51 |
+| 5 Way | SFRN+ | 75 | 56 | 60 | 55 |
+
+Table 2: Performance on the Beetle test set.
+
+| Task | Method | UA Acc | UA M-F1 | UQ Acc | UQ M-F1 | UD Acc | UD M-F1 |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| 2 Way | LO | 66 | 61 | 66 | 63 | 67 | 65 |
+| 2 Way | ETS | 72 | 70 | 71 | 68 | 69 | 68 |
+| 2 Way | CoMeT | 77 | 76 | 60 | 57 | 67 | 67 |
+| 2 Way | TF+SF | 79 | 78 | 70 | 68 | 71 | 70 |
+| 2 Way | LR | 65 | 63 | 57 | 53 | 53 | 51 |
+| 2 Way | RN | 70 | 66 | 60 | 55 | 56 | 53 |
+| 2 Way | SFRN | 72 | 68 | 62 | 58 | 60 | 59 |
+| 2 Way | LR+ | 70 | 70 | 59 | 57 | 57 | 53 |
+| 2 Way | RN+ | 72 | 72 | 63 | 62 | 63 | 62 |
+| 2 Way | SFRN+ | 78 | 78 | 64 | 64 | 67 | 67 |
+| 3 Way | LO | 55 | 40 | 54 | 39 | 51 | 41 |
+| 3 Way | ETS | 72 | 64 | 62 | 42 | 62 | 42 |
+| 3 Way | CoMeT | 71 | 64 | 54 | 38 | 57 | 40 |
+| 3 Way | TF+SF | 71 | 65 | 65 | 48 | 64 | 45 |
+| 3 Way | LR | 62 | 55 | 48 | 35 | 50 | 39 |
+| 3 Way | RN | 64 | 57 | 50 | 37 | 52 | 42 |
+| 3 Way | SFRN | 67 | 59 | 52 | 40 | 53 | 43 |
+| 3 Way | LR+ | 67 | 60 | 52 | 42 | 54 | 42 |
+| 3 Way | RN+ | 71 | 63 | 54 | 44 | 54 | 44 |
+| 3 Way | SFRN+ | 73 | 65 | 56 | 49 | 58 | 47 |
+| 5 Way | LO | 43 | 37 | 41 | 32 | 41 | 31 |
+| 5 Way | ETS | 62 | 58 | 66 | 27 | 63 | 39 |
+| 5 Way | CoMeT | 60 | 55 | 43 | 20 | 42 | 15 |
+| 5 Way | TF+SF | 62 | 47 | 50 | 31 | 50 | 35 |
+| 5 Way | LR | 49 | 35 | 37 | 25 | 42 | 19 |
+| 5 Way | RN | 58 | 50 | 40 | 30 | 46 | 33 |
+| 5 Way | SFRN | 62 | 53 | 46 | 32 | 48 | 35 |
+| 5 Way | LR+ | 61 | 45 | 42 | 30 | 47 | 25 |
+| 5 Way | RN+ | 64 | 46 | 43 | 32 | 50 | 35 |
+| 5 Way | SFRN+ | 69 | 47 | 47 | 35 | 51 | 35 |
+
+Table 3: Performance on the SciEntsBank test set.
+
+Table 2 gives the results on Beetle. SFRN+ outperforms all the baselines on the 3-way tasks, and on the 2-way UA task. On the 2-way UQ task, it is bested only by LO and ETS. It performs in the mid-range on the 5-way task. We will see, however, that with data augmentation and rebalancing, SFRN+ outperforms all models on all Beetle tasks.
+
+Table 3 gives the results on SciEntsBank. SFRN+ outperforms all baselines on the 3-way UA tasks, but TF+SF outperforms other models by a large margin on 2-way and 3-way UQ and UD. SFRN+ achieves the highest accuracy on the 5-way UA task, and the highest M-F1 on the 3-way UQ, 3-way UD and 5-way UQ tasks. On the other 5-way tasks, ETS performs best.
+
+Tables 2 and 3 also show that SFRN and SFRN+ outperform RN and $\mathrm{RN + }$ on all the sub-tasks, which indicates that the relation fusion module learns an effective method to combine the relation vectors that boosts the model performance.
+
+In the next subsection, we present results after retraining our models on the augmented datasets Beetle+ and SciEntsBank+. This produces large performance gains for all six models we trained.
+
+| Task | Method | UA Acc | UA M-F1 | UQ Acc | UQ M-F1 |
+| --- | --- | --- | --- | --- | --- |
+| 2 Way | SOTAs | 85 | 85 | 75 | 72 |
+| 2 Way | LR | 80 | 77 | 65 | 63 |
+| 2 Way | LR+ | 83 | 83 | 69 | 67 |
+| 2 Way | RN | 78 | 74 | 65 | 63 |
+| 2 Way | RN+ | 86 | 86 | 75 | 74 |
+| 2 Way | SFRN | 83 | 81 | 71 | 70 |
+| 2 Way | SFRN+ | 91 | 90 | 81 | 80 |
+| 3 Way | SOTAs | 76 | 71 | 64 | 56 |
+| 3 Way | LR | 67 | 62 | 53 | 50 |
+| 3 Way | LR+ | 72 | 63 | 60 | 53 |
+| 3 Way | RN | 70 | 64 | 59 | 54 |
+| 3 Way | RN+ | 75 | 66 | 62 | 55 |
+| 3 Way | SFRN | 74 | 71 | 62 | 53 |
+| 3 Way | SFRN+ | 83 | 76 | 66 | 63 |
+| 5 Way | SOTAs | 72 | 65 | 62 | 55 |
+| 5 Way | LR | 65 | 55 | 58 | 50 |
+| 5 Way | LR+ | 74 | 64 | 59 | 54 |
+| 5 Way | RN | 65 | 57 | 59 | 52 |
+| 5 Way | RN+ | 76 | 62 | 61 | 55 |
+| 5 Way | SFRN | 70 | 60 | 61 | 54 |
+| 5 Way | SFRN+ | 81 | 66 | 64 | 63 |
+
+Table 4: Results after retraining on Beetle+.
+
+# 6.2 SemEval-2013+ Experiments
+
+In this section we report test results after retraining LR, $\mathrm{LR + }$ , RN, $\mathrm{RN + }$ , SFRN and SFRN+ on the augmented datasets. Results of retraining on Beetle+ appear in Table 4, and results of retraining on SciEntsBank+ are in Table 5. Note that for ease of comparison, both tables have a row showing the best performance on the non-augmented SemEval-2013 datasets (SOTAs). All six re-trained models show at least some performance gains on all tasks. SFRN+ has performance gains of up to $9\%$ , and becomes the top performing model on all Beetle test sets. On SciEntsBank, there are performance gains on nearly all tasks for the six models. SFRN+ becomes the top performer on the 2-way UA task, but remains bested by TF+SF on the UQ and UD tasks. On the 3-way tasks, SFRN+ beats all baselines, apart from ties with TF+SF on the UD task. On the 5-way tasks, SFRN+ gets the greatest boost, with gains of up to $11\%$ ; ETS remains the top performer for accuracy on UQ and UD, but otherwise SFRN+ outperforms all other baselines.
+
+In sum, SFRN+ achieves substantial gains on SemEval-2013 by training on augmented data, and outperforms nearly all baselines on all tasks. In addition, the data augmentation and balancing helps all models. It helps much more for Beetle, which we speculate is due to the presence of multiple reference answers in Beetle, compared with fewer reference answers for each question in SciEntsBank.
+
+| Task | Method | UA Acc | UA M-F1 | UQ Acc | UQ M-F1 | UD Acc | UD M-F1 |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| 2 Way | SOTAs | 80 | 80 | 70 | 68 | 71 | 70 |
+| 2 Way | LR | 68 | 65 | 58 | 54 | 55 | 52 |
+| 2 Way | LR+ | 73 | 73 | 60 | 59 | 58 | 57 |
+| 2 Way | RN | 70 | 70 | 62 | 59 | 56 | 56 |
+| 2 Way | RN+ | 78 | 78 | 65 | 65 | 64 | 63 |
+| 2 Way | SFRN | 72 | 71 | 64 | 62 | 62 | 58 |
+| 2 Way | SFRN+ | 82 | 82 | 67 | 65 | 68 | 66 |
+| 3 Way | SOTAs | 71 | 65 | 65 | 48 | 64 | 45 |
+| 3 Way | LR | 63 | 56 | 50 | 40 | 52 | 42 |
+| 3 Way | LR+ | 72 | 65 | 54 | 45 | 55 | 44 |
+| 3 Way | RN | 65 | 58 | 53 | 42 | 53 | 43 |
+| 3 Way | RN+ | 75 | 68 | 56 | 46 | 56 | 46 |
+| 3 Way | SFRN | 69 | 62 | 58 | 46 | 56 | 44 |
+| 3 Way | SFRN+ | 78 | 72 | 66 | 52 | 64 | 54 |
+| 5 Way | SOTAs | 62 | 58 | 66 | 27 | 63 | 39 |
+| 5 Way | LR | 52 | 39 | 43 | 29 | 42 | 22 |
+| 5 Way | LR+ | 61 | 45 | 45 | 32 | 49 | 25 |
+| 5 Way | RN | 63 | 49 | 48 | 30 | 52 | 25 |
+| 5 Way | RN+ | 69 | 52 | 50 | 31 | 55 | 27 |
+| 5 Way | SFRN | 64 | 57 | 53 | 35 | 53 | 36 |
+| 5 Way | SFRN+ | 73 | 59 | 57 | 37 | 58 | 40 |
+
+Table 5: Results after retraining on SciEntsBank+.
+
+# 6.3 ASAP-SAS Experiments
+
+As a further test of the generalization ability of SFRN and SFRN+, we run experiments on another large scale ASAG dataset, ASAP-SAS, where the training data has a different structure. As mentioned above, for this dataset we use rubrics in place of reference answers in SFRN's QRA triples. We again train four models: LR, LR+, SFRN, and SFRN+. We compare against four published baselines using QWK: 1) human raters; 2) the Kaggle winner (Tandalla), which relies on regular expression matching; 3) AutoP (Ramachandran et al., 2015), a stacked patterns model; and 4) the model of Riordan et al. (2017) (Rior), an LSTM network with attention. The four published baselines use the full ASAP-SAS resources, whereas our four models were trained only on questions, rubrics, and student answers.
+
+The ASAP-SAS results appear in Table 6. SFRN+ has the best performance, even though it relies on less data than AutoP, Rior or Tandalla; AutoP performs nearly as well as SFRN+. This suggests that SFRN with BERT as its encoder not only generalizes well, but is also flexible enough to learn from triples that contain either reference answers or rubrics.
+
+| Method | QWK |
+| --- | --- |
+| Human | 0.90 |
+| AutoP (Ramachandran et al., 2015) | 0.78 |
+| Rior (Riordan et al., 2017) | 0.74 |
+| Tandalla (1st place in competition) | 0.77 |
+| LR (baseline) | 0.68 |
+| LR+ (baseline) | 0.71 |
+| SFRN (proposed model) | 0.71 |
+| SFRN+ (proposed model) | 0.79 |
+
+Table 6: Comparing performance of models on the test set from the Kaggle ASAP competition.
+
+# 6.4 Error Analysis
+
+We carried out error analysis motivated by the large gaps between accuracy and M-F1 on many tasks. On the 5-way tasks, no model achieved an M-F1 greater than 0.73. Inspection of the per-class performance on the 5-way tasks reveals that our four models all perform far worse on the non_domain class, which is much smaller than three of the other four classes, and consists mainly of very short phrases with little semantic content. For instance, the test M-F1 scores on the Beetle 5-way UA experiment are \{0.9, 0.62, 0.92, 0.59, 0.21\} with training sample sizes \{5610, 2757, 3147, 1170, 1017\} for the 5 classes {correct, partially_correct_incomplete, contradictory, irrelevant, non_domain}. Examples of non_domain include: {what the book says, I do not know, Because if you see it is that because I chose it}. We believe that progress on the 5-way classes will depend on a combination of input from domain experts and more sophisticated data augmentation.
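
The gap between the two metrics follows from how macro-F1 weights classes equally; a small stdlib sketch with made-up predictions makes the effect concrete.

```python
def macro_f1(golds, preds, labels):
    # Unweighted mean of per-class F1: a small, poorly predicted class
    # drags M-F1 down even when overall accuracy stays high.
    scores = []
    for c in labels:
        tp = sum(1 for g, p in zip(golds, preds) if g == c and p == c)
        fp = sum(1 for g, p in zip(golds, preds) if g != c and p == c)
        fn = sum(1 for g, p in zip(golds, preds) if g == c and p != c)
        scores.append(2 * tp / (2 * tp + fp + fn) if tp else 0.0)
    return sum(scores) / len(labels)

# Toy run: 9 majority-class hits plus one missed non_domain example.
golds = ["correct"] * 9 + ["non_domain"]
preds = ["correct"] * 10
acc = sum(g == p for g, p in zip(golds, preds)) / len(golds)   # high accuracy
mf1 = macro_f1(golds, preds, ["correct", "non_domain"])        # much lower M-F1
```

Here a single misclassified minority example cuts macro-F1 roughly in half while accuracy remains 0.9, mirroring the accuracy/M-F1 gaps in Tables 2-5.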
+
+# 7 Conclusion
+
+We have presented a new type of relation network, SFRN, that learns relational information from QRA triples for automatic short answer grading (ASAG). It can learn from two types of training data, using reference answers or rubrics. SFRN+, the version with the BERT encoder, outperforms previous state-of-the-art by $8 - 11\%$ , depending on the dataset and classification task, when combined with a simple data augmentation method to compensate for the small and unbalanced training data. As relational meaning is central to NLP, our future work will investigate ways to improve SFRN, to understand its behavior, and to apply it to new problems. Another key avenue we aim to explore, however, is how to improve data augmentation and balancing for ASAG, and for other NLP tasks where data is difficult to come by.
+
+# 8 Acknowledgements
+
+We thank our lab mate, Vipul Gupta, for interesting discussions about the fusion function. This work was supported under NSF DRK award 2010351.
+
+# References
+
+Issac I. Bejar. 2012. Rater cognition: Implications for validity. Educational Measurement: Issues and Practice, 31(3):2-9.
+Steven Burrows, Iryna Gurevych, and Benno Stein. 2015. The eras and trends of automatic short answer grading. International Journal of Artificial Intelligence in Education, 25(1):60-117.
+A. C. Butler and H. L. Roediger. 2007. Testing improves long-term retention in a simulated classroom setting. European Journal of Cognitive Psychology, 19:514-527.
+Leon Camus and Anna Filighera. 2020. Investigating transformers for automatic short answer grading. In International Conference on Artificial Intelligence in Education, pages 43-48. Springer.
+Ruisheng Cao, Su Zhu, Chen Liu, Jieyu Li, and Kai Yu. 2019. Semantic parsing with dual learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 51-64, Florence, Italy. Association for Computational Linguistics.
+Roy B. Clariana. 2003. The effectiveness of constructed-response and multiple-choice study tasks in computer aided learning. Journal of Educational Computing Research, 28:395-406.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
+Myroslava Dzikovska, Rodney Nielsen, Chris Brew, Claudia Leacock, Danilo Giampiccolo, Luisa Bentivogli, Peter Clark, Ido Dagan, and Hoa Trang Dang. 2013. SemEval-2013 task 7: The joint student response analysis and 8th recognizing textual entailment challenge. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), pages 263-274, Atlanta, Georgia, USA. Association for Computational Linguistics.
+Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at scale. arXiv preprint arXiv:1808.09381.
+
+Lucas Galhardi, Rodrigo C Thom de Souza, and Jacques Brancher. 2020. Automatic grading of Portuguese short answers using a machine learning approach. In *Anais Estendidos do XVI Simposio Brasileiro de Sistemas de Informação*, pages 109-124. SBC.
+Le An Ha, Victoria Yaneva, Polina Harik, Ravi Pandian, Amy Morales, and Brian Clauser. 2020. Automated prediction of examinee proficiency from short-answer questions. In Proceedings of the 28th International Conference on Computational Linguistics, pages 893-903, Barcelona, Spain (Online). International Committee on Computational Linguistics.
+Sarah Hassan, Aly A Fahmy, and Mohammad El-Ramly. 2018. Automatic short answer scoring based on paragraph embeddings. International Journal of Advanced Computer Science and Applications, 9(10):397-402.
+Michael Heilman and Nitin Madnani. 2013. ETS: Domain adaptation and stacking for short answer scoring. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), pages 275-279.
+Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.
+Han Hu, Jiayuan Gu, Zheng Zhang, Jifeng Dai, and Yichen Wei. 2018. Relation networks for object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3588-3597.
+Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2020. TinyBERT: Distilling BERT for natural language understanding. In Findings of the Association for Computational Linguistics: EMNLP 2020, page 4163-4174. Association for Computational Linguistics.
+Justin Johnson, Bharath Hariharan, Laurens Van Der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. 2017. CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2901-2910.
+Chris Kedzie and Kathleen McKeown. 2019. A good sample is hard to find: Noise injection sampling and self-training for neural language generation models. In Proceedings of the 12th International Conference on Natural Language Generation (NLG), pages 584-593, Tokyo, Japan. Association for Computational Linguistics.
+
+Sosuke Kobayashi. 2018. Contextual augmentation: Data augmentation by words with paradigmatic relations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 452-457, New Orleans, Louisiana. Association for Computational Linguistics.
+H. Lee, O. L. Liu, and M. C. Linn. 2011. Validating measurement of knowledge integration in science using multiple-choice and explanation items. Applied Measurement in Education, 24(2):115-136.
+Tianqiao Liu, Wenbiao Ding, Zhiwei Wang, Jiliang Tang, Gale Yan Huang, and Zitao Liu. 2019. Automatic short answer grading via multiway attention networks. In International conference on artificial intelligence in education, pages 169-173. Springer.
+M. A. McDaniel, J. L. Anderson, M. H. Derbish, and N. Morrisette. 2007. Testing the testing effect in the classroom. European Journal of Cognitive Psychology, 19:494-513.
+Michael Mohler, Razvan Bunescu, and Rada Mihalcea. 2011. Learning to grade short answer questions using semantic similarity measures and dependency graph alignments. In Proceedings of the 49th annual meeting of the association for computational linguistics: Human language technologies, pages 752-762.
+Michael Mohler and Rada Mihalcea. 2009. Text-to-text semantic similarity for automatic short answer grading. In Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009), pages 567-575.
+Jonas Mueller and Aditya Thyagarajan. 2016. Siamese recurrent architectures for learning sentence similarity. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI'16, page 2786-2792. AAAI Press.
+Niels Ott, Ramon Ziai, Michael Hahn, and Detmar Meurers. 2013. CoMeT: Integrating different levels of linguistic modeling for meaning assessment. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), pages 608-616, Atlanta, Georgia, USA. Association for Computational Linguistics.
+Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.
+Ethan Perez, Florian Strub, Harm De Vries, Vincent Dumoulin, and Aaron Courville. 2018. FiLM: Visual reasoning with a general conditioning layer. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence, pages 3942-3951.
+
+Lakshmi Ramachandran, Jian Cheng, and Peter Foltz. 2015. Identifying patterns for short answer scoring using graph-based lexico-semantic text matching. In Proceedings of the Tenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 97-106.
+Brian Riordan, Andrea Horbach, Aoife Cahill, Torsten Zesch, and Chungmin Lee. 2017. Investigating neural architectures for short answer scoring. In Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications, pages 159-168.
+Swarnadeep Saha, Tejas I Dhamecha, Smit Marvaniya, Renuka Sindhgatta, and Bikram Sengupta. 2018. Sentence level or token level features for automatic short answer grading?: Use both. In International conference on artificial intelligence in education, pages 503-517. Springer.
+Adam Santoro, David Raposo, David GT Barrett, Mateusz Malinowski, Razvan Pascanu, Peter Battaglia, and Timothy Lillicrap. 2017. A simple neural network module for relational reasoning. arXiv preprint arXiv:1706.01427.
+Mark D Shermis. 2014. State-of-the-art automated essay scoring: Competition, results, and future directions from a United States demonstration. *Assessing Writing*, 20:53-76.
+Md Arafat Sultan, Cristobal Salazar, and Tamara Sumner. 2016. Fast and easy short answer grading with high accuracy. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1070-1075.
+Chul Sung, Tejas Indulal Dhamecha, and Nirmal Mukhi. 2019. Improving short answer grading using transformer-based pre-training. In International Conference on Artificial Intelligence in Education, pages 469-481. Springer.
+Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip HS Torr, and Timothy M Hospedales. 2018. Learning to compare: Relation network for few-shot learning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1199-1208.
+Neslihan Süzen, Alexander N Gorban, Jeremy Levesley, and Evgeny M Mirkes. 2020. Automatic short answer grading and feedback using text mining methods. Procedia Computer Science, 169:726-743.
+Tianqi Wang, Naoya Inoue, Hiroki Ouchi, Tomoya Mizumoto, and Kentaro Inui. 2019. Inject rubrics into short answer grading system. In Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019), pages 175–182.
+
+William Yang Wang and Diyi Yang. 2015. That's so annoying!!!: A lexical and frame-semantic embedding based data augmentation approach to automatic categorization of annoying behaviors using# petpeeve tweets. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2557-2563.
+Jason Wei and Kai Zou. 2019. EDA: Easy data augmentation techniques for boosting performance on text classification tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6382-6388, Hong Kong, China. Association for Computational Linguistics.
+John Wieting, Jonathan Mallinson, and Kevin Gimpel. 2017. Learning paraphrastic sentence embeddings from back-translated bitext. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 274-285, Copenhagen, Denmark. Association for Computational Linguistics.
+Qizhe Xie, Zihang Dai, Eduard Hovy, Thang Luong, and Quoc Le. 2020. Unsupervised data augmentation for consistency training. In Advances in Neural Information Processing Systems, volume 33, pages 6256-6268. Curran Associates, Inc.
+Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems, volume 28. Curran Associates, Inc.
\ No newline at end of file
diff --git a/asemanticfeaturewisetransformationrelationnetworkforautomaticshortanswergrading/images.zip b/asemanticfeaturewisetransformationrelationnetworkforautomaticshortanswergrading/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..6d7d3f52d575a8463d64d558683a21d68f117ced
--- /dev/null
+++ b/asemanticfeaturewisetransformationrelationnetworkforautomaticshortanswergrading/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:660be67b47644971d277379aa3da8fb61d058b3b540d304120b421b5ba5ff7a5
+size 442485
diff --git a/asemanticfeaturewisetransformationrelationnetworkforautomaticshortanswergrading/layout.json b/asemanticfeaturewisetransformationrelationnetworkforautomaticshortanswergrading/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..14e538ba87127fecb6f2e5f9b0a773e6ce166b9c
--- /dev/null
+++ b/asemanticfeaturewisetransformationrelationnetworkforautomaticshortanswergrading/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5e96a6e48ac086ce3eaea39e86c14bae01c9cce8b12c59b9a9851ce3e1b330d8
+size 335894
diff --git a/asemanticfilterbasedonrelationsforknowledgegraphcompletion/d5a447af-3326-40de-a255-93cdefef3f81_content_list.json b/asemanticfilterbasedonrelationsforknowledgegraphcompletion/d5a447af-3326-40de-a255-93cdefef3f81_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..8a2d6b5e40bc7003ea7c82536aaabedc22b822f7
--- /dev/null
+++ b/asemanticfilterbasedonrelationsforknowledgegraphcompletion/d5a447af-3326-40de-a255-93cdefef3f81_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9bd694abb354b8919593f9fec18aa55f6e6c2f2eafe0b5e48e71bd8912279fb7
+size 71826
diff --git a/asemanticfilterbasedonrelationsforknowledgegraphcompletion/d5a447af-3326-40de-a255-93cdefef3f81_model.json b/asemanticfilterbasedonrelationsforknowledgegraphcompletion/d5a447af-3326-40de-a255-93cdefef3f81_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..0257277974e694d182a6206f8a815b432d6a8ce9
--- /dev/null
+++ b/asemanticfilterbasedonrelationsforknowledgegraphcompletion/d5a447af-3326-40de-a255-93cdefef3f81_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4647d8bd2cff5e6db0f35eb00b46df79473e3d84b686978ddcb8b5829685bfad
+size 83170
diff --git a/asemanticfilterbasedonrelationsforknowledgegraphcompletion/d5a447af-3326-40de-a255-93cdefef3f81_origin.pdf b/asemanticfilterbasedonrelationsforknowledgegraphcompletion/d5a447af-3326-40de-a255-93cdefef3f81_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..131bdcf3e6a720646a285bacfe6ed559f62f43fe
--- /dev/null
+++ b/asemanticfilterbasedonrelationsforknowledgegraphcompletion/d5a447af-3326-40de-a255-93cdefef3f81_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bef72b74e17e7a4212de47d46a5b4eea2681159e3845c5ae348e4e1a7be3a9f7
+size 736193
diff --git a/asemanticfilterbasedonrelationsforknowledgegraphcompletion/full.md b/asemanticfilterbasedonrelationsforknowledgegraphcompletion/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..c6e591ac898f19f619132db42e36b6bb53222881
--- /dev/null
+++ b/asemanticfilterbasedonrelationsforknowledgegraphcompletion/full.md
@@ -0,0 +1,354 @@
+# A Semantic Filter Based on Relations for Knowledge Graph Completion
+
+Zongwei Liang, Junan Yang, Hui Liu, Keju Huang
+National University of Defense Technology, Hefei, China
+zwliang17@nudt.edu.cn, yangjunan@ustc.edu.cn,
+{christ592604, huangkeju}@163.com
+
+# Abstract
+
+Knowledge graph embedding, which represents the entities and relations of a knowledge graph with high-dimensional vectors, has made significant progress in link prediction. In recent years, researchers have mainly explored the representational capabilities of models, that is, better representational models that fit relation patterns such as symmetry/antisymmetry and composition. Current embedding models tend to use an identical vector for the same entity in various triples when measuring plausibility. However, measuring the rationality of a specific triple means comparing the matching degree of the specific attributes associated with the relation. Inspired by this fact, this paper designs a Semantic Filter Based on Relations (SFBR) to extract the required attributes of the entities; the rationality of triples is then compared under these extracted attributes through traditional embedding models. The semantic filter module can be added to most geometric and tensor decomposition models with minimal additional memory. Experiments on the benchmark datasets show that the relation-based semantic filter suppresses the impact of irrelevant attribute dimensions and improves link prediction performance. The tensor decomposition models with SFBR achieve state-of-the-art results.
+
+# 1 Introduction
+
+Knowledge Graphs (KGs) are collections of large-scale triples, such as Freebase(Bordes et al., 2013), YAGO (Suchanek et al., 2008) and DBpedia(Auer et al., 2007). KGs play a crucial role in applications such as question answering services, search engines, and medical care. Although there are billions of triples in KGs, they are still incomplete. These incomplete knowledge bases will bring limitations to practical applications. Therefore, knowledge graph completion, known as link prediction, which automatically predicts missing links between entities based on given links, has recently attracted growing attention.
+
+Inspired by word embedding (Mikolov et al., 2013), researchers try to solve the link prediction through knowledge graph embedding. Knowledge graph embedding models map entities and relations into low-dimensional vectors (or matrices, tensors), measure the rationality of triples through specific functions between entities and relations, and rank the triples with function scores. Since TransE(Bordes et al., 2013) proposes to use relation vectors to represent the geometric distance between entities, many variants emerge. For example, TransH(Wang et al., 2014) first explores the different representations of entities under different relations. TransR(Lin et al., 2015) attempts to map entities to the relational space through a particular matrix. TransD(Ji et al., 2015) tries to incorporate the different representations of the entities under the entity and relation into the calculation. These variants attempt to perform complex transformations based on relations or triples to achieve different representations of entities in different semantic spaces.
+
+Recently, scholars have been more inclined to solve link prediction by designing models with more powerful representations, such as ComplEx(Trouillon et al., 2016), TuckER(Balazevic et al., 2019), RotatE(Sun et al., 2019), a method based on vector-space rotation, and HAKE(Zhang et al., 2020a). Contrary to the actual semantic description, models in recent research apply an identical representation for the same entity in different triples.
+
+Since the invention of TransE(Bordes et al., 2013), early scholars, who realized that we should compare different attributes of entities in different triples, tried to improve the model in this direction. However, most recent studies only focus on investigating more robust representations of entities, as in AutoETER(Niu et al., 2020) and RotatE(Sun et al., 2019). Surprisingly, the attempt to find various representations of entities in different semantic spaces has gradually been discarded.
+
+
+Figure 1: Comparison of boxes with the same shape and different colors.
+
+In practice, entities are collections of attributes, and each entity can contain various semantic attributes. Figure 1 shows the comparison of boxes with the same shape and different colors. When comparing different attributes such as color or shape, entities should have different expressions rather than one identical representation. This paper believes that each relation describes the links between the head and tail entities in particular attributes. Measuring the plausibility of a given triple means comparing the matching degree, between the entities, of the attributes associated with the relation. Therefore, this paper proposes a semantic filter module to select different attributes of entities in different triples.
+
+This paper designs a semantic filter based on relations. By employing the semantic filter, only the semantics associated with the relations are extracted, and the information of other unneeded dimensions is suppressed. As a result, the head and tail entities are compared under a limited semantic space.
+
+We take the MLP-based semantic filter as the point of departure. Following a diagonalization-based regularization strategy, this paper designs two SFBR variants: Linear-2 and Diag. Note that the MLP-based SFBR is a general model that can be transformed into most geometric and tensor decomposition models through special regularizations. We analyze several models in Appendix A to show the generality of the MLP-based SFBR.
+
+Overall, this paper proposes the Semantic Filter Based on Relations (SFBR), which can be added to geometric and tensor decomposition models. SFBR suppresses the interference of useless dimensions and improves reasoning performance while occupying minimal additional resources. Experiments on the benchmark datasets show that the tensor decomposition models with SFBR achieve state-of-the-art results.
+
+# 2 Related work
+
+In this section, we describe related works and the critical differences between them. We divide knowledge graph embedding models into three leading families(Akrami et al., 2020), including Tensor Decomposition Models, Geometric Models, and Deep Learning Models.
+
+Tensor Decomposition Models. These models implicitly treat triples as a tensor decomposition. DistMult (Yang et al., 2015) constrains all relation embeddings to be diagonal matrices, which reduces the parameter space and makes the model easier to train. RESCAL(Nickel et al., 2011) represents each relation with a full-rank matrix. ComplEx(Trouillon et al., 2016) extends the KG embeddings to the complex space to better model asymmetric and inverse relations. Analogy(Liu et al., 2017) employs the general bilinear scoring function but adds two main constraints inspired by analogical structures. Based on the Tucker decomposition, TuckER(Balazevic et al., 2019) factorizes a tensor into a set of vectors and a smaller shared core matrix.
+
+Geometric Models. Geometric Models interpret relations as geometric transformations in the latent space. TransE(Bordes et al., 2013) is the first translation-based method, which treats relations as translation operations from the head entities to the tail entities. Along with TransE(Bordes et al., 2013), multiple variants, including TransH(Wang et al., 2014), TransR(Lin et al., 2015) and TransD(Ji et al., 2015), are proposed to improve the embedding performance of KGs. Recently, RotatE(Sun et al., 2019) defines each relation as a rotation from head entities to tail entities.
+
+Deep Learning Models. Deep Learning Models use deep neural networks to perform knowledge graph completion. ConvE(Dettmers et al., 2018) and ConvKB(Nguyen et al., 2018) employ convolutional neural networks to define score functions. CapsE(Nguyen et al., 2019) embeds entities and relations into one-dimensional vectors under the basic assumption that different embeddings encode homologous aspects in the same positions. CompGCN(Vashishth et al., 2020) utilizes graph convolutional networks to update the knowledge graph embedding.
+
+There are also other models, such as DURA(Zhang et al., 2020b), which is proposed to address overfitting. Together, most of the above studies intend to find a more robust representation approach. Yet measuring the effectiveness of certain triples means comparing the matching degree of specific attributes based on relations. Only a few models, such as TransH(Wang et al., 2014), TransR(Lin et al., 2015), and TransD(Ji et al., 2015), consider that entities in different triples should have different representations. However, these variants require substantial resources and are limited to particular models.
+
+# 3 Background
+
+In this section, we introduce KG embedding and KG completion tasks. Next, we briefly introduce several models involved in this paper.
+
+KG Completion. Knowledge graphs are collections of factual triples $K = \{(h, r, t), h, t \in \mathcal{E}, r \in \mathcal{R}\}$ , where $(h, r, t)$ represents a triple in the knowledge graph, and $h, t, r$ are the head entity, tail entity and relation, respectively. Knowledge graph embedding associates the entities $h, t$ and relation $r$ with vectors $\mathbf{h}, \mathbf{t}, \mathbf{r}$ . We then design an appropriate scoring function $d_r(\mathbf{h}, \mathbf{t}) : \mathcal{E} \times \mathcal{R} \times \mathcal{E} \to \mathbf{R}$ that maps the embedding of a triple to a score. For a particular question $(h, r, ?)$ , the task of KG completion is to rank all possible answers and obtain a preference ordering over predictions.
+
+Geometric Models. The models treat the relations as the transformation of entities in latent spaces. TransE (Bordes et al., 2013) is the first model that uses vectors to represent entities and relations. TransE supposes that entities and relations satisfy $\mathbf{h} + \mathbf{r} = \mathbf{t}$ where $\mathbf{h},\mathbf{r},\mathbf{t}\in \mathbf{R}^n$ . The scoring function can be expressed as:
+
+$$
+d _ {r} (\mathbf {h}, \mathbf {t}) = - \| \mathbf {h} + \mathbf {r} - \mathbf {t} \| \tag {1}
+$$
+
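As a concrete illustration, the TransE score in Eq. (1) can be sketched in a few lines of NumPy (an illustrative sketch with random toy embeddings, not the authors' implementation); scoring a batch of candidate tails is a single broadcasted norm:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8  # embedding dimension (illustrative)

h = rng.normal(size=n)           # head entity embedding
r = rng.normal(size=n)           # relation embedding
tails = rng.normal(size=(5, n))  # candidate tail embeddings

# Eq. (1): d_r(h, t) = -||h + r - t||; larger scores mean more plausible triples.
scores = -np.linalg.norm(h + r - tails, axis=1)

# Rank the candidates from most to least plausible.
ranking = np.argsort(-scores)
```

Link prediction then amounts to reading off the position of the true tail in `ranking`.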
+RotatE(Sun et al., 2019) defines the relation as a rotation from head entities to tail entities in complex spaces. Given a triple $\{\mathbf{h},\mathbf{t},\mathbf{r}\}$ , we expect that $\mathbf{t} = \mathbf{h}\circ \mathbf{r}$ , where $\mathbf{h},\mathbf{r},\mathbf{t}\in \mathbf{C}^{k}$ are the embeddings, the modulus for each dimension of relations satisfy $|\mathbf{r}_i| = 1$ and $\circ$ denotes the Hadamard product. The score function is:
+
+$$
+d _ {r} (\mathbf {h}, \mathbf {t}) = - \| \mathbf {h} \circ \mathbf {r} - \mathbf {t} \| _ {2} \tag {2}
+$$
+
+Where $\mathbf{h},\mathbf{r},\mathbf{t}\in \mathbf{C}^k,|\mathbf{r}_i| = 1$
+
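The rotation interpretation can be made concrete with complex arrays (a minimal sketch with made-up dimensions, not the released RotatE code):

```python
import numpy as np

rng = np.random.default_rng(1)
k = 6  # complex embedding dimension (illustrative)

h = rng.normal(size=k) + 1j * rng.normal(size=k)
t = rng.normal(size=k) + 1j * rng.normal(size=k)

# A relation is a unit-modulus complex vector: one rotation angle per dimension.
theta = rng.uniform(0.0, 2.0 * np.pi, size=k)
r = np.exp(1j * theta)  # |r_i| = 1 for every i

# Eq. (2): d_r(h, t) = -||h o r - t||_2, with o the Hadamard product.
score = -np.linalg.norm(h * r - t)

# A tail obtained by exactly rotating the head achieves the maximum score of 0.
perfect = -np.linalg.norm(h * r - h * r)
```

Because each relation only rotates (never rescales), composing two relations is again a rotation, which is what lets RotatE model composition patterns.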
+Tensor Factorization Models. Models in this family interpret link prediction as a task of tensor decomposition, where triples are decomposed into a combination (e.g., a multi-linear product) of low-dimensional vectors for entities and relations. CP(Lacroix et al., 2018) represents triples with canonical decomposition. Note that the same entity has different representations at the head and tail of a triple. The score function can be expressed as:
+
+$$
+d _ {r} (\mathbf {h}, \mathbf {t}) = \left\| \mathbf {h} ^ {\mathbf {T}} \mathbf {r} \mathbf {t} \right\| \tag {3}
+$$
+
+where $\mathbf{h},\mathbf{r},\mathbf{t}\in \mathbf{R}^k$ . RESCAL(Nickel et al., 2011) represents a relation as a matrix $\mathbf{M}_{\mathbf{r}}\in \mathbf{R}^{d\times d}$ that describes the interactions between latent representations of entities. The score function is defined as:
+
+$$
+d _ {r} (\mathbf {h}, \mathbf {t}) = \left\| \mathbf {h} ^ {\mathrm {T}} \mathbf {M} _ {\mathbf {r}} \mathbf {t} \right\| \tag {4}
+$$
+
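A minimal sketch of the RESCAL bilinear form of Eq. (4) (random toy numbers, not trained embeddings):

```python
import numpy as np

rng = np.random.default_rng(2)
d = 5  # embedding dimension (illustrative)

h = rng.normal(size=d)
t = rng.normal(size=d)
M_r = rng.normal(size=(d, d))  # full-rank relation matrix

# Bilinear score h^T M_r t: every pair of latent dimensions interacts.
score = h @ M_r @ t

# The same score written as an explicit sum over all interaction terms.
score_check = sum(h[i] * M_r[i, j] * t[j] for i in range(d) for j in range(d))
```

Constraining `M_r` to be diagonal recovers DistMult; moving to complex embeddings with a diagonal relation matrix yields ComplEx, introduced next.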
+ComplEx(Trouillon et al., 2016) extends the real space to complex spaces and constrains the embeddings for relation to be diagonal matrices. The bilinear product becomes a Hermitian product in complex spaces. The score function can be expressed as:
+
+$$
+d _ {r} (\mathbf {h}, \mathbf {t}) = \operatorname {R e} \left(\mathbf {h} ^ {\mathbf {T}} \operatorname {d i a g} (\mathbf {r}) \mathbf {t}\right) \tag {5}
+$$
+
+where $\mathbf{h},\mathbf{r},\mathbf{t}\in \mathbf{C}^k$
+
+# 4 SFBR model
+
+This section introduces a novel module—A Semantic Filter Based on Relations for knowledge graph completion. We first introduce the basic framework of SFBR in Section 4.1 and the specific filter design in Section 4.2. Finally, we introduce several cases on several models in Section 4.3.
+
+# 4.1 Framework of SFBR
+
+As is shown in the left of Figure 2, the mainstream KG embedding model depends on the unique representation of entities and relations. The rationality of possible triples is compared through the rankings calculated by the score function.
+
+It is widely accepted that an entity may contain various attributes. This paper believes that each relation describes the relationship between entities in specific attributes. In triples with different relations, the attributes being compared should therefore also differ, which requires choosing the needed attributes. For a given triple, this paper extracts the needed attributes through special functions and ranks the triples with scores calculated on the filtered attributes.
+
+
+Figure 2: The framework of traditional embeddings models(left) and the framework of embedding models with SFBR(right).
+
+As shown in the right of Figure 2, based on the traditional embedding method, this paper designs a relation-based function for the entities. This function reinforces the dimensions associated with the relations and suppresses the information of other, unrelated dimensions. The operation is similar to a filter used in signal processing, so the module is named the relation-based semantic filter. The score function can be expressed as:
+
+$$
+d _ {r} (\mathbf {h}, \mathbf {t}) = d _ {r} (\mathbf {h}, \mathbf {t}) \tag {6}
+$$
+
+$$
+\Rightarrow d _ {r} ^ {f} (\mathbf {h}, \mathbf {t}) = d _ {r} \left(f _ {r} ^ {h} (\mathbf {h}), f _ {r} ^ {t} (\mathbf {t})\right) \tag {7}
+$$
+
+Where $d_r(\mathbf{h},\mathbf{t})$ is the traditional scoring function, $d_r^f (\mathbf{h},\mathbf{t})$ is the modified scoring function and $f_{r}(*)$ is the semantic filter.
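The transformation from Eq. (6) to Eq. (7) can be viewed as a generic wrapper around any base scoring function. The sketch below (illustrative names; Diag-style filters as defined later in Eq. (12), shared here between heads and tails for brevity) shows the idea:

```python
import numpy as np

def transe_score(h, r, t):
    """Base scoring function in the style of Eq. (1) (illustrative)."""
    return -np.linalg.norm(h + r - t)

def with_sfbr(score_fn, f_head, f_tail):
    """Eq. (7): score the filtered entities with the original function."""
    def filtered_score(h, r, t):
        return score_fn(f_head(h), r, f_tail(t))
    return filtered_score

rng = np.random.default_rng(6)
h, r, t = rng.normal(size=(3, 8))
w, b = rng.normal(size=(2, 8))

# Relation-specific element-wise filters (one pair per relation in the model).
sfbr_score = with_sfbr(transe_score,
                       f_head=lambda e: e * w + b,
                       f_tail=lambda e: e * w + b)
value = sfbr_score(h, r, t)
```

Because the wrapper leaves `score_fn` untouched, the same construction applies to any geometric or tensor decomposition model.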
+
+# 4.2 Semantic Filter Module
+
+We first try to design the filter based on multilayer perceptron (MLP) for SFBR.
+
+$$
+f _ {r} (\mathbf {h}) = \mathrm {M L P} (\mathbf {h}) = \mathbf {h} \times \mathbf {W} _ {r} + \mathbf {b} \tag {8}
+$$
+
+In order to guarantee that each relation filters out different semantics, each relation uses a separate $f_{r}(*)$ . However, semantic filters based on the MLP bring enormous numbers of parameters, and the matrix multiplication requires many resources. As shown in Figure 3, the paper therefore regularizes the MLP through diagonalization.
+
+Notice that SFBR is introduced as a module into the existing models in this paper. However, we must be clear about the theoretical status of the MLP-based SFBR: it is a general model that can be transformed into most geometric and tensor decomposition models through different regularizations. This paper selects TransE, RotatE, RESCAL, and ComplEx as examples and conducts these regularization analyses in Appendix A.
+
+$$
+\mathbf {W} _ {r} = \left[ \begin{array}{l l} \mathbf {W} _ {1} & \mathbf {W} _ {2} \\ \mathbf {W} _ {3} & \mathbf {W} _ {4} \end{array} \right] \tag {9}
+$$
+
+where $\mathbf{W}_{\mathbf{r}}\in \mathbf{R}^{n\times n},\mathbf{W}_{\mathbf{1}},\mathbf{W}_{\mathbf{2}},\mathbf{W}_{\mathbf{3}},\mathbf{W}_{\mathbf{4}}\in$ $\mathbf{R}^{n / 2\times n / 2}$
+
+$$
+\mathbf {W} _ {r} ^ {\text {L i n e a r} - 2} = \left[ \begin{array}{l l} \operatorname {d i a g} (\mathbf {w} _ {1}) & \operatorname {d i a g} (\mathbf {w} _ {2}) \\ \operatorname {d i a g} (\mathbf {w} _ {3}) & \operatorname {d i a g} (\mathbf {w} _ {4}) \end{array} \right] \tag {10}
+$$
+
+where $\mathbf{W}_{\mathbf{r}}^{\mathrm{Linear - 2}}\in \mathbf{R}^{n\times n},\mathbf{w}_1,\mathbf{w}_2,\mathbf{w}_3,\mathbf{w}_4\in$ $\mathbf{R}^{n / 2}$
+
+The large parameter count of the MLP makes the model hard to train, which motivates regularization. First, we ignore the bias. As shown in Eq. (9) and Eq. (10), we decompose the semantic filter matrix of the MLP into four equal-sized square blocks and diagonalize each block to reduce the parameter count of the relational filter. Since this diagonalization amounts to a linear combination of the two halves of the entity vector, we call this SFBR Linear-2.
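The Linear-2 structure can be verified numerically: the block matrix of four diagonal blocks in Eq. (10) acts on an entity vector exactly as a linear combination of its two halves, so the $n \times n$ product collapses to $2n$ multiplications (illustrative NumPy sketch):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8            # embedding dimension; must be even (illustrative)
half = n // 2

w1, w2, w3, w4 = rng.normal(size=(4, half))

# Eq. (10): assemble the Linear-2 matrix from four diagonal blocks.
W = np.zeros((n, n))
W[:half, :half] = np.diag(w1)
W[:half, half:] = np.diag(w2)
W[half:, :half] = np.diag(w3)
W[half:, half:] = np.diag(w4)

e = rng.normal(size=n)          # an entity embedding
e1, e2 = e[:half], e[half:]

full = e @ W                    # the paper's h x W_r (row vector times matrix)

# The same result as a linear combination of the two halves of e,
# computed with 2n multiplications instead of n^2.
cheap = np.concatenate([e1 * w1 + e2 * w3, e1 * w2 + e2 * w4])
```

In practice only the four length-$n/2$ vectors need to be stored per relation; the full matrix is never materialized.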
+
+$$
+\mathbf {W} _ {r} ^ {\text {D i a g}} = \left[ \begin{array}{c c} \operatorname {d i a g} (\mathbf {w} _ {\mathbf {1}}) & \mathbf {O} \\ \mathbf {O} & \operatorname {d i a g} (\mathbf {w} _ {\mathbf {4}}) \end{array} \right] \tag {11}
+$$
+
+where $\mathbf{W}_{\mathbf{r}}^{\mathrm{Diag}} \in \mathbf{R}^{n \times n}$ and all elements in $\mathbf{O}$ equal zero.
+
+To further reduce the number of parameters, the paper directly diagonalizes the filter matrix, taking a one-dimensional vector as the semantic filter. The paper names this SFBR Diag.
+
+$$
+f _ {r} (\mathbf {h}) = \mathbf {h} \odot \mathbf {w} + \mathbf {b} \tag {12}
+$$
+
+
+Figure 3: The design route of matrix regularization for SFBR.
+
+where $\mathbf{w},\mathbf{b}\in \mathbf{R}^n$ , $\odot$ denotes the Hadamard (element-wise) product, and $\times$ denotes matrix multiplication.
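A sketch of the Diag filter in Eq. (12) with toy values (not trained filters); setting a weight to zero suppresses that attribute dimension entirely:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 8

h = rng.normal(size=n)
w = rng.normal(size=n)  # one filter vector per relation
b = rng.normal(size=n)  # one bias vector per relation

# Eq. (12): a per-dimension rescaling plus bias.
filtered = h * w + b

# A zero weight removes the entity's contribution on that dimension,
# leaving only the relation-specific bias.
w_zeroed = w.copy()
w_zeroed[0] = 0.0
suppressed = h * w_zeroed + b
```

This is the filtering behavior the paper describes: dimensions with small $|w_i|$ are damped, and the relevant dimensions dominate the subsequent score.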
+
+# 4.3 Special Cases with SFBR
+
+This section will introduce the examples of SFBR for different models, including TransE, RotatE, and RESCAL.
+
+The corresponding score function of SFBR based on TransE can be expressed as:
+
+$$
+d _ {r} ^ {f} (\mathbf {h}, \mathbf {t}) = \left\| f _ {r} ^ {h} (\mathbf {h}) + \mathbf {r} - f _ {r} ^ {t} (\mathbf {t}) \right\| \tag {13}
+$$
+
+where $f_{r}(\mathbf{e}) = \mathbf{e}\times \mathbf{W}_{\mathbf{r}} + \mathbf{b}$ with $\mathbf{e},\mathbf{W}_{\mathbf{r}},\mathbf{b}\in \mathbf{R}^{n}$ , and $\mathbf{e}$ represents the entity vectors $\mathbf{h},\mathbf{t}$ .
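Putting Eq. (13) together with the Diag filter of Eq. (12), a head and a batch of candidate tails can be scored as follows (hypothetical variable names; separate per-relation filters for heads and tails, as in the paper):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 8

h = rng.normal(size=n)
r = rng.normal(size=n)
tails = rng.normal(size=(4, n))  # candidate tail embeddings

# Relation-specific Diag filters for heads and tails.
w_h, b_h = rng.normal(size=n), rng.normal(size=n)
w_t, b_t = rng.normal(size=n), rng.normal(size=n)

f_head = lambda e: e * w_h + b_h
f_tail = lambda e: e * w_t + b_t

# Eq. (13): TransE distance between the filtered head and the filtered tails
# (a smaller distance means a more plausible triple).
dists = np.linalg.norm(f_head(h) + r - f_tail(tails), axis=1)
best = int(np.argmin(dists))
```

The base TransE scoring is unchanged; only the entity vectors entering it are filtered.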
+
+The corresponding score function of SFBR based on RotatE can be expressed as:
+
+$$
+d _ {r} ^ {f} (\mathbf {h}, \mathbf {t}) = \left\| f _ {r} ^ {h} (\mathbf {h}) \circ \mathbf {r} - f _ {r} ^ {t} (\mathbf {t}) \right\| \tag {14}
+$$
+
+where $f_{r}(\mathbf{e}) = \mathbf{e}\times \mathbf{W}_{\mathbf{r}} + \mathbf{b}$ with $\mathbf{e},\mathbf{W}_{\mathbf{r}},\mathbf{b}\in \mathbf{C}^{n}$ , and $\mathbf{e}$ represents the entity vectors $\mathbf{h},\mathbf{t}$ .
+
+The corresponding score function of SFBR based on RESCAL can be expressed as:
+
+$$
+d _ {r} ^ {f} (\mathbf {h}, \mathbf {t}) = \left\| f _ {r} ^ {h} (\mathbf {h}) ^ {\mathbf {T}} \mathbf {M} _ {r} f _ {r} ^ {t} (\mathbf {t}) \right\| \tag {15}
+$$
+
+where $f_r^h(\mathbf{h}) = \mathbf{h} \times \mathbf{W}_{\mathbf{r}} + \mathbf{b}$ , $f_r^t(\mathbf{t}) = \mathbf{t} + \mathbf{b}$ and $\mathbf{h}, \mathbf{t}, \mathbf{W}_{\mathbf{r}}, \mathbf{b} \in \mathbf{R}^n$ .
+
+Notice that using $f_r^t(\mathbf{t}) = \mathbf{t} \times \mathbf{W}_{\mathbf{r}} + \mathbf{b}$ for tails is more in line with our design. However, prediction requires ranking the scores of all entities, and there are hundreds of thousands of them; applying the filter to every candidate tail would take up enormous resources. The paper therefore simplifies SFBR for tails. This simplification effectively reduces resource occupation; although some performance is sacrificed, there is still a clear improvement over the base models.
+
+# 5 Experiment
+
+This section is organized as follows. First, we introduce the experimental settings in Section 5.1. Then, we show the effectiveness of SFBR on three benchmark datasets in Section 5.2. Finally, we visualize and analyze the embeddings generated by SFBR in Section 5.3.
+
+# 5.1 Experimental Settings
+
+Datasets. In order to evaluate the proposed module, we consider three common knowledge graph datasets: WN18RR (Dettmers et al., 2018), FB15k-237 (Toutanova and Chen, 2015) and YAGO3-10 (Mahdisoltani et al., 2015). Details of these datasets are listed in Table 1.
+
+FB15k-237 is obtained by eliminating the inverse and equal relations in FB15k, making it more difficult for simple models to do well. WN18RR is obtained by excluding the inverse and equal relations in WN18; its main relation patterns are symmetry/antisymmetry and composition. YAGO3-10 is a subset of YAGO3, produced to alleviate the test set leakage problem.
+
+Evaluation Settings. We use evaluation metrics standard across the link prediction literature: mean reciprocal rank (MRR) and Hits@k, $k = 1,3,10$ . Mean reciprocal rank is the average, over all test triples, of the inverse of the rank assigned to the true triple among all candidate triples. Hits@k measures the percentage of times a true triple is ranked within the top k candidates. We evaluate link prediction in the filtered setting (Bordes et al., 2013), i.e., all known true triples are removed from the candidate set except for the current test triple. Higher MRR and Hits@k indicate better performance.
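These metrics are easy to compute once the (filtered) rank of each true triple is known; a small helper with toy ranks might look like:

```python
import numpy as np

def mrr_and_hits(ranks, ks=(1, 3, 10)):
    """Compute MRR and Hits@k from the 1-based filtered rank of each true triple."""
    ranks = np.asarray(ranks, dtype=float)
    metrics = {"MRR": float(np.mean(1.0 / ranks))}
    for k in ks:
        metrics[f"Hits@{k}"] = float(np.mean(ranks <= k))
    return metrics

# Toy example: filtered ranks of five test triples.
# MRR = (1 + 1/2 + 1/5 + 1/10 + 1/100) / 5 = 0.362
result = mrr_and_hits([1, 2, 5, 10, 100])
```

The filtered setting only changes how `ranks` is produced (known true triples are dropped from each candidate list before ranking); the metric formulas stay the same.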
+
+Baselines and Training Protocol. In this section, we compare the performance of SFBR against two categories of KGC models: (1) geometric models, including TransE(Bordes et al., 2013), RotatE(Sun et al., 2019), TuckER(Balazevic et al., 2019), AutoETER(Niu et al., 2020) and HAKE(Zhang et al., 2020a); (2) tensor decomposition models, including CP(Lacroix et al., 2018), RESCAL(Nickel et al., 2011), ComplEx(Trouillon et al., 2016) and DURA(Zhang et al., 2020b).
+
+| Dataset | #entity | #relation | #training | #validation | #test |
+| --- | --- | --- | --- | --- | --- |
+| WN18RR | 40943 | 11 | 141442 | 5000 | 5000 |
+| FB15K-237 | 14505 | 237 | 272115 | 17535 | 20466 |
+| YAGO3-10 | 123182 | 37 | 1079040 | 5000 | 5000 |
+
+Table 1: The number of entities, relations and observed triples in each split of the three benchmarks.
+
+Because SFBR is a module built on existing models, the hyperparameters of our experiments are consistent with those in the original papers, and no additional hyperparameters are introduced. TransE-SFBR and RotatE-SFBR use the hyperparameters of RotatE (Sun et al., 2019). CP-SFBR, RESCAL-SFBR and ComplEx-SFBR use the same parameters as DURA (Zhang et al., 2020b). For tensor decomposition models, we find that the output is sensitive to initialization, so SFBR is trained starting from the base model's initial results.
+
+# 5.2 Main Results
+
+In this section, we compare the results of SFBR and other state-of-the-art models on three benchmark datasets.
+
+Table 2 shows the comparison between two SFBRs and the geometric models. Compared with TransE, TransE-SFBR achieves significant improvements: on WN18RR, Hit@10 increases by $3.8\%$ ; on FB15k-237, Hit@10 increases by $7\%$ . Compared with RotatE, RotatE-SFBR also makes clear progress: on WN18RR, Hit@10 increases by $2.2\%$ ; on FB15k-237, by $2\%$ .
+
+The matrix multiplication performed by the MLP-based SFBR requires a large amount of GPU memory. Limited by GPU resources, we only experiment on WN18RR, and the entity embedding dimension in TransE-SFBR (MLP) is only 100, 1/5 of the original. Therefore, the results of the MLP-based SFBR are not directly comparable with the other two SFBRs. Through comparative experiments on the two datasets, we find that the Linear-2-based SFBR performs slightly better than the Diag-based SFBR. Nevertheless, the extra parameters and resource occupancy of Linear-2 are twice those of Diag. For better resource utilization, we therefore use the Diag-based SFBR by default in the subsequent experiments.
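To make the Diag choice concrete, the following sketch applies a diagonal (elementwise) relation-specific filter to TransE scoring. The filter vectors `s_h` and `s_t` are hypothetical stand-ins for the learned per-relation parameters, and the general form follows Eq. (16) in the appendix with diagonal matrices:

```python
import numpy as np

def transe_distance(h, r, t, p=1):
    # plain TransE: ||h + r - t||_p, smaller means more plausible
    return np.linalg.norm(np.asarray(h) + r - t, ord=p)

def transe_sfbr_distance(h, r, t, s_h, s_t, p=1):
    # Diag-based SFBR: relation-specific elementwise scaling filters the
    # entity embeddings before the usual TransE comparison
    return np.linalg.norm(np.asarray(h) * s_h + r - np.asarray(t) * s_t, ord=p)
```

With a filter that zeroes out an irrelevant dimension, a triple that plain TransE misranks can become predictable, mirroring the case study in Section 5.3.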
+
+Table 3 shows the comparison between SFBR and the models based on tensor decomposition. SFBR improves the performance of the base model on almost all datasets. On WN18RR, RESCAL-SFBR obtains the best result (the best Hit@10 is achieved by ComplEx-SFBR). On FB15k-237, ComplEx-SFBR obtains the best result, with MRR increased by 0.028 over ComplEx. On YAGO3-10, although CP-SFBR and RESCAL-SFBR improve over their base models, they do not exceed ComplEx-DURA.
+
+Overall, experiments on the standard benchmarks show that SFBR improves the link prediction performance of the base models.
+
+# 5.3 Visualization and Analysis
+
+In this part, we analyze the performance of SFBR from three aspects: first, we visualize the embeddings through t-SNE; second, we select pairs of samples to analyze the function of SFBR; third, we report the additional resources occupied by SFBR.
+
+Visualization. We use t-SNE to visualize tail entity embeddings. Suppose the link prediction query is $(h,r,?)$ , where $h$ and $r$ are the head entity and the relation, respectively. We randomly select ten queries in FB15k-237 that each have more than 50 answers. Then, we use t-SNE to visualize the embeddings generated by RotatE and RotatE-SFBR. For each query, t-SNE converts the answers into 2-dimensional points, displayed in the same color. Figures 4 and 5 visualize the distribution of answers to the 10 queries. SFBR makes the answers to the same query more similar, indicating that SFBR effectively extracts the needed semantics of each entity and suppresses the attributes of other dimensions, which verifies the claim in Section 4.1.
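The visualization step can be reproduced with off-the-shelf tools. A minimal sketch using scikit-learn's t-SNE; the embedding matrix and per-query labels are assumed to come from the trained model:

```python
import numpy as np
from sklearn.manifold import TSNE

def project_answers_2d(embeddings, perplexity=30.0):
    """Reduce high-dimensional tail-entity embeddings to 2-D points for plotting.

    embeddings: (n, d) array of answer embeddings; when plotted, points are
    colored by which query they answer.
    """
    tsne = TSNE(n_components=2, init="pca", perplexity=perplexity, random_state=0)
    return tsne.fit_transform(np.asarray(embeddings))
```

Scattering the returned points colored by query id reproduces the style of Figures 4 and 5.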
+
+Case study. Two pairs of triples are randomly selected from the test set for analysis. Each pair of triples has the same query: $(h, r, ?)$ . For each query, a correct answer and an incorrect answer are randomly selected. The first pair of triples,
+
+| Model | WN18RR MRR | WN18RR Hit@1 | WN18RR Hit@10 | FB15K-237 MRR | FB15K-237 Hit@1 | FB15K-237 Hit@10 |
+| --- | --- | --- | --- | --- | --- | --- |
+| TransE* | .223 | - | .510 | .298 | - | .475 |
+| RotatE* | .476 | .428 | .571 | .338 | .241 | .533 |
+| AutoETER | - | - | - | .344 | .250 | .538 |
+| HAKE | .497 | .452 | .582 | .346 | .250 | .542 |
+| TransE-SFBR (MLP) | .184 | .006 | .388 | - | - | - |
+| TransE-SFBR (Linear-2) | .263 | .110 | .495 | .354 | .258 | .545 |
+| TransE-SFBR (Diag) | .242 | .028 | .548 | .338 | .240 | .538 |
+| RotatE-SFBR (Linear-2) | .490 | .447 | .576 | .355 | .258 | .553 |
+| RotatE-SFBR (Diag) | .489 | .437 | .593 | .351 | .254 | .549 |
+
+Table 2: Evaluation results of geometric models on WN18RR and FB15K-237.
+
+| Model | WN18RR MRR | WN18RR Hit@1 | WN18RR Hit@10 | FB15K-237 MRR | FB15K-237 Hit@1 | FB15K-237 Hit@10 | YAGO3-10 MRR | YAGO3-10 Hit@1 | YAGO3-10 Hit@10 |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| CP | .438 | .414 | .485 | .333 | .247 | .508 | .567 | .494 | .698 |
+| RESCAL | .455 | .419 | .493 | .353 | .264 | .528 | .566 | .490 | .701 |
+| ComplEx | .460 | .428 | .522 | .346 | .256 | .525 | .573 | .500 | .703 |
+| CP-DURA | .478 | .441 | .552 | .367 | .272 | .555 | .579 | .506 | .709 |
+| RESCAL-DURA | .498 | .455 | .577 | .368 | .276 | .550 | .579 | .505 | .712 |
+| ComplEx-DURA | .491 | .449 | .571 | .371 | .276 | .560 | .584 | .511 | .713 |
+| CP-SFBR | .485 | .447 | .561 | .370 | .274 | .563 | .582 | .510 | .711 |
+| RESCAL-SFBR | .500 | .458 | .581 | .369 | .276 | .555 | .581 | .509 | .712 |
+| ComplEx-SFBR | .498 | .454 | .584 | .374 | .277 | .567 | .584 | .512 | .712 |
+
+Table 3: Evaluation results of tensor decomposition models on WN18RR, FB15K-237 and YAGO3-10.
+
+| Model | WN18RR original | WN18RR SFBR | FB15K-237 original | FB15K-237 SFBR | YAGO3-10 original | YAGO3-10 SFBR |
+| --- | --- | --- | --- | --- | --- | --- |
+| TransE | 20.48M | 20.49M | 14.78M | 15.25M | - | - |
+| RotatE | 40.95M | 40.96M | 29.32M | 29.79M | 123.20M | 123.24M |
+| CP | 163.82M | 163.90M | 59.11M | 61.01M | 246.45M | 246.60M |
+| RESCAL | 11.92M | 11.93M | 131.70M | 132.19M | 246.52M | 246.82M |
+| ComplEx | 163.86M | 164.04M | 60.06M | 60.85M | 82.47M | 82.55M |
+
+Table 4: Comparison of parameter size between SFBR and the base models on different datasets.
+
+
+Figure 4: Visualization of tail entities in RotatE using t-SNE. A point represents a tail entity; points in the same color represent tail entities that share the same context $(h_i, r_j)$ .
+
+
+Figure 5: Visualization of tail entities in RotatE-SFBR using t-SNE.
+
+which cannot be predicted by TransE, can be distinguished by TransE-SFBR; for the other pair, both models can effectively predict them. We plot the distance $\| \mathbf{h} + \mathbf{r} - \mathbf{t}\| _1$ for each triple, where $\mathbf{h},\mathbf{r},\mathbf{t}$ are the embeddings of the entities and the relation. Figures 6 and 7 show the distances; the blue bar is the deviation of the correct triple and the red bar is that of the incorrect triple. The top of each figure shows the distance under TransE, and the bottom under TransE-SFBR. From Figure 6, we find that for the tail entity that TransE cannot predict, TransE-SFBR suppresses the influence of irrelevant dimensions, and the tail entity becomes predictable. For the tail entities that TransE can predict in Figure 7, SFBR further suppresses the noise of other dimensions, and the distance between the correct and wrong tails is further enlarged, which strengthens the model.
+
+Resource occupation. Table 4 compares the parameters of SFBR with those of the base models on the three datasets. The comparison shows that SFBR increases the parameters by only $0.01 \sim 0.5\mathrm{M}$ for the models based on geometric
+
+Figure 6: Distance for a pair of triples that cannot be predicted by TransE (upper) but can be predicted by TransE-SFBR (lower).
+
+Figure 7: Distance for a pair of triples that can be predicted by both TransE (upper) and TransE-SFBR (lower).
+
+distance; SFBR increases the parameters by only $0.01 \sim 1.9\mathrm{M}$ for the models based on tensor decomposition. Especially for the geometric models, a small growth in parameters yields a significant performance improvement. In all cases, SFBR brings minimal growth in resource occupation to the base model.
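The overhead in Table 4 is consistent with a simple count: a Diag-based filter adds one $d$-dimensional vector per relation and per entity role. A small sanity check, where the embedding dimensions are our own assumptions rather than values stated in this section:

```python
def sfbr_extra_params(num_relations, dim, roles=2):
    """Parameters added by a Diag-based SFBR.

    One diagonal filter vector of length `dim` per relation; `roles=2`
    assumes separate filters for head and tail entities.
    """
    return num_relations * dim * roles
```

For example, 237 relations with an assumed dimension of 1000 give 0.474M extra parameters, on the order of the +0.47M observed for RotatE on FB15K-237 in Table 4.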
+
+# 6 Conclusion
+
+This paper designs a relation-based semantic filter, SFBR, for geometric and tensor decomposition models for knowledge graph completion. SFBR is based on the observation that judging the plausibility of a particular triple amounts to comparing specific attributes of the entities while ignoring unrelated dimensions. Therefore, this paper proposes a relation-based semantic filter that extracts the attributes to be compared and suppresses the irrelevant attributes of entities.
+
+Experiments show that SFBR can effectively improve the performance of the traditional models, especially the geometric models. The visualization shows that SFBR can effectively extract the relevant dimensions and distinguish the comparisons among different attributes. Compared with the base models, SFBR only has a slight growth in resource occupation.
+
+# Acknowledgements
+
+This work was partially supported by the Anhui Provincial Natural Science Foundation (NO.1908085MF202) and Independent Scientific Research Program of National University of Defense Science and Technology (NO.ZK18-03-14).
+
+# References
+
+F. Akrami, Mohammed Samiul Saeef, Qingheng Zhang, Wei Hu, and C. Li. 2020. Realistic re-evaluation of knowledge graph completion methods: An experimental study. Proceedings of the 2020 ACM SIGMOD International Conference on Management of Data.
+Sören Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary Ives. 2007. Dbpedia: A nucleus for a web of open data. In The semantic web, pages 722-735. Springer.
+Ivana Balazevic, Carl Allen, and Timothy M. Hospedales. 2019. TuckER: Tensor factorization for knowledge graph completion. ArXiv, abs/1901.09590.
+Antoine Bordes, Nicolas Usunier, Alberto Garcia-Durán, J. Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In NIPS.
+Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and S. Riedel. 2018. Convolutional 2d knowledge graph embeddings. In AAAI.
+Guoliang Ji, Shizhu He, L. Xu, Kang Liu, and Jun Zhao. 2015. Knowledge graph embedding via dynamic mapping matrix. In ACL.
+Timothee Lacroix, Nicolas Usunier, and G. Obozinski. 2018. Canonical tensor decomposition for knowledge base completion. In ICML.
+Yankai Lin, Zhiyuan Liu, M. Sun, Yang Liu, and Xuan Zhu. 2015. Learning entity and relation embeddings for knowledge graph completion. In AAAI.
+Hanxiao Liu, Yuexin Wu, and Yiming Yang. 2017. Analogical inference for multi-relational embeddings. In ICML.
+
+F. Mahdisoltani, J. Biega, and Fabian M. Suchanek. 2015. Yago3: A knowledge base from multilingual wikipedias. In CIDR.
+Tomas Mikolov, Ilya Sutskever, Kai Chen, G. Corrado, and J. Dean. 2013. Distributed representations of words and phrases and their compositionality. In NIPS.
+Dai Quoc Nguyen, T. Nguyen, Dat Quoc Nguyen, and Dinh Q. Phung. 2018. A novel embedding model for knowledge base completion based on convolutional neural network. ArXiv, abs/1712.02121.
+Dai Quoc Nguyen, Thanh Vu, T. Nguyen, Dat Quoc Nguyen, and Dinh Q. Phung. 2019. A capsule network-based embedding model for knowledge graph completion and search personalization. ArXiv, abs/1808.04122.
+Maximilian Nickel, Volker Tresp, and H. Kriegel. 2011. A three-way model for collective learning on multi-relational data. In ICML.
+Guanglin Niu, Bo Li, Yongfei Zhang, S. Pu, and Jingyang Li. 2020. Autoeter: Automated entity type representation for knowledge graph embedding. ArXiv, abs/2009.12030.
+Fabian M Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2008. Yago: A large ontology from wikipedia and wordnet. Journal of Web Semantics, 6(3):203-217.
+Zhiqing Sun, Zhihong Deng, Jian-Yun Nie, and Jian Tang. 2019. Rotate: Knowledge graph embedding by relational rotation in complex space. ArXiv, abs/1902.10197.
+Kristina Toutanova and Danqi Chen. 2015. Observed versus latent features for knowledge base and text inference.
+Théo Trouillon, Johannes Welbl, S. Riedel, Éric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. In ICML.
+Shikhar Vashishth, Soumya Sanyal, V. Nitin, and P. Talukdar. 2020. Composition-based multi-relational graph convolutional networks. ArXiv, abs/1911.03082.
+Zhen Wang, J. Zhang, Jianlin Feng, and Z. Chen. 2014. Knowledge graph embedding by translating on hyperplanes. In AAAI.
+B. Yang, Wen-tau Yih, X. He, Jianfeng Gao, and L. Deng. 2015. Embedding entities and relations for learning and inference in knowledge bases. CoRR, abs/1412.6575.
+Zhanqiu Zhang, Jianyu Cai, Yongdong Zhang, and J. Wang. 2020a. Learning hierarchy-aware knowledge graph embeddings for link prediction. In AAAI.
+
+Zhanqiu Zhang, Jianyu Cai, and J. Wang. 2020b. Duality-induced regularizer for tensor factorization based knowledge graph completion. *ArXiv*, abs/2011.05816.
+
+# A Analysis of generality for MLP-based SFBR
+
+With different constraints, the MLP-based SFBR is equivalent to various geometric or tensor decomposition models. We select several representative models for analysis. First, we merge the biases of the MLP:
+
+$$
+\begin{aligned} d_r^f(\mathbf{h}, \mathbf{t}) &= \left\| \mathbf{h}\mathbf{W}_{\mathbf{r}}^{\mathbf{h}} + \mathbf{b}_{\mathbf{r}}^{\mathbf{h}} - \mathbf{t}\mathbf{W}_{\mathbf{r}}^{\mathbf{t}} - \mathbf{b}_{\mathbf{r}}^{\mathbf{t}} \right\|_p \\ &= \left\| \mathbf{h}\mathbf{W}_{\mathbf{r}}^{\mathbf{h}} + \mathbf{r} - \mathbf{t}\mathbf{W}_{\mathbf{r}}^{\mathbf{t}} \right\|_p \end{aligned} \tag{16}
+$$
+
+where $\mathbf{r} = \mathbf{b}_{\mathbf{r}}^{\mathbf{h}} - \mathbf{b}_{\mathbf{r}}^{\mathbf{t}}$ .
+
+As shown in Eq. (17), when the two semantic filter matrices satisfy $\mathbf{W}_{\mathbf{r}}^{\mathbf{h}} = \mathbf{W}_{\mathbf{r}}^{\mathbf{t}} = \mathbf{I}$ , the semantic filter model is equivalent to TransE (Bordes et al., 2013).
+
+$$
+\begin{aligned} d_r^f(\mathbf{h}, \mathbf{t}) &= \| \mathbf{h}\mathbf{I} + \mathbf{r} - \mathbf{t}\mathbf{I} \|_p \\ &= \| \mathbf{h} + \mathbf{r} - \mathbf{t} \|_p \end{aligned} \tag{17}
+$$
+
+When the MLP has no bias, the filter matrix for heads is a special block-diagonal Linear-2 matrix, and the entity dimension is doubled, the SFBR model is equivalent to RotatE (Sun et al., 2019).
+
+$$
+\begin{aligned} d_r^f(\mathbf{h}, \mathbf{t}) &= \left\| \mathbf{h}\mathbf{W}_{\mathbf{r}}^{\mathbf{h}} + \mathbf{r} - \mathbf{t}\mathbf{W}_{\mathbf{r}}^{\mathbf{t}} \right\|_p \\ &= \left\| \mathbf{h}\mathbf{W}_{\mathbf{r}}^{\mathbf{h}} - \mathbf{t} \right\|_p \end{aligned} \tag{18}
+$$
+
+$$
+\mathbf{W}_{\mathbf{r}}^{\mathbf{h}} = \begin{bmatrix} \operatorname{diag}(\cos\theta) & \operatorname{diag}(-\sin\theta) \\ \operatorname{diag}(\sin\theta) & \operatorname{diag}(\cos\theta) \end{bmatrix} \tag{19}
+$$
+
+where $\mathbf{b}_{\mathbf{r}}^{\mathbf{h}} = \mathbf{b}_{\mathbf{r}}^{\mathbf{t}} = \mathbf{0}$ (hence $\mathbf{r} = \mathbf{0}$ ), $\mathbf{W}_{\mathbf{r}}^{\mathbf{t}} = \mathbf{I}$ , $\theta \in \mathbb{R}^{n/2}$ , $\mathbf{h} = [\mathbf{h}_{\mathrm{real}}, \mathbf{h}_{\mathrm{imag}}]$ and $\mathbf{t} = [\mathbf{t}_{\mathrm{real}}, \mathbf{t}_{\mathrm{imag}}]$ .
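As a numerical sanity check (ours, not from the paper), the block matrix of Eq. (19) applied to a row vector $[\mathbf{h}_{\mathrm{real}}, \mathbf{h}_{\mathrm{imag}}]$ acts as an elementwise complex rotation, which is the RotatE operation. The sign of the angle below is fixed by the row-vector convention and is immaterial, since $\theta$ is a free (learned) parameter:

```python
import numpy as np

def rotation_filter(theta):
    # W_r^h from Eq. (19): a 2n x 2n matrix built from diagonal blocks
    C, S = np.diag(np.cos(theta)), np.diag(np.sin(theta))
    return np.block([[C, -S], [S, C]])

rng = np.random.default_rng(0)
n = 4
theta = rng.uniform(0.0, 2.0 * np.pi, n)
h = rng.normal(size=2 * n)                      # [h_real, h_imag]
filtered = h @ rotation_filter(theta)
# the same result as a per-coordinate complex rotation of h
z = (h[:n] + 1j * h[n:]) * np.exp(-1j * theta)
assert np.allclose(filtered, np.concatenate([z.real, z.imag]))
```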
+
+When $p = 2, \mathbf{r} = \mathbf{0}$ and $\mathbf{W}_{\mathbf{r}}^{\mathbf{t}} = \mathbf{I}$ , the MLP-based SFBR can be simplified as Eq. (20). RESCAL (Nickel et al., 2011) selects the third term of the formula as the optimization goal; performance improves when the other two terms are used as additional regularization. RESCAL-DURA (Zhang et al., 2020b) takes the first two terms as regularization. Similar to RotatE,
+
+ComplEx(Trouillon et al., 2016) achieves performance improvement by expanding the dimension and applying special matrix regularization for RESCAL. DURA(Zhang et al., 2020b) has the corresponding proof for ComplEx-DURA.
+
+$$
+\begin{aligned} \left(d_r^f(\mathbf{h}, \mathbf{t})\right)^2 &= \left\| \mathbf{h}\mathbf{W}_{\mathbf{r}}^{\mathbf{h}} + \mathbf{r} - \mathbf{t}\mathbf{W}_{\mathbf{r}}^{\mathbf{t}} \right\|_2^2 \\ &= \left\| \mathbf{h}\mathbf{W}_{\mathbf{r}}^{\mathbf{h}} - \mathbf{t} \right\|_2^2 \\ &= \left\| \mathbf{h}\mathbf{W}_{\mathbf{r}}^{\mathbf{h}} \right\|_2^2 + \| \mathbf{t} \|_2^2 - 2\,\mathbf{h}\mathbf{W}_{\mathbf{r}}^{\mathbf{h}}\mathbf{t}^{\mathsf{T}} \end{aligned} \tag{20}
+$$
+
+From the above cases, we can see that the MLP-based SFBR is a general model: most geometric and tensor decomposition models are equivalent to special cases of the MLP-based SFBR under particular constraints.
\ No newline at end of file
diff --git a/asemanticfilterbasedonrelationsforknowledgegraphcompletion/images.zip b/asemanticfilterbasedonrelationsforknowledgegraphcompletion/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..e55f5f623537f4df5bc5b28627f46c118c7edb29
--- /dev/null
+++ b/asemanticfilterbasedonrelationsforknowledgegraphcompletion/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e75af56f6c678e052f0e9830a201d701b3af7eea6affa9e42de0e27d9d95cef5
+size 567372
diff --git a/asemanticfilterbasedonrelationsforknowledgegraphcompletion/layout.json b/asemanticfilterbasedonrelationsforknowledgegraphcompletion/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..33506e4e1317efbf935c76a693fa175600b25210
--- /dev/null
+++ b/asemanticfilterbasedonrelationsforknowledgegraphcompletion/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ce76f04fe38c08acec63dd2298ebf7d9849990ed115d0b4458e3cd51fca58fc0
+size 319270
diff --git a/asimpleandeffectivemethodtoeliminatetheselflanguagebiasinmultilingualrepresentations/8ffb6173-2181-4e59-ad16-23b11ad4627f_content_list.json b/asimpleandeffectivemethodtoeliminatetheselflanguagebiasinmultilingualrepresentations/8ffb6173-2181-4e59-ad16-23b11ad4627f_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..2e59d99741bef7c0b401bb5ae4306467a3cb77de
--- /dev/null
+++ b/asimpleandeffectivemethodtoeliminatetheselflanguagebiasinmultilingualrepresentations/8ffb6173-2181-4e59-ad16-23b11ad4627f_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:857e386bc4720220a206bda54587051950fbca8e4e2a85bc114193dda619aa5f
+size 49774
diff --git a/asimpleandeffectivemethodtoeliminatetheselflanguagebiasinmultilingualrepresentations/8ffb6173-2181-4e59-ad16-23b11ad4627f_model.json b/asimpleandeffectivemethodtoeliminatetheselflanguagebiasinmultilingualrepresentations/8ffb6173-2181-4e59-ad16-23b11ad4627f_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..db1449d19faba236c75871c7b01fa3e993cce829
--- /dev/null
+++ b/asimpleandeffectivemethodtoeliminatetheselflanguagebiasinmultilingualrepresentations/8ffb6173-2181-4e59-ad16-23b11ad4627f_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:206b305c2a793eb63343e8dee7ddd147d8db805ce8c47d3d3c9ddf0e5088f289
+size 60693
diff --git a/asimpleandeffectivemethodtoeliminatetheselflanguagebiasinmultilingualrepresentations/8ffb6173-2181-4e59-ad16-23b11ad4627f_origin.pdf b/asimpleandeffectivemethodtoeliminatetheselflanguagebiasinmultilingualrepresentations/8ffb6173-2181-4e59-ad16-23b11ad4627f_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..8d1b1dab428c2954a75d3ff9d60644cb05d53f67
--- /dev/null
+++ b/asimpleandeffectivemethodtoeliminatetheselflanguagebiasinmultilingualrepresentations/8ffb6173-2181-4e59-ad16-23b11ad4627f_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fddea58ea935a3c9b93c937bef018ed5d055ed5718151d1f59d8599d44764eca
+size 873134
diff --git a/asimpleandeffectivemethodtoeliminatetheselflanguagebiasinmultilingualrepresentations/full.md b/asimpleandeffectivemethodtoeliminatetheselflanguagebiasinmultilingualrepresentations/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..39e15e21e5c704efe03811162ecf4334c72c5b51
--- /dev/null
+++ b/asimpleandeffectivemethodtoeliminatetheselflanguagebiasinmultilingualrepresentations/full.md
@@ -0,0 +1,180 @@
+# A Simple and Effective Method To Eliminate the Self Language Bias in Multilingual Representations
+
+Ziyi Yang $^{1*}$ , Yinfei Yang $^{2}$ , Daniel Cer $^{2}$ , Eric Darve $^{1}$
+
+$^{1}$ Stanford University
+
+{ziyi.yang, darve}@stanford.edu
+
+$^{2}$ Google Research
+
+{yinfeiy, cer}@google.com
+
+# Abstract
+
+Language agnosticism and the isolation of semantic from language-specific information is an emerging research direction for multilingual representation models. We explore this problem from a novel angle of geometric algebra and semantic spaces. A simple but highly effective method, "Language Information Removal (LIR)", factors out language identity information from semantic-related components in multilingual representations pre-trained on multi-monolingual data. A post-training and model-agnostic method, LIR uses only simple linear operations, e.g. matrix factorization and orthogonal projection. LIR reveals that for weak-alignment multilingual systems, the principal components of the semantic space primarily encode language identity information. We first evaluate LIR on a cross-lingual question answer retrieval task (LAReQA), which requires strong alignment of the multilingual embedding space. Experiments show that LIR is highly effective on this task, yielding almost a $100\%$ relative improvement in MAP for weak-alignment models. We then evaluate LIR on the Amazon Reviews and XEVAL datasets, observing that removing language information improves cross-lingual transfer performance.
+
+# 1 Introduction
+
+Recently, large-scale language modeling has expanded from English to the multilingual setting (i.a., Devlin et al. (2019); Conneau and Lample (2019); Conneau et al. (2020)). Although these models are trained with language modeling objectives on monolingual data, i.e. without cross-lingual information, these multilingual systems exhibit impressive zero-shot cross-lingual ability (Hu et al., 2020b). These observations raise many questions and provide insight into multilingual representation learning. First, how are the language
+
+identity information and the semantic information expressed in the representations? Understanding their relations and underlying geometric structure is crucial for designing more effective multilingual embedding systems. Second, how can we factor out the language identity information from the semantic components of representations? In many applications, e.g. cross-lingual semantic retrieval, we wish to keep only the semantic information. Third, what is the geometric relation between different languages? Efforts have been made to answer these questions, e.g. Artetxe et al. (2020); Chung et al. (2020); Lauscher et al. (2020). Such prior work has addressed the problem at training time. In this work, we systematically explore a post-training method that can be readily applied to existing multilingual models.
+
+One of the first attempts in this research area, Roy et al. (2020), proposed two concepts for language agnostic models: weak alignment vs. strong alignment. In a multilingual system with weak alignment, for any item in language $L_{1}$ , the nearest neighbor in language $L_{2}$ is the most semantically relevant item. In the case of strong alignment, for any representation, all semantically relevant items are closer than all irrelevant items, regardless of their language. Roy et al. (2020) show that sentence representations from the same language tend to cluster in weak-alignment systems. Similar phenomena can be observed in other pre-trained multilingual models such as mBERT, XLM-R (Conneau et al., 2020) and CMLM (Yang et al., 2020). Roy et al. (2020) provide carefully designed training strategies for retrieval-style models to mitigate this issue and obtain language agnostic multilingual systems.
+
+We systematically explore a simple post-training method we refer to as Language Information Removal (LIR), to effectively facilitate the language agnosticism in multilingual embedding systems. First introduced in Yang et al. (2020) to reduce same language bias for retrieval tasks, the method
+
+
+Figure 1: Language Information Removal (LIR) removes language identification information using principal components of the original representation space. This mechanism is validated by LIR's effects demonstrated in Fig. 2.
+
+uses only linear algebra factorization and post-training operations. LIR can be conveniently applied to any multilingual model. We show that LIR yields surprisingly large improvements on several downstream tasks, including LAReQA, a cross-lingual QA retrieval dataset (Roy et al., 2020); Amazon Reviews, a zero-shot cross-lingual evaluation dataset; and XEVAL, a collection of multilingual sentence embedding tasks. Our results suggest that the principal components of a multilingual system with self-language bias primarily encode language identification information. An implementation of LIR is available at https://github.com/ziyi-yang/LIR.
+
+# 2 Language Information Removal for Self Language Bias Elimination
+
+In this section we describe Language Information Removal (LIR) to address the self language bias in multilingual embeddings (Yang et al., 2020). The first step is to extract the language identity information for each language space. Given a multilingual embedding system $E$ , e.g. multilingual BERT, and a collection of multilingual texts $\{t_L^i\}$ , where $t_L^i$ denotes the $i$ -th phrase in the collection for language $L$ , we construct a language matrix $M_L \in \mathbb{R}^{n \times d}$ for language $L$ , where $n$ denotes the number of sentences in language $L$ and $d$ the dimension of the representation. Row $i$ of $M_L$ is the representation of $t_L^i$ computed by $E$ .
+
+Second, we extract language identification components for each language. One observation about multilingual systems is that representations from the same language tend to cluster together (relative to representations in other languages), even when they have different semantic meanings. This phenomenon is also known as "weak alignment" (Roy et al., 2020). The mathematical explanation for this clustering is that representations in the same language share vector space components. We propose that these shared components essentially represent the language identification information. Removing these language
+
+components should leave semantic-related information in the representations.
+
+To remove the shared components, i.e. the language identification information, from the representations, we leverage singular value decomposition (SVD), which identifies the principal directions of a space. We use SVD instead of PCA since SVD is numerically more stable (e.g. for the Läuchli matrix). Specifically, the SVD of a language matrix is $M_L = U_L \Sigma_L V_L^T$ , where the columns of $V_L \in \mathbb{R}^{d \times d}$ are the right singular vectors of $M_L$ . We take the first $r$ columns of $V_L$ as the language identification components, denoted $c_L \in \mathbb{R}^{d \times r}$ . Different values of $r$ are explored in the experiments section. Language identification components are removed as follows: given a multilingual representation $e_L$ in language $L$ , we subtract the projection of $e_L$ onto $c_L$ from $e_L$ , i.e.
+
+$$
+\boldsymbol{e}_L := \boldsymbol{e}_L - \boldsymbol{c}_L \frac{\boldsymbol{c}_L^T \boldsymbol{e}_L}{\| \boldsymbol{e}_L \|_2} \tag{1}
+$$
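The SVD step and Eq. (1) translate directly into a few lines of linear algebra. A sketch of LIR (not the authors' released code; see the linked repository for that), keeping the paper's scaling by $\|e_L\|_2$:

```python
import numpy as np

def language_components(M_L, r):
    """Top-r right singular vectors of the language matrix M_L (n x d) -> c_L (d x r)."""
    _, _, Vt = np.linalg.svd(np.asarray(M_L), full_matrices=False)
    return Vt[:r].T

def remove_language_info(e_L, c_L):
    """Eq. (1): subtract the scaled projection of e_L onto the language components."""
    return e_L - c_L @ (c_L.T @ e_L) / np.linalg.norm(e_L)
```

Because the columns of $c_L$ are orthonormal, $c_L^T c_L = I$, so each language-direction coordinate of $e_L$ is shrunk by the factor $1 - 1/\|e_L\|_2$.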
+
+# 3 Experiments
+
+In the following experiments, the sentences used for extracting principal components are sampled from Wiki-40B (Guo et al., 2020). We use 10,000 sentences per language. We notice that performance initially increases as more sentences are used, but is almost unchanged beyond $n > 10,000$ . We tried different samplings of $\{t_L^i\}$ and text resources other than Wiki-40B, e.g., Tatoeba (Artetxe and Schwenk, 2019); the minimal differences in performance suggest the language components are stable across domains.
+
+# 3.1 Cross-lingual Answer Retrieval
+
+We first examine LIR on LAReQA, a cross-lingual answer retrieval dataset containing 11 languages (Roy et al., 2020). LAReQA consists of two retrieval sub-datasets: XQuAD-R and MLQA-R. XQuAD-R is built by translating 240 paragraphs in the SQuAD v1.1 dev set into 10 languages and converting them to retrieval tasks following
+
+the procedure from ReQA (Ahmad et al., 2019). Similarly, MLQA-R is constructed by converting MLQA (Lewis et al., 2020) to QA retrieval. As a result, each question in LAReQA has 11 relevant answers, one in each language. Two retrieval models with self language bias are presented in the original LAReQA paper: "En-En" and "X-X". Specifically, the multilingual model "En-En" fine-tunes mBERT for QA retrieval on the 80,000 English QA pairs from the SQuAD v1.1 train set using a ranking loss. The model "X-X" trains on the translations (into 11 languages) of the SQuAD train set; in each training example, the question and answer are in the same language. Since, given a question query, all positive examples are within-language, "En-En" and "X-X" exhibit strong self-language bias and the weak-alignment property.
+
+For evaluation, we first compute the language identification components with the "En-En" and "X-X" models released by LAReQA. For testing, language identification components are removed from question and answer embeddings following Eq. (1). Results are shown in Table 1; the evaluation metric is mean average precision (MAP) of retrieval. Detailed results for each language are provided in the appendix (Table 5). Simply applying LIR results in significant improvements, almost $100\%$ relative for the "X-X" model on XQuAD-R. This large boost reveals the algebraic structure of the multilingual representation space: in weak-alignment multilingual systems, the principal components primarily encode language information. In LAReQA, each question has one relevant answer in every language, so the performance improvement itself already indicates less language bias.
+
+| | XQuAD-R En-En | XQuAD-R X-X | MLQA-R En-En | MLQA-R X-X |
+| --- | --- | --- | --- | --- |
+| w/o LIR | 27.8 | 23.3 | 35.7 | 26.0 |
+| r = 1 | 36.7 | 45.2 | 37.0 | 42.4 |
+| r = 2 | 36.7 | 45.6 | 36.2 | 41.6 |
+| r = 3 | 36.5 | 45.9 | 36.3 | 41.6 |
+| r = 4 | 36.4 | 45.7 | 36.1 | 41.4 |
+
+Table 1: Mean average precision (MAP) of models "En-En" and "X-X" with and without LIR.
+
+To further illustrate the effect of LIR, we plot the 2D PCA projection of questions and candidates in Chinese and English for the XQuAD-R dataset. Without LIR, as plotted on the left of Fig. 2, Chinese and English embeddings are separated, while questions and candidates in the same language cluster together. This weak-alignment property is especially prominent for model "X-X".
+
+Figure 2: PCA projections of English and Chinese question and candidate embeddings on the XQuAD-R dataset, with and without LIR. The two subfigures on the left are reproduced by the authors following Figure 5 in Roy et al. (2020).
+
+After applying LIR, the separation between the two languages vanishes. Question and candidate embeddings, no matter which language they are in, group together. Both models "En-En" and "X-X" now exhibit strong cross-lingual alignment.
+
+# 3.2 Amazon Reviews
+
+We further evaluate LIR on zero-shot transfer learning with the Amazon Reviews dataset (Prettenhofer and Stein, 2010). In this subsection, we use multilingual BERT (Devlin et al., 2019) as the embedding model. Following Chidambaram et al. (2019), the original dataset is converted to a classification benchmark by treating reviews with more than 3 stars as positive and the rest as negative. We split the 6000 English reviews in the original training set into $90\%$ for training and $10\%$ for development. A logistic classifier is trained on the English training set and then evaluated zero-shot on the English, French, German and Japanese test sets (each with 6000 examples) using the same trained model. The weights of mBERT are fixed. The representation of a sentence/phrase is computed as the average pooling of the transformer encoder outputs. LIR is applied in both the training and evaluation stages using the corresponding language components.
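The zero-shot protocol amounts to fitting one linear classifier on English embeddings and reusing it unchanged on every other language. A sketch assuming the mean-pooled mBERT embeddings are already computed; the function and variable names are illustrative:

```python
from sklearn.linear_model import LogisticRegression

def zero_shot_accuracy(X_en, y_en, test_sets):
    """Train on English embeddings only, then score each language's test set.

    X_en: (n, d) English training embeddings; y_en: binary labels;
    test_sets: {lang: (X_test, y_test)} of precomputed embeddings.
    """
    clf = LogisticRegression(max_iter=1000).fit(X_en, y_en)
    # zero-shot: the English-trained classifier scores every language unchanged
    return {lang: clf.score(X, y) for lang, (X, y) in test_sets.items()}
```

Applying LIR would simply mean transforming each embedding with Eq. (1), using the components of its own language, before calling this function.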
+
+Results presented in Table 2 show that removing the language components from multilingual representations is beneficial for cross-lingual zero-shot transfer learning of mBERT. LIR is expected to leave only semantics-related information in the representation, so that the logistic classifier trained on English can be conveniently transferred to other languages. Another interesting observation is that, unlike in semantic retrieval, the peak performance usually occurs at $r > 1$.
+
+| | en | de | fr | jp | Avg. |
+| --- | --- | --- | --- | --- | --- |
+| w/o LIR | 80.0 | 70.4 | 73.1 | 71.7 | 73.8 |
+| r = 1 | 80.2 | 70.9 | 74.6 | 72.6 | 74.6 |
+| r = 2 | 80.5 | 70.9 | 75.6 | 73.1 | 75.0 |
+| r = 3 | 80.2 | 70.8 | 75.4 | 71.8 | 74.5 |
+| r = 5 | 80.2 | 70.8 | 76.1 | 72.0 | 74.8 |
+| r = 8 | 80.2 | 71.1 | 75.2 | 70.8 | 74.3 |
+| r = 10 | 80.3 | 71.0 | 76.0 | 71.2 | 74.6 |
+| r = 12 | 80.0 | 70.9 | 76.0 | 71.4 | 74.6 |
+
+Table 2: Classification accuracy on the Amazon Reviews dataset.
+
+# 3.3 XEVAL
+
+We have tested LIR on cross-lingual benchmarks in the previous sections. In this section, we apply LIR to XEVAL, a collection of multilingual sentence representation benchmarks (Yang et al., 2020). The training and test sets of XEVAL are in the same language (i.e. the evaluation is not cross-lingual). Benchmarks in XEVAL include Movie Reviews (Pang and Lee, 2005), binary SST (sentiment analysis, Socher et al. (2013)), MPQA (opinion polarity, Wiebe et al. (2005)), TREC (question type, Voorhees and Tice (2000)), CR (product reviews, Hu and Liu (2004)), SUBJ (subjectivity/objectivity, Pang and Lee (2004)) and SICK (both entailment and relatedness, Marelli et al. (2014)). For this evaluation, we use mBERT as the base multilingual encoder. The weights of mBERT again remain fixed during training and only the downstream neural structures are trained. Training, cross-validation and evaluation use the SentEval toolkit (Conneau and Kiela, 2018).
+
+Results are presented in Table 3. The metric is the average performance across the 9 downstream tasks mentioned above. Introducing LIR is beneficial for German, Spanish, French and Chinese. We also notice that for the English dataset, removing principal components actually hurts performance. This observation echoes findings in previous English sentence embedding works, e.g. Yang et al. (2019b). We speculate this is because English data are dominant in the mBERT training data, so mBERT representations exhibit behaviors similar to monolingual English sentence embeddings.
+
+| | en | de | es | fr | zh | Avg. |
+| --- | --- | --- | --- | --- | --- | --- |
+| w/o LIR | 80.8 | 78.1 | 78.8 | 79.1 | 79.3 | 79.2 |
+| r = 1 | 80.4 | 78.2 | 79.0 | 79.1 | 79.3 | 79.2 |
+| r = 2 | 80.7 | 78.5 | 79.4 | 79.3 | 79.4 | 79.5 |
+| r = 5 | 80.6 | 78.0 | 79.4 | 78.9 | 79.3 | 79.2 |
+| r = 10 | 80.2 | 78.4 | 79.0 | 79.0 | 78.9 | 79.1 |
+
+Table 3: Results of applying LIR to the XEVAL dataset. The metric is the average over 9 downstream tasks.
+
+# 3.4 Application to Models without Self-Language Bias
+
+In the previous sections, we have shown the effectiveness of LIR on weak-alignment systems. As an additional analysis, we examine LIR on multilingual models without self-language bias, i.e. models "X-X-mono" and "X-Y" introduced in the original LAReQA paper. Model "X-X-mono" is modified from "X-X" by ensuring that each training batch is monolingual, so that in-batch negative and positive examples are in the same language. In model "X-Y", questions and answers are allowed to be translated to different languages, which directly encourages the model to regard answers in a different language from the question as correct. With such designs in training, "X-X-mono" and "X-Y" are shown to be free of self-language bias, i.e. semantically relevant representations are closer than all irrelevant items, regardless of their languages.
+
+The evaluation process is similar to that in Section 3.1. Results are presented in Table 4. Applying LIR leads to a slight performance decrease for "X-X-mono", while the drop for "X-Y" is notable; we suspect this is because the training process for "X-Y" avoids, by design, self-language bias. Rather, the principal components of "X-Y" contain essential semantics-related information for the retrieval task. This result is not negative and actually supports our argument: for "strong alignment" multilingual systems, the principal components contain both semantic and language-related information, so removing them hinders semantic retrieval. For weak-alignment models, removing just the first component should be adequate for cross-lingual retrieval (Table 1). For tasks like classification and sentiment analysis (Tables 2 and 3), the optimal number of components to remove varies across datasets.
+
+| | XQuAD-R (X-X-mono) | XQuAD-R (X-Y) | MLQA-R (X-X-mono) | MLQA-R (X-Y) |
+| --- | --- | --- | --- | --- |
+| w/o LIR | 50.8 | 62.6 | 48.6 | 48.5 |
+| r = 1 | 50.6 | 59.5 | 48.8 | 46.2 |
+| r = 2 | 49.8 | 58.1 | 48.0 | 45.5 |
+| r = 3 | 49.3 | 57.1 | 47.8 | 44.8 |
+| r = 4 | 48.9 | 56.5 | 47.4 | 44.2 |
+
+Table 4: Mean average precision (MAP) of the "X-X-mono" and "X-Y" models, which are trained without self-language bias.
+
+# 4 Related Work & Our Novelty
+
+Different training methods have been proposed to obtain language-agnostic representations. LASER (Artetxe and Schwenk, 2019) leverages translation pairs and a BiLSTM encoder for multilingual sentence representation learning. Multilingual USE (Yang et al., 2019a) uses training data such as translated SNLI, mined multilingual QA and translation pairs to learn a multilingual sentence encoder. AMBER (Hu et al., 2020a) aligns contextualized representations of multilingual encoders at different granularities. LaBSE (Feng et al., 2020) finetunes a pretrained language model with a bitext retrieval task and mined cross-lingual parallel data to obtain language-agnostic sentence representations. In contrast, LIR does not require any parallel data for semantic alignment.
+
+Faruqui and Dyer (2014) propose a canonical correlation analysis (CCA) based method to add multilingual context to monolingual embeddings. The method is a post-processing step and requires bilingual word translation pairs to determine the projection vectors. In contrast, LIR is post-training and does not require labeled data. Mrkšić et al. (2017) build semantically specialized cross-lingual vector spaces. Like CCA, their method requires additional training to adjust the original embeddings using supervised data: cross-lingual synonyms and antonyms. Libovický et al. (2019) propose that the language-specific information of mBERT is the centroid of each language space (the mean of its embeddings). Zhao et al. (2021) propose several training techniques to obtain language-agnostic representations, including segmenting orthographic tokens in the training data and aligning monolingual spaces by training. In contrast, LIR is post-training and model-agnostic. Critically, this means LIR can be conveniently applied to any trained multilingual system without further training.
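The centroid observation of Libovický et al. (2019) suggests an even simpler baseline worth contrasting with LIR's component removal: subtract each language's mean embedding. A one-function sketch (our illustration, not code from that paper):

```python
import numpy as np

def remove_language_centroid(embeddings):
    """Subtract the language centroid (the mean embedding of one language),
    which Libovický et al. (2019) identify as carrying language-specific
    information. Call separately on each language's embeddings."""
    return embeddings - embeddings.mean(axis=0, keepdims=True)
```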
+
+Previous explorations of the principal components of the semantic space for sentence embeddings include Arora et al. (2017) and Yang et al. (2019b), where principal component removal is investigated for monolingual models and the evaluation is only conducted on semantic similarity benchmarks. In contrast, our work investigates the multilingual case and the evaluation is more diverse, e.g. cross-lingual transfer learning. Mu and Viswanath (2018) explore removing top components from English representations. However, it was unclear prior to our work what purpose is served by removing principal components in multilingual and cross-lingual settings. We demonstrate that these principal components represent language information for weak-alignment multilingual models.
+
+Compared with Yang et al. (2020), the novelty of this work is two-fold. First, it is unclear in Yang et al. (2020) whether the assumption (i.e. that principal components contain language information) holds for both weak- and strong-alignment multilingual models. In this work we clearly show that it is valid for weak-alignment models (Section 3.1), while for strong-alignment systems the assumption does not quite hold (Table 4). Second, in Yang et al. (2020) the evaluation is only conducted on Tatoeba, a semantic retrieval dataset, whereas in this work the evaluations are more comprehensive. Besides the cross-lingual retrieval dataset LAReQA, our experiments include cross-lingual zero-shot learning (Section 3.2) and monolingual transfer learning (Section 3.3). These extra results establish the effectiveness of LIR beyond the domain of semantic retrieval.
+
+# 5 Conclusion
+
+In this paper, we investigate the self-language bias in multilingual systems. We explore a simple method, Language Identity Removal (LIR), which identifies and removes the language information in a multilingual semantic space via singular value decomposition and orthogonal projection. Despite being a simple, linear-algebra-only method, LIR is highly effective in several downstream tasks, including zero-shot transfer learning, sentiment analysis, etc. For cross-lingual retrieval in particular, introducing LIR increases the MAP of weak-alignment multilingual systems by almost $100\%$ in relative terms.
+
+# Acknowledgments
+
+We would like to thank anonymous reviewers for their comments, as well as our teammates from Descartes, Google Brain and other Google teams for their valuable feedback.
+
+# References
+
+Amin Ahmad, Noah Constant, Yinfei Yang, and Daniel Cer. 2019. ReQA: An evaluation for end-to-end answer retrieval models. In Proceedings of the 2nd Workshop on Machine Reading for Question Answering, pages 137-146.
+Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2017. A simple but tough-to-beat baseline for sentence embeddings. In 5th International Conference on Learning Representations, ICLR 2017.
+Mikel Artetxe, Sebastian Ruder, and Dani Yogatama. 2020. On the cross-lingual transferability of monolingual representations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4623-4637.
+Mikel Artetxe and Holger Schwenk. 2019. Massively multilingual sentence embeddings for zero-shot cross-lingual transfer and beyond. Transactions of the Association for Computational Linguistics, 7:597-610.
+Muthu Chidambaram, Yinfei Yang, Daniel Cer, Steve Yuan, Yunhsuan Sung, Brian Strope, and Ray Kurzweil. 2019. Learning cross-lingual sentence representations via a multi-task dual-encoder model. In Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019), pages 250–259, Florence, Italy. Association for Computational Linguistics.
+Hyung Won Chung, Thibault Févry, Henry Tsai, Melvin Johnson, and Sebastian Ruder. 2020. Rethinking embedding coupling in pre-trained language models. arXiv preprint arXiv:2010.12821.
+Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Édouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440-8451.
+Alexis Conneau and Douwe Kiela. 2018. Senteval: An evaluation toolkit for universal sentence representations. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018).
+Alexis Conneau and Guillaume Lample. 2019. Cross-lingual language model pretraining. In Advances in Neural Information Processing Systems, pages 7059-7069.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
+
+Manaal Faruqui and Chris Dyer. 2014. Improving vector space word representations using multilingual correlation. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 462-471.
+Fangxiaoyu Feng, Yinfei Yang, Daniel Cer, Naveen Arivazhagan, and Wei Wang. 2020. Language-agnostic BERT sentence embedding. arXiv preprint arXiv:2007.01852.
+Mandy Guo, Zihang Dai, Denny Vrandecic, and Rami Al-Rfou. 2020. Wiki-40b: Multilingual language model dataset. In LREC 2020.
+Junjie Hu, Melvin Johnson, Orhan Firat, Aditya Siddhant, and Graham Neubig. 2020a. Explicit alignment objectives for multilingual bidirectional encoders. arXiv preprint arXiv:2010.07972.
+Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson. 2020b. XTREME: A massively multilingual multi-task benchmark for evaluating cross-lingual generalization. arXiv preprint arXiv:2003.11080.
+Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 168-177.
+Anne Lauscher, Vinit Ravishankar, Ivan Vulic, and Goran Glavaš. 2020. From zero to hero: On the limitations of zero-shot language transfer with multilingual transformers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4483-4499.
+Patrick Lewis, Barlas Oguz, Rudy Rinott, Sebastian Riedel, and Holger Schwenk. 2020. MLQA: Evaluating cross-lingual extractive question answering. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7315-7330.
+Jindřich Libovický, Rudolf Rosa, and Alexander Fraser. 2019. How language-neutral is multilingual BERT? arXiv preprint arXiv:1911.03310.
+Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, Roberto Zamparelli, et al. 2014. A SICK cure for the evaluation of compositional distributional semantic models. In LREC, pages 216-223.
+Nikola Mrkšić, Ivan Vulić, Diarmuid Ó Séaghdha, Ira Leviant, Roi Reichart, Milica Gašić, Anna Korhonen, and Steve Young. 2017. Semantic specialization of distributional word vector spaces using monolingual and cross-lingual constraints. Transactions of the Association for Computational Linguistics, 5:309-324.
+Jiaqi Mu and Pramod Viswanath. 2018. All-but-the-top: Simple and effective postprocessing for word representations. In International Conference on Learning Representations.
+
+Bo Pang and Lillian Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04), pages 271-278.
+Bo Pang and Lillian Lee. 2005. Seeing stars: exploiting class relationships for sentiment categorization with respect to rating scales. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 115-124.
+Peter Prettenhofer and Benno Stein. 2010. Cross-language text classification using structural correspondence learning. In Proceedings of the 48th annual meeting of the association for computational linguistics, pages 1118-1127.
+Uma Roy, Noah Constant, Rami Al-Rfou, Aditya Barua, Aaron Phillips, and Yinfei Yang. 2020. LAReQA: Language-agnostic answer retrieval from a multilingual pool. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5919-5930, Online. Association for Computational Linguistics.
+Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 1631-1642.
+Ellen M Voorhees and Dawn M Tice. 2000. Building a question answering test collection. In Proceedings of the 23rd annual international ACM SIGIR conference on Research and development in information retrieval, pages 200-207.
+Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005. Annotating expressions of opinions and emotions in language. Language resources and evaluation, 39(2-3):165-210.
+Yinfei Yang, Daniel Cer, Amin Ahmad, Mandy Guo, Jax Law, Noah Constant, Gustavo Hernandez Abrego, Steve Yuan, Chris Tar, Yun-Hsuan Sung, et al. 2019a. Multilingual universal sentence encoder for semantic retrieval. arXiv preprint arXiv:1907.04307.
+Ziyi Yang, Yinfei Yang, Daniel Cer, Jax Law, and Eric Darve. 2020. Universal sentence representation learning with conditional masked language model. arXiv preprint arXiv:2012.14388.
+Ziyi Yang, Chenguang Zhu, and Weizhu Chen. 2019b. Parameter-free sentence embedding via orthogonal basis. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 638-648.
+
+Wei Zhao, Steffen Eger, Johannes Bjerva, and Isabelle Augenstein. 2021. Inducing language-agnostic multilingual representations. In Proceedings of *SEM 2021: The Tenth Joint Conference on Lexical and Computational Semantics, pages 229-240, Online. Association for Computational Linguistics.
+
+# A Experimental results for each language of model "X-X" on LAReQA
+
+Here we provide the detailed experimental results for each language on the XQuAD-R dataset. The multilingual encoder is model "X-X".
+
+| | w/o LIR | r = 1 | r = 2 | r = 3 | r = 4 |
+| --- | --- | --- | --- | --- | --- |
+| ar | 20.5 | 40.5 | 40.4 | 40.4 | 40.0 |
+| de | 27.5 | 48.3 | 49.8 | 49.7 | 49.6 |
+| el | 20.9 | 43.5 | 43.9 | 44.1 | 44.2 |
+| en | 27.3 | 55.1 | 55.0 | 55.3 | 55.3 |
+| es | 27.6 | 52.6 | 52.8 | 52.7 | 52.6 |
+| hi | 18.6 | 36.5 | 36.8 | 37.5 | 37.3 |
+| ru | 24.9 | 48.2 | 49.6 | 49.6 | 49.4 |
+| th | 16.8 | 34.7 | 35.1 | 34.9 | 34.6 |
+| tr | 23.8 | 45.3 | 45.4 | 46.3 | 46.2 |
+| vi | 24.8 | 49.2 | 48.9 | 48.9 | 48.6 |
+| zh | 24.7 | 43.8 | 43.8 | 45.3 | 45.2 |
+
+Table 5: Experimental results for each language of model "X-X" on the XQuAD-R dataset.
\ No newline at end of file
diff --git a/asimpleandeffectivemethodtoeliminatetheselflanguagebiasinmultilingualrepresentations/images.zip b/asimpleandeffectivemethodtoeliminatetheselflanguagebiasinmultilingualrepresentations/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..37d513c753c2ab51d56d63397155fd42134b115b
--- /dev/null
+++ b/asimpleandeffectivemethodtoeliminatetheselflanguagebiasinmultilingualrepresentations/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2df786a3d0b9dfd9f55134bef353dd6469ae1cbda8b0ad472be4d31f1e9797d5
+size 196708
diff --git a/asimpleandeffectivemethodtoeliminatetheselflanguagebiasinmultilingualrepresentations/layout.json b/asimpleandeffectivemethodtoeliminatetheselflanguagebiasinmultilingualrepresentations/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..6e3205970221607d3f9bdde377c9ad9fcbc9e3fd
--- /dev/null
+++ b/asimpleandeffectivemethodtoeliminatetheselflanguagebiasinmultilingualrepresentations/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:74cb4f543c0d6260d975abfc8d12b4984855f22442be43c62af1c5851ed17fb6
+size 223336
diff --git a/asimpleandeffectivepositionalencodingfortransformers/85fd7fdc-bbb5-4cdd-962a-d43911bac81d_content_list.json b/asimpleandeffectivepositionalencodingfortransformers/85fd7fdc-bbb5-4cdd-962a-d43911bac81d_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..b2421cef6612e06d9788a5085b1a3bb772bfbd41
--- /dev/null
+++ b/asimpleandeffectivepositionalencodingfortransformers/85fd7fdc-bbb5-4cdd-962a-d43911bac81d_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:14214dbe5c66e8dcaee643730dcf46e4ef9421384a30a18244c2d9ee94089085
+size 99165
diff --git a/asimpleandeffectivepositionalencodingfortransformers/85fd7fdc-bbb5-4cdd-962a-d43911bac81d_model.json b/asimpleandeffectivepositionalencodingfortransformers/85fd7fdc-bbb5-4cdd-962a-d43911bac81d_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..531aa542a6efcb5da5e7bb73f6dd1c601fcb25ec
--- /dev/null
+++ b/asimpleandeffectivepositionalencodingfortransformers/85fd7fdc-bbb5-4cdd-962a-d43911bac81d_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:301122346c22319381ee9578cdeac3daf5c7f541704396b0b7e9cf9fe0c97e31
+size 115566
diff --git a/asimpleandeffectivepositionalencodingfortransformers/85fd7fdc-bbb5-4cdd-962a-d43911bac81d_origin.pdf b/asimpleandeffectivepositionalencodingfortransformers/85fd7fdc-bbb5-4cdd-962a-d43911bac81d_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..6c761734757555e127eb49866f4f5b117f40e425
--- /dev/null
+++ b/asimpleandeffectivepositionalencodingfortransformers/85fd7fdc-bbb5-4cdd-962a-d43911bac81d_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c4c8fb856cacac9f5047cbd4146c3b02b9e66a46cc1eba93edb11b6a5af2a5c2
+size 766906
diff --git a/asimpleandeffectivepositionalencodingfortransformers/full.md b/asimpleandeffectivepositionalencodingfortransformers/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..30d793e8ab24a9aa9d8b57a8ef7a6e427cc7ebc7
--- /dev/null
+++ b/asimpleandeffectivepositionalencodingfortransformers/full.md
@@ -0,0 +1,436 @@
+# A Simple and Effective Positional Encoding for Transformers
+
+# Pu-Chin Chen*, Henry Tsai*, Srinadh Bhojanapalli*, Hyung Won Chung, Yin-Wen Chang, Chun-Sung Ferng
+
+Google Research
+
+# Abstract
+
+Transformer models are permutation equivariant. To supply the order and type information of the input tokens, position and segment embeddings are usually added to the input. Recent works have proposed variations of positional encodings, with relative position encodings achieving better performance. Our analysis shows that the gain actually comes from moving positional information from the input to the attention layer. Motivated by this, we introduce Decoupled positional attEntion for Transformers (DIET), a simple yet effective mechanism to encode position and segment information into Transformer models. The proposed method has faster training and inference time, while achieving competitive performance on GLUE, XTREME and WMT benchmarks. We further generalize our method to long-range transformers and show performance gains.
+
+# 1 Introduction
+
+Transformers are sequence-to-sequence models that achieve state-of-the-art performance in many Natural Language Processing (NLP) tasks, such as machine translation, language modeling and question answering (Vaswani et al., 2017; Devlin et al., 2018; Yang et al., 2019; Liu et al., 2020). Transformers have two major components: self-attention and a position-wise feed-forward layer. Both are permutation equivariant and are not sensitive to the order of input tokens. To make these models position-aware, the position information of the input words is typically added as an additional embedding to the input token embeddings (Vaswani et al., 2017). For example, the input embedding $(W)$ of a sentence is added to the position embeddings $(P)$, resulting in input $W + P$ to the Transformer. These position embeddings depend only on the location where the word appears. For multi-segment tasks, additional segment embeddings can be added just like the position embeddings (Devlin et al., 2018).
+
+There have been multiple works exploring different ways to include position information in Transformers (Shaw et al., 2018; Yang et al., 2019; Raffel et al., 2020). Many of them note the advantages of using a relative position encoding scheme over absolute position encodings (see also Fig 1). However, what causes this difference is not clear. Yun et al. (2020) have shown that Transformers with absolute position encodings are universal approximators of all sequence-to-sequence functions, proving that absolute position encodings can capture position information. What, then, causes the superiority of relative position encodings? A systematic study and understanding of the benefits and drawbacks of different position encoding methods is missing. Ke et al. (2020) hypothesised that the cross correlation between word and position embeddings while computing attention could be the cause of the poor performance of absolute position encodings. However, such cross terms are present in some of the relative position encoding methods (Shaw et al., 2018; Yang et al., 2019), and these methods perform on par with or better than the other position encoding schemes (see §4).
+
+In this paper we undertake a systematic study to understand different position encoding methods. We argue that absolute position embeddings mainly suffer from being added at the input. We show, with our experiments on classification, question answering and machine translation tasks, that absolute position encodings added to attention matrices with different parameters for each head improve significantly over absolute position encodings added to the input. This highlights that where the position information is included in the Transformer is important, providing an explanation for the gap in performance between absolute and relative position encodings. We also compare different position encodings and the effect of sharing position encodings across different heads and layers of a Transformer. Based on these observations we propose decoupled positional attention and a new segment encoding approach (for tasks with multiple segments), and empirically show its superiority.
+
+Figure 1: Performance effect of different positional encoding methods for Transformers (see § 2) on two natural language inference datasets from GLUE (Wang et al., 2019) and XTREME (Hu et al., 2020), and one neural machine translation dataset from WMT 18 (Bojar et al., 2018): (a) English transfer learning on MultiNLI, (b) cross-lingual transfer on XNLI, (c) translation on CS-EN. Absolute positional encoding (DIET-ABS) can achieve better performance than the relative counterpart (DIET-REL), showing the importance of designing the right position encoding method.
+
+We summarize our contributions in this paper below.
+
+- We theoretically and empirically analyze the limitations of absolute position embeddings added to the input. For both absolute and relative information, we show that encoding position in the attention matrix per head results in superior performance.
+- We propose a simple and efficient way to encode position and segment information. The proposed encoding matches the SoTA methods on multiple standard NLP tasks while having a simpler model with lower training/inference costs.
+- Our proposed method can be easily applied to long-sequence models (DIET-ABS $^{LIN}$ ) and improves all metrics compared with Linformer (Wang et al., 2020).
+- We present ablation studies comparing different position encoding methods and ways of sharing position encoding parameters across heads and layers of the Transformer.
+
+# 2 Position Encoding for Transformers
+
+In this section, we briefly review Transformer models (Vaswani et al., 2017), discuss previous improvements to position encoding, and analyze the limitations of the additive position embedding proposed in the initial and widely-adopted Transformer model.
+
+# 2.1 Transformer
+
+A Transformer block consists of two types of layers: 1) Self-attention layer and 2) Feed forward layers.
+
+Self-Attention Module Given input sequence length $n$ , hidden size $d$ , multi-head query-key down-projection size $d_h$ , we define hidden layer input to this attention head as $\mathbf{X} \in \mathbb{R}^{n \times d}$ , the query projection matrix as $\mathbf{W}_Q^i \in \mathbb{R}^{d \times d_h}$ , the key projection matrix as $\mathbf{W}_K^i \in \mathbb{R}^{d \times d_h}$ and the value projection matrix as $\mathbf{W}_V^i \in \mathbb{R}^{d \times d_h}$ , $i \in [h]$ , for $h$ heads. Usually, $d_h < d$ as we do multi-head attention with a smaller representation per head ( $d_h = d / h$ ). With that we can write dot-product attention score:
+
+$$
+\mathbf {A} ^ {i} = (\mathbf {X W} _ {Q} ^ {i}) (\mathbf {X W} _ {K} ^ {i}) ^ {\top}
+$$
+
+This attention score is used to compute the output for each head, after scaling and per row normalization using softmax:
+
+$$
+\mathbf {h e a d} ^ {i} = \operatorname {S o f t m a x} (\mathbf {A} ^ {i} / \sqrt {d}) \cdot (\mathbf {X W} _ {V} ^ {i})
+$$
+
+The outputs of all attention heads in a layer are concatenated and passed to the next feed-forward layer, which is applied token-wise.
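The two formulas above translate directly into code; a minimal numpy sketch of a single head (function and variable names are ours, and the toy dimensions below are illustrative):

```python
import numpy as np

def softmax(a, axis=-1):
    # Numerically stabilized softmax for per-row normalization.
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_head(X, Wq, Wk, Wv, d):
    """One self-attention head: A = (X Wq)(X Wk)^T, scaled by sqrt(d)
    and row-normalized, then applied to the value projection."""
    A = (X @ Wq) @ (X @ Wk).T                   # (n, n) attention scores
    return softmax(A / np.sqrt(d)) @ (X @ Wv)   # (n, d_h) head output
```

With $h$ heads, the $(n, d_h)$ head outputs are concatenated into an $(n, d)$ matrix before the feed-forward layer.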
+
+# 2.2 Position Aware Self Attention
+
+Many NLP tasks, such as machine translation, language modeling, are sensitive to the ordering of input words. Since Transformers are permutation equivariant, we usually additionally include the position information in the input. Below we discuss some of the popular position encoding methods.
+
+# 2.2.1 Absolute Position Encodings
+
+Absolute position encodings are computed in the input layer and are summed with the input token embeddings. Vaswani et al. (2017) proposed this for Transformers, and it has been a popular choice in follow-up works (Radford et al., 2018; Devlin et al., 2018). There are two common variations of absolute position encodings: fixed and learned.
+
+# 2.2.2 Relative Position Encodings
+
+One drawback of absolute position encoding is that it assumes a fixed input sequence length and does not directly capture the relative positions between words. To address these problems, several relative position schemes have been proposed.
+
+Shaw et al. (2018) proposed using relative position encoding instead of absolute position encoding, adding position embeddings to the key and, optionally, value projections instead of the input. They show that this new way of encoding position information leads to better performance on machine translation tasks. Yang et al. (2019) simplified this by removing the position embeddings in the value projections and showed better performance on language modeling tasks. Both approaches use a vector representation to encode position information.
+
+Raffel et al. (2020) use scalars to encode the relative position between query and key indices and add them directly to the attention score matrix. They further use logarithmic binning of position information into a fixed number of buckets. All these relative position methods also share the position encoding parameters across layers.
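The idea of logarithmic binning is that nearby offsets each get their own bucket while distant offsets share log-spaced buckets. A simplified sketch for non-negative distances (constants and the exact mapping are illustrative; the T5 scheme also distinguishes direction):

```python
import math

def log_bucket(distance, num_buckets=32, max_distance=128):
    """Map a non-negative relative distance to a bucket id: each small
    distance gets its own bucket, larger distances share log-spaced buckets
    and are capped at the last bucket."""
    exact = num_buckets // 2
    if distance < exact:
        return distance
    # Log-position of `distance` between `exact` and `max_distance`.
    ratio = math.log(distance / exact) / math.log(max_distance / exact)
    return min(exact + int(ratio * (num_buckets - 1 - exact)), num_buckets - 1)
```

The bucket id then indexes a small table of learned scalars that is added to the attention scores.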
+
+Recently, Ke et al. (2020) hypothesised that the cross correlation between position and token embeddings can result in weaker performance of additive absolute position embeddings, and instead proposed to add both absolute and relative position-based attention directly in each head. However, such cross terms are present in the method proposed by Shaw et al. (2018), which performs competitively with other approaches. We instead hypothesise that position encodings at the input limit the rank of the position attention matrix, leading to its poor performance.
+
+# 2.3 Limitations of the Input Additive Position Embedding
+
+In this section we discuss some limitations of the de facto way of adding absolute position encodings to the input token embeddings.
+
+We first compare the representation power in terms of the rank of attention matrices achievable with different position encodings.
+
+
+Figure 2: Rank of attention matrices: We present a comparison of the rank of the attention score matrices of a $\mathrm{BERT}_{\mathrm{BASE}}$ model with absolute position embeddings at the input vs. absolute position embeddings per head (DIET-ABS (1)). With additive positional embedding at the input, the attention matrices have much lower rank, limiting the representative power. This is alleviated by DIET-ABS.
+
+Theorem 1. Let $\mathbf{P} \in \mathbb{R}^{n \times d}$ be the input position embedding and $\hat{\mathbf{P}} \in \mathbb{R}^{n \times d_p}$ be the layer-wise position embeddings. Let $\mathbf{W}_Q, \mathbf{W}_K \in \mathbb{R}^{d \times d_h}$ be the query and key projection matrices with head projection size $d_h$, where $d_h < d_p$, $d_h < d$, and $n \geq d_h + d_p$. Let $\mathbf{A}_a = (\mathbf{X} + \mathbf{P})\mathbf{W}_Q\mathbf{W}_K^\top (\mathbf{X} + \mathbf{P})^\top$ and $\mathbf{A}_r = \mathbf{X}\mathbf{W}_Q\mathbf{W}_K^\top \mathbf{X}^\top + \hat{\mathbf{P}}\hat{\mathbf{P}}^\top$ be the attention matrices computed using input and layer-wise position embeddings, respectively. Then for any $\mathbf{X}, \mathbf{P}, \mathbf{W}_Q, \mathbf{W}_K$,
+
+$$
+\mathrm{rank}(\mathbf{A}_a) \leq d_h.
+$$
+
+There exists a choice of $\mathbf{X},\hat{\mathbf{P}},\mathbf{W}_Q,\mathbf{W}_K$ such that
+
+$$
+\mathrm{rank}(\mathbf{A}_r) = d_p + d_h > d_h.
+$$
+
+Remarks. This theorem shows that the rank of the attention matrices is constrained by absolute position encodings at the input, whereas adding per-head position information directly to the attention matrix allows for higher-rank attention. See § B for the proof.
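Theorem 1 can also be checked numerically. The following NumPy sketch (dimensions are illustrative, not the paper's) compares the two attention constructions for random inputs:

```python
import numpy as np

# Illustrative check of Theorem 1 (dimensions chosen for readability).
rng = np.random.default_rng(0)
n, d, d_h, d_p = 64, 32, 8, 16  # sequence length, model dim, head dim, position dim

X = rng.normal(size=(n, d))        # token embeddings
P = rng.normal(size=(n, d))        # input-additive position embeddings
P_hat = rng.normal(size=(n, d_p))  # per-head (layer-wise) position embeddings
W_Q = rng.normal(size=(d, d_h))
W_K = rng.normal(size=(d, d_h))

# Input-additive attention scores: rank is capped at d_h by the projections.
A_a = (X + P) @ W_Q @ W_K.T @ (X + P).T
# Per-head position attention: the additive position term contributes extra rank.
A_r = X @ W_Q @ W_K.T @ X.T + P_hat @ P_hat.T

print(np.linalg.matrix_rank(A_a))  # at most d_h = 8
print(np.linalg.matrix_rank(A_r))  # up to d_h + d_p = 24 for generic inputs
```

For generic random inputs the per-head construction attains the higher rank, matching the second claim of the theorem.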
+
+Adding the position encodings directly to the input further constrains the training dynamics by forcing the gradients of the input token and position embeddings to be the same (see § B). The relative position encodings discussed earlier, while addressing some of these concerns, suffer from slower training/inference times (see Table 1) and complex implementations (Shaw et al., 2018; Ke et al., 2020). In the next section, we present simple position encoding methods that avoid these limitations.
+
+# 3 Proposed Position and Segment Encodings
+
+In the previous section, we discussed the limitations of input additive positional embeddings and of existing works. Based on these observations, we propose two minimal and efficient ways to incorporate (absolute/relative) positional encodings, along with a novel absolute segment encoding approach. By decoupling position and segment information from the token embeddings, we match SoTA performance while improving training/inference time (see §3.3).
+
+# 3.1 Decoupled Absolute Positional Attention
+
+We propose the following simple absolute position encoding method, which adds position information to the token attention matrix directly in each attention head. We also add segment information to the token attention instead of the input embeddings. This way, we can set the rank of the position encodings independently, resulting in a higher-rank attention matrix and addressing the limitations discussed earlier.
+
+# DIET-ABS
+
+$$
+\mathbf{A}_{i,j}^{\mathrm{ABS}} = \left(\mathbf{X}_{i:} \mathbf{W}_Q\right) \left(\mathbf{X}_{j:} \mathbf{W}_K\right)^{\top} / \sqrt{d} + \left(\mathbf{P}_Q \mathbf{P}_K^{\top}\right)_{i,j} + E_S(S(i), S(j)), \tag{1}
+$$
+
+where $\mathbf{P}_Q, \mathbf{P}_K \in \mathbb{R}^{n \times d_p}$ are low-rank position embedding matrices and $E_S$ is the absolute segment attention to model interactions between segments defined as
+
+$$
+E_S(S(i), S(j)) = \mathbf{S}_{\hat{i},\hat{j}}, \tag{2}
+$$
+
+where $S(i) = \hat{i}$ if index $i$ is in segment $\hat{i}$ .
+
+Please note the notation used in the above equations: $\mathbf{A}_{i,j}$ denotes the $(i,j)$ entry of matrix $\mathbf{A}$, and $\mathbf{X}_{i:}$ and $\mathbf{X}_{:j}$ denote the $i$th row and $j$th column of $\mathbf{X}$ respectively. We follow this notation in the remainder of the paper.
+
+By default, we set $d_p$ to be the same as $d_h$. This already potentially results in a rank-$(d_p + d_h)$ attention matrix, as shown in Theorem 1. To illustrate this, we compare the rank of the attention matrices in the first layer of a baseline BERT model and a DIET-ABS model for a sampled batch in Figure 2. The figure shows that the attention matrices of DIET-ABS have higher ranks than those of the baseline BERT. Our detailed experimental results in § 4 also show that DIET-ABS performs noticeably better. This confirms our earlier observation in Theorem 1 that additive position embeddings at the input can constrain the model, and adding the position embeddings per-head removes this constraint and results in better performance.
+
+With the decoupled positional embedding, we can increase $d_{p}$ to any width $k$ to break the low-rank bottleneck shown in Theorem 1. We call such a model DIET-ABS-Rank-$k$. We also address the efficiency issue introduced by the additional matrix multiplication $(\mathbf{P}_Q\mathbf{P}_K^\top)$. As the positional embeddings are independent of the input, we only need to compute this product once for each training batch, and we can cache the computed matrix before running inference. As a result, we observe a negligible increase in training and inference cost for this model variant.
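The computation above can be sketched for a single head in NumPy (function and variable names are ours, and the real implementation is batched and multi-head; the point is that the position term is input-independent and can be cached):

```python
import numpy as np

def diet_abs_scores(X, W_Q, W_K, pos_bias, seg_ids, S):
    """Unnormalized DIET-ABS attention scores for one head (Eq. 1).

    X: (n, d) token embeddings; W_Q, W_K: (d, d_h) projections;
    pos_bias: (n, n) cached P_Q @ P_K.T; seg_ids: (n,) segment id of
    each position; S: (num_segments, num_segments) segment attention.
    """
    d = X.shape[1]
    token = (X @ W_Q) @ (X @ W_K).T / np.sqrt(d)
    seg = S[seg_ids[:, None], seg_ids[None, :]]   # E_S(S(i), S(j))
    return token + pos_bias + seg

# The position term does not depend on the input, so compute it once and cache it.
n, d, d_h, d_p = 16, 32, 8, 8
rng = np.random.default_rng(0)
P_Q, P_K = rng.normal(size=(n, d_p)), rng.normal(size=(n, d_p))
pos_bias = P_Q @ P_K.T  # cached before inference

A = diet_abs_scores(rng.normal(size=(n, d)),
                    rng.normal(size=(d, d_h)), rng.normal(size=(d, d_h)),
                    pos_bias,
                    seg_ids=np.array([0] * 8 + [1] * 8),
                    S=rng.normal(size=(2, 2)))
```

Caching `pos_bias` is what makes the per-batch and inference-time overhead negligible.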
+
+# 3.2 Decoupled Relative Positional Attention
+
+To incorporate a relative position inductive bias, we consider a simplified version of the position encoding proposed in T5 (Raffel et al., 2020), without log-binning or per-layer parameter sharing. We also incorporate our per-head segment encoding, as in DIET-ABS. The model can be written as:
+
+# DIET-REL
+
+$$
+\mathbf{A}_{i,j}^{\mathrm{REL}} = \left(\mathbf{X}_{i:} \mathbf{W}_Q\right) \left(\mathbf{X}_{j:} \mathbf{W}_K\right)^{\top} / \sqrt{d} + \mathbf{R}_{i-j} + E_S(S(i), S(j)). \tag{3}
+$$
+
+We show an example of this model with two segments in Figure 3.
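The relative term $\mathbf{R}_{i-j}$ in Eq. (3) can be materialized with a single gather; a minimal sketch (the function name and storage layout are ours):

```python
import numpy as np

def relative_bias(R, n):
    """Build the (n, n) matrix B with B[i, j] = R[i - j] from a learned
    vector of per-offset scalars. R has length 2n - 1, with entry
    R[k + n - 1] storing the bias for relative offset k in [-(n-1), n-1]."""
    offsets = np.arange(n)[:, None] - np.arange(n)[None, :]  # i - j
    return R[offsets + n - 1]

R = np.arange(5.0)   # offsets -2..2 for n = 3
B = relative_bias(R, 3)
print(B)             # each diagonal is constant: B[i, j] depends only on i - j
```

Unlike the absolute variant, this bias cannot be fully precomputed per head and layer when parameters are not shared, which is one source of the overhead differences in Table 1.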
+
+# 3.3 Training and Inference Costs
+
+We next show that the proposed models introduce little computational overhead compared to the baseline model, making them more practical than alternatives. We consider two different models: the $\mathrm{BERT}_{\mathrm{BASE}}$ model and a smaller model, $\mathrm{BERT}_{\mathrm{SMALL}}$, with hidden size 512, 4 layers and 8 attention heads.
+
+In Table 1 we compare the training and inference costs of position encoding methods of Shaw et al. (2018), Ke et al. (2020), DIET-ABS and DIET-REL. We notice that the simplicity of the proposed methods indeed translates to savings in both training and inference times compared to other position encoding approaches. The savings in step times are even more significant for smaller models (BERTSMALL) and during inference.
+
+Note that the discrepancy between training and inference speed is likely because gradient updates dominate the cost at training time (Lan et al., 2020). At inference time, we only measure the time of a
+
+
+(a) DIET-ABS
+
+
+(b) DIET-REL
Figure 3: Proposed efficient approach to include position and segment encodings by adding them directly to the token attention matrix per-head. The left figure shows how we encode absolute positional attention; the right figure shows relative positional attention.
+
+| Model | Mode | Shaw et al. (2018) | Ke et al. (2020) | DIET-ABS | DIET-REL |
+| $\mathrm{BERT}_{\mathrm{BASE}}$ | Training | +13% | +1% | +0% | +0% |
+| $\mathrm{BERT}_{\mathrm{BASE}}$ | Inference | +33% | +19% | +0% | +0% |
+| $\mathrm{BERT}_{\mathrm{SMALL}}$ | Training | +24% | +4% | +0% | +0% |
+| $\mathrm{BERT}_{\mathrm{SMALL}}$ | Inference | +65% | +27% | +1% | +0% |
+
+Table 1: Pre-training and inference time of Transformers with different position encoding methods in comparison to the baseline BERT model on TPU v2. We observe that the simplicity of DIET-REL and DIET-ABS results in substantial gains in both training and inference time. We notice even more speedup for the smaller $\mathrm{BERT}_{\mathrm{SMALL}}$ model compared to $\mathrm{BERT}_{\mathrm{BASE}}$.
+
+forward pass, which corresponds to the cost of using such models in real systems.
+
+# 3.4 Application to Long-range Transformers
+
+Another advantage of our proposed approaches is that they easily extend to long-range Transformer models. For long sequence inputs, Transformers suffer from a quadratic dependence of computational complexity on the sequence length. A class of methods reduces this complexity by using a low-rank projection of the input sequence for attention computation (Wang et al., 2020; Choromanski et al., 2021; Dai et al., 2020). However, such methods use the default input position encodings, and there has not been much work on incorporating position information per-head without introducing quadratic computational complexity in the input sequence length. We illustrate the applicability of our methods to such settings by applying DIET-ABS to Linformer (Wang et al., 2020), which projects the attention key and value matrices to a lower dimension $k$ during attention computation.
+
+DIET-ABS $^{LIN}$ The proposed method can be written as:
+
+$$
+\mathbf{A}_{i,j}^{\mathrm{LIN}} = \left(\mathbf{X}_{i:} \mathbf{W}_Q\right) \left(\left(\mathbf{E}\mathbf{X}\right)_{j:} \mathbf{W}_K\right)^{\top} / \sqrt{d} + \left(\mathbf{P}_Q \mathbf{P}_K^{\top}\right)_{i,j}, \tag{4}
+$$
+
+where $\mathbf{E} \in \mathbb{R}^{k \times n}$ , $\mathbf{P}_Q \in \mathbb{R}^{n \times d}$ , $\mathbf{P}_K \in \mathbb{R}^{k \times d}$ .
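A NumPy sketch of Eq. (4) (our naming; we write the position-embedding width as $d_p$). The key property is that every term has shape $(n, k)$, so no $n \times n$ matrix is ever formed:

```python
import numpy as np

def diet_abs_lin_scores(X, E, W_Q, W_K, P_Q, P_K):
    """Eq. (4): DIET-ABS positional attention on top of a Linformer-style
    key projection. X: (n, d); E: (k, n) down-projects the keys;
    P_Q: (n, d_p); P_K: (k, d_p). All terms have shape (n, k)."""
    d = X.shape[1]
    token = (X @ W_Q) @ ((E @ X) @ W_K).T / np.sqrt(d)  # (n, k)
    return token + P_Q @ P_K.T                           # (n, k), never (n, n)

n, k, d, d_h, d_p = 64, 16, 32, 8, 8
rng = np.random.default_rng(0)
A = diet_abs_lin_scores(rng.normal(size=(n, d)), rng.normal(size=(k, n)),
                        rng.normal(size=(d, d_h)), rng.normal(size=(d, d_h)),
                        rng.normal(size=(n, d_p)), rng.normal(size=(k, d_p)))
```

This is what lets the per-head position bias coexist with linear-complexity attention.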
+
+# 4 Experiments
+
+In this section, we present our experimental results comparing the different position and segment encoding approaches discussed in earlier sections. We conduct experiments in three different settings to cover a wide range of use cases. First, we examine the results of a popular transfer learning approach from masked-LM pretraining to the end tasks in GLUE (Devlin et al., 2018). Second, we study the zero-shot cross-lingual transferability of multilingual pretrained models to classification and question answering tasks in the XTREME benchmark (Hu et al., 2020). Lastly, we consider training Transformer models from scratch for machine translation.
+
+We compare the following positional encoding approaches: absolute positional embedding (Devlin et al., 2018), relative positional embedding (Shaw et al., 2018), combined absolute and relative positional encoding (Ke et al., 2020), the relative scalar approach (Raffel et al., 2020), and our proposed DIET-ABS and DIET-REL per-head positional encoding approaches. We denote methods that add position/segment information directly to the input token embeddings with input, and methods that add position/segment information directly in the attention layer with per-head. For the complete experimental setup, see Appendix A.
+
+# 4.1 English Transfer Learning Results
+
+Datasets and Model For pre-training, we use the English Wikipedia and Books datasets (Devlin et al., 2018). For fine-tuning tasks, we use the datasets from the GLUE benchmark (Wang et al., 2019). We apply sub-word tokenization on raw text data using WordPiece (Wu et al., 2016) with a 30,000 token vocabulary.
+
+| Model | Position | Segment | MNLI 393k | QQP 364k | QNLI 105k | SST2 67k | CoLA 8.5k | STS-B 7k | Avg |
| Devlin et al. (2018) | input | input | 85.8 / 85.9 | 91.1 | 89.9 | 93.2 | 58.7 | 89.0 | 84.8 |
| Shaw et al. (2018) | per-head | input | 86.3 / 86.0 | 91.2 | 90.5 | 93.2 | 59.8 | 89.3 | 85.2 |
| Raffel et al. (2020) | per-head | input | 86.4 / 86.2 | 91.2 | 90.1 | 93.0 | 59.6 | 90.1 | 85.2 |
| Ke et al. (2020) | per-head | input | 86.1 / 86.2 | 91.2 | 90.3 | 93.1 | 59.6 | 89.6 | 85.2 |
| DIET-REL | per-head | input | 86.0 / 86.1 | 91.0 | 89.8 | 92.8 | 59.6 | 89.0 | 84.9 |
| DIET-REL | per-head | per-head | 86.3 / 86.3 | 91.0 | 90.5 | 92.9 | 60.3 | 89.3 | 85.2 |
| DIET-ABS (dp=128, share) | per-head | per-head | 86.4 / 86.4 | 90.8 | 89.5 | 93.0 | 59.8 | 90.2 | 85.2 |
| Wang et al. (2020) (dp=32) | input | input | 82.3 / 82.6 | 90.2 | 86.3 | 91.4 | 53.9 | 87.6 | 82.0 |
| DIET-ABS\( ^{LIN} \)(dp=32) | per-head | input | 83.0 / 83.1 | 90.6 | 86.7 | 92.0 | 55.7 | 87.6 | 82.7 |
+
+Table 2: GLUE: Results on the GLUE dev set of the finetuned models based on a pre-trained model with 12-layer $\mathrm{BERT}_{\mathrm{BASE}}$ architecture. We report the median of the maximum accuracy over all checkpoints among five runs. We notice that the shared DIET-ABS with rank 128 performs competitively with existing relative positional embedding SoTA models without the inductive bias of the relative positions. The proposed method also improves performance in the low-rank long range transformer setting of (Wang et al., 2020), where relative positional embedding approaches are inefficient to use.
+
+| Model | Position | Segment | XNLI 393k | XQuAD 88k | MLQA 3.7k | TyDiQA-GoldP | Avg |
| Devlin et al. (2018) | input | input | 67.0 | 66.0 / 49.9 | 56.2 / 41.0 | 59.0 / 47.9 | 55.3 |
| Shaw et al. (2018) | per-head | input | 67.9 | 69.5 / 53.9 | 58.2 / 43.1 | 64.8 / 49.9 | 58.2 |
| Raffel et al. (2020) | per-head | input | 68.5 | 69.9 / 53.5 | 59.5 / 44.3 | 63.8 / 50.6 | 58.6 |
| Ke et al. (2020) | per-head | input | 67.8 | 68.6 / 52.0 | 58.6 / 43.2 | 63.9 / 48.7 | 57.5 |
| DIET-REL | per-head | input | 68.0 | 68.1 / 52.8 | 57.7 / 42.7 | 63.3 / 50.9 | 57.6 |
| DIET-REL | per-head | per-head | 68.4 | 69.4 / 54.4 | 58.6 / 43.5 | 62.4 / 49.3 | 58.0 |
| DIET-ABS (dp=128, share) | per-head | per-head | 68.5 | 70.0 / 53.6 | 59.8 / 44.5 | 64.6 / 51.5 | 58.9 |
| Wang et al. (2020) (dp=256) | input | input | 63.6 | 59.1 / 43.7 | 48.9 / 34.0 | 50.5 / 37.9 | 48.2 |
| DIET-ABS\( ^{LIN} \)(dp=256) | per-head | input | 64.4 | 61.6 / 46.0 | 52.2 / 37.0 | 53.6 / 40.9 | 50.8 |
+
+Table 3: XTREME: Cross-lingual models are fine-tuned on the English training set (cross-lingual transfer). Performance is measured by accuracy for classification, and F1 score / exact match for question answering. In agreement with the results in Table 2, we see that using per-head position encodings is strictly better than absolute position encodings at the input. With layer-wise sharing, DIET-ABS with rank 128 outperforms all SoTA models.
+
+| Model | EN-DE | DE-EN | EN-CS | CS-EN |
| Vaswani et al. (2017) | 39.00 | 38.42 | 18.55 | 22.93 |
| Shaw et al. (2018) | 40.10 | 38.90 | 18.74 | 23.89 |
| DIET-REL | 39.47 | 38.49 | 18.68 | 23.93 |
+
+Table 4: Machine Translation: We report results comparing different position encoding methods for Transformers on the machine translation tasks en-de, de-en, en-cs and cs-en from the Newstest 2018 dataset. We notice that all per-head position encoding schemes (all except the first row) do better than absolute position embeddings added at the input. Further, the proposed simple DIET-REL approach is competitive with other position encoding approaches.
+
+Results We examine how different ways of encoding position and segment affect the transfer learning ability of the pre-trained English BERT models by fine-tuning on the GLUE benchmark (Wang et al., 2019), and present the results in Table 2. We first notice that all the approaches that encode position features explicitly at the per-head level perform better than the baseline additive position encodings at the input (Devlin et al., 2018). All models incorporating relative positions (Shaw et al., 2018; Raffel et al., 2020; Ke et al., 2020), despite their modeling differences, have very similar average scores. We show further gains (84.9 to 85.2 for DIET-REL) by moving segment features to per-head.
+
+Interestingly, we notice that the proposed absolute position encoding method DIET-ABS, with layer-wise sharing, is on par with all previous SoTA relative positional encodings. This shows that even absolute position encodings can perform better when included per-head instead of at the input. We present a detailed ablation study varying the rank and sharing methods of absolute positional attention (DIET-ABS) in Tables 8 and 9 in Appendix C.
+
+For long-range input, we consider Linformer (Wang et al., 2020) with a projection dimension of 32. Due to the down-projection, we see a non-trivial performance drop compared to a standard Transformer. Even in this setting, our absolute positional attention DIET-ABS can be used to improve the model's performance.
+
+# 4.2 Cross-lingual Model Results
+
+Datasets and Model For our multilingual experiments, we pre-train the models on a Wikipedia corpus in 100 languages, similar to Lample and Conneau (2019), for 125K steps with a sequence length of 512, and then fine-tune on downstream XTREME tasks (Hu et al., 2020). We use a language-independent tokenizer, the SentencePiece (Kudo and Richardson, 2018) model, with a 120,000 token vocabulary to encode the input text.
+
+Classification We conduct 5 trials of fine-tuning for each model on the MultiNLI (Williams et al., 2018) training data, then perform zero-shot predictions on XNLI (Conneau et al., 2018), and report the median accuracy.
+
+Question Answering We conduct 5 trials of fine-tuning for each model on the SQuAD v1.1 dataset, followed by zero-shot predictions on XQuAD (11 languages), MLQA (7 languages) and TyDiQA-GoldP (9 languages), and report the median F1 / EM scores.
+
+Results We present our results on the classification and question answering fine-tuning tasks in XTREME for different position and segment encoding methods in Table 3. Again, all per-head position encoding methods outperform input additive position encodings. Interestingly, our simple DIET-ABS turns out to be the best model, better than other models using relative position features. Layer-wise sharing and per-head segment attention allow DIET-ABS to outperform DIET-REL. We present a detailed ablation study in Table 5 to understand the effect of the decoupled positional attention variants. Finally, we notice similar advantages in using DIET-ABS with the Linformer (Wang et al., 2020) model in the long-range setting.
+
+# 4.3 Translation Results
+
+Datasets and Model For the machine translation task, we consider two language pairs (both directions) for training: WMT 2018 English-to-German (en-de), German-to-English (de-en), English-to-Czech (en-cs) and Czech-to-English (cs-en) (Bojar et al., 2018). We test the corresponding models on the Newstest 2018 datasets and report the BLEU score output by SacreBLEU (Post, 2018) with the default setting. Our setup follows Vaswani et al. (2017) closely and uses their Tensor2Tensor framework (Vaswani et al., 2018). Following Vaswani et al. (2017), we use a 6-layer Transformer with an encoder-decoder architecture. For more details of our experimental setup, please see Appendix A.
+
+Results We report the BLEU scores of the models in Table 4. We observe that moving positional information from the input to the per-head attention layer improves BLEU scores. Different variations of per-head positional attention do not make much difference, with DIET-REL being competitive with Shaw et al. (2018).
+
+# 4.4 Ablation Study
+
+In this section, we share our findings of key factors that affect performance of decoupled positional attention.
+
+Sharing the Positional Encoding Previous works (Raffel et al., 2020; Ke et al., 2020; Shaw et al., 2018) used different sharing methods for the positional encodings to reduce the number of model parameters. We present a detailed study on different forms of sharing positional encodings and their effect on performance. In particular, we compare the following variations in sharing the position encoding parameters across the heads and layers of the Transformer.
+
+- head-wise - Same parameters are used for all heads in a layer, with different layers using different parameters (Shaw et al., 2018; Ke et al., 2020).
+- layer-wise - Sharing of position encoding parameters across layers with different parameters for each head (Raffel et al., 2020).
+- none - Every layer and head uses different position encoding parameters.
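The variants above differ only in which axes the position parameters are indexed over; their parameter cost can be sketched with a simple count (the function and the table size $2n-1$ are illustrative, assuming one scalar per relative offset):

```python
def rel_position_params(num_layers, num_heads, n, sharing):
    """Number of scalar parameters in relative-position tables
    under each sharing scheme (illustrative count only)."""
    offsets = 2 * n - 1  # one entry per relative offset in [-(n-1), n-1]
    if sharing == "none":        # separate table for every layer and head
        return num_layers * num_heads * offsets
    if sharing == "head-wise":   # one table per layer, shared across heads
        return num_layers * offsets
    if sharing == "layer-wise":  # one table per head, shared across layers
        return num_heads * offsets
    raise ValueError(f"unknown sharing scheme: {sharing}")

# A BERT_BASE-like configuration: 12 layers, 12 heads, sequence length 128.
for scheme in ("none", "head-wise", "layer-wise"):
    print(scheme, rel_position_params(12, 12, 128, scheme))
```

Both sharing schemes shrink the tables by roughly an order of magnitude relative to no sharing, which is why sharing matters mainly for the higher-capacity absolute variant.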
+
+We present results comparing the different sharing methods in Table 5 for XTREME tasks. We make the following observations: 1) head-wise sharing is consistently worse than layer-wise sharing; 2) sharing hurts the performance of DIET-REL whereas it improves
+
+| Model | Sharing | Segment | XNLI | XQuAD | MLQA | TyDiQA-GoldP | Avg |
| DIET-REL | - | input | 68.0 | 68.1 / 52.8 | 57.7 / 42.7 | 63.3 / 50.9 | 57.6 |
| DIET-REL | head-wise | input | 67.7 | 66.2 / 51.0 | 56.0 / 41.1 | 60.1 / 45.9 | 55.4 |
| DIET-REL | layer-wise | input | 68.0 | 68.6 / 53.3 | 58.1 / 43.1 | 61.3 / 48.2 | 57.2 |
| DIET-REL | - | per-head | 68.4 | 69.4 / 54.4 | 58.6 / 43.5 | 62.4 / 49.3 | 58.0 |
| DIET-REL | head-wise | per-head | 67.8 | 66.0 / 50.5 | 55.5 / 40.4 | 59.2 / 44.6 | 54.7 |
| DIET-REL | layer-wise | per-head | 68.1 | 68.7 / 53.8 | 58.4 / 43.2 | 61.0 / 48.4 | 57.3 |
| DIET-ABS (dp=64) | - | input | 68.0 | 67.4 / 50.5 | 57.8 / 42.3 | 61.3 / 46.8 | 56.3 |
| DIET-ABS (dp=64) | - | per-head | 67.9 | 67.5 / 52.4 | 57.3 / 42.3 | 61.6 / 46.8 | 56.5 |
| DIET-ABS (dp=128) | - | per-head | 68.1 | 68.2 / 52.0 | 57.9 / 42.6 | 61.5 / 47.6 | 56.8 |
| DIET-ABS (dp=512) | - | per-head | 68.5 | 68.0 / 52.0 | 57.7 / 42.4 | 61.6 / 48.4 | 56.9 |
| DIET-ABS (dp=64) | layer-wise | input | 68.0 | 69.3 / 53.1 | 59.3 / 43.9 | 63.2 / 48.6 | 57.9 |
| DIET-ABS (dp=64) | layer-wise | per-head | 68.4 | 69.3 / 53.2 | 59.4 / 44.1 | 63.3 / 48.6 | 58.0 |
| DIET-ABS (dp=128) | layer-wise | per-head | 68.5 | 70.0 / 53.6 | 59.8 / 44.5 | 64.6 / 51.5 | 58.9 |
| DIET-ABS (dp=256) | layer-wise | per-head | 68.4 | 69.9 / 53.8 | 59.6 / 44.2 | 62.8 / 49.1 | 58.3 |
| DIET-ABS (dp=512) | layer-wise | per-head | 67.8 | 69.0 / 53.2 | 58.4 / 43.0 | 62.5 / 48.8 | 57.5 |
+
+Table 5: Ablation study on XTREME: We run a decoupled positional attention ablation study to understand the effects of 1) sharing positional attention parameters across layers and heads, 2) segment attention added per-head, 3) relative vs. absolute positional attention, and 4) varying the absolute positional attention rank $d_p$ from 64 to 512.
+
+| Model | Parameters (English) | +Δ | GLUE | Parameters (Multilingual) | +Δ | XTREME |
| Devlin et al. (2018) | 110.1M | - | 84.8 | 178.9M | - | 55.3 |
| Shaw et al. (2018) | 112.9M | +2.5% | 85.2 | 181.7M | +1.7% | 57.9 |
| DIET-REL | 109.9M | +0.0% | 85.2 | 178.7M | +0.0% | 58.0 |
| DIET-REL (share) | 109.7M | +0.0% | 85.0 | 178.5M | +0.0% | 57.3 |
| DIET-ABS (dp=128) | 128.6M | +16.8% | 85.3 | 197.4M | +10.0% | 56.8 |
| DIET-ABS (dp=128, share) | 111.3M | +1.1% | 85.2 | 180.1M | +0.6% | 58.9 |
+
+Table 6: Model Parameters: We list the number of model parameters and the performance for different position encoding approaches. We observe that sharing hurts the performance of DIET-REL with negligible benefit in the number of parameters. On the contrary, the regularization effect of sharing makes DIET-ABS more stable, achieving competitive performance with fewer parameters.
+
+the performance of DIET-ABS. We summarize the key settings along with the number of model parameters in Table 6. For DIET-REL, sharing brings little saving in parameters and hurts performance. Hence, we recommend no sharing for relative positional encodings (DIET-REL). On the other hand, it is necessary to share parameters for DIET-ABS in order to keep the number of parameters low. Interestingly, sharing has a regularization effect on DIET-ABS, making the model perform better. We choose layer-wise sharing over head-wise sharing for its better performance.
+
+Segment Encoding Our novel segment encoding design further improves model performance, as shown in Table 5. Both relative and absolute decoupled positional attention models benefit from moving the segment encoding from input to per-head: DIET-REL (+0.4%), layer-wise shared DIET-REL (+0.1%), DIET-ABS (+0.2%), layer-wise shared DIET-ABS (+0.1%). See Appendix D for the results on the GLUE benchmark and Appendix C for segment attention visualization.
+
+Rank of Absolute Positional Attention The design of DIET-ABS allows the model to learn higher-rank attention matrices, as shown in Theorem 1. To understand the effect of the absolute positional attention rank $(d_p)$ in practice, we conduct experiments varying the rank from $d_p = 64$ to $d_p = 512$. We present the results in Table 5. We notice that performance improves as we increase the rank from 64 to 128. However, performance saturates when further increasing it to 512. We present a visualization of the rank of the positional attention matrix in Appendix B.
+
+# 4.5 Positional Attention Pattern Visualization
+
+We next visualize the learned positional attention patterns of DIET-ABS in Figure 4. We first note that DIET-ABS has learned to capture the relative positional relations between inputs. Also note that, for index zero (the [CLS] token), decoupled absolute positional attention usually learns a special pattern. This pattern cannot be solely modeled by existing relative positional embedding methods, and some existing works (Ke et al., 2020) handled this case specifically by introducing new parameters. This shows the benefit of DIET-ABS in not requiring carefully designed inductive biases, as in existing approaches (Shaw et al., 2018; Raffel et al., 2020), which may not generalize across tasks.
+
+Figure 4: Visualization of learned positional attention patterns of DIET-ABS. Note that in addition to capturing the relative positional relations, the model also learns to attend to [CLS] at index 0, suggesting the dedicated [CLS] untying design in Ke et al. (2020) is not necessary with DIET-ABS.
+
+# 5 Conclusion
+
+In this paper we theoretically and empirically examined the limitation of additive position embedding at input and showed that having per-head position embeddings results in better performance. We argued that the superior performance of some of the relative position encoding methods come from their per-head addition to attention matrix rather than the position information being relative vs absolute. Indeed we show that using absolute position encodings per-head results in better performance. Motivated by this we propose a simple per-head position and segment attention method that achieves the state-of-the-art performance on multiple NLP tasks and is more computationally efficient than existing approaches.
+
+# References
+
+Ondrej Bojar, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Philipp Koehn, and Christof Monz. 2018. Findings of the 2018 conference on machine translation (WMT18). In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 272-303, Belgium, Brussels. Association for Computational Linguistics.
+
+Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, Afroz Mohiuddin, Lukasz Kaiser, David Belanger, Lucy Colwell, and Adrian Weller. 2021. Rethinking attention with performers.
+Alexis Conneau, Rudy Rinott, Guillaume Lample, Adina Williams, Samuel R. Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. Xnli: Evaluating crosslingual sentence representations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics.
+Zihang Dai, Guokun Lai, Yiming Yang, and Quoc V. Le. 2020. Funnel-transformer: Filtering out sequential redundancy for efficient language processing.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional Transformers for language understanding. arXiv preprint arXiv:1810.04805.
+Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson. 2020. Xtreme: A massively multilingual multi-task benchmark for evaluating cross-lingual generalisation. In Proceedings of Machine Learning and Systems 2020, pages 7449-7459.
+Guolin Ke, Di He, and Tie-Yan Liu. 2020. Rethinking the positional encoding in language pre-training. arXiv preprint arXiv:2006.15595.
+Taku Kudo and John Richardson. 2018. Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing.
+Guillaume Lample and Alexis Conneau. 2019. Crosslingual language model pretraining.
+Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. Albert: A lite bert for self-supervised learning of language representations.
+Xiaodong Liu, Kevin Duh, Liyuan Liu, and Jianfeng Gao. 2020. Very deep transformers for neural machine translation.
+Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
+Matt Post. 2018. A call for clarity in reporting BLEU scores. In WMT.
+Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. Technical Report, OpenAI.
+
+Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67.
+Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 464-468.
+Ashish Vaswani, Samy Bengio, Eugene Brevdo, Francois Chollet, Aidan N Gomez, Stephan Gouws, Llion Jones, Lukasz Kaiser, Nal Kalchbrenner, Niki Parmar, et al. 2018. Tensor2tensor for neural machine translation. arXiv preprint arXiv:1803.07416.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008.
+Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019. Glue: A multi-task benchmark and analysis platform for natural language understanding. In 7th International Conference on Learning Representations, ICLR 2019.
+Sinong Wang, Belinda Z. Li, Madian Khabsa, Han Fang, and Hao Ma. 2020. Linformer: Self-attention with linear complexity.
+Yu-An Wang and Yun-Nung Chen. 2020. What do position embeddings learn? an empirical study of pre-trained language model positional encoding. In EMNLP 2020.
+Adina Williams, Nikita Nangia, and Samuel R. Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference.
+Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation.
+Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237.
+
+Chulhee Yun, Srinadh Bhojanapalli, Ankit Singh Rawat, Sashank J Reddi, and Sanjiv Kumar. 2020. Are Transformers universal approximators of sequence-to-sequence functions? In International Conference on Learning Representations.
+
+# A Experimental setup
+
+In this section we present more details of our experimental setup.
+
+Pre-training We pre-train the models using a masked LM task (Devlin et al., 2018) and do not use the Next Sentence Prediction (NSP) loss as suggested in RoBERTa (Liu et al., 2019). Each input is constructed with full sentences from documents, and packed up to the maximum sequence length. We use the same architecture as BERTBASE (Devlin et al., 2018) ( $L = 12$ , $H = 768$ , $A = 12$ ) for our experiments.
+
+Fine-tuning Some downstream tasks provide different groups of full sentences as inputs. For those tasks (e.g. MNLI, CoLA, XNLI, SQuAD), we fine-tune models with the supplemental segment encoding discussed in §3. We leave models for other tasks unchanged from their pre-trained counterparts.
+
+Hyper-parameters Hyper-parameters we use are presented in Table 7.
+
+| Hyperparameter | Pretrain (English) | Finetune (English) | Pretrain (Multilingual) | Finetune (Multilingual) |
| Max Steps | 500K | 5 or 10 epochs | 125K | 3 epochs |
| Learning Rate | 0.0018 | {1e-5, 2e-5, 3e-5, 4e-5} | 0.0018 | {1e-5, 2e-5, 3e-5, 4e-5} |
| Warmup Proportion | 0.025 | 0.1 | 0.025 | 0.1 |
| Sequence Length | 128 | 128 | 512 | 512 |
| Batch Size | 4096 | 32 | 4096 | 32 |
| Checkpoint Interval | 20k | 3.5k | 20k | 3.5k |
+
+Table 7: Hyperparameters for all models
+
+Translate For our translation experiments, we follow the setup of Vaswani et al. (2017) and use their Tensor2Tensor framework (Vaswani et al., 2018). We train using the WMT18 (Europarl v7, Common Crawl corpus and News Commentary v13) en-de, de-en, en-cs and cs-en datasets. We report BLEU scores provided by SacreBLEU (Post, 2018) on the Newstest 2018 datasets. We train a 6-layer Transformer model. Any changes to the position encoding are applied to all the attention layers in both the encoder and decoder. We use the Adam optimizer and train for 250k steps. For decoding, we use beam search with beam size 10 and length penalty 0.6.
+
+# B Proofs
+
+Proof of Theorem 1. The first claim follows easily by observing that the rank of a product of two matrices is upper bounded by the minimum of the individual ranks.
+
+$$
+\begin{aligned}
+\operatorname{rank}(\mathbf{A}_a) &= \operatorname{rank}\big((\mathbf{X} + \mathbf{P}) \mathbf{W}_Q \mathbf{W}_K^{\top} (\mathbf{X} + \mathbf{P})^{\top}\big) \\
+&\leq \min\big(\operatorname{rank}(\mathbf{X} + \mathbf{P}),\ \operatorname{rank}(\mathbf{W}_Q),\ \operatorname{rank}(\mathbf{W}_K)\big) \\
+&\leq d_h, \qquad \text{where } \mathbf{W}_Q, \mathbf{W}_K \in \mathbb{R}^{d \times d_h}.
+\end{aligned}
+$$
+
+The last inequality follows from $\text{rank}(\mathbf{W}_Q) \leq d_h$ as $\mathbf{W}_Q \in \mathbb{R}^{d \times d_h}$ .
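As a quick numerical sanity check (not from the paper; the dimensions are toy, illustrative choices), the bound can be verified with NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, d_h = 16, 8, 2  # toy sizes: sequence length, model dim, head dim

X = rng.normal(size=(n, d))     # word embeddings
P = rng.normal(size=(n, d))     # additive position embeddings
W_Q = rng.normal(size=(d, d_h))
W_K = rng.normal(size=(d, d_h))

# n x n attention logit matrix A_a = (X + P) W_Q W_K^T (X + P)^T
A_a = (X + P) @ W_Q @ W_K.T @ (X + P).T
print(np.linalg.matrix_rank(A_a))  # bounded above by d_h
```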
+
+To prove the second claim we use an explicit construction. Let $\mathbf{W}_Q = \mathbf{W}_K$ be the matrix whose first $d_h$ rows form the identity and whose remaining $d - d_h$ rows are all zeros. Then
+
+$$
+\mathbf {W} _ {Q} \mathbf {W} _ {K} ^ {\top} = \left( \begin{array}{c c} I _ {d _ {h}, d _ {h}} & 0 _ {d _ {h}, d - d _ {h}} \\ 0 _ {d - d _ {h}, d _ {h}} & 0 _ {d - d _ {h}, d - d _ {h}} \end{array} \right).
+$$
+
+Here $I_{d_h,d_h}$ denotes the identity matrix in $\mathbb{R}^{d_h\times d_h}$ and $0_{d_h,d}$ denotes the all-zeros matrix in $\mathbb{R}^{d_h\times d}$ .
+
+We let $\mathbf{X}$ be such that the first $d$ rows form an identity matrix and the rest are zeros, i.e., $\mathbf{X}^{\top} = [I_{d,d}, 0_{n - d,d}]$ . Hence $\mathbf{X}\mathbf{W}_Q\mathbf{W}_K^\top \mathbf{X}^\top$ becomes a diagonal matrix of the same form:
+
+$$
+\mathbf{X} \mathbf{W}_Q \mathbf{W}_K^{\top} \mathbf{X}^{\top} = \left( \begin{array}{cc} I_{d_h, d_h} & 0_{d_h, n - d_h} \\ 0_{n - d_h, d_h} & 0_{n - d_h, n - d_h} \end{array} \right).
+$$
+
+Choose $d_p$ with $d_h < d_p \leq n$ (e.g., $d_p = n$ , giving $\hat{\mathbf{P}} = I$ ). Choosing $\hat{\mathbf{P}} \in \mathbb{R}^{n \times d_p}$ with zeros in the first $n - d_p$ rows and the identity in the last $d_p$ rows $(\hat{\mathbf{P}}^{\top} = [0_{d_p, n - d_p}, I_{d_p, d_p}])$ gives
+
+$$
+\hat {\mathbf {P}} \hat {\mathbf {P}} ^ {\top} = \left( \begin{array}{c c} 0 _ {n - d _ {p}, n - d _ {p}} & 0 _ {n - d _ {p}, d _ {p}} \\ 0 _ {d _ {p}, n - d _ {p}} & I _ {d _ {p}, d _ {p}} \end{array} \right).
+$$
+
+Combining these two gives us
+
+$$
+\begin{aligned}
+\operatorname{rank}(\mathbf{A}_r) &= \operatorname{rank}\big(\mathbf{X} \mathbf{W}_Q \mathbf{W}_K^{\top} \mathbf{X}^{\top} + \hat{\mathbf{P}} \hat{\mathbf{P}}^{\top}\big) \\
+&= \min(d_h + d_p, n) > d_h.
+\end{aligned}
+$$
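The construction itself can also be checked numerically (a sketch with toy, illustrative dimensions):

```python
import numpy as np

n, d, d_h = 8, 4, 2
d_p = n  # choose d_p = n, so P_hat = I

# W_Q = W_K: first d_h rows form the identity, remaining rows are zero
W = np.zeros((d, d_h))
W[:d_h, :] = np.eye(d_h)

# X: first d rows form the identity, remaining rows are zero
X = np.zeros((n, d))
X[:d, :] = np.eye(d)

P_hat = np.eye(n)

A_r = X @ W @ W.T @ X.T + P_hat @ P_hat.T
print(np.linalg.matrix_rank(A_r))  # min(d_h + d_p, n) = 8 > d_h = 2
```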
+
+
+
+Let $\mathbf{X} \in \mathbb{R}^{n \times d}$ be the input word embeddings in dimension $d$ with sequence length $n$ . We have trainable position embeddings $\mathbf{P} \in \mathbb{R}^{n \times d}$ , which are added to the input sequence before feeding into the model $g$ . For a given input $\mathbf{X}$ and label $y$ , the objective for a loss function $\ell$ is as follows:
+
+$$
+L = \ell (g (\mathbf {X} + \mathbf {P}), y) \tag {5}
+$$
+
+Theorem 2. Let $\mathbf{X}$ and $\mathbf{P}$ be trainable embedding matrices in $\mathbb{R}^{n\times d}$ . Then the gradients of the loss function in equation (5), at any point $(\mathbf{X},y)$ and for any differentiable functions $\ell$ and $g$ , are the same for $\mathbf{X}$ and $\mathbf{P}$ .
+
+Remarks. This theorem shows that the gradients are the same for the input token embeddings and the position embeddings. While in standard NLP tasks the inputs $X$ differ at each step, since different input tokens are present in each mini-batch, the result still suggests that additive position embeddings can limit the model's ability to learn the relative importance of position encodings with respect to token embeddings for the training task at hand.
+
+Proof of Theorem 2. The above theorem follows by just computing the gradients and showing they are equal for each step.
+
+Gradients of the above objective w.r.t. $\mathbf{X}$ and $\mathbf{P}$ are as follows:
+
+$$
+\begin{array}{l} \nabla_ {\mathbf {X}} L = \nabla_ {g} L \cdot \nabla_ {\mathbf {X} + \mathbf {P}} g \cdot \nabla_ {\mathbf {X}} (\mathbf {X} + \mathbf {P}) \\ = \nabla_ {g} L \cdot \nabla_ {\mathbf {X} + \mathbf {P}} g \\ \end{array}
+$$
+
+$$
+\begin{array}{l} \nabla_ {\mathbf {P}} L = \nabla_ {g} L \cdot \nabla_ {\mathbf {X} + \mathbf {P}} g \cdot \nabla_ {\mathbf {P}} (\mathbf {X} + \mathbf {P}) \\ = \nabla_ {g} L \cdot \nabla_ {\mathbf {X} + \mathbf {P}} g. \\ \end{array}
+$$
+
+The above computation of gradient follows from chain rule. This shows that the gradients of $L$ w.r.t. $\mathbf{X}$ and $\mathbf{P}$ are the same.
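The equality can be confirmed numerically with central finite differences for an arbitrary differentiable toy model (the particular choice of $g$ and $\ell$ below is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 3, 4
X = rng.normal(size=(n, d))
P = rng.normal(size=(n, d))

def loss(Xm, Pm):
    # toy differentiable model: depends on X and P only through X + P
    Z = np.tanh((Xm + Pm) @ (Xm + Pm).T)
    return float(np.sum(Z ** 2))

def num_grad(f, M, eps=1e-6):
    # central finite differences, entry by entry
    G = np.zeros_like(M)
    for idx in np.ndindex(*M.shape):
        E = np.zeros_like(M)
        E[idx] = eps
        G[idx] = (f(M + E) - f(M - E)) / (2 * eps)
    return G

gX = num_grad(lambda A: loss(A, P), X)  # gradient w.r.t. X
gP = num_grad(lambda B: loss(X, B), P)  # gradient w.r.t. P
print(np.allclose(gX, gP, atol=1e-5))   # the two gradients coincide
```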
+
+# C Attention Visualization
+
+In this section, we examine the model internals to understand how the proposed model works, visualizing different modeling alternatives to argue that our design choices are sensible.
+
+Why We Remove the Input Embedding To understand if it is sensible to remove the input additive embedding after adding position scalars per-head, we add additive position embedding to our DIET-ABS model. Then, we examine the position embedding of the BERT model and our DIET-ABS variant with additive position embedding. Figure 5 shows that, when the model has both absolute scalar and additive absolute position embedding, the position embedding encodes almost no information — all position embeddings at input are similar.
+
+
+Figure 5: The cosine similarity distribution between all absolute position pairs of the input additive positional embedding for the baseline BERT model and the proposed DIET-ABS. We observed that, after the position features are added to each head as in DIET-ABS, the input position embedding contains almost no information — all input position pairs are similar.
+
+The Effect of Segment Attention We also examine the effect of adding segment attention on top of the position attention. Figure 6 shows some representative patterns. We observe that segment attention enables the model to attend more to parts of the sequence that belongs to certain segments.
+
+
+(a) Attend to the Second Segment
+
+
+(b) Down-weight Relative Position Attention
+Figure 6: We consider input of length 32 with two segments. The second segment starts at index 16. We observe the attention patterns in the DIET-REL model without token-to-token attention.
+
+Shifting Pattern Learned from Absolute Positional Attention Relative position encoding generally gives better results, although its improvement is smaller than that from moving feature encoding per-head. To understand this, we visualize the attention pattern of the absolute positional attention and find two representative patterns in DIET-ABS in Figure 7. We observe that, even given absolute position features, the model learns a "shifting pattern" for the most part. Different from Wang and Chen (2020), who claimed absolute position encodings only learn local patterns, we show that the position attention can actually attend to longer context. However, the shifting pattern can be modeled directly by relative position. Thus, DIET-REL can be a better model choice, with fewer parameters and a more accurate inductive bias, in some applications.
+
+
+(a) Attend to Forward Neighbors
+
+
+(b) Attend to Previous Tokens
+Figure 7: Sampled position attention score patterns for the DIET-ABS model. We can see clear shifting patterns generated by the model. Such patterns can be modeled better by relative positional scalar encodings.
+
+Rank of Positional Attention Matrices In Figure 8, we present a comparison of rank of position attention matrices for a $\mathrm{BERT}_{\mathrm{BASE}}$ model with absolute position embeddings at input $(\mathbf{P}_Q\mathbf{W}_Q\mathbf{W}_K^\top \mathbf{P}_K^\top)$ v.s. absolute position embeddings per-head (DIET-ABS (1), $(\mathbf{P}_Q\mathbf{P}_K^\top)$ , where $\mathbf{P}_Q,\mathbf{P}_K\in \mathbb{R}^{n\times d_p}$ ). With additive positional embedding at input, position attention matrices have much lower rank, limiting the representative power. This is alleviated by DIET-ABS.
+
+
+Figure 8: Rank of positional attention matrices
+
+# D Additional Ablation Study on GLUE
+
+Earlier we presented an ablation study on XTREME in Table 5 for decoupled positional attention variants, comparing DIET-REL and DIET-ABS against the baseline (Devlin et al., 2018). We now present a similar study on the GLUE benchmark in Table 8 and observe similar results.
+
+Positional Encoding In Table 8, moving positional embeddings from input to per-head improves average score for both DIET-REL (+0.1%) and DIET-ABS (+0.2%).
+
+Segment Encoding In Table 8, moving segment embeddings from input to per-head improves both DIET-REL $(+0.3\%)$ and DIET-ABS $(+0.05\%)$ .
+
+Sharing Strategies Sharing plays an important role for DIET-ABS. In Table 9, we find that sharing degrades the performance of DIET-REL (-0.2% layer-wise, -0.3% head-wise). For DIET-ABS, sharing makes the model more stable and able to compete with DIET-REL.
+
+| Model | Position | Segment | MNLI 393k | QQP 364k | QNLI 105k | SST2 67k | CoLA 8.5k | STS-B 7k | Avg |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Devlin et al. (2018) | input | input | 85.8 / 85.9 | 91.1 | 89.9 | 93.2 | 58.7 | 89.0 | 84.8 |
+| DIET-REL | per-head | input | 86.0 / 86.1 | 91.0 | 89.8 | 92.8 | 59.6 | 89.0 | 84.9 |
+| DIET-REL | per-head | per-head | 86.3 / 86.3 | 91.0 | 90.5 | 92.9 | 60.3 | 89.3 | 85.2 |
+| DIET-ABS (dp=64) | per-head | input | 86.1 / 85.8 | 91.2 | 90.0 | 93.0 | 58.9 | 89.9 | 85.0 |
+| DIET-ABS (dp=64) | per-head | per-head | 86.1 / 86.1 | 91.2 | 90.2 | 93.0 | 58.9 | 89.8 | 85.0 |
+| DIET-ABS (dp=64, share) | per-head | per-head | 86.0 / 86.8 | 91.1 | 90.4 | 92.9 | 59.3 | 89.8 | 85.2 |
+| DIET-ABS (dp=128, share) | per-head | per-head | 86.4 / 86.4 | 90.8 | 89.5 | 93.0 | 59.8 | 90.2 | 85.2 |
+
+Table 8: Position and segment ablation study on GLUE: DIET-REL and DIET-ABS demonstrate the advantages of moving both positional and segment embedding from input to per-head.
+
+| Model | Sharing | MNLI 393k | QQP 364k | QNLI 105k | SST2 67k | CoLA 8.5k | STS-B 7k | Avg |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| DIET-REL | - | 86.3 / 86.3 | 91.0 | 90.5 | 92.9 | 60.3 | 89.3 | 85.2 |
+| DIET-REL | layer-wise | 86.5 / 86.3 | 91.1 | 90.0 | 93.0 | 58.8 | 89.6 | 85.0 |
+| DIET-REL | head-wise | 85.8 / 85.7 | 91.2 | 90.2 | 92.8 | 59.8 | 89.1 | 84.9 |
+| DIET-ABS (dp=64) | - | 86.1 / 86.1 | 91.2 | 90.2 | 93.0 | 58.9 | 89.8 | 85.0 |
+| DIET-ABS (dp=128) | - | 86.7 / 86.5 | 91.2 | 90.6 | 92.8 | 60.1 | 89.4 | 85.3 |
+| DIET-ABS (dp=64) | layer-wise | 86.0 / 86.8 | 91.1 | 90.4 | 92.9 | 59.3 | 89.8 | 85.2 |
+| DIET-ABS (dp=128) | layer-wise | 86.4 / 86.4 | 90.8 | 89.5 | 93.0 | 59.8 | 90.2 | 85.2 |
+
+Table 9: Sharing ablation study on GLUE: We run ablation study to understand the effect of sharing position encoding parameters across layers and heads. We notice that sharing improves the performance of DIET-ABS, but hurts the performance of DIET-REL with both layer-wise or head-wise sharing.
\ No newline at end of file
diff --git a/asimplegeometricmethodforcrosslinguallinguistictransformationswithpretrainedautoencoders/full.md b/asimplegeometricmethodforcrosslinguallinguistictransformationswithpretrainedautoencoders/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..f0dc62d4af363d44d23e043c61afc8e173bb047a
--- /dev/null
+++ b/asimplegeometricmethodforcrosslinguallinguistictransformationswithpretrainedautoencoders/full.md
@@ -0,0 +1,237 @@
+# A Simple Geometric Method for Cross-Lingual Linguistic Transformations with Pre-trained Autoencoders
+
+Maarten De Raedt (Chatlayer.ai by Sinch)
+
+Frédéric Godin (Chatlayer.ai by Sinch)
+
+Pieter Buteneers (Sinch)
+
+Chris Develder (Ghent University - Imec)
+
+Thomas Demeester (Ghent University - Imec)
+
+# Abstract
+
+Powerful sentence encoders trained for multiple languages are on the rise. These systems are capable of embedding a wide range of linguistic properties into vector representations. While explicit probing tasks can be used to verify the presence of specific linguistic properties, it is unclear whether the vector representations can be manipulated to indirectly steer such properties. For efficient learning, we investigate the use of a geometric mapping in embedding space to transform linguistic properties, without any tuning of the pre-trained sentence encoder or decoder. We validate our approach on three linguistic properties using a pre-trained multilingual autoencoder and analyze the results in both monolingual and crosslingual settings.
+
+# 1 Introduction
+
+Recently, the design of sentence encoders, monolingual (Kiros et al., 2015; Conneau et al., 2017) and multilingual (Artetxe and Schwenk, 2019; Feng et al., 2020) has enjoyed a lot of attention. Many works have used probing tasks to investigate the presence of specific linguistic properties in sentence representations (Adi et al., 2016; Conneau et al., 2018; Conneau and Kiela, 2018; Ravishankar et al., 2019; Hewitt and Manning, 2019; Chi et al., 2020). However, it remains unclear to what extent these linguistic properties can be actually steered by manipulating the representations. By analogy to the definition of style-transfer from Li et al. (2018), we refer to modifying a particular linguistic property in a given text (e.g., a sentence's tense) while preserving all of the property-independent content as linguistic property transfer.
+
+Training dedicated models to transfer linguistic properties requires substantial computational effort and a lot of training data. Adding the ability to transform a new property may require an entire retraining of the text encoder and decoder. This is especially challenging for low-resource languages or when reusing or building transfer models for more than one language.
+
+Assuming that pre-trained autoencoders capture the linguistic properties of interest, we investigate (i) whether they can be used without further tuning to efficiently transfer the properties, and (ii) whether this extends to the cross-lingual setting, when based on a multilingual pre-trained autoencoder. Our starting point is a pre-trained sentence encoder, with a corresponding decoder trained on an autoencoder objective. We show how a geometric transformation of pre-trained multilingual sentence embeddings can be efficiently learned on CPU for transferring specific linguistic properties. We also experiment with cross-lingual linguistic property transfer, using a language-agnostic pretrained encoder.
+
+In summary, this paper presents a set of preliminary experiments on linguistic property transfer, and shows that there may be value in further research on manipulating distributed representations to efficiently tackle language generation tasks.
+
+# 2 Related work
+
+Linguistic properties usually denote the grammatical behavior of linguistic units in sentences. This contrasts with styles which concern semantic aspects of sentences such as sentiment and gender. Nevertheless, transferring linguistic properties can be situated in the broader style transfer setting.
+
+Style transfer systems can be categorized into (i) methods that learn disentangled representations, in which the content is explicitly separated from the style, making the style aspect controllable and interpretable (Hu et al., 2017; Shen et al., 2017; Zhao et al., 2018; Fu et al., 2018; Logeswaran et al., 2018; John et al., 2019) and (ii) methods that learn entangled representations, in which the content and style are not explicitly separated (Mueller et al., 2017; Dai et al., 2019; Liu et al., 2020; Wang et al., 2019; Duan et al., 2020). Our approach falls under the entangled methods because encoder-decoder systems trained on an autoencoding objective yield representations in which there is no explicit separation between content and style. Conceptually, our method is most similar to Duan et al. (2020), but differs in (i) that it can use any existing pre-trained autoencoder as opposed to training an autoencoder from scratch on a variational objective, (ii) that a simple geometric transformation is applied to the representations instead of training a computationally heavy neural transformation network, and (iii) that it generalizes to the cross-lingual setting.
+
+Figure 1: (a) Pretrained autoencoder (encoder ENC, decoder DEC). (b) Linguistic property classifier $\mathcal{C}$ . (c) Geometric transformation of the sentence representation to shift $\mathbf{z}$ according to $\lambda$ beyond the decision boundary of $\mathcal{C}$ ; the shifted encoding $\mathbf{z}'$ is then given as input to the decoder, resulting in the sentence $\mathbf{x}'$ with the transferred property.
+
+# 3 Linguistic Property Transfer
+
+Our system consists of three components: (1) a pretrained multilingual autoencoder, (2) linear classifiers for the targeted linguistic properties and (3) a component that geometrically transforms sentence embeddings to transfer the selected properties in the dense sentence representation space. These components are presented schematically in Fig. 1.
+
+We start from a pre-trained autoencoder (Fig. 1a) that consists of an encoder (ENC: $\mathcal{X} \to \mathbb{R}^n$ ) which maps sentences ( $\mathbf{x} \in \mathcal{X}$ ) to vectors ( $\mathbf{z} \in \mathbb{R}^n$ ), and a decoder (DEC: $\mathbb{R}^n \to \mathcal{X}$ ) that maps the vectors $\mathbf{z}$ back to the corresponding sentences.
+
+The second component (Fig. 1b) is a linear classifier $\mathcal{C}:\mathbb{R}^n\to \mathcal{V}$ that takes as input a sentence encoding $\mathbf{z}$ and outputs a linguistic property label. We will limit our experiments to binary properties, i.e., $\mathcal{V} = \{0,1\}$ .
+
+Finally, the last component (Fig. 1c), performs a geometric transformation. It allows flipping the value of the selected linguistic property by projecting the original encoding $\mathbf{z}$ into the opposite half-space with respect to the property classifier,
+
+over an estimated distance $\lambda$ . This leads to the transferred encoding $\mathbf{z}'$ , designed to be decoded into a sentence $\mathbf{x}'$ close to the original sentence, but with the transformed target property.
+
+The three components shown in Fig. 1 are further described below.
+
+# 3.1 Pretrained Autoencoder
+
+For the pre-trained autoencoder shown in Fig. 1a, we use Language Agnostic Sentence Representations (LASER) (Artetxe and Schwenk, 2019). LASER encodes sentences of 93 languages into a single vector space, such that semantically similar sentences in different languages have similar vectors. For our experiments, we leave the LASER encoder unchanged and train separate decoders for English and Dutch, by optimizing the likelihood $p(\mathbf{x}|\mathbf{z})$ , with $\mathbf{z} = \mathrm{ENC}(\mathbf{x})$ . The decoder consists of a single-layer 1024-dimensional hidden state LSTM (Hochreiter and Schmidhuber, 1997).
+
+# 3.2 Linear Property Classifier
+
+Our approach assumes that both labels of the considered property are linearly separable in $\mathbf{z}$ space. A linear classifier $\mathcal{C}$ is trained on examples of the linguistic property. With the coefficients $\mathbf{w} \in \mathbb{R}^n$ and bias $b \in \mathbb{R}$ , its decision boundary is characterized by the affine hyperplane
+
+$$
+\mathcal {H} = \left\{\mathbf {z} \in \mathbb {R} ^ {n}: \mathbf {z} \cdot \mathbf {w} + b = 0 \right\}. \tag {1}
+$$
+
+Logistic regression was used for the results presented in this work.
+
+# 3.3 Geometric Transformation
+
+The idea behind the geometric transformation is the following: a perpendicular projection from $\mathbf{z}$ onto the decision plane $\mathcal{H}$ would make the classifier $\mathcal{C}$ most uncertain about the considered attribute, with minimal changes (in Euclidean sense) to the original vector. When removing the property information from the corresponding sentence with the opposite label, we assume it gets projected onto the same position of $\mathcal{H}$ . As a result, the proposed geometric transformation comes down to shifting $\mathbf{z}$ in the direction perpendicular to $\mathcal{H}$ , and beyond it, into the region where $\mathcal{C}$ would predict the opposite label of the property. The transformed representation $\mathbf{z}'$ is then decoded by DEC. The intuitive approach of simply mirroring $\mathbf{z}$ over the decision plane appears sub-optimal (see Section 4.4). The distance into the opposite half-space is therefore predicted based on the input (see Section 3.4).
+
+The geometric shift of $\mathbf{z}$ in the direction of $\mathcal{H}$ can be derived with basic geometry, for which what follows is a brief sketch. By construction, $\mathbf{w}$ is perpendicular to the plane described by $\mathbf{z} \cdot \mathbf{w} = 0$ , which in turn is parallel to $\mathcal{H}$ , given Eq. (1), such that $\mathbf{w} \perp \mathcal{H}$ . With that, the perpendicular projection $\mathbf{z}_{\perp}$ of $\mathbf{z}$ onto $\mathcal{H}$ can be written as
+
+$$
+\mathbf{z}_{\perp} = \mathbf{z} + \beta \mathbf{w}, \quad \text{with} \quad \beta = -\frac{\mathbf{z} \cdot \mathbf{w} + b}{\| \mathbf{w} \|^{2}},
+$$
+
+after substituting $\mathbf{z}_{\perp}\in \mathcal{H}$ into Eq. (1).
+
+We finally express the transformation of $\mathbf{z}$ onto $\mathbf{z}'$ beyond $\mathcal{H}$ as
+
+$$
+\mathbf {z} ^ {\prime} = \mathbf {z} _ {\perp} + \lambda (\mathbf {z} _ {\perp} - \mathbf {z}) \tag {2}
+$$
+
+where the parameter $\lambda \geq 0$ represents the distance of $\mathbf{z}'$ from $\mathcal{H}$ , relative to the distance $\| \mathbf{z}_{\perp} - \mathbf{z}\|$ on the original side of the decision plane (indicated as $d_{\perp}$ in Fig. 1).
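The projection and the shift of Eq. (2) can be sketched in a few lines of NumPy (the classifier weights below are random stand-ins for a trained $\mathcal{C}$, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
w = rng.normal(size=n)  # classifier coefficients (random stand-in)
b = 0.3                 # classifier bias (illustrative)
z = rng.normal(size=n)  # sentence encoding (random stand-in)

def transfer(z, w, b, lam):
    # perpendicular projection of z onto the hyperplane {z : z.w + b = 0}
    beta = -(z @ w + b) / (w @ w)
    z_perp = z + beta * w
    # shift beyond the plane by lam times the original distance (Eq. 2)
    return z_perp + lam * (z_perp - z)

z_prime = transfer(z, w, b, lam=1.0)
# with lam = 1 (mirroring), the classifier score is exactly negated
print(z @ w + b, z_prime @ w + b)
```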
+
+# 3.4 Projection Distance Predictor
+
+As mentioned above, we propose estimating the most suitable value of $\lambda$ , corresponding to how far on the other side of the decision plane $\mathbf{z}$ needs to be projected to get optimal transfer results. To that end, we use a contextual multi-armed bandit (CMAB) (Auer, 2002), a simple and efficient form of reinforcement learning with a single state, which in our setting is the sentence representation $\mathbf{z}$ . For a new input $\mathbf{z}$ , the bandit needs to select the value of $\lambda$ that best allows transferring the property with Eq. (2), while preserving the content of the associated sentence $\mathbf{x}$ .
+
+The bandit method allows using a non-differentiable reward, but other choices of algorithm are possible. Our model's goal is to preserve the content of the original sentence $\mathbf{x}$ while changing its property $y$ to $y^\prime$ . Hence, our CMAB reward consists of (i) a linguistic property reward $r_{\mathrm{prop}}$ and (ii) a content-preserving reward $r_{\mathrm{content}}$ . To compute $r_{\mathrm{prop}}$ , we pass the decoded transformed sentence $\mathbf{x}' = \mathrm{DEC}(\mathbf{z}')$ back into the encoder and use the predicted likelihood of the corresponding linear property classifier for target $y'$ as the reward:
+
+$$
+r_{\mathrm{prop}}(\mathbf{x}^{\prime}, y^{\prime}) = \begin{cases} \sigma(\mathrm{ENC}(\mathbf{x}^{\prime}) \cdot \mathbf{w} + b) & y^{\prime} = 1 \\ 1 - \sigma(\mathrm{ENC}(\mathbf{x}^{\prime}) \cdot \mathbf{w} + b) & y^{\prime} = 0 \end{cases}
+$$
+
+with $\sigma(.)$ the logistic function. For $r_{\mathrm{content}}$ , we directly optimize the BLEU-score (Papineni et al., 2002) between the original sentence $\mathbf{x}$ and the transferred sentence $\mathbf{x}'$ . Intuitively, this leads to the minimum number of changes that are required to transfer $y$ to $y'$ and thus encourages the content preservation between $\mathbf{x}$ and $\mathbf{x}'$ :
+
+$$
+r_{\mathrm{content}}(\mathbf{x}, \mathbf{x}^{\prime}) = \mathrm{BLEU}(\mathbf{x}, \mathbf{x}^{\prime})
+$$
+
+For the final reward $r(\mathbf{x},\mathbf{x}^{\prime},y^{\prime})$ , the harmonic mean of $r_{\mathrm{prop}}(\mathbf{x}^{\prime},y^{\prime})$ and $r_{\mathrm{content}}(\mathbf{x},\mathbf{x}^{\prime})$ appeared a suitable choice, encouraging the model to jointly ensure the correct target property (high $r_{\mathrm{prop}}$ ) and preserve the sentence content (high $r_{\mathrm{content}}$ ).
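This reward combination can be sketched in pure Python (the numeric inputs below are illustrative assumptions standing in for a re-encoded classifier score and a BLEU value):

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def r_prop(score, y_target):
    # score stands in for ENC(x') . w + b of the transferred sentence
    p = sigmoid(score)
    return p if y_target == 1 else 1.0 - p

def reward(score, y_target, bleu):
    # harmonic mean of the property reward and the content (BLEU) reward
    rp, rc = r_prop(score, y_target), bleu
    return 2.0 * rp * rc / (rp + rc) if rp + rc > 0 else 0.0

# confident flip toward y' = 1 with moderate content preservation
print(round(reward(2.0, 1, 0.6), 3))
```

A low value of either term drags the harmonic mean down, so the bandit cannot trade content preservation away for a confident property flip (or vice versa).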
+
+We implement CMAB using the LinUCB with Disjoint Linear Models algorithm from Li et al. (2010), which assumes that the expected reward obtained from choosing arm $\lambda$ is linear with respect to its input features (in our case, the sentence encoding $\mathbf{z}$ ). For each discrete allowed value ('arm') of $\lambda$ , LinUCB learns a separate ridge-regression model, with learnable parameters $\mathbf{A} \in \mathbb{R}^{n \times n}$ and $\mathbf{b} \in \mathbb{R}^n$ (for LASER $n$ is 1024). It predicts the reward, including an upper confidence bound (UCB), for choosing that value of $\lambda$ for the given encoding $\mathbf{z}$ . The hyperparameter $\alpha$ controls the width of the UCB: a larger $\alpha$ results in a wider UCB. Each training iteration observes a single $\mathbf{z}$ , for which the arm achieving the highest potential reward (UCB) is chosen and only the parameters of its ridge-regression model are updated. Quantifying the merit of each arm for the input requires one $(n \times n)$ matrix inversion, 2 matrix-vector multiplications, and 2 dot products. The best arm's parameters ( $\mathbf{A}$ and $\mathbf{b}$ ) are then updated, requiring one outer product. During inference, the $\lambda$ value of the best arm is used. The training and inference schemes are presented in Algorithms 1 and 2.
+
+Algorithm 1: Training scheme, pseudocode adapted from Li et al. (2010)
+
+```txt
+Input: exploration parameter α ∈ R+; arms A = {λ_1, ..., λ_k}; training data D
+for (x_t, y_t) ∈ D do
+    z_t = ENC(x_t)
+    for λ ∈ A do
+        if t = 0 then A_λ = I_{n×n}; b_λ = 0_{n×1}
+        θ̂_λ = A_λ^{-1} b_λ
+        p_{t,λ} = θ̂_λ^T z_t + α √(z_t^T A_λ^{-1} z_t)
+    choose λ_t = argmax_{λ∈A} p_{t,λ}
+    z'_t = z_{⊥,t} + λ_t (z_{⊥,t} - z_t)
+    x'_t = DEC(z'_t)
+    A_{λ_t} = A_{λ_t} + z_t z_t^T
+    b_{λ_t} = b_{λ_t} + r(x_t, x'_t, y'_t) z_t
+```
+
+Algorithm 2: Inference scheme
+
+```txt
+Input: A_λ and b_λ for each arm λ ∈ A = {λ_1, ..., λ_k}; sentence x with property label y
+z = ENC(x)
+for λ ∈ A do
+    θ̂_λ = A_λ^{-1} b_λ
+    p_λ = θ̂_λ^T z
+choose λ = argmax_{λ∈A} p_λ
+z' = z_⊥ + λ (z_⊥ - z)
+x' = DEC(z')
+return x'
+```
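The two schemes can be condensed into a small self-contained NumPy sketch of LinUCB with disjoint models (the toy linear reward below is an illustrative assumption, not the paper's BLEU-based reward):

```python
import numpy as np

class LinUCBArm:
    """Disjoint ridge-regression model for one arm (after Li et al., 2010)."""
    def __init__(self, n):
        self.A = np.eye(n)    # A_lambda, initialised to the identity
        self.b = np.zeros(n)  # b_lambda, initialised to zero

    def ucb(self, z, alpha):
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b                        # ridge estimate
        return theta @ z + alpha * np.sqrt(z @ A_inv @ z)

    def update(self, z, reward):
        self.A += np.outer(z, z)                      # one outer product
        self.b += reward * z

rng = np.random.default_rng(0)
n = 4
arms = {1.0: LinUCBArm(n), 2.0: LinUCBArm(n)}
# toy noiseless linear reward: arm 1.0 pays z[0], arm 2.0 pays z[1]
theta_true = {1.0: np.array([1.0, 0.0, 0.0, 0.0]),
              2.0: np.array([0.0, 1.0, 0.0, 0.0])}

for t in range(400):
    z = rng.normal(size=n)
    lam = max(arms, key=lambda l: arms[l].ucb(z, alpha=0.5))  # highest UCB
    arms[lam].update(z, theta_true[lam] @ z)

# inference (alpha = 0): pick the arm with the highest predicted reward
z_test = np.array([1.0, 0.2, 0.0, 0.0])
best = max(arms, key=lambda l: arms[l].ucb(z_test, alpha=0.0))
print(best)
```

On this context the first coordinate dominates, so the learned models should select arm 1.0.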
+
+# 4 Experiments
+
+To investigate whether linguistic properties embedded in representations of pre-trained encoders can be transferred without finetuning, we first apply the SentEval tool from Conneau and Kiela (2018) to LASER-embeddings (Section 3.1) and identify three properties that have a strong presence. We then investigate how well our approach performs on these properties in the monolingual setting (ML), in which our CMAB model is both trained and evaluated on English sentences (Q1). Finally, we investigate the performance of our approach in the crosslingual setting (CL), in which the model is trained on English but evaluated on Dutch sentences. In particular, after training on English, Dutch sentences are passed into the LASER encoder to obtain the transformed encodings $\mathbf{z}^{\prime}$ which in turn are decoded by the Dutch decoder (Q2).
+
+| Probing Task | Accuracy (%) |
+| --- | --- |
+| Length | 74.09 |
+| **Tense** | 89.1 |
+| BigramShift | 68.06 |
+| CoordinateInversion | 67.82 |
+| OddManOut | 50.80 |
+| Depth | 39.2 |
+| TopConstituents | 39.2 |
+| **SubjNumber** | 90.69 |
+| **ObjNumber** | 88.72 |
+
+Table 1: Results of LASER-embeddings on the probing tasks of Conneau and Kiela (2018). In our experiments, we transfer the properties denoted in bold.
+
+# 4.1 Linguistic Properties
+
+Table 1 shows the results of LASER-embeddings on the probing tasks from Conneau and Kiela (2018). The high accuracies for the properties shown in bold indicate that LASER encodes them well. In our experiments, we transfer (i) Tense, the tense of the main verb, which is either present or past, (ii) ObjNum, the number (singular or plural) of the main clause's direct object, and (iii) SubjNum, the number (singular or plural) of the main clause's subject.
+
+# 4.2 Implementation and Training Data
+
+As discussed in Section 3.1, we use LASER's encoder and train two decoders on it with around 20M English and Dutch OpenSubtitles sentences (Tiedemann, 2012; Lison et al., 2019). For each property, we train a binary logistic regression model on CPU using SentEval data, through stratified 5-fold cross-validation. We found that training the CMAB-models on SentEval led to worse results than training on OpenSubtitles. We hypothesize that this is due to a mismatch between the SentEval and OpenSubtitles sentences on which the decoders were trained. We therefore trained, on CPU, the CMAB-models using 2500 English OpenSubtitles sentences with (noisy) property labels predicted by the SentEval classifiers. Across all experiments, we use the discrete set $\{1, 1.5, \dots, 7\}$ as possible values for $\lambda$ ('arms' of the CMAB algorithms) and set the CMAB exploration parameter $\alpha$ to 4.
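The CMAB models above can be sketched as a LinUCB-style contextual bandit (Li et al., 2010): one arm per candidate $\lambda$, the sentence embedding as context, and $\alpha$ weighting the exploration bonus. This is a minimal illustration under those assumptions, not the authors' exact implementation:

```python
import numpy as np

class LinUCB:
    """One ridge-regression estimate (A, b) per arm; choose the arm whose
    upper confidence bound theta.x + alpha * sqrt(x' A^-1 x) is largest."""

    def __init__(self, n_arms, dim, alpha=4.0):
        self.alpha = alpha
        self.A = [np.eye(dim) for _ in range(n_arms)]    # per-arm design matrices
        self.b = [np.zeros(dim) for _ in range(n_arms)]  # per-arm reward sums

    def select(self, x):
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b
            scores.append(theta @ x + self.alpha * np.sqrt(x @ A_inv @ x))
        return int(np.argmax(scores))

    def update(self, arm, x, reward):
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x

# 13 arms correspond to lambda in {1, 1.5, ..., 7}; context x is an embedding
bandit = LinUCB(n_arms=13, dim=4, alpha=4.0)
x = np.ones(4)
for arm in range(13):           # toy training: only arm 2 is ever rewarded
    for _ in range(50):
        bandit.update(arm, x, reward=1.0 if arm == 2 else 0.0)
chosen = bandit.select(x)       # arm 2 now has the highest UCB
```

Once the exploration bonus has shrunk equally across arms, the rewarded arm dominates the selection, which is the behavior the input-dependent $\lambda$ choice in Table 4 relies on.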
+
+# 4.3 Evaluation
+
+We randomly selected OpenSubtitles sentences (not seen during decoder training) and, for those in which any of the target properties was present, annotated the corresponding sentence with the flipped property. In this way, 100 test pairs $(\mathbf{x},\mathbf{x}^{\prime})$ were obtained for each property. We report human evaluation metrics: (i) the percentage of transferred sentences that have the correct property ('Label' accuracy), and (ii) the percentage of transferred sentences that have the correct property and preserve the content ('All' accuracy). We also include the BLEU-score between the transferred sentence and the gold target $\mathbf{x}'$.
+
+| Setting | Property | Label (%) | All (%) | BLEU |
+| --- | --- | --- | --- | --- |
+| Monolingual | TenseML | 61 | 47 | 54.9 |
+| Monolingual | ObjNumML | 44 | 29 | 39.0 |
+| Monolingual | SubjNumML | 48 | 34 | 36.3 |
+| Cross-lingual | TenseCL | 51 | 41 | 49.9 |
+| Cross-lingual | ObjNumCL | 49 | 43 | 49.0 |
+| Cross-lingual | SubjNumCL | 56 | 33 | 32.6 |
+
+Table 2: Human label accuracy ('Label'), accuracy of both label and content ('All'), and BLEU-scores of our CMAB-approach (monolingual and cross-lingual).
+
+# 4.4 Results
+
+To answer (Q1), we refer to the first three rows of Table 2. Our approach switches properties in roughly half of the cases (label accuracy); in fewer cases, the property is transferred and the content is preserved at the same time. The last three rows of Table 2 display the metrics in the cross-lingual setting, where we observe results similar to the monolingual setting (Q2). These results are encouraging, although we expect further improvements from more complex transformation approaches. Table 3 compares, for $Tense_{ML}$, our CMAB approach against a baseline that mirrors each $\mathbf{z}$ over the decision boundary, i.e., $\lambda = 1$. We find that the CMAB-approach outperforms this baseline on all metrics. Moreover, Table 4 shows the distribution of the predicted arms on the test sets in the monolingual and cross-lingual settings, indicating that the optimal value for $\lambda$ is input-dependent. As an illustration, Table 5 lists a few examples, picked randomly from among those test items with successful label transformation and content preservation.
+
+| Model | Label (%) | All (%) | BLEU |
+| --- | --- | --- | --- |
+| Baseline | 59 | 28 | 53.1 |
+| CMAB | 61 | 47 | 54.9 |
+
+Table 3: Comparison of the baseline $(\lambda = 1)$ and the CMAB-approach for $Tense_{\mathrm{ML}}$.
+
+| λ | TenseML(CL) | SubjNumML(CL) | ObjNumML(CL) |
+| --- | --- | --- | --- |
+| 1 | 2.5 (5) | X | X |
+| 1.5 | 23.5 (31.5) | X | X |
+| 2 | 43.5 (34.5) | X | X |
+| 2.5 | 14 (19) | X | 13 (14) |
+| 3 | 15 (7) | 3 (1) | X |
+| 3.5 | X | X | X |
+| 4 | 1.5 (3) | 21 (X) | X |
+| 4.5 | X | X (29.5) | X |
+| 5 | X | 5.5 (6.5) | 1.5 (5.5) |
+| 5.5 | X | 21 (26) | X |
+| 6 | X | 24 (11.5) | 0.5 (50) |
+| 6.5 | X | 8.5 (13) | 22 (15) |
+| 7 | X | 17 (12.5) | 63 (15.5) |
+
+Table 4: Distributions of the predicted projection distances of the CMAB for the different test sets, expressed as a percentage (monolingual and cross-lingual).
+
+| Property | Setting | Source → Transferred |
+| --- | --- | --- |
+| Tense (present→past) | Monolingual | i ask many people here . → i asked many people here . |
+| Tense (present→past) | Cross-lingual | ik kijk maar een oude film van m ’ n moeder . → ik bekeek een oude film van mijn moeder . |
+| ObjNum (singular→plural) | Monolingual | i could tell you some story . → i could tell you some stories . |
+| ObjNum (singular→plural) | Cross-lingual | we hebben een better bondgenoot nodig . → we hebben betere bondgenoten nodig . |
+| SubjNum (plural→singular) | Monolingual | families agreed to keep it quiet . → a family agreed to keep it quiet . |
+| SubjNum (plural→singular) | Cross-lingual | monsters gaan ons opeten . → het monster gaat ons opeten . |
+
+Table 5: Linguistic property transfer examples of the proposed system in both monolingual and cross-lingual settings.
+
+# 5 Conclusion and Future Work
+
+We have introduced a simple and efficient geometric method to transfer linguistic properties, evaluated on three properties in both monolingual and cross-lingual settings. While there is room for improvement, our preliminary results indicate that it allows pre-trained autoencoders to transfer linguistic properties without additional tuning, so that there is no need to train dedicated transfer systems. This potentially makes learning faster and more scalable than with existing methods. For future work, we aim to extend our method to transformer-based encoders (monolingual and cross-lingual) and will consider additional linguistic as well as more style-oriented properties.
+
+# Acknowledgments
+
+This work was funded by the Flemish Government (VLAIO), Baekeland project-HBC.2019.2221.
+
+# References
+
+Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2016. Fine-grained analysis of sentence embeddings using auxiliary prediction tasks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings.
+Mikel Artetxe and Holger Schwenk. 2019. Massively multilingual sentence embeddings for zero-shot cross-lingual transfer and beyond. Transactions of the Association for Computational Linguistics, 7:597-610.
+Peter Auer. 2002. Using confidence bounds for exploitation-exploration trade-offs. Journal of Machine Learning Research, 3(Nov):397-422.
+Ethan A Chi, John Hewitt, and Christopher D Manning. 2020. Finding universal grammatical relations in multilingual BERT. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics.
+Alexis Conneau and Douwe Kiela. 2018. Senteval: An evaluation toolkit for universal sentence representations. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018).
+Alexis Conneau, Douwe Kiela, Holger Schwenk, Loic Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 670-680.
+Alexis Conneau, Germán Kruszewski, Guillaume Lample, Loïc Barrault, and Marco Baroni. 2018. What you can cram into a single \$&!#\* vector: Probing sentence embeddings for linguistic properties. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2126-2136.
+Ning Dai, Jianze Liang, Xipeng Qiu, and Xuan-Jing Huang. 2019. Style transformer: Unpaired text style transfer without disentangled latent representation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5997-6007.
+Yu Duan, Jiaxin Pei, Canwen Xu, and Chenliang Li. 2020. Pre-train and plug-in: Flexible conditional text generation with variational auto-encoders. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics.
+Fangxiaoyu Feng, Yinfei Yang, Daniel Cer, Naveen Arivazhagan, and Wei Wang. 2020. Language-agnostic bert sentence embedding. arXiv preprint arXiv:2007.01852.
+
+Zhenxin Fu, Xiaoye Tan, Nanyun Peng, Dongyan Zhao, and Rui Yan. 2018. Style transfer in text: Exploration and evaluation. In Thirty-Second AAAI Conference on Artificial Intelligence.
+John Hewitt and Christopher D Manning. 2019. A structural probe for finding syntax in word representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4129-4138.
+Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.
+Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P Xing. 2017. Toward controlled generation of text. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1587-1596. JMLR.org.
+Vineet John, Lili Mou, Hareesh Bahuleyan, and Olga Vechtomova. 2019. Disentangled representation learning for non-parallel text style transfer. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 424-434.
+Ryan Kiros, Yukun Zhu, Russ R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In Advances in neural information processing systems, pages 3294-3302.
+Juncen Li, Robin Jia, He He, and Percy Liang. 2018. Delete, retrieve, generate: a simple approach to sentiment and style transfer. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1865-1874.
+Lihong Li, Wei Chu, John Langford, and Robert E Schapire. 2010. A contextual-bandit approach to personalized news article recommendation. In Proceedings of the 19th international conference on World wide web, pages 661-670.
+Pierre Lison, Jörg Tiedemann, Milen Kouylekov, et al. 2019. OpenSubtitles2018: Statistical rescoring of sentence alignments in large, noisy parallel corpora. In LREC 2018, Eleventh International Conference on Language Resources and Evaluation. European Language Resources Association (ELRA).
+Dayiheng Liu, Jie Fu, Yidan Zhang, Chris Pal, and Jiancheng Lv. 2020. Revision in continuous space: Fine-grained control of text style transfer. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI.
+Lajanugen Logeswaran, Honglak Lee, and Samy Bengio. 2018. Content preserving text generation with attribute controls. In Advances in Neural Information Processing Systems, pages 5103-5113.
+
+Jonas Mueller, David Gifford, and Tommi Jaakkola. 2017. Sequence to better sequence: continuous revision of combinatorial structures. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 2536-2544. JMLR.org.
+Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311-318. Association for Computational Linguistics.
+Vinit Ravishankar, Lilja Øvrelid, and Erik Velldal. 2019. Probing multilingual sentence representations with x-probe. In Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019), pages 156-168.
+Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2017. Style transfer from non-parallel text by cross-alignment. In Advances in neural information processing systems, pages 6830–6841.
+Jörg Tiedemann. 2012. Parallel data, tools and interfaces in opus. In Lrec, volume 2012, pages 2214-2218.
+Ke Wang, Hang Hua, and Xiaojun Wan. 2019. Controllable unsupervised text attribute transfer via editing entangled latent representation. In Advances in Neural Information Processing Systems, pages 11034-11044.
+Junbo Zhao, Yoon Kim, Kelly Zhang, Alexander Rush, and Yann LeCun. 2018. Adversarially regularized autoencoders. In International Conference on Machine Learning, pages 5902-5911.
\ No newline at end of file
diff --git a/asimplegeometricmethodforcrosslinguallinguistictransformationswithpretrainedautoencoders/images.zip b/asimplegeometricmethodforcrosslinguallinguistictransformationswithpretrainedautoencoders/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..4759e796964793946db381b4dd0211d546f937eb
--- /dev/null
+++ b/asimplegeometricmethodforcrosslinguallinguistictransformationswithpretrainedautoencoders/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e18ff7c6a9d8fb31412172d4820e375da69fc128544d1bc72b621501a96f6962
+size 209037
diff --git a/asimplegeometricmethodforcrosslinguallinguistictransformationswithpretrainedautoencoders/layout.json b/asimplegeometricmethodforcrosslinguallinguistictransformationswithpretrainedautoencoders/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..13ca953baa3881ea80637ae5187c37f25a1e99b3
--- /dev/null
+++ b/asimplegeometricmethodforcrosslinguallinguistictransformationswithpretrainedautoencoders/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:244b8aedf0e0c0a37656c5c19965dd30aa439d969808cbc15308d09f79450c3d
+size 306685
diff --git a/astrongbaselineforqueryefficientattacksinablackboxsetting/47b3738d-7ba5-4a48-be8c-df19d1147efa_content_list.json b/astrongbaselineforqueryefficientattacksinablackboxsetting/47b3738d-7ba5-4a48-be8c-df19d1147efa_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..98a84da77e81110eab4e3c1c701fbcaaf03ec6c0
--- /dev/null
+++ b/astrongbaselineforqueryefficientattacksinablackboxsetting/47b3738d-7ba5-4a48-be8c-df19d1147efa_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:05457aa73ef5e730a454bb2ab2465847c700bb0a57f7b89a7850b6a4adcb26c0
+size 97659
diff --git a/astrongbaselineforqueryefficientattacksinablackboxsetting/47b3738d-7ba5-4a48-be8c-df19d1147efa_model.json b/astrongbaselineforqueryefficientattacksinablackboxsetting/47b3738d-7ba5-4a48-be8c-df19d1147efa_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..7673d1d698c1935bf0ac7c0d48c6420def5ee583
--- /dev/null
+++ b/astrongbaselineforqueryefficientattacksinablackboxsetting/47b3738d-7ba5-4a48-be8c-df19d1147efa_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:08e39c68416e7928209e8b4a764b6d3c5d3be390a50788fa7f53298fd33d0a2b
+size 122828
diff --git a/astrongbaselineforqueryefficientattacksinablackboxsetting/47b3738d-7ba5-4a48-be8c-df19d1147efa_origin.pdf b/astrongbaselineforqueryefficientattacksinablackboxsetting/47b3738d-7ba5-4a48-be8c-df19d1147efa_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..15386ffb8c1a8be481b7467789b0075cb8fa505d
--- /dev/null
+++ b/astrongbaselineforqueryefficientattacksinablackboxsetting/47b3738d-7ba5-4a48-be8c-df19d1147efa_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ba3667599ab4c726fab0cef4254d052474a1cf51e38937724eec37add5e8d308
+size 968625
diff --git a/astrongbaselineforqueryefficientattacksinablackboxsetting/full.md b/astrongbaselineforqueryefficientattacksinablackboxsetting/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..84671ec630056cf8058423d411ddb9bf95f7a2e5
--- /dev/null
+++ b/astrongbaselineforqueryefficientattacksinablackboxsetting/full.md
@@ -0,0 +1,452 @@
+# A Strong Baseline for Query Efficient Attacks in a Black Box Setting
+
+Rishabh Maheshwary*, Saket Maheshwary* and Vikram Pudi
+
+Data Sciences and Analytics Center, Kohli Center on Intelligent Systems
+
+International Institute of Information Technology, Hyderabad, India
+
+{rishabh.maheshwary@research.iit.ac.in, vikram@iit.ac.in}
+
+# Abstract
+
+Existing black box search methods have achieved high success rates in generating adversarial attacks against NLP models. However, such search methods are inefficient, as they do not consider the number of queries required to generate adversarial attacks. Also, prior attacks do not maintain a consistent search space while comparing different search methods. In this paper, we propose a query efficient attack strategy to generate plausible adversarial examples on text classification and entailment tasks. Our attack jointly leverages attention mechanism and locality sensitive hashing (LSH) to reduce the query count. We demonstrate the efficacy of our approach by comparing our attack with four baselines across three different search spaces. Further, we benchmark our results on the same search space used in prior attacks. Compared to prior attacks, we reduce the query count by $75\%$ on average across all datasets and target models. We also demonstrate that our attack achieves a higher success rate than prior attacks in a limited query setting.
+
+# 1 Introduction
+
+In recent years, Deep Neural Networks (DNNs) have achieved high performance on a variety of tasks (Yang et al., 2016; Goodfellow et al., 2016; Kaul et al., 2017; Maheshwary et al., 2017; Maheshwary and Misra, 2018; Devlin et al., 2018). However, prior studies (Szegedy et al., 2013; Papernot et al., 2017) have shown evidence that DNNs are vulnerable to adversarial examples, i.e., inputs generated by slightly changing the original input. Such changes are imperceptible to humans but deceive DNNs, raising serious concerns about their utility in real world applications. Existing NLP attack methods are broadly classified into white box attacks and black box attacks. White box attacks require access to the target model's parameters, loss function and gradients to craft an adversarial attack. Such attacks are computationally expensive and require knowledge of the internal details of the target model, which is not available in most real world applications. Black box attacks craft adversarial inputs using only the confidence scores or class probabilities predicted by the target model. Almost all prior black box attacks consist of two major components: (1) a search space and (2) a search method.
+
+A search space is collectively defined by a set of transformations (usually synonyms) for each input word and a set of constraints (e.g., minimum semantic similarity, part-of-speech (POS) consistency). The synonym set for each input word is generated either from the nearest neighbor in the counter-fitted embedding space or from a lexical database such as HowNet (Dong and Dong, 2003) or WordNet (Miller, 1995). The search space is variable and can be altered by either changing the source used to generate synonyms or by relaxing any of the above defined constraints.
+
+A search method is a searching algorithm used to find adversarial examples in the above defined search space. Given an input with $\mathcal{W}$ words, each having $\mathcal{T}$ possible substitutes, the total number of perturbed text inputs is $(\mathcal{T} + 1)^{\mathcal{W}}$. At this exponential size, the search algorithm must be efficient yet exhaustive enough to find optimal adversarial examples in the whole search space.
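As a quick illustration of this growth (the values of $\mathcal{T}$ and $\mathcal{W}$ below are illustrative, not from the paper):

```python
# Each of W words is either kept or replaced by one of T substitutes,
# so the number of candidate perturbed texts is (T + 1) ** W.
T, W = 8, 15
n_candidates = (T + 1) ** W
# 9 ** 15 = 205,891,132,094,649 candidates for a 15-word input --
# far too many to enumerate, hence the need for an efficient search method.
```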
+
+Black box attacks proposed in (Alzantot et al., 2018; Zang et al., 2020) employ combinatorial optimization procedures as search methods to find adversarial examples in the above defined search space. Such methods are extremely slow and require a massive number of queries to generate adversarial examples. Attacks proposed in (Ren et al., 2019; Jin et al., 2019) search for adversarial examples using word importance ranking, which first ranks the input words and then substitutes them with similar words. Word importance ranking scores each word by observing the change in the confidence score of the target model after that word is removed from the input (or replaced with a UNK token). Although the word importance ranking methods are faster than the optimization based methods, they have some major drawbacks: (1) each word is ranked by removing it from the input (or replacing it with a token), which alters the semantics of the input during ranking, (2) it is not clear whether the change in the confidence score of the target model is caused by the removal of the word or by the modified input as a whole, and (3) this ranking mechanism is inefficient on longer inputs.
+
+In general, there exists a trade-off between attack success rate and the number of queries: a search method that issues more queries achieves a higher success rate, and vice-versa. None of these prior attacks are query efficient, as they do not take into consideration the number of queries made to the target model to generate an attack. Such attacks will fail in real world applications where there is a constraint on the number of queries that can be made to the target model.
+
+To compare a new search method with previous methods, the new search method must be benchmarked on the same search space used by the previous search methods. However, a study conducted in (Yoo et al., 2020) has shown that prior attacks often modify the search space while evaluating their search method. This does not ensure a fair comparison between the search methods, because it is hard to distinguish whether the increase in attack success rate is due to the improved search method or the modified search space. For example, (Jin et al., 2019) compares their search method with (Alzantot et al., 2018), where the former uses the Universal Sentence Encoder (USE) (Cer et al., 2018) and the latter uses a language model as a constraint. Also, all the past works evaluate their search methods only on a single search space. In this paper, we address the above discussed drawbacks through the following contributions:
+
+1. We introduce a novel ranking mechanism that requires significantly fewer queries by jointly leveraging word attention scores and LSH to rank the input words, without altering the semantics of the input.
+
+2. We call for unifying the evaluation setting by benchmarking our search method on the same search space used in the respective baselines. Further, we evaluate the effectiveness of our method by comparing it with four baselines across three different search spaces.
+3. On average, our method is $50\%$ faster, as it requires $75\%$ fewer queries than prior attacks while sacrificing less than $2\%$ in attack success rate. Further, we demonstrate that our search method achieves a much higher success rate than the baselines in a limited query setting.
+
+# 2 Related Work
+
+# 2.1 White Box Attacks
+
+This category requires access to gradient information to generate adversarial attacks. HotFlip (Ebrahimi et al., 2017) flips characters in the input using the gradients of the one-hot input vectors. Liang et al. used gradients to perform insertion, deletion and replacement at the character level. Later, (Li et al., 2018) used the gradient of the loss function with respect to each word to find important words and replaced those with similar words. Following this, (Wallace et al., 2019) added triggers at the start of the input to generate adversarial examples. Although these attacks have a high success rate, they require knowledge of the model parameters and loss function, which is not accessible in real world scenarios.
+
+# 2.2 Black Box Attacks
+
+Existing black box attacks can be classified into combinatorial optimization based attacks and greedy attacks. The attack proposed in (Alzantot et al., 2018) uses a genetic algorithm as a search method to find optimal adversarial examples in the search space. On similar lines, (Zang et al., 2020) uses a Particle Swarm Optimization procedure to search for adversarial examples. Recently, (Maheshwary et al., 2021) crafted adversarial attacks using a genetic algorithm in a hard label black box setting. Although such methods have a high success rate, they are extremely slow and require a large number of queries to search for optimal adversarial examples.
+
+Figure 1: Scoring of each input word using attention mechanism and Locality Sensitive Hashing (LSH).
+
+Greedy black box attacks generate adversarial examples by first finding important words, i.e., those that highly impact the confidence score of the target model, and then replacing those words with similar words. The search method proposed in (Ren et al., 2019) used word saliency and the target model's confidence scores to rank words, and replaced them with synonyms from WordNet (Miller, 1995). Although such a ranking mechanism is exhaustive, it is not query efficient. Inspired by this, (Jin et al., 2019) proposed TextFooler, which ranks words based only upon the target model's confidence and replaces them with synonyms from the counter-fitted embedding space (Mrkšić et al., 2016). This ranking mechanism has a low attack success rate and is not exhaustive enough to find adversarial examples with a low perturbation rate.
+
+Some prior works (Garg and Ramakrishnan, 2020; Li et al., 2020b; Maheshwary et al., 2020; Li et al., 2020a) have used masked language models to generate word replacements instead of using synonyms. However, all these methods follow a ranking mechanism similar to TextFooler. Moreover, as shown in (Yoo et al., 2020), most of the black box methods described above do not maintain a consistent search space while comparing their method with other search methods.
+
+# 2.3 Locality Sensitive Hashing
+
+LSH has been used in various NLP applications in the past. Ravichandran et al. used LSH for clustering nouns, and (Van Durme and Lall, 2010) extended it to streaming data. Recently, (Kitaev et al., 2020) used LSH to reduce the computation time of the self attention mechanism. There are many variants of LSH; in this paper, we leverage the LSH method proposed in (Charikar, 2002).
+
+# 3 Proposed Approach
+
+Given a target model $\mathbf{F}:\mathcal{X}\to \mathcal{Y}$ that maps the input text sequence $\mathcal{X}$ to a set of class labels $\mathcal{Y}$, our goal is to generate an adversarial text sequence $\mathcal{X}_{\mathcal{A}\mathcal{D}\mathcal{V}}$ that belongs to any class in $\mathcal{Y}$ except the original class of $\mathcal{X}$, i.e. $\mathbf{F}(\mathcal{X})\neq \mathbf{F}(\mathcal{X}_{\mathcal{A}\mathcal{D}\mathcal{V}})$. The adversarial input $\mathcal{X}_{\mathcal{A}\mathcal{D}\mathcal{V}}$ must be generated by substituting the input words with their respective synonyms from a chosen search space.
+
+Our search method consists of two steps (1) Word Ranking – ranks all words in the input text and (2) Word Substitution – substitutes input words with their synonyms in the order-of-rank (step 1).
+
+# 3.1 Word Ranking
+
+Recent studies (Niven and Kao, 2019) have shown evidence that certain words in the input and their replacement can highly influence the final prediction of DNNs. Therefore, we score each word based upon, (1) how important it is for the final prediction and (2) how much its replacement with a similar word can alter the final prediction of the target model. We use attention mechanism to select important words for classification and employ LSH to capture the impact of replacement of each word on the prediction of target model. Figure 1 demonstrates the working of the word ranking step.
+
+# 3.1.1 Attention based scoring
+
+Given an input $\mathcal{X}$, this step assigns high scores to those influential words which impact the final outcome. The input sequence $\mathcal{X} = \{x_1, x_2 \dots x_n\}$ is passed through a pre-trained attention model $F_{attn}$ to get attention scores $\alpha_i$ for each word $x_i$. The scores are computed using Hierarchical Attention Networks (HAN) (Yang et al., 2016) for text classification and the Decompose Attention Model (DA) (Parikh et al., 2016) for entailment tasks. Note that, instead of querying the target model every time to score a word, this step scores all words together in a single pass (inferring the input sample by passing it through the attention model), thus significantly reducing the query count. Unlike prior methods, we do not rank a word by removing it from the input (or replacing it with a UNK token), which prevents us from altering the semantics of the input.
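A minimal sketch of such single-pass scoring, with a toy attention layer standing in for the pre-trained HAN/DA models (all names and values illustrative):

```python
import numpy as np

def attention_scores(hidden, query):
    """Single-pass word scoring: softmax of each word's alignment with a
    learned query vector, in the spirit of Hierarchical Attention Networks.
    hidden: (n_words, dim) word representations; query: (dim,) vector."""
    logits = hidden @ query
    e = np.exp(logits - logits.max())   # numerically stable softmax
    return e / e.sum()                  # alpha_i, one score per word

rng = np.random.default_rng(0)
hidden = rng.normal(size=(6, 8))        # toy representations of a 6-word input
query = rng.normal(size=8)
alpha = attention_scores(hidden, query)
# one forward pass yields all six scores; no per-word queries to the target model
```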
+
+# 3.1.2 LSH based Scoring
+
+This step assigns high scores to those words whose replacement with a synonym will highly influence the final outcome of the model. It scores each word based on the change in the confidence score of the target model when the word is replaced by a substitute (or synonym) word. However, computing the change in confidence score for every synonym of every input word requires a significantly large number of queries. Therefore, we employ LSH to solve this problem. LSH is a technique for finding nearest neighbours in high dimensional spaces. It takes as input a vector $x$ and computes its hash $h(x)$ such that similar vectors get the same hash with high probability and dissimilar ones do not. LSH differs from cryptographic hash methods as it aims to maximize the collisions of similar items. We leverage the Random Projection Method (RPM) (Charikar, 2002) to compute the hash of each input text.
+
+# 3.1.3 Random Projection Method (RPM)
+
+Let us assume we have a collection of vectors in an $m$ dimensional space $R^m$. We select a family of hash functions by randomly generating spherically symmetric random vectors $r$ of unit length from this $m$ dimensional space. The hash function $h_r(u)$ is defined as:
+
+$$
+h_{r}(u) = \left\{ \begin{array}{ll} 0 & r \cdot u < 0 \\ 1 & r \cdot u \geq 0 \end{array} \right. \tag{1}
+$$
+
+We repeat the above process, generating $d$ random unit vectors $\{r_1, r_2 \dots r_d\}$ in $R^m$. The final hash $\bar{u}$ of each vector $u$ is obtained by concatenating the results of all $d$ hash functions:
+
+$$
+\bar{u} = \{h_{r_1}(u), h_{r_2}(u) \dots h_{r_d}(u)\}. \tag{2}
+$$
+
+The hash of each vector $u$ is represented by a sequence of bits, and two vectors having the same hash are mapped to the same bucket in the hash table. This process is very efficient for finding nearest neighbours in high dimensional spaces, as the hash codes are generated using only a dot product between two matrices. It is also much easier to implement and simpler to understand than other nearest neighbour methods. We use the above process to score each word as follows:
+
+1. First, an input word $x_{i}$ is replaced with every synonym from its synonym set $S(x_{i})$, resulting in perturbed sequences $\mathcal{X}_{ij} = \{x_1\ldots w_j\ldots x_n\}$, where $i$ is the substituted index and $j$ indexes the $j$-th synonym from $S(x_{i})$, i.e. $w_{j}\in S(x_{i})$. Perturbed sequences not satisfying the search space constraints (Table 1) are filtered out.
+2. The remaining perturbed inputs are passed through a sentence encoder (USE) which returns a vector representation $\mathcal{V}_j$ for each perturbed input. Then we use LSH as described above to compute the hash of each vector. The perturbed inputs having the same hash are mapped to the same bucket of the hash table.
+
+$$
+\mathcal{V}_{j} = \operatorname{encode}\left(\mathcal{X}_{ij}\right) \quad \forall j; j \in [1, \mathcal{T}] \tag{3}
+$$
+
+$$
+\mathcal{B} = \operatorname{LSH}\left(\mathcal{V}_{j}, \mathcal{X}_{ij}\right) \quad \forall j; j \in [1, \mathcal{T}] \tag{4}
+$$
+
+where $\mathcal{T}$ is the number of perturbed inputs obtained for each word and $\mathcal{B} = \{b_0, b_1 \dots b_{\mathcal{K}}\}$ is the set of buckets in the hash table obtained after LSH, $\mathcal{K}$ being the number of buckets.
+
+3. As each bucket contains similar perturbed inputs, a perturbed input is sampled randomly from each bucket and passed to the target model $\mathbf{F}$. The maximum change in the confidence score of the target model among these sampled inputs is the score for that index.
+
+$$
+\mathcal {V} _ {k} ^ {*}, \mathcal {X} _ {k} ^ {*} = \operatorname {s a m p l e} \left(b _ {k}\right) \forall k; k \in [ 0, \mathcal {K} ] \tag {5}
+$$
+
+$$
+\Delta P _ {k} = \mathbf {F} (\mathcal {X}) - \mathbf {F} \left(\mathcal {X} _ {k} ^ {*}\right) \forall k; k \in [ 0, \mathcal {K} ] \tag {6}
+$$
+
+$$
+P_i = \max\left(\Delta P_k\right) \quad k \in [0, \mathcal{K}] \tag{7}
+$$
+
+Steps 1 to 3 are repeated for all indices in $\mathcal{X}$. LSH maps highly similar perturbed inputs to the same bucket, and since all such inputs are similar, they impact the target model almost equally. Therefore, instead of querying the target model $\mathbf{F}$ for every input in a bucket, we sample one input randomly from each bucket and observe its impact on $\mathbf{F}$. This reduces the query count from being proportional to the number of synonyms of each word to the number of buckets obtained after LSH.
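The word-scoring procedure in steps 1 to 3 can be sketched as follows. This is a minimal illustration, not the authors' implementation: `encode` and `target_f` are hypothetical stand-ins for USE and the target model $\mathbf{F}$, and the random-hyperplane hashing follows equation 2.

```python
import random

def simhash(vec, hyperplanes):
    """Hash code: sign bits of dot products with d random hyperplanes (equation 2)."""
    return tuple(sum(h * x for h, x in zip(hp, vec)) > 0 for hp in hyperplanes)

def score_word(words, i, synonyms, encode, target_f, d=5, seed=0):
    """Score for index i: substitute synonyms, bucket the perturbed texts with LSH,
    query the target model on one sample per bucket, return the max confidence drop."""
    rnd = random.Random(seed)
    dim = len(encode(" ".join(words)))
    hyperplanes = [[rnd.gauss(0, 1) for _ in range(dim)] for _ in range(d)]
    buckets = {}
    for w in synonyms:  # step 1: one perturbed text per synonym
        text = " ".join(words[:i] + [w] + words[i + 1:])
        buckets.setdefault(simhash(encode(text), hyperplanes), []).append(text)  # step 2
    p_orig = target_f(" ".join(words))
    # step 3: one query to the target model per bucket, instead of one per synonym
    return max(p_orig - target_f(rnd.choice(texts)) for texts in buckets.values())
```

With a toy encoder and classifier this makes the saving concrete: the number of target-model queries equals the number of buckets, not the number of synonyms.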
+
+# 3.1.4 False Negative Error Rate of LSH
+
+Although LSH is efficient for finding nearest neighbours, there is still a small probability that similar perturbed inputs get mapped to different buckets. To reduce this probability, we conduct multiple rounds of hashing, $L = 15$, each round with a different family of hash functions, and choose the round with the most collisions, i.e. the round with the minimum number of buckets. Charikar (2002) establishes an upper bound on the false negative error rate of LSH, i.e. the probability that two highly similar vectors are mapped to different buckets. The upper bound is given by (for more details refer to Charikar (2002))
+
+$$
+(1 - P^{d})^{L} \tag{8}
+$$
+
+$$
+\text{where} \quad P = 1 - \frac{\theta(u, v)}{\pi} \tag{9}
+$$
+
+This shows that for the given values of $L$ and $d$, LSH maps similar vectors to the same bucket with very high probability, cutting down the synonym search space drastically and thus reducing the number of queries required to attack. The dimension $d$ of the hash function used in equation 2 is set to 5 and is the same across all rounds.
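Plugging equation 9 into equation 8 gives a concrete feel for how small the miss probability is; the sketch below evaluates the bound at the settings used here, $d = 5$ and $L = 15$:

```python
import math

def false_negative_bound(theta, d=5, L=15):
    """Upper bound (1 - P^d)^L on two vectors at angle theta landing in
    different buckets in all L rounds (equations 8-9; Charikar, 2002)."""
    p = 1.0 - theta / math.pi  # P: one random hyperplane does not separate the pair
    return (1.0 - p ** d) ** L
```

For near-duplicates (e.g. $\theta = 0.1\pi$) the bound is on the order of $10^{-6}$, so similar perturbed inputs almost always share a bucket in at least one round.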
+
+# 3.1.5 Final Score Calculation
+
+After obtaining the attention score $\alpha_{i}$ and the synonym-based score $P_{i}$ for each index (calculated using LSH), we multiply the two to get the final score $score_{i} = \alpha_{i} * P_{i}$ for each word. All words are sorted in descending order of $score_{i}$. The algorithm for the word ranking step is provided in Appendix A.
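The combination step is a single multiply-and-sort; a minimal sketch (function name hypothetical):

```python
def rank_words(words, attn_scores, lsh_scores):
    """score_i = alpha_i * P_i; return (score, word) pairs, highest score first."""
    scored = [(a * p, w) for w, a, p in zip(words, attn_scores, lsh_scores)]
    return sorted(scored, key=lambda t: t[0], reverse=True)
```

Because the two scores are multiplied, a word ranks high only if it both receives attention and has a synonym that moves the target model; either factor near zero drives the product down.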
+
+# 3.2 Word Substitution
+
+We generate the final adversarial example for the input text by perturbing words in the order retrieved by the word ranking step. For each word $x_{i}$ in $\mathcal{W}$, we perform the following steps.
+
+1. Each synonym $w_{j}$ from the synonym set $S(x_{i})$ is substituted for $x_{i}$ which results in a perturbed text $X_{j}^{\prime} = \{x_{1}\dots w_{j}\dots x_{n}\}$ . The perturbed texts which do not satisfy the constraints imposed on the search space (Table 1) are filtered (Algorithm 1, lines 1-7).
+2. The remaining perturbed sequence(s) are fed to the target model $\mathbf{F}$ to get the class label $y_{new}$ and its corresponding probability score $P_{new}$. A perturbed sequence that alters the original class label $y_{orig}$ is chosen as the final adversarial example $\mathcal{X}_{\mathcal{ADV}}$. If the original label does not change, the perturbed sequence with the minimum $P_{new}$ is chosen (Algorithm 1, lines 7-14).
+
+Steps 1-2 are repeated on the chosen perturbed sequence for the next ranked word. Note that we use LSH only in the word ranking step, because ranking requires calculating scores for all words in the input and therefore iterating over all possible synonyms of every input word. In the word substitution step, however, we replace only one word at a time and stop as soon as we obtain an adversarial example. Since very few substitutions are needed to generate an adversarial example (see perturbation rate in Table 3), the substitution step iterates over far fewer words than the ranking step.
+
+# Algorithm 1 Word Substitution
+
+Input: Test sample $\mathcal{X}$ , Ranked words $\mathcal{W}$
+
+Output: Adversarial text $\mathcal{X}_{ADV}$
+
+1: $\mathcal{X}_{ADV}\gets \mathcal{X}$
+2: $y_{orig}$ , $P_{orig}\gets \mathbf{F}(\mathcal{X}_{\mathcal{ADV}})$
+3: $P_{best} \gets P_{orig}$
+4: for $(score_i, x_i)$ in $\mathcal{W}$ do
+5: $S\gets$ Synonyms $(x_{i})$
+6: for $w_{j}$ in $S$ do
+7: $X_{j}^{\prime}\gets \text{Replace } x_{i}$ with $w_{j}$
+8: $y_{new}, P_{new} \gets \mathbf{F}(X_j^{\prime})$
+9: if $y_{new} \neq y_{orig}$ then
+10: $\mathcal{X}_{\mathcal{A}\mathcal{D}\mathcal{V}}\gets X_j^{\prime}$
+11: return $\mathcal{X}_{\mathcal{A}\mathcal{D}\mathcal{V}}$
+12: if $P_{new} < P_{best}$ then
+13: $P_{best}\gets P_{new}$
+14: $\mathcal{X}_{\mathcal{ADV}}\gets X_j^{\prime}$
+15: return $\mathcal{X}_{ADV}$
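Algorithm 1 can be rendered as a runnable sketch. Here `synonyms` and `target_f` (which returns a (label, confidence) pair) are hypothetical stand-ins for the paper's components:

```python
def word_substitution(words, ranked_indices, synonyms, target_f):
    """Greedily substitute synonyms in ranked order until the label flips (Algorithm 1)."""
    adv = list(words)
    y_orig, p_best = target_f(adv)
    for i in ranked_indices:
        best_syn = None
        for w in synonyms(adv[i]):            # lines 5-7: try each synonym
            cand = adv[:i] + [w] + adv[i + 1:]
            y_new, p_new = target_f(cand)     # line 8: query the target model
            if y_new != y_orig:               # lines 9-11: label flipped, attack done
                return cand
            if p_new < p_best:                # lines 12-14: keep lowest-confidence sub
                p_best, best_syn = p_new, w
        if best_syn is not None:
            adv[i] = best_syn                 # commit before moving to the next word
    return adv                                # line 15 (the attack may have failed)
```

Each iteration either terminates with a successful adversarial example or commits the substitution that most lowers the model's confidence before moving on.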
+
+# 4 Experiments
+
+# 4.1 Datasets and Target Models
+
+We use IMDB, a document-level sentiment classification dataset for movie reviews (Maas et al., 2011), and Yelp Reviews, a restaurant review dataset (Zhang et al., 2015), for the classification task. We use MultiNLI, a natural language inference dataset (Williams et al., 2017), for the entailment task. We attacked WordLSTM (Hochreiter and Schmidhuber, 1997) and BERT-base-uncased (Devlin et al., 2018) to evaluate our attack strategy on text classification and entailment tasks. For WordLSTM, we used a single-layer bi-directional LSTM with 150 hidden units, a dropout of 0.3 and 200-dimensional GloVe (Pennington et al., 2014) vectors. Additional details are provided in Appendix A.
+
+# 4.2 Search Spaces and Baselines
+
+We compare our search method with four baselines across three different search spaces. When comparing with each baseline, we use the same search space as in that baseline's paper. The details of the search spaces are shown in Table 1. PSO: (Zang et al., 2020) It uses a particle swarm optimization algorithm as the search method and HowNet (Dong and Dong, 2006) as the search space. TextFooler: (Jin et al., 2019) It finds important words and replaces them with synonyms from counter-fitted embeddings (Mrkšić et al., 2016). PWWS: (Ren et al., 2019) It takes the word saliency score into account to rank words and uses WordNet (Miller, 1995) for synonym substitution.
+
+Genetic Attack: (Alzantot et al., 2018) It crafts adversarial examples using a population-based algorithm.
+
+| Baseline | Transformation | Constraint |
+| Genetic Attack | Counter-fitted embeddings | Word similarity, LM score |
+| TextFooler | Counter-fitted embeddings | USE = 0.84, POS consistency |
+| PSO | HowNet | USE = 0.84, POS consistency |
+| PWWS | WordNet | POS consistency |
+
+Table 1: Baselines and their search spaces
+
+# 4.3 Experimental Settings
+
+In a black-box setting, the attacker has no access to the training data of the target model. Therefore, we made sure to train the attention models on a different dataset: for attacking the target model trained on IMDB, we trained our attention model on Yelp Reviews and vice-versa. For entailment, we trained the attention model on SNLI (Bowman et al., 2015) and attacked the target model trained on MNLI. Following (Jin et al., 2019), the target models are attacked on the same 1000 samples, sampled from the test set of each dataset. The same set of samples is used across all baselines when evaluating on a single dataset. For the entailment task we only perturb the premise and leave the hypothesis unchanged. We used spaCy for POS tagging and filtered out stop words using NLTK. We used the Universal Sentence Encoder (Cer et al., 2018) to encode the perturbed inputs while performing LSH. The hyperparameters $d$ and $L$ are tuned on the validation set (10% of each dataset). Additional details regarding hyperparameter tuning and the attention models can be found in the appendix.
+
+# 4.4 Evaluation Metrics
+
+We use (1) attack success rate, the ratio of successful attacks to the total number of attacks; (2) query count, the number of queries to the target model; (3) perturbation rate, the percentage of words substituted in an input; and (4) grammatical correctness, the average grammatical error increase rate (calculated using LanguageTool²), to verify the quality of the generated adversarial examples. For all metrics except attack success rate, the lower the value, the better the result. For all metrics, we report the average score across all generated adversarial examples on each dataset. We also conducted a human evaluation to assess the quality of the generated adversarial examples.
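The first three metrics reduce to simple averages over per-sample attack outcomes; a sketch under the assumption that perturbation rate is averaged over successful attacks only (field layout hypothetical):

```python
def attack_metrics(results):
    """results: list of (success, queries, n_substituted, n_words) per attacked sample."""
    n = len(results)
    success_rate = sum(1 for s, *_ in results if s) / n
    avg_queries = sum(q for _, q, *_ in results) / n
    pert = [100.0 * k / w for s, _, k, w in results if s]  # % of words substituted
    avg_pert = sum(pert) / len(pert) if pert else 0.0
    return {"success_rate": success_rate, "avg_queries": avg_queries,
            "avg_pert_rate": avg_pert}
```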
+
+# 4.5 Results
+
+Tables 2 and 3 show the comparison of our proposed method with each baseline across all evaluation metrics. On average, across all baselines, datasets and target models, we reduce the query count by $75\%$. PSO and Genetic Attack take at least 50x and 20x more queries respectively than our attack strategy. Compared to PWWS and TF, we reduce the query count by at least $65\%$ and $33\%$ respectively while compromising the success rate by less than $2.0\%$. The perturbation rate and the grammatical correctness are also within $1\%$ of the best baseline. In comparison to PSO and Genetic Attack, we achieve an even lower perturbation rate and grammatical error rate with far fewer queries across some datasets and target models. Similarly, our attack outperforms TextFooler on almost all evaluation metrics. The runtime comparison and anecdotes from the generated adversarial text are provided in the appendix.
+
+# 5 Ablation Study
+
+We study the effectiveness of attention and LSH component in our method by doing a three way ablation. We observe the change in success rate, perturbation rate and queries when both or either one of the two ranking components are removed.
+
+No LSH and attention: First, we remove both the attention and LSH scoring steps and rank the
+
+| Model | Attack | IMDB Qrs | IMDB Suc% | Yelp Qrs | Yelp Suc% | MNLI Qrs | MNLI Suc% |
+| BERT | PSO | 81350.6 | 99.0 | 73306.6 | 93.2 | 4678.5 | 57.97 |
+| BERT | Ours | 737 | 97.4 | 554.2 | 91.6 | 97.2 | 56.1 |
+| LSTM | PSO | 52008.7 | 99.5 | 43671.7 | 95.4 | 2763.3 | 67.8 |
+| LSTM | Ours | 438.1 | 99.5 | 357.6 | 94.75 | 79.8 | 66.4 |
+
+(a) Comparison with PSO.
+
+| Model | Attack | IMDB Qrs | IMDB Suc% | Yelp Qrs | Yelp Suc% | MNLI Qrs | MNLI Suc% |
+| BERT | PWWS | 1583.9 | 97.5 | 1013.7 | 93.8 | 190 | 96.8 |
+| BERT | Ours | 562.9 | 96.4 | 366.2 | 92.6 | 66.1 | 95.1 |
+| LSTM | PWWS | 1429.2 | 100.0 | 900.0 | 99.1 | 160.2 | 98.8 |
+| LSTM | Ours | 473.8 | 100.0 | 236.3 | 99.1 | 60.1 | 98.1 |
+
+(c) Comparison with PWWS.
+
+| Model | Attack | IMDB Qrs | IMDB Suc% | Yelp Qrs | Yelp Suc% | MNLI Qrs | MNLI Suc% |
+| BERT | Gen | 7944.8 | 66.3 | 6078.1 | 85.0 | 1546.8 | 83.8 |
+| BERT | Ours | 378.6 | 71.1 | 273.7 | 84.4 | 43.4 | 81.9 |
+| LSTM | Gen | 3606.9 | 97.2 | 5003.4 | 96.0 | 894.5 | 87.8 |
+| LSTM | Ours | 224 | 98.5 | 140.7 | 95.4 | 39.9 | 86.4 |
+
+(b) Comparison with Genetic Attack.
+
+| Model | Attack | IMDB Qrs | IMDB Suc% | Yelp Qrs | Yelp Suc% | MNLI Qrs | MNLI Suc% |
+| BERT | TF | 1130.4 | 98.8 | 809.9 | 94.6 | 113 | 85.9 |
+| BERT | Ours | 750 | 98.4 | 545.5 | 93.2 | 100 | 86.2 |
+| LSTM | TF | 544 | 100.0 | 449.4 | 100.0 | 105 | 95.9 |
+| LSTM | Ours | 330 | 100.0 | 323.7 | 100.0 | 88 | 96.2 |
+
+(d) Comparison with TextFooler (TF).
+Table 2: Result comparison. Suc% is the attack success rate and Qrs is the average query count. Note that since each baseline uses a different search space, our method yields different results when comparing with each baseline.
+
+| Model | Attack | IMDB Pert% | IMDB I% | Yelp Pert% | Yelp I% | MNLI Pert% | MNLI I% |
+| BERT | PSO | 4.5 | 0.20 | 10.8 | 0.30 | 8.0 | 3.5 |
+| BERT | Ours | 4.2 | 0.10 | 7.8 | 0.15 | 7.1 | 3.3 |
+| LSTM | PSO | 2.2 | 0.15 | 7.7 | 0.27 | 6.7 | 1.27 |
+| LSTM | Ours | 2.0 | 0.11 | 4.9 | 0.15 | 6.8 | 1.3 |
+
+(a) Comparison with PSO.
+
+| Model | Attack | IMDB Pert% | IMDB I% | Yelp Pert% | Yelp I% | MNLI Pert% | MNLI I% |
+| BERT | PWWS | 5.2 | 0.74 | 7.3 | 1.5 | 7.1 | 1.71 |
+| BERT | Ours | 7.5 | 0.9 | 9.9 | 1.9 | 9.6 | 1.48 |
+| LSTM | PWWS | 2.3 | 0.3 | 4.8 | 1.29 | 6.6 | 1.5 |
+| LSTM | Ours | 1.9 | 0.4 | 5.5 | 1.29 | 7.8 | 2.1 |
+
+(c) Comparison with PWWS.
+
+| Model | Attack | IMDB Pert% | IMDB I% | Yelp Pert% | Yelp I% | MNLI Pert% | MNLI I% |
+| BERT | Gen | 6.5 | 1.04 | 11.6 | 1.5 | 8.7 | 1.9 |
+| BERT | Ours | 6.7 | 1.02 | 10.5 | 1.49 | 9.2 | 2.1 |
+| LSTM | Gen | 4.1 | 0.62 | 8.6 | 1.3 | 7.7 | 2.5 |
+| LSTM | Ours | 3.19 | 0.56 | 6.2 | 1.05 | 8.2 | 2.1 |
+
+(b) Comparison with Genetic Attack.
+
+| Model | Attack | IMDB Pert% | IMDB I% | Yelp Pert% | Yelp I% | MNLI Pert% | MNLI I% |
+| BERT | TF | 9.0 | 1.21 | 5.2 | 1.1 | 11.6 | 1.23 |
+| BERT | Ours | 6.9 | 0.9 | 6.6 | 1.2 | 11.4 | 1.41 |
+| LSTM | TF | 2.2 | 2.3 | 5.7 | 2.06 | 9.8 | 1.7 |
+| LSTM | Ours | 2.4 | 1.5 | 5.3 | 1.5 | 10.1 | 1.4 |
+
+(d) Comparison with TextFooler (TF).
+
+Table 3: Result comparison. $\text{Pert}\%$ is the perturbation rate and $\mathrm{I}\%$ is the average grammatical error increase rate.
+
+words in random order. Table 4 shows the results obtained on BERT across all three datasets. On average, the attack success rate drops by $7\%$ and the perturbation rate increases drastically, by $6\%$. This shows that although the query count reduces, substituting words in random order degrades the quality of the generated adversarial examples and is not effective for attacking target models.
+
+Attention and no LSH: We remove the LSH component of our ranking step and rank words based only on the attention scores obtained from the attention model. Table 4 shows the results on BERT across all datasets. On average, the attack success rate drops by $2.5\%$, the perturbation rate increases by $3\%$ and the query count increases by $37\%$. Therefore, LSH reduces the queries significantly by eliminating near duplicates in the search space.
+
+LSH and no Attention: We remove the attention component and rank words using only LSH. Results in Table 4 show that, on average, without attention scoring the attack success rate drops by $2\%$, the perturbation rate increases by $0.5\%$ and the query count increases by $20\%$. Therefore, attention is important as it not only reduces queries but also enables the ranking method to prioritize words important to the target model's prediction.
+
+With LSH and Attention: Finally, Table 4 shows that using both LSH and attention in our ranking, our attack achieves a much better success rate and a lower perturbation rate with far fewer queries. This shows that both components are necessary to do well across all evaluation metrics. We obtained similar results on LSTM when evaluating across different datasets and search spaces.
+
+# 6 Quantitative Analysis
+
+# 6.1 Limited Query Setting
+
+In this setting, the attacker has a fixed query budget $L$ , and can generate an attack in $L$ queries or
+
+
+Figure 2: Comparison of the number of adversarial samples generated by varying the query budget $L$. (a) Adversarial samples generated against BERT on IMDB; (b) against BERT on Yelp; (c) against LSTM on IMDB; (d) against LSTM on Yelp.
+
+| Dataset | Random Suc% | Random Pert% | Random Qrs | Only Attention Suc% | Only Attention Pert% | Only Attention Qrs | Only LSH Suc% | Only LSH Pert% | Only LSH Qrs | LSH + Attention Suc% | LSH + Attention Pert% | LSH + Attention Qrs |
+| IMDB | 90.5 | 13.3 | 507.9 | 94.0 | 9.3 | 851.3 | 95.3 | 8.0 | 694.9 | 96.4 | 7.5 | 562.9 |
+| Yelp | 87.3 | 15.0 | 305.9 | 91.0 | 11.0 | 550.0 | 90.2 | 10.2 | 475.2 | 92.6 | 9.8 | 366.2 |
+| MNLI | 88.8 | 14.3 | 60.1 | 92.4 | 11.7 | 121.2 | 94.3 | 10.1 | 100.1 | 95.1 | 9.6 | 66.1 |
+
+Table 4: Ablation Study of attention mechanism and LSH on PWWS search space.
+
+less. To demonstrate the efficacy of our attack under this constraint, we vary the query budget $L$ and observe the attack success rate on BERT and LSTM across the IMDB and Yelp datasets. We vary the query budget from 0 to 2500 and observe how many adversarial examples can be generated successfully on a test set of 500 samples. We keep the search space the same (the one used in PWWS) across all search methods. The results in Figure 2 show that with a query budget of 1000, our approach generates at least 200 (44.4%) more adversarial samples against both BERT and LSTM on IMDB when compared to the best baseline. Similarly, on Yelp our method generates at least 100 (25%) more adversarial samples on BERT and LSTM when compared to the best baseline. This analysis shows that our attack has a much higher success rate in a limited query setting, making it extremely useful for real-world applications.
+
+# 6.2 Input Length
+
+To demonstrate how our strategy scales with input length (the number of words in the input) compared to other baselines, we attacked BERT on Yelp. We selected inputs having between 10 and 250 words and observed the number of queries taken by each attack method. Results in Figure 3 show that our attack takes the fewest queries across all input lengths. Further, our attack scales much better on longer inputs ($>250$ words), as it is 2x faster than PWWS and TextFooler, 13x faster than Genetic Attack and 133x faster than PSO.
+
+# 6.3 Transferability
+
+An adversarial example is said to be transferable if, although generated against one particular target model, it is able to fool other target models as well. We evaluated transferability on the IMDB and MNLI datasets across the two target models. The results are shown in Table 5. Our transferred examples dropped the accuracy of the other target model by $16\%$ on average.
+
+# 6.4 Adversarial Training
+
+We randomly sample $10\%$ samples from the training dataset of MNLI and IMDB and generate adversarial examples using our proposed strategy.
+
+| Transfer | Accuracy | IMDB | MNLI |
+| BERT → LSTM | Original | 90.9 | 85.0 |
+| BERT → LSTM | Transferred | 72.9 | 60.6 |
+| LSTM → BERT | Original | 88.0 | 70.1 |
+| LSTM → BERT | Transferred | 67.7 | 62.1 |
+
+We augmented the training data with the generated adversarial examples and re-trained BERT on the IMDB and MNLI tasks. We then attacked BERT again with our proposed strategy and observed the changes. The results in Figure 4 show that as we add more adversarial examples to the training set, the model becomes more robust to attacks: the after-attack accuracy and perturbation rate increased by $35\%$ and $17\%$ respectively, and more queries were required to attack.
+
+# 7 Qualitative Analysis
+
+# 7.1 Human Evaluation
+
+We also verified the quality of the generated adversarial samples via human evaluation. We randomly sampled $25\%$ of the original instances and their corresponding adversarial examples generated on BERT for the IMDB and MNLI datasets on the PWWS search space. The actual class labels of the adversarial examples were kept hidden and the human judges were asked to classify them. We also asked the judges to evaluate each sample for semantic similarity, assigning a score of 0, 0.5 or 1 based on how well the adversarial example retained the meaning of its original counterpart, and to score each example from 1 to 5 for grammatical correctness. Each adversarial example was evaluated by 3 human evaluators
+
+
+Figure 3: Queries taken vs number of words in input
+
+
+Figure 4: Increase in after attack accuracy and perturbation rate as more adversarial samples are augmented.
+
+
+
+and the scores obtained were averaged. The outcomes are shown in Table 6.
+
+Table 5: Transferability on IMDB and MNLI datasets
+
+| Evaluation criteria | IMDB | MNLI |
| Classification result | 94% | 91% |
| Grammatical Correctness | 4.32 | 4.12 |
| Semantic Similarity | 0.92 | 0.88 |
+
+Table 6: Scores given by the human evaluators.
+
+# 8 Conclusion
+
+We proposed a query-efficient attack that generates plausible adversarial examples on text classification and entailment tasks. Extensive experiments across three search spaces and four baselines show that our attack generates high-quality adversarial examples with significantly fewer queries. Further, we demonstrated that our attack has a much higher success rate in a limited query setting, making it extremely useful for real-world applications.
+
+# 9 Future Work
+
+Our proposed attack provides a strong baseline for more query-efficient black-box attacks. The existing word-level scoring methods can be extended to the sentence level. Also, the attention scoring model can be trained on different datasets to observe how the success rate and query efficiency are affected. Furthermore, existing attack methods can be evaluated against various defense methods to compare the effectiveness of different attacks.
+
+# 10 Acknowledgement
+
+We would like to thank all the reviewers for their critical insights and positive feedback. We would also like to thank Riyadh Ahmed Bhat for valuable discussions which strengthened our paper and helped us in responding to the reviewers.
+
+# References
+
+Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang. 2018. Generating natural language adversarial examples. arXiv preprint arXiv:1804.07998.
+Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. arXiv preprint arXiv:1508.05326.
+Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, et al. 2018. Universal sentence encoder. arXiv preprint arXiv:1803.11175.
+Moses S Charikar. 2002. Similarity estimation techniques from rounding algorithms. In Proceedings of the thirty-fourth annual ACM symposium on Theory of computing, pages 380-388.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
+Zhendong Dong and Qiang Dong. 2003. Hownet-a hybrid language and knowledge resource. In International Conference on Natural Language Processing and Knowledge Engineering, 2003. Proceedings. 2003, pages 820-824. IEEE.
+Zhendong Dong and Qiang Dong. 2006. Hownet and the computation of meaning (with Cd-rom). World Scientific.
+Javid Ebrahimi, Anyi Rao, Daniel Lowd, and De- jing Dou. 2017. Hotflip: White-box adversarial examples for text classification. arXiv preprint arXiv:1712.06751.
+Siddhant Garg and Goutham Ramakrishnan. 2020. Bae: Bert-based adversarial examples for text classification. arXiv preprint arXiv:2004.01970.
+Ian Goodfellow, Yoshua Bengio, and Aaron Courville. 2016. Deep learning. MIT press.
+Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.
+Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2019. Is bert really robust? natural language attack on text classification and entailment. arXiv preprint arXiv:1907.11932.
+Ambika Kaul, Saket Maheshwary, and Vikram Pudi. 2017. Autolearn—automated feature generation and selection. In 2017 IEEE International Conference on data mining (ICDM), pages 217-226. IEEE.
+Nikita Kitaev, Lukasz Kaiser, and Anselm Levskaya. 2020. Reformer: The efficient transformer. arXiv preprint arXiv:2001.04451.
+
+Dianqi Li, Yizhe Zhang, Hao Peng, Liquin Chen, Chris Brockett, Ming-Ting Sun, and Bill Dolan. 2020a. Contextualized perturbation for textual adversarial attack. arXiv preprint arXiv:2009.07502.
+Jinfeng Li, Shouling Ji, Tianyu Du, Bo Li, and Ting Wang. 2018. Textbugger: Generating adversarial text against real-world applications. arXiv preprint arXiv:1812.05271.
+Linyang Li, Ruotian Ma, Qipeng Guo, Xiangyang Xue, and Xipeng Qiu. 2020b. Bert-attack: Adversarial attack against bert using bert. arXiv preprint arXiv:2004.09984.
+Bin Liang, Hongcheng Li, Miaoqiang Su, Pan Bian, Xirong Li, and Wenchang Shi. 2017. Deep text classification can be fooled. arXiv preprint arXiv:1704.08006.
+Andrew L Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th annual meeting of the association for computational linguistics: Human language technologies-volume 1, pages 142-150. Association for Computational Linguistics.
+Rishabh Maheshwary, Saket Maheshwary, and Vikram Pudi. 2020. A context aware approach for generating natural language attacks. arXiv preprint arXiv:2012.13339.
+Rishabh Maheshwary, Saket Maheshwary, and Vikram Pudi. 2021. Generating natural language attacks in a hard label black box setting. In Proceedings of the 35th AAAI Conference on Artificial Intelligence.
+Saket Maheshwary, Soumyajit Ganguly, and Vikram Pudi. 2017. Deep secure: A fast and simple neural network based approach for user authentication and identification via keystroke dynamics. In IWAISE: First International Workshop on Artificial Intelligence in Security, page 59.
+Saket Maheshwary and Hemant Misra. 2018. Matching resumes to jobs via deep siamese network. In *Companion Proceedings of the The Web Conference* 2018, pages 87-88.
+George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39-41.
+Nikola Mrkšić, Diarmuid O Seaghdha, Blaise Thomson, Milica Gašić, Lina Rojas-Barahona, PeiHao Su, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2016. Counter-fitting word vectors to linguistic constraints. arXiv preprint arXiv:1603.00892.
+Timothy Niven and Hung-Yu Kao. 2019. Probing neural network comprehension of natural language arguments. arXiv preprint arXiv:1907.07355.
+
+Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z Berkay Celik, and Ananthram Swami. 2017. Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia conference on computer and communications security, pages 506-519.
+Ankur P Parikh, Oscar Tackstrom, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. arXiv preprint arXiv:1606.01933.
+Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532-1543.
+Deepak Ravichandran, Patrick Pantel, and Eduard Hovy. 2005. Randomized algorithms and nlp: Using locality sensitive hash functions for high speed noun clustering. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05), pages 622-629.
+Shuhuai Ren, Yihe Deng, Kun He, and Wanxiang Che. 2019. Generating natural language adversarial examples through probability weighted word saliency. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1085-1097.
+Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2013. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199.
+Benjamin Van Durme and Ashwin Lall. 2010. Online generation of locality sensitive hash signatures. In Proceedings of the ACL 2010 conference short papers, pages 231-235.
+Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. 2019. Universal adversarial triggers for attacking and analyzing nlp. arXiv preprint arXiv:1908.07125.
+Adina Williams, Nikita Nangia, and Samuel R Bowman. 2017. A broad-coverage challenge corpus for sentence understanding through inference. arXiv preprint arXiv:1704.05426.
+Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the 2016 conference of the North American chapter of the association for computational linguistics: human language technologies, pages 1480-1489.
+Jin Yong Yoo, John X Morris, Eli Lifland, and Yanjun Qi. 2020. Searching for a search method: Benchmarking search algorithms for generating nlp adversarial examples. arXiv preprint arXiv:2009.06368.
+
+Yuan Zang, Fanchao Qi, Chenghao Yang, Zhiyuan Liu, Meng Zhang, Qun Liu, and Maosong Sun. 2020. Word-level textual adversarial attacking as combinatorial optimization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6066-6080.
+Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in neural information processing systems, pages 649-657.
+
+# A Appendix
+
+# Algorithm 2 Word Ranking
+
+Input: Test sample $\mathcal{X}$
+
+Output: $\mathcal{W}$ containing score of each word $x_{i}$
+
+1: $F_{attn} \gets HAN()$ or $DA()$
+2: $\alpha \gets F_{attn}(\mathcal{X})$
+3: for $x_{i}$ in $\mathcal{X}$ do
+4: $S \gets \text{Synonyms}(x_i)$
+5: for $w_{j}$ in $S$ do
+6: $\mathcal{X}_{ij} \gets \text{Replace } x_i \text{ with } w_j \text{ in } \mathcal{X}$
+7: $\mathcal{V}_j\gets encode(\mathcal{X}_{ij})$
+8: $\{b_{1}\dots b_{\mathcal{K}}\} = \mathbf{LSH}(\mathcal{V}_{j},\mathcal{X}_{ij})$
+9: for $k = 1$ to $\mathcal{K}$ do
10: $\mathcal{V}_k^*,\mathcal{X}_k^*\gets sample(b_k)$
11: $\Delta P_{k} = \mathbf{F}(\mathcal{X}) - \mathbf{F}(\mathcal{X}_{k}^{*})$
+12: $P_{i} = \max (\Delta P_{k})$
+13: $score_{i} \gets \alpha_{i} * P_{i}$
+14: $\mathcal{W}.insert((score_i, x_i))$
+15: Sort $\mathcal{W}$ by score in descending order
+
+| Dataset | Train | Test | Classes | Avg. Len |
| IMDB | 12K | 12K | 2 | 215 |
| Yelp | 560K | 18K | 2 | 152 |
| MultiNLI | 12K | 4K | 3 | 10 |
+
+# A.1 Models
+
+Target Models: We attacked WordLSTM and BERT-base-uncased to evaluate our attack strategy on text classification and entailment tasks. For WordLSTM, a single-layer bi-directional LSTM with 150 hidden units, 200-dimensional GloVe vectors and a dropout of 0.3 was used.
+
+Ranking Models: We used Hierarchical Attention Networks (HAN) and the Decomposable Attention model (DA) for the classification and entailment tasks respectively. For training HAN, we used 200-dimensional word2vec embeddings and 50-dimensional
+
+
+Figure 5: Queries vs the dimension $d$ of hash function
+
+GRU cells. We used 100-dimensional word context vectors initialized randomly. We trained the model with a batch size of 64, a learning rate of 0.0001, a dropout of 0.3 and a momentum of 0.9. For DA, a 2-layer LSTM with 200 neurons and 200-dimensional GloVe embeddings was used, with a batch size of 32, a learning rate of 0.025 and a dropout of 0.2. All hyper-parameters were tuned on the $20\%$ validation set of each dataset.
+
+Table 7: Statistics of all datasets
+
+| Attack | Runtime |
+| PSO | 72 hrs |
+| Genetic Attack | 10 hrs |
+| PWWS | 3 hrs |
+| TextFooler | 2.5 hrs |
+| Ours | 1 hr |
+
+Table 8: Runtime comparison while attacking BERT trained on Yelp dataset on a set of 500 samples across WordNet search space.
+
+# A.2 Hyperparameter Study
+
+Figures 5 and 6 show the variation in attack success rate and in queries taken to attack the target model as $d$ increases. As $d$ increases, the number of collisions decreases and therefore the number of buckets $\mathcal{K}$ increases, which increases the overall query count. Also, with increasing $d$ the success rate first increases and then remains unchanged. We therefore use $d = 5$, because beyond that the success rate stays almost the same while the query count increases drastically. Figures 7a and 7b show the variation in attack success rate and queries as the number of rounds of hashing $L$ increases. Conducting multiple rounds of hashing reduces the probability that similar perturbed inputs are mapped to different buckets. We choose $L = 15$ as beyond it the attack success rate and the queries remain almost unchanged. The values of $d$ and $L$ are kept the same across all datasets and target models.
+
+
+Figure 6: Success rate vs dimension $d$ of hash function
+
+
+(a) Queries vs rounds of hashing $L$
+
+
+(b) Success rate vs rounds of hashing $L$
+Figure 7: Comparison of success rate and queries required by changing the rounds of hashing $L$ .
+
+| Examples | Prediction |
| --- | --- |
| The movie has an excellent screenplay (the situation is credible, the action has pace), first-class [fantabulous] direction and acting (especially the 3 leading actors but the others as well -including the mobster, who does not seem to be a professional actor). I wish [want] the movie, the director and the actors success. | Positive → Negative |
| Let me start by saying I don't recall laughing once during this comedy. From the opening scene, our protagonist Solo (Giovanni Ribisi) shows himself to be a self-absorbed, feeble, and neurotic loser completely unable to cope with the smallest responsibilities such as balancing a checkbook, keeping his word, or forming a coherent thought. I guess we're supposed to be drawn to his fragile vulnerability and cheer him on through the process of clawing his way out of a deep depression. I actually wanted [treasured] him to get his kneecaps busted at one point. The dog was not a character in the film. It was simply a prop to be used, neglected, scorned, abused, coveted and disposed of on a whim. So be warned. | Negative → Positive |
| Local-international gathering [assembly] spot [stain] since the 1940s. One of the coolest pubs on the planet. Make new friends from all over the world, with some of the best [skilful] regional and imported beer selections in town. | Positive → Negative |
| This film is strange, even for a silent movie. Essentially, it follows the adventures about a engineer in post-revolutionary Russia who daydreams about going to Mars. In this movie, it seems like the producers KNOW the Communists have truly screwed up the country, but also seems to want to make it look like they've accomplished something good. Then we get to the "Martian" scenes, where everyone on Mars wears goofy hats. They have a revolution after being inspired by the Earth Men, but are quickly betrayed by the Queen who sides with them. Except it's all a dream, or is it. (And given that the Russian Revolution eventually lead to the Stalin dictatorship, it makes you wonder if it was all allegory.) Now [Nowdays], I've seen GOOD Russian cinema. For instance, Eisenstein's Battleship Potemkin is a good movie. This is just, well, silly. | Negative → Positive |
| This movie is one of the worst [tough] remake [remakes] I have ever seen in my life [aliveness!] The acting is laughable [funny] and Corman has not improved his piranhas any since 1978. 90% of the special [exceptional] effects [impressions] are taken [lifted] from Piranha (1978), Up From The Depths (1979) and Humanoids From The Deep (1979). It makes Piranha II: The Spawning look like it belongs on the American Film Institute List. | Negative → Positive |
| The story ultimately [eventually] takes hold and grips hard. | Positive → Negative |
| It's weird, wonderful, and not necessarily [definitely] for kids. | Negative → Positive |
+
+Table 9: Adversarial examples generated by attacking BERT on the classification task. The original word is highlighted in green and the substituted word, shown in square brackets, is coloured red. The Prediction column shows the labels before and after the attack, marked green and red respectively.
+
+| Examples | Prediction |
| --- | --- |
| Premise: If we travel for 90 minutes, we could arrive [reach] at larger ski resorts. <br> Hypothesis: Larger ski resorts are 90 minutes away. | Entailment → Neutral |
| Premise: I should put it in this way [manner]. <br> Hypothesis: I'm not explaining it. | Entailment → Neutral |
| Premise: Basically [Crucially], to sell myself. <br> Hypothesis: Selling myself is a very important thing. | Contradict → Neutral |
| Premise: June [April] 21, 1995, provides the specific requirements for assessing and reporting on controls. <br> Hypothesis: There are specific requirements for assessment of legal services. | Contradict → Entail |
+
+Table 10: Adversarial examples generated by attacking BERT on the entailment task. The original word is highlighted in green and the substituted word, shown in square brackets, is coloured red. The Prediction column shows the labels before and after the attack, marked green and red respectively.
\ No newline at end of file
diff --git a/astrongbaselineforqueryefficientattacksinablackboxsetting/images.zip b/astrongbaselineforqueryefficientattacksinablackboxsetting/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..e3ecb37d6803c09480f66643d7dd4608ba5b201b
--- /dev/null
+++ b/astrongbaselineforqueryefficientattacksinablackboxsetting/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f9e3824d049a9ad72b8a11e885bf5657b5cbeb54b3c42b061897636c5d5e9cd6
+size 917582
diff --git a/astrongbaselineforqueryefficientattacksinablackboxsetting/layout.json b/astrongbaselineforqueryefficientattacksinablackboxsetting/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..c2e2520828c50bddd50ce4bcda595d3b66fab2c4
--- /dev/null
+++ b/astrongbaselineforqueryefficientattacksinablackboxsetting/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2ecbf884cade76aa17fffc4306e0fec5cca0bbc15cb0f49693171c2f521bc134
+size 559500
diff --git a/asurprisaldurationtradeoffacrossandwithintheworldslanguages/7e95ec67-34f4-448d-9acf-0679d3ca8a25_content_list.json b/asurprisaldurationtradeoffacrossandwithintheworldslanguages/7e95ec67-34f4-448d-9acf-0679d3ca8a25_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..123d557de5e4301a307ff70e2d7db858fea30264
--- /dev/null
+++ b/asurprisaldurationtradeoffacrossandwithintheworldslanguages/7e95ec67-34f4-448d-9acf-0679d3ca8a25_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b86f3c61c84b48db5c14a8ff7c40a0d0c2ce100bf18a00aeb0cd9d1cf3352553
+size 104840
diff --git a/asurprisaldurationtradeoffacrossandwithintheworldslanguages/7e95ec67-34f4-448d-9acf-0679d3ca8a25_model.json b/asurprisaldurationtradeoffacrossandwithintheworldslanguages/7e95ec67-34f4-448d-9acf-0679d3ca8a25_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..561f05eae090ec4701c37b8336ead22562a0155d
--- /dev/null
+++ b/asurprisaldurationtradeoffacrossandwithintheworldslanguages/7e95ec67-34f4-448d-9acf-0679d3ca8a25_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c2240103ee448fecddc4c2db8a8f993535e06e23263e46991f15198ea107b3ab
+size 124214
diff --git a/asurprisaldurationtradeoffacrossandwithintheworldslanguages/7e95ec67-34f4-448d-9acf-0679d3ca8a25_origin.pdf b/asurprisaldurationtradeoffacrossandwithintheworldslanguages/7e95ec67-34f4-448d-9acf-0679d3ca8a25_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..fdabe45401e84dc6ee492ef7d42ca05993567678
--- /dev/null
+++ b/asurprisaldurationtradeoffacrossandwithintheworldslanguages/7e95ec67-34f4-448d-9acf-0679d3ca8a25_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:09732d0b3ebfab68896a0b9a1e62b1a6ced6fdff97ba04ac54292c4cadc7b275
+size 2044462
diff --git a/asurprisaldurationtradeoffacrossandwithintheworldslanguages/full.md b/asurprisaldurationtradeoffacrossandwithintheworldslanguages/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..f1ea84df75afb09799593fef51777616807165c0
--- /dev/null
+++ b/asurprisaldurationtradeoffacrossandwithintheworldslanguages/full.md
@@ -0,0 +1,303 @@
+# A surprisal-duration trade-off across and within the world's languages
+
+Tiago Pimentel$^{1}$ Clara Meister$^{2}$ Elizabeth Salesky$^{3}$ Simone Teufel$^{1}$ Damián Blasi$^{4,5,6}$ Ryan Cotterell$^{1,2}$
+
+$^{1}$University of Cambridge $^{2}$ETH Zürich $^{3}$Johns Hopkins University $^{4}$Harvard University $^{5}$Max Planck Institute for the Science of Human History $^{6}$Higher School of Economics
+tp472@cam.ac.uk clara.meister@inf.ethz.ch esalesky@jhu.edu sht25@cl.cam.ac.uk dblasi@fas.harvard.edu ryan.cotterell@inf.ethz.ch
+
+# Abstract
+
+While there exist scores of natural languages, each with its unique features and idiosyncrasies, they all share a unifying theme: enabling human communication. We may thus reasonably predict that human cognition shapes how these languages evolve and are used. Assuming that the capacity to process information is roughly constant across human populations, we expect a surprisal-duration trade-off to arise both across and within languages. We analyse this trade-off using a corpus of 600 languages and, after controlling for several potential confounds, we find strong supporting evidence in both settings. Specifically, we find that, on average, phones are produced faster in languages where they are less surprising, and vice versa. Further, we confirm that more surprising phones are longer, on average, in 319 languages out of the 600. We thus conclude that there is strong evidence of a surprisal-duration trade-off in operation, both across and within the world's languages.
+
+# 1 Introduction
+
+During the course of human evolution, countless languages have evolved, each with unique features. Despite their stark differences, however, it is plausible that shared attributes of human cognition may have placed constraints on how each language is implemented. These constraints, in turn, may lead to compensations and trade-offs in the world's languages. For instance, if we assume a channel capacity (Shannon, 1948) in humans' ability to process language (as posited by Frank and Jaeger, 2008), we may make predictions about these trade-offs. Additionally, if we assume this capacity to be uniform across human populations, these trade-offs will extend cross-linguistically.
+
+Within languages, there is a direct connection between this channel capacity assumption and
+
+
+Figure 1: Surprisal-duration trade-off slopes. The $y$ -axis presents a multiplicative effect: duration is multiplied by $y$ per bit of information. Sorted dots represent languages in Unitran; $+$ marks languages in Epitran.
+
+the uniform information density hypothesis (UID; Fenk and Fenk, 1980; Aylett and Turk, 2004; Levy and Jaeger, 2007), which predicts that speakers smooth the information rate in a linguistic signal so as to keep it roughly constant; by smoothing their information rate, natural languages can stay close to a (hypothetical) channel capacity. Across languages, a unified channel capacity allows us to derive a specific instantiation of the compensation hypothesis (Hockett, 1958), with information density (measured in, e.g., bits per phone) being compensated by utterance speed (in, e.g., milliseconds per phone). We may thus predict a trade-off between surprisal and duration both within and across the world's languages.
+
+This trade-off has been studied amply within high resource languages (Genzel and Charniak, 2002; Bell et al., 2003; Mahowald et al., 2018, inter alia). Cross-linguistically, however, this trade-off has received comparatively little attention, with a few notable exceptions such as Pellegrino
+
+et al. (2011) and Coupé et al. (2019). Several factors have inhibited cross-linguistic studies of this kind. Arguably, the most prominent is the sheer lack of data necessary to investigate the phenomenon. While massively cross-linguistic data abounds in the form of wordlists (Wichmann et al., 2020; Dellert et al., 2020), surprisal is a context-dependent measure and, therefore, isolated word types are not enough for this analysis. Further, as we have a specific hypothesis for why this trade-off should arise (humans' information processing capacity), we are not interested in simply finding any correlation between surprisal and duration. Several confounds could drive such a correlation, but most of these are either trivially true or uninteresting from our perspective. Therefore, a thorough analysis of this trade-off needs to control for these potential confounds.
+
+In this work, we investigate the surprisal-duration trade-off by analysing a massively multilingual dataset of more than 600 languages (Salesky et al., 2020). We present an experimental framework, controlling for several possible confounds, and evaluate the surprisal-duration trade-off at the phone level. We find evidence of a trade-off across languages: languages with more surprising phones compensate by making utterances longer. We also confirm monolingual trade-offs in 319 languages out of 600; within these languages, more surprising phones are pronounced with a significantly longer duration. This is the most representative evidence of the uniform information density hypothesis to date. Moreover, we did not find evidence of a single language where the opposite effect is in operation (i.e., where more informative phones are shorter). Given these collective results, we conclude there is strong evidence for a surprisal-duration trade-off both across and within the world's languages.
+
+# 2 Surprisal and Duration
+
+Cross-linguistic comparisons of information rate go back at least 50 years. In a study comparing phonemes per second, Osser and Peng (1964) found no statistical difference between the speech rate of English and Japanese native speakers. In a similar study, den Os (1985, 1988) compared Dutch and Italian and found no difference in terms
+
+of syllables per second, although Italian was found to be somewhat slower in phones per second. Such cross-linguistic comparisons, however, are not straightforward, since the range of speech rate can vary widely within a single language, depending on sentence length (Fonagy and Magdics, 1960) and type of speech (e.g. storytelling vs interview; Kowal et al., 1983). In a meta-analysis of these studies, Roach (1998) concludes that carefully assembled speech databases would be necessary to answer this question. In this line, Pellegrino et al. (2011) recently analysed the speech rate of 8 languages using a semantically controlled corpus. They found strong evidence towards non-uniform speech rates across these languages.
+
+This result is not surprising, however, given that natural languages vary widely in their phonology, morphology, and syntax. Despite these differences, researchers have hypothesised that there exist compensatory relationships between the complexity of these components (Hockett, 1958; Martinet, 1955). For instance, a larger phonemic diversity could be compensated by shorter words (Moran and Blasi, 2014; Pimentel et al., 2020), or a larger number of irregular inflected forms could lead to less complex morphological paradigms (Cotterell et al., 2019). Such compensation can thus be seen as a type of balance, where languages trade off reliability against effort in communication (Zipf, 1949; Martinet, 1962). One natural avenue for creating this balance would be a language's information rate. If this were kept roughly constant, the needs of both speakers (who prefer shorter utterances) and listeners (who value easier comprehension) could be accommodated. Speech rate would then be compensated by information density, resulting in a form of surprisal-duration trade-off. Indeed, Pellegrino et al. (2011) and Coupé et al. (2019) present initial evidence of this trade-off across languages.
+
+Analogously, the UID hypothesis posits that, within a language, users balance the amount of information per linguistic unit with the duration of its utterance. This hypothesis has been used to explain a range of experimental data in psycholinguistics, including syntactic reduction (Levy and Jaeger, 2007) and contractions, such as are vs 're (Frank and Jaeger, 2008). While this theory is somewhat under-specified with respect to its causal mechanisms, as we argue in Meister et al. (2021), one of its typical interpretations is that users are maximising a communicative channel's capacity
+
+(Frank and Jaeger, 2008; Piantadosi et al., 2011). If we assume this channel's capacity to be constant across languages, we may derive a cross-linguistic version of UID. Such a hypothesis would predict, for instance, that speakers of languages with less informative phones will make them faster. Under this specific interpretation, our study can be seen as evidence of UID as a cross-linguistic phenomenon.
+
+# 3 Measuring Surprisal
+
+To formalise our approach, we first present a standard measure of information content: surprisal. In the context of natural language, surprisal (Hale, 2001) measures the Shannon information content a linguistic unit conveys in context, which can be measured as its negative log-probability:
+
+$$
+\mathrm {H} \left(S _ {t} = s _ {t} \mid \boldsymbol {S} _ {< t} = \boldsymbol {s} _ {< t}\right) = - \log p \left(s _ {t} \mid \boldsymbol {s} _ {< t}\right) \tag {1}
+$$
+
+In this equation, $S$ is a sentence-level random variable, with instances $\boldsymbol{s} \in \mathcal{S}^*$ , and $t$ indexes a position in the sentence. Accordingly, we define $\mathcal{S}$ as the set of phones in a given phonetic alphabet, and we use $\boldsymbol{s}_{<t}$ to denote the phones preceding position $t$ .
+
+# 4 The VoxClamantis Corpus
+
+The VoxClamantis corpus (Salesky et al., 2020) spans more than 600 languages from 70 language families, as shown in Fig. 2. This dataset offers us a semantically controlled setting for our experiments, as it is composed of translations of a single text, the Bible.
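As a concrete instance of eq. (1): given an estimate of $p(s_t \mid \boldsymbol{s}_{<t})$, a phone's surprisal is simply the negative log-probability of the phone that actually occurred. The hard-coded bigram-style probabilities below are purely illustrative; the paper estimates these distributions with phone-level language models conditioned on the full sentential context.

```python
import math

# Toy conditional distribution p(s_t | s_{<t}) over a 4-phone inventory,
# keyed here by only the previous phone (a bigram sketch; the paper's
# models condition on the entire preceding context).
p_next = {
    "a": {"n": 0.5, "t": 0.25, "s": 0.125, "a": 0.125},
}

def surprisal_bits(prev, phone):
    """Surprisal of `phone` in context `prev`: -log2 p(phone | prev)."""
    return -math.log2(p_next[prev][phone])

# Predictable phones carry few bits, unpredictable ones carry many:
# surprisal_bits("a", "n") -> 1.0 bit; surprisal_bits("a", "s") -> 3.0 bits
```

Summing these per-phone values over an utterance gives its total information content, and dividing by its duration gives an information rate.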
+
+This dataset contains automatically generated phone alignments and derived phonetic measures for all its languages (both phone durations and vowels' first and second formant frequencies). On average, there are approximately 9,000 utterances (or 20 hours of speech) per language, making it the largest dataset of its kind. Phone labels were generated using grapheme-to-phoneme (G2P) tools and time-aligned using either multilingual acoustic models (Wiesner et al., 2019; Povey et al., 2011) or language-specific acoustic models (Black, 2019; Anumanchipalli et al., 2011). VoxClamantis offers its phonetic measurements under three G2P models, which trade off language coverage and quality. We will focus on two:
+
+- Epitran (Mortensen et al., 2018). This is a collection of high-quality G2P models based on language-specific rules. Phonetic measurements produced with Epitran are available for a collection of 39 doculects from 29 languages (as defined by ISO codes) in 8 language families.
+- Unitran (Qian et al., 2010). This is a naïve, deterministic G2P model, but its derived measurements are available for all languages in VoxClamantis. While Unitran is particularly error-prone for languages with opaque orthographies (Salesky et al., 2020), we filter out the languages with lower-quality alignments (as we detail below). The original dataset has 690 doculects from 635 languages in 70 language families.
+
+In order to study the trade-off hypothesis we require two measurements: phone durations and phone-level surprisals. As mentioned above, phone
+
+
+Figure 2: The languages of the VoxClamantis corpus geo-located and coloured by language family.
+
+durations are readily available in VoxClamantis. Phone-level surprisals, on the other hand, are not, so we employed phone-level language models in order to estimate them (as detailed in §3). Given both these values, we can perform our cross-linguistic analysis. First, though, we describe some data quality checks.
+
+Filtering Unitran. The phone and utterance alignments for the VoxClamantis dataset were automatically generated and may be noisy due to both of these processes. The labels from the Unitran G2P also contain inherent noise due to their deterministic nature. Accordingly, we filter the data using the mean Mel-Cepstral Distortion (MCD) as an implicit quality measure for the alignments. MCD is an edit-distance metric which evaluates the distance between some reference speech and speech synthesised using the alignments (Kubichek, 1993). We use the utterance-level MCD scores from the CMU Wilderness dataset (Black, 2019), removing all utterances with an MCD score higher than 7. This leaves us with 647 doculects from 600 languages in 69 language families.
+
+# 5 Design Choices
+
+There are several critical design choices that must be made when performing a cross-linguistic analysis of this nature. While some may at first seem inconsequential, they can have a large impact on downstream results. Specifically, we assume that the surprisal-duration trade-off is caused by a capacity to process information that is roughly constant across human populations. We must thus control for other potential sources of this trade-off, which we deem uninteresting for this work.
+
+Phone-level Analysis. While there are good reasons for performing this analysis at the syllable- or
+
+word-level, we believe phones are advantageous for our study. Greenberg (1999), for instance, shows syllables are less prone than phones to be completely deleted in casual speech; syllables would thus allow more robust estimates of speech duration. Nonetheless, languages that allow for more complex (and longer) syllabic structures will naturally have more valid syllables. A larger number of syllables, in turn, will cause each syllable to be less predictable on average.[8] Therefore, more complex syllables will be both longer and less predictable. Studying syllables can thus lead to trivial trade-offs which mainly reflect the methodology employed. A similar argument can be made against word-level analyses.[9] Performing this type of analysis at the phone level should alleviate this effect, making it the more appropriate choice.
+
+Articulatory Costs. Whereas the range of effort used to produce individual phones may be smaller than at other levels of the linguistic hierarchy, there is still considerable variation in the cost associated with each phone's articulation. For instance, Zipf (1935) argued that a phone's articulatory effort was related to its frequency. If this is indeed the case, a direct analysis of surprisal-duration pairs that does not control for articulatory effort could also lead to a trivial trade-off: long and effortful phones will be less frequent and thus likely to be less predictable, having higher surprisals. To account for each phone's articulatory cost, we use mixed effects models in our analysis, and include phone identity as a random intercept effect.
+
+Word-initial and Word-final Lengthening. There is ample evidence showing that, across languages, word-initial and word-final segments are lengthened during production (Fougeron and Keating, 1997; White et al., 2020). A second property is that word-initial positions carry more information than word-final ones, which has been well studied in both psycholinguistics and information theory. From a psycholinguistic perspective, it seems word-initial segments are more important for word recognition (Bagley, 1900; Fay and Cutler, 1977; Bruner and O'Dowd, 1958; Nooteboom, 1981). Under an information-theoretic analysis, it has
+
+been observed that earlier segments in a word are more surprising than later ones (van Son and Pols, 2003; King and Wedel, 2020; Pimentel et al., 2021). Word-initial segments are both lengthened and more surprising, potentially for unrelated reasons. An analysis which does not control for such word-positioning is thus doomed to find trivial correlations. To account for this word-initial and word-final lengthening, we include three word position fixed effects (initial, middle, or final) in our mixed effects models.
+
+Sentential Context. The amount of context that a model conditions on when estimating probabilities will undoubtedly have an impact on a study of this nature. For example, a model that cannot look back beyond the current word, such as the one employed by Coupé et al. (2019), can by definition only condition on the previous phones in the same word. Arguably, a cognitively motivated surprisal-duration trade-off should estimate surprisal using a phone's entire sentential context and not only the prior context inside a specific word. In this work, we make use of LSTMs (as described in §3), which can model long context dependencies (Khandelwal et al., 2018).
+
+# 6 Generalised Mixed Effects
+
+Throughout our experiments, we will use mixed-effects models; we provide a brief introduction here (see Wood (2017) for a longer exposition). Classical linear regression models can be written as:
+
+$$
+y _ {i} = \phi^ {\intercal} \mathbf {x} _ {i} + \epsilon_ {i}, \quad \epsilon_ {i} \sim \mathcal {N} (0, \sigma_ {\mathrm {e r r}} ^ {2}) \tag {5}
+$$
+
+where $y_{i}$ is the target variable, $\mathbf{x}_i\in \mathbb{R}^d$ is the model's input and $\phi \in \mathbb{R}^d$ a learned weight vector. Further, the error (or unexplained variance) term $\epsilon_{i}$ is assumed to be normally distributed and independent and identically distributed (i.i.d.) across data instances. Such an i.i.d. assumption, however, may not hold. In our analysis, for instance, multiple phones come from each of our analysed languages; it is thus expected that such co-language phones share dependencies in how their $\epsilon_{i}$ are sampled. Mixed-effects models allow us to model such dependencies through the use of random effects. Formally, for an instance $\mathbf{x}_i$ from a specific language $\ell_i$ , we model:
+
+$$
+y _ {i} = \boldsymbol {\phi} ^ {\intercal} \mathbf {x} _ {i} + \omega_ {\ell_ {i}} + \epsilon_ {i}, \quad \begin{array}{l l} \omega_ {\ell_ {i}} \sim \mathcal {N} (0, \sigma_ {\omega} ^ {2}) \\ \epsilon_ {i} \sim \mathcal {N} (0, \sigma_ {\mathrm {e r r}} ^ {2}) \end{array} \tag {6}
+$$
+
+where $\omega_{\ell_i}$ is a random effect and $\phi$ is now termed a fixed effect. Here, $\omega_{\ell_i}$ is an intercept term which is assumed to be shared across all instances of language $\ell_i$ , and $\sigma_{\omega}^{2}$ is directly learned from the data. Similarly, we can add random slope effects:
+
+$$
+y _ {i} = \boldsymbol {\phi} ^ {\mathsf {T}} \mathbf {x} _ {i} + \boldsymbol {\beta} _ {\ell_ {i}} ^ {\mathsf {T}} \mathbf {x} _ {i} + \omega_ {\ell_ {i}} + \epsilon_ {i}, \quad \begin{array}{l} \boldsymbol {\beta} _ {\ell_ {i}} \sim \mathcal {N} (0, \Sigma_ {\boldsymbol {\beta}}) \\ \omega_ {\ell_ {i}} \sim \mathcal {N} (0, \sigma_ {\omega} ^ {2}) \\ \epsilon_ {i} \sim \mathcal {N} (0, \sigma_ {\mathrm {e r r}} ^ {2}) \end{array} \tag {7}
+$$
+
+where each $\beta_{\ell_i} \in \mathbb{R}^d$ is a language-specific random slope and $\Sigma_{\beta}$ is a (learned) covariance matrix. Furthermore, our assumption that error terms are normally distributed may not hold in this setting. Phone durations, for instance, cannot be negative and are positively skewed, making a log-linear model more appropriate:
+
+$$
+\log \left(y _ {i}\right) = \phi^ {\intercal} \mathbf {x} _ {i} + \boldsymbol {\beta} _ {\ell_ {i}} ^ {\intercal} \mathbf {x} _ {i} + \omega_ {\ell_ {i}} + \epsilon_ {i} \tag {8}
+$$
+
+where $\beta_{\ell_i}$ , $\omega_{\ell_i}$ , and $\epsilon_{i}$ are still distributed as in eq. (7). This is similar to modelling the original $\epsilon_{i}$ terms as coming from a log-normal distribution. We note, though, that under this model our effects become multiplicative (as opposed to additive): an increase of $\delta$ units on the right-hand side multiplies the value of $y_{i}$ by $e^{\delta}$ . We will use lme4's (Bates et al., 2015) notation to represent these models. Under this notation, parentheses denote a random effect and parameters are left out. We thus re-write eq. (8) as:
+
+$$
+\log (y) = 1 + \mathbf{x} + (1 + \mathbf{x} \mid \text{language}) \tag{9}
+$$
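To make the multiplicative reading of eq. (8) concrete, the sketch below simulates data with language-specific random intercepts and slopes, then recovers the fixed surprisal effect with a simple two-stage estimator (per-language least squares, then averaging). This is an illustrative stand-in for a proper joint lme4-style fit; all constants are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n_langs, n_obs = 20, 500
phi = 0.02                                   # true fixed effect: ~+2% duration per bit

slopes = []
for _ in range(n_langs):
    beta = rng.normal(0.0, 0.01)             # language-specific random slope
    omega = rng.normal(0.0, 0.1)             # language-specific random intercept
    x = rng.uniform(0.0, 10.0, n_obs)        # phone surprisals in bits
    log_y = np.log(50.0) + omega + (phi + beta) * x + rng.normal(0.0, 0.1, n_obs)
    # stage 1: ordinary least squares within a single language
    slope, _ = np.polyfit(x, log_y, 1)
    slopes.append(slope)

# stage 2: average the per-language slopes to estimate the fixed effect;
# each added bit then multiplies duration by roughly exp(phi_hat)
phi_hat = float(np.mean(slopes))
```

A full analysis would instead fit all languages jointly (e.g. `log(y) ~ 1 + x + (1 + x | language)` in lme4), which shares statistical strength across languages rather than averaging independent fits.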
+
+# 7 Experiments and Results10
+
+In this section, we will first analyse the surprisal-duration trade-off in individual languages. We will then perform an analysis with our full data, studying the trade-off both within and across languages with a single model. Finally, in our last experiment we will average phone information per language to analyse a purely cross-linguistic trade-off.
+
+# 7.1 Individual Language Analyses
+
+We first analyse languages individually, verifying if more surprising phones have on average a longer duration. With this in mind, we estimate a generalised mixed effects model for each language. We control for each phone's articulatory costs by
+
+
+Figure 3: Language-specific trade-off slopes in Epitran from the mixed effects model in eq. (10). The $y$ -axis represents a multiplicative effect: duration is multiplied by $y$ per extra bit of phone information.
+
+adding phone identity as a random effect. Additionally, we include fixed effects to control for word-position effects, adding separate intercepts for word-initial and word-final positions. Finally, we include an interaction fixed effect between surprisal and word position; at word-initial positions, for instance, the connection between surprisal and duration could potentially be stronger or weaker.[11] This leaves us with the following relationship:
+
+$$
+\begin{array}{l} \log (\text{duration}) = 1 + \text{surprisal} + \text{position} \\ \quad + \text{surprisal} \cdot \text{position} + (1 \mid \text{phone}) \tag{10} \end{array}
+$$
+
+In this parametrisation, a trade-off between surprisal and duration will emerge as a positive and significant surprisal slope. Analogously, an inverse trade-off would emerge as a negative and significant slope, since we use two-tailed statistical tests. Out of the 39 doculects in Epitran, 30 present statistically significant positive slopes (23/29 languages, and 8/8 families; meaning at least one language showed a significant effect per family). On Unitran (which, we recall, is a noisier dataset), 326/647 doculects presented significantly positive slopes (319/600 languages, and 53/69 families). Additionally, we find no language in either dataset with a significantly negative slope: we either find evidence for the trade-off or no association whatsoever.
+
+The trade-off strength, as measured by the surprisal-duration slopes, can be seen in Fig. 1 (on the first page) and Fig. 3. As noted above, by
+
+predicting a linear change in logarithmic scale, our effects become multiplicative instead of additive. The average multiplicative slope across all the analysed languages in both datasets is roughly 1.02, meaning that each added bit of information multiplies duration by 1.02. We believe this serves as strong support for our hypothesis of a trade-off within languages. Moreover, to the best of our knowledge, this is the most representative study of the UID hypothesis to date, as measured by the number and typological diversity of analysed languages.
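As a quick sanity check on this multiplicative reading, the snippet below (with a made-up baseline duration) shows how a 1.02-per-bit slope compounds over a few bits of surprisal.

```python
# Multiplicative slope: each extra bit of surprisal multiplies duration by ~1.02.
base_duration_ms = 80.0   # hypothetical baseline phone duration
slope = 1.02              # multiplicative effect per bit (order of the fitted slopes)

# A phone that is 3 bits more surprising than the baseline:
duration_ms = base_duration_ms * slope ** 3   # 80 * 1.02**3 = 84.89664 ms, ~6% longer
```

So even a seemingly small per-bit multiplier implies noticeable lengthening once several bits of surprisal accumulate.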
+
+# 7.2 Aggregated Cross-linguistic Analysis
+
+Following the previous study, we now run a cross-linguistic analysis by aggregating all the languages within a single model. We add the same controls as before, but further nest the phone random effects per language (meaning we create one random effect per phone-language pair). We also include random language-specific intercepts and slopes. Formally,
+
+$$
+\begin{array}{l} \log (\text{duration}) = 1 + \text{surprisal} + \text{position} \\ \quad + \text{surprisal} \cdot \text{position} \\ \quad + (1 + \text{surprisal} + \text{surprisal} \cdot \text{position} \mid \text{language}) \\ \quad + (1 \mid \text{language} : \text{phone}) \tag{11} \end{array}
+$$
+
+After estimating this generalised mixed effects model, we find statistically significant cross-linguistic trade-off effects in both datasets. The multiplicative slope is roughly 1.02 in both datasets, again meaning that each extra bit of information multiplies the duration by this value ($\phi = 1.023$ in Unitran and $\phi = 1.015$ in Epitran). We further analyse the per-language trade-off slopes, which can be seen in Fig. 4. These language-specific slopes are calculated by summing the fixed effect of the surprisal term with its per-language random effects. We see a similar trend in this figure as in Fig. 3, with most of the analysed doculects having a positive surprisal-duration trade-off.
+
+# 7.3 Cross-linguistic Trade-offs
+
+Figure 4: Language-specific trade-off slopes in Epitran from the mixed effects model in eq. (11). The $y$-axis represents a multiplicative effect: duration is multiplied by $y$ per extra bit of phone information.
+
+Our previous experiment in §7.2 makes use of language-specific random effects. These effects allow the model to potentially represent within-language trade-off effects, while correcting for cross-linguistic differences through the model parameters. It therefore cannot by itself serve as confirmation of a trade-off across the world's languages, only as additional evidence for it. In this section, we do not use language-specific random effects; instead, we average surprisal within a language for each phone–position tuple. We then train the following mixed effects model:
+
+$$
+\begin{array}{l} \text{duration} = 1 + \text{surprisal} + \text{position} \\ + \text{surprisal} \cdot \text{position} + (1 \mid \text{phone}) \tag{12} \\ \end{array}
+$$
+
+This equation is identical to the one in eq. (10), but now we model language–phone–position tuples instead of a language's individual phones.14 Additionally, since we aggregate results per tuple for this analysis, the central limit theorem tells us our model's residuals should be roughly Gaussian. We thus use linear mixed effects models instead of the generalised log-linear ones. Analysing this model, we find a significantly positive surprisal–duration additive slope of $\phi = 1.5$ milliseconds per bit ($\phi = 1.54$ in Unitran and $\phi = 1.52$ in Epitran). This confirms the expected cross-linguistic trade-off: languages with more surprising phones indeed have longer durations, even after controlling for word positions and phone-specific articulatory costs.
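The aggregation step can be sketched with toy records and standard-library code only; a plain OLS slope stands in here for the full mixed model of eq. (12), which additionally includes phone random intercepts. All records below are invented for illustration:

```python
from collections import defaultdict

# Toy records: (language, phone, word_position, surprisal_bits, duration_ms).
records = [
    ("eng", "t", "initial", 3.1, 72.0),
    ("eng", "t", "initial", 2.9, 68.0),
    ("tur", "a", "medial", 1.2, 55.0),
    ("tur", "a", "medial", 1.4, 57.0),
]

# Average surprisal and duration within each language-phone-position tuple.
groups = defaultdict(list)
for lang, phone, pos, s, d in records:
    groups[(lang, phone, pos)].append((s, d))
tuples = {k: (sum(s for s, _ in v) / len(v), sum(d for _, d in v) / len(v))
          for k, v in groups.items()}

# Ordinary least-squares slope over the aggregated points: milliseconds of
# duration per bit of surprisal.
xs = [s for s, _ in tuples.values()]
ys = [d for _, d in tuples.values()]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
```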
+
+# 8 Discussion
+
+The pressure towards a specific information rate (potentially set at a specific cognitive channel capacity) has been posited as an invariant across languages. Directly testing such a claim is perhaps impossible, as data alone cannot prove its universality. Moreover, providing meaningful evidence towards this phenomenon requires a careful and comprehensive cross-linguistic analysis, which we attempt to perform in this work. In comparison to similar studies, such as those by Pellegrino et al. (2011) and Coupé et al. (2019), we employ more sophisticated techniques to measure a linguistic unit's (in our case, a phone's) information content. We also employ more rigorous strategies for analysing the surprisal-duration relationship, controlling for several potential confounds. By introducing these improvements, we attain a more detailed understanding of the role of information in language production, both across and within languages.
+
+Experimentally, we find that, after controlling for other artefacts, the information conveyed by a phone in context has a modest but significant relationship with phone duration. We see that this relationship is consistently positive across a number of investigated settings, despite being small in magnitude, meaning that more informative phones are on average longer. Additionally, using two-tailed tests at $\alpha < 0.01$ throughout our experiments, we find no language with a significant negative relationship between phone surprisal and duration.
+
+Limitations and Future Work. In this work, we implemented a careful evaluation protocol to study the relationship between a phone's surprisal and duration in a representative set of languages. To perform our study in such a large number of languages, however, we rely on the automatically aligned phone measurements from VoxClamantis, which contain noise from various sources. Future work could investigate if biases in the dataset generation protocol could impact our results. Further, VoxClamantis data is derived from readings of the Bible. Future studies could extend our analysis to other settings, such as conversational data.
+
+# 9 Conclusion
+
+In this work, we have provided the widest cross-linguistic investigation of phone surprisal and duration to date, covering 600 languages from over 60 language families spread across the globe. We confirm a surprisal-duration trade-off both across these analysed languages and within a subset of 319 of them, covering 53 language families. While there exist arguments against some of our design choices, our overarching conclusion is remarkably consistent across our analyses: the presence of a surprisal-duration trade-off is significant in language production. In other words, both across and within languages, phones carrying more information are longer, while phones carrying less information are produced faster.
+
+# References
+
+Gopala Krishna Anumanchipalli, Kishore Prahallad, and Alan W Black. 2011. Festvox: Tools for creation and analyses of large speech corpora. In Workshop on Very Large Scale Phonetics Research, UPenn, Philadelphia.
+Matthew Aylett and Alice Turk. 2004. The smooth signal redundancy hypothesis: A functional explanation for relationships between redundancy, prosodic prominence, and duration in spontaneous speech. Language and Speech, 47(1):31-56.
+William Chandler Bagley. 1900. The apperception of the spoken sentence: A study in the psychology of language. The American Journal of Psychology, 12(1):80-130.
+Douglas Bates, Martin Mächler, Ben Bolker, and Steve Walker. 2015. Fitting linear mixed-effects models using lme4. Journal of Statistical Software, 67(1):1-48.
+Alan Bell, Daniel Jurafsky, Eric Fosler-Lussier, Cynthia Girand, Michelle Gregory, and Daniel Gildea. 2003. Effects of disfluencies, predictability, and utterance position on word form variation in english conversation. The Journal of the Acoustical Society of America, 113(2):1001-1024.
+Yoav Benjamini and Yosef Hochberg. 1995. Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society. Series B (Methodological), 57(1):289-300.
+Alan W Black. 2019. CMU wilderness multilingual speech dataset. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5971-5975.
+Jerome S. Bruner and Donald O'Dowd. 1958. A note on the informativeness of parts of words. Language and Speech, 1(2):98-101.
+Ryan Cotterell, Christo Kirov, Mans Hulden, and Jason Eisner. 2019. On the complexity and typology of inflectional morphological systems. Transactions of the Association for Computational Linguistics, 7:327-342.
+Christophe Coupé, Yoon Mi Oh, Dan Dediu, and François Pellegrino. 2019. Different languages, similar encoding efficiency: Comparable information rates across the human communicative niche. Science Advances, 5(9).
+Johannes Dellert, Thora Daneyko, Alla Münch, Alina Ladygina, Armin Buch, Natalie Clarius, Ilja Grigorjew, Mohamed Balabel, Hizniye Isabella Boga, Zalina Baysarova, Roland Mühlenbernd, Johannes Wahle, and Gerhard Jäger. 2020. NorthEuraLex: A wide-coverage lexical database of Northern Eurasia. Language Resources and Evaluation, 54:273-301.
+David Fay and Anne Cutler. 1977. Malapropisms and the structure of the mental lexicon. Linguistic Inquiry, 8(3):505-520.
+August Fenk and Gertraud Fenk. 1980. Konstanz im Kurzzeitgedächtnis - Konstanz im sprachlichen Informationsfluß? Zeitschrift für Experimentelle und Angewandte Psychologie, 27(3):400-414.
+Ivan Fonagy and Klara Magdics. 1960. Speed of utterance in phrases of different lengths. Language and Speech, 3(4):179-192.
+Cécile Fougeron and Patricia A. Keating. 1997. Articulatory strengthening at edges of prosodic domains. The Journal of the Acoustical Society of America, 101(6):3728-3740.
+Austin F. Frank and T. Florian Jaeger. 2008. Speaking rationally: Uniform information density as an optimal strategy for language production. In Proceedings of the Annual Meeting of the Cognitive Science Society, volume 30.
+Dmitriy Genzel and Eugene Charniak. 2002. Entropy rate constancy in text. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 199-206, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
+Adam Goodkind and Klinton Bicknell. 2018. Predictive power of word surprisal for reading times is a linear function of language model quality. In Proceedings of the 8th Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2018), pages 10-18, Salt Lake City, Utah. Association for Computational Linguistics.
+Steven Greenberg. 1999. Speaking in shorthand - A syllable-centric perspective for understanding pronunciation variation. Speech Communication, 29(2-4):159-176.
+John Hale. 2001. A probabilistic Earley parser as a psycholinguistic model. In Second Meeting of the North American Chapter of the Association for Computational Linguistics.
+Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.
+Charles Francis Hockett. 1958. A Course in Modern Linguistics. Macmillan, New York.
+
+Urvashi Khandelwal, He He, Peng Qi, and Dan Jurafsky. 2018. Sharp nearby, fuzzy far away: How neural language models use context. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 284-294, Melbourne, Australia. Association for Computational Linguistics.
+Adam King and Andrew Wedel. 2020. Greater early disambiguating information for less-probable words: The lexicon is shaped by incremental processing. Open Mind, pages 1-12.
+Sabine Kowal, Richard Wiese, and Daniel C. O'Connell. 1983. The use of time in storytelling. Language and Speech, 26(4):377-392.
+R. Kubichek. 1993. Mel-cepstral distance measure for objective speech quality assessment. In Proceedings of IEEE Pacific Rim Conference on Communications Computers and Signal Processing, volume 1, pages 125-128 vol.1.
+Jackson L. Lee, Lucas F.E. Ashby, M. Elizabeth Garza, Yeonju Lee-Sikka, Sean Miller, Alan Wong, Arya D. McCarthy, and Kyle Gorman. 2020. Massively multilingual pronunciation mining with WikiPron. In Proceedings of the Twelfth International Conference on Language Resources and Evaluation (LREC 2020). European Language Resources Association (ELRA). Resources downloadable from https://github.com/kylebgorman/wikiPron.
+Roger P. Levy and Tim Florian Jaeger. 2007. Speakers optimize information density through syntactic reduction. In Advances in Neural Information Processing Systems, pages 849-856.
+Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In International Conference on Learning Representations.
+Kyle Mahowald, Isabelle Dautriche, Edward Gibson, and Steven T. Piantadosi. 2018. Word forms are structured for efficient use. Cognitive science, 42(8):3116-3134.
+Andre Martinet. 1955. Économie des changements phonétiques. Éditions A. Francke S. A.
+André Martinet. 1962. A Functional View of Language, volume 196. Clarendon Press.
+Clara Meister, Tiago Pimentel, Patrick Haller, Lena Jäger, Ryan Cotterell, and Roger Levy. 2021. Revisiting the uniform information density hypothesis. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics.
+Steven Moran and Damian Blasi. 2014. Cross-linguistic comparison of complexity measures in phonological systems. In Frederick J. Newmeyer and Laurel B. Preston, editors, Measuring Grammatical Complexity, pages 217-240. Oxford University Press Oxford, UK.
+
+David R. Mortensen, Siddharth Dalmia, and Patrick Littell. 2018. Epitran: Precision G2P for many languages. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Paris, France. European Language Resources Association (ELRA).
+Sieb G. Nooteboom. 1981. Lexical retrieval from fragments of spoken words: Beginnings vs endings. Journal of Phonetics, 9(4):407-424.
+Els den Os. 1985. Perception of speech rate of Dutch and Italian utterances. Phonetica, 42(2-3):124-134.
+Els den Os. 1988. Rhythm and tempo of Dutch and Italian: A contrastive study. Dr. Elinkwijk.
+Harry Osser and Frederick Peng. 1964. A cross cultural study of speech rate. Language and Speech, 7(2):120-125.
+Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. PyTorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, pages 8024-8035. Curran Associates, Inc.
+François Pellegrino, Christophe Coupé, and Egidio Marsico. 2011. A cross-language perspective on speech information rate. Language, pages 539-558.
+Steven T. Piantadosi, Harry Tily, and Edward Gibson. 2011. Word lengths are optimized for efficient communication. Proceedings of the National Academy of Sciences, 108(9):3526-3529.
+Tiago Pimentel, Ryan Cotterell, and Brian Roark. 2021. Disambiguatory signals are stronger in word-initial positions. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers. Association for Computational Linguistics.
+Tiago Pimentel, Brian Roark, and Ryan Cotterell. 2020. Phonotactic complexity and its trade-offs. Transactions of the Association for Computational Linguistics, 8:1-18.
+Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek, Nagendra Goel, Mirko Hannemann, Petr Motlicek, Yanmin Qian, Petr Schwarz, Jan Silovsky, Georg Stemmer, and Karel Vesely. 2011. The Kaldi speech recognition toolkit. In IEEE 2011 Workshop on Automatic Speech Recognition and Understanding. IEEE Signal Processing Society. IEEE Catalog No.: CFP11SRW-USB.
+
+Ting Qian, Kristy Hollingshead, Su-youn Yoon, Kyoung-young Kim, and Richard Sproat. 2010. A Python toolkit for universal transliteration. In Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10), Valletta, Malta. European Language Resources Association (ELRA).
+Peter Roach. 1998. Some languages are spoken more quickly than others. In Laurie Bauer and Peter Trudgill, editors, Language myths, pages 150-8. Penguin Books.
+Elizabeth Salesky, Eleanor Chodroff, Tiago Pimentel, Matthew Wiesner, Ryan Cotterell, Alan W Black, and Jason Eisner. 2020. A corpus for large-scale phonetic typology. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4526-4546, Online. Association for Computational Linguistics.
+Claude E. Shannon. 1948. A mathematical theory of communication. The Bell System Technical Journal, 27(3):379-423.
+Rob J. J. H. van Son and Louis C.W. Pols. 2003. How efficient is speech? In Proceedings of the Institute of Phonetic Sciences, volume 25, pages 171-184.
+Laurence White, Silvia Benavides-Varela, and Katalin Mády. 2020. Are initial-consonant lengthening and final-vowel lengthening both universal word segmentation cues? Journal of Phonetics, 81:100982.
+Søren Wichmann, Eric W. Holman, and Cecil H. Brown. 2020. The ASJP database (version 19). Accessed on April 2, 2020.
+Matthew Wiesner, Oliver Adams, David Yarowsky, Jan Trmal, and Sanjeev Khudanpur. 2019. Zero-shot pronunciation lexicons for cross-language acoustic model transfer. In Proceedings of IEEE Association for Automatic Speech Recognition and Understanding (ASRU).
+Ethan Wilcox, Jon Gauthier, Jennifer Hu, Peng Qian, and Roger Levy. 2020. On the predictive power of neural language models for human real-time comprehension behavior. In Proceedings of the 42nd Meeting of the Cognitive Science Society, page 1707-1713.
+Simon N. Wood. 2017. Generalized Additive Models: An Introduction with R, 2 edition. Chapman and Hall/CRC.
+George Kingsley Zipf. 1935. The psycho-biology of language: An introduction to dynamic philology. MIT Press, Cambridge.
+George Kingsley Zipf. 1949. Human behavior and the principle of least effort: An introduction to human ecology. Addison-Wesley, Cambridge.
+
+# A Confound analysis
+
+In this section, we analyse the word position and articulatory cost confounds mentioned in §5, as they could impact a surprisal-duration trade-off analysis. We first investigate the parameters from our mixed effects models containing word-position effects. The word-initial and word-final intercepts are significantly positive in all 39 languages of our mono-lingual Epitran analysis (represented by eq. (10)) and in both cross-linguistic experiments (eqs. (11) and (12)). The intercepts for word-initial positions average 67 milliseconds, while the word-final ones average 32, providing new evidence for this word-boundary lengthening effect. Since word position is correlated with surprisal, this boundary lengthening phenomenon could have been a source of bias in our results, had we not controlled for it.
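To see why uncontrolled position effects would bias the surprisal slope, consider a toy additive decomposition of expected duration; only the 67 ms and 32 ms averages come from the paper, the baseline is made up:

```python
# Hypothetical decomposition of a phone's expected duration with additive
# word-position intercepts. Only the 67 ms (word-initial) and 32 ms
# (word-final) averages are reported values; the baseline is invented.
baseline_ms = 55.0
position_effect_ms = {"initial": 67.0, "final": 32.0, "medial": 0.0}

def expected_duration_ms(position: str) -> float:
    return baseline_ms + position_effect_ms[position]

# Word-initial phones also tend to be more surprising, so without this
# control their extra length would be wrongly credited to surprisal.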
+
+We now explore the potential bias introduced by phone-specific articulatory costs. As mentioned in §5, languages with larger phonetic inventories may be more inclined to use marked phones, which have longer durations. While this correlation between inventory size and unit cost would be particularly problematic for larger linguistic units (e.g. syllables), it can also affect our phone-level analysis. Indeed, computing the Spearman correlation between a language's inventory size (in number of unique phones) and its average phone duration, we find a positive correlation of $\rho = .28$. The average surprisal–duration Spearman correlation across languages is $\rho = 0.45$. As inventory size and surprisal are strongly correlated across languages, pure inventory effects may thus drive a large part of the analysed correlation.
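A minimal Spearman rank-correlation sketch of this check, on invented numbers (the paper reports $\rho = .28$ on the real data); this bare-bones version assumes no tied values:

```python
# Spearman correlation = Pearson correlation of the ranks.
# Minimal implementation: assumes no ties among the input values.
def spearman(xs, ys):
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r

    rx, ry = ranks(xs), ranks(ys)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Toy inventory sizes and mean phone durations for five languages.
inventory_sizes = [22, 35, 41, 28, 50]
mean_durations_ms = [61.0, 66.0, 70.0, 60.0, 74.0]
rho = spearman(inventory_sizes, mean_durations_ms)
```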
+
+To analyse how strongly both confounds would affect the main effect if left unaccounted for, we rerun our previous analyses without effects for position, phone, or both. We do so for Epitran only. The resulting estimated trade-off effects are given in Tab. 1. We indeed see that these confounds are typically absorbed by the fixed surprisal effect in all three settings. Notably, without confound control we would find apparently significant results in all analysed languages, as well as a roughly ten-times stronger cross-linguistic effect, both of which are in fact spurious.
+
+| Phone control | Position control | Mono-lingual: eq. (10) | Mono-lingual: # Sign | Cross-linguistic: eq. (11) | Cross-linguistic: eq. (12) |
+| --- | --- | --- | --- | --- | --- |
+| ✓ | ✓ | 1.02 | 30 | 1.02‡ | 1.52‡ |
+| ✓ | × | 1.02 | 37 | 1.03‡ | 0.93‡ |
+| × | ✓ | 1.03 | 33 | 1.02‡ | 15.75 |
+| × | × | 1.04 | 39 | 1.04‡ | 15.73† |
+
+Table 1: Comparison of trade-off slopes found when not conditioning on potential confounds. Slopes under eqs. (10) and (11) are multiplicative (per bit); slopes under eq. (12) are additive, in milliseconds per bit. # Sign is the number of languages with a significant effect ($\alpha < 0.01$) in the mono-lingual analysis.
+
+# B Additive Effects
+
+
+Figure 5: Additive slope of the model in eq. (10).
+
+
+Figure 6: Additive slope of the model in eq. (11).
+
+# C Languages
+
+The languages used in our analyses are listed below, grouped by language family, along with their three-character ISO 639-3 code and the grapheme-to-phoneme schemes for which phone alignments are available for that language in the VoxClamantis dataset – Unitran: U, Epitran: E (Salesky et al., 2020). ISO codes listed multiple times may represent dialects or other sub-language variations and/or multiple available Bible versions.
+
+| AFRO-ASIATIC: 45 | Achinese ace U | Malay (macrolanguage) msa U E | Jur Modo bex U |
| Bana bcw U | Aguaynen agn U | Mamasam qmj U | Kenga kkyq U |
| Daasanach dsh U | Alangan alj U | Manado Malay xmm U | Lugbara lgg U |
| Daba dbq U | Alune alp U | Mapos Buang bzh U | Ma'di mhi U |
| Dangaléat daa U | Ambai amk U | Maranao mrw U | Mbay myb U |
| Dawro dwr U | Amganad Ifugao ifa U | Marshallese mah U | Moru mgd U |
| Eastern Oromo hae U E | Aralle-Tabulahan atq U | Matigsalug Manobo mbt U | Ngambay sha U |
| Egyptian Arabic arz U | Arop-Lokep apr U | Mayoyao Ifugao ifu U | Nomaande lem U |
| Gamo gmv U | Arosia aia U | Mentawai mww U | CHIBCHAN: 7 |
| Gen gej U | Bada (Indonesia) bhz U | Minangkabau min U | Border Kuna kvn U |
| Gofa gof U | Balantak blz U | Misima-Panaeati mpx U | Cabécar cjp U |
| Gofa gof U | Balinese ban U | Mongondow mog U | Central Tunebo tuf U |
| Gude gde U | Bambam ptu U | Muna mnb U | Cogui kog U |
| Hamer-Banna amf U | Batad Ifugao ifb U | Napu npy U | Ngäbere gym U |
| Hausa hau U E | Batak Dairi btd U | Ngaju nij U | San Blas Kuna cuk U |
| Hdi xed U | Batak Karo btx U | Nias nia U | Terihe ftr U |
| Iraqw irk U | Batak Simalungun bts U | Obo Manobo obo U | CHIQITO: 1 |
| Kabyle kab U | Besoa hep U | Owa stn U | Chiquitano cax U |
| Kafakbr U | Brooke's Point Palawa plw U | Palauan pau U | CHOCO: 2 |
| Kambaata ktb U | Caribbean Javanese jvn U | Pamona pmf U | Epena sja U |
| Kamwe hig U | Cebuanoceb U E | Pampanga pam U | Northern Emberá emp U |
| Kera ker U | Central Bikol bcl U | Pangasinan pag U | COFÁN: 1 |
| Kimré kqp U | Central Malay pse U | Paranan prf U | Cofán con U |
| Konso kxc U | Central Mnong cmo U | Rejang rej U | CREOLE: 14 |
| Koorete kqy U | Central Sama sml U | Roviana rug U | Belize Kriol English bzj U |
| Lele (Chad) lln U | Da'a Kaili kzf U | Sambal xsb U | Bislama bis U |
| Male (Ethiopia) mdy U | Duri mvp U | Sambal xsb U | Eastern Maroon Creole djk U |
| Marba mpg U | Fataleka far U | Samloan smo U | Haitian hat U |
| Mbuko mpgb U | Fijian fij U | Sangir sxn U | Islander Creole Engli icr U |
| Merey meq U | Fordata frd U | Sarangani Blaan bps U | Jamaican Creole Engl jam U |
| Mesopotamian Arabic acm U | Gilbertese gil U | Sasak sas U | Krio kri U |
| Mofu-Gudur mif U | Gorontalo gor U | Sudest tgo U | Morisyen mfe U |
| Muyang muy U | Hanunoo hnn U | Sundanese sun U | Nigerian Pidgin pcm U |
| Mwaghavul sur U | Hiligaynon hil U | Tagalog tgl U E | Pijin pus U |
| North Mofu mfk U | Iban iba U | Tangoa tgp U | Saint Lucian Creole F afc U |
| Parkw pbi U | Ilokilo ilo U | Termanu twu U | Saramaccan srm U |
| Péve lme U | Indonesian ind U E | Tombupon txu U | Sranan Tongo srn U |
| Sebat Bet Gurage sgw U | Indonesian ind U E | Toraja-Sa'dan sda U | Tok Pisin ti p U E |
| Somali som U E | Indonesian ind U E | Tuwali Ifugao ifk U | DOGON: 1 |
| Standard Arabic arb U | Itawit itv U | Uma ppk U | Toro So Dogon dts U |
| Sudanese Arabic apd U | Javanese jav U E | Western Bukidnon Mano mbb U | DRAVIDIAN: 5 |
| Tachelhit shi U | Kadazan Dusun dtp U | Western Tawbuid twb U | Kannada kan U |
| Tamasheq taq U | Kagayanen cgc U | AYMARAN: 2 | Kurukh kru U |
| Tigrinya tir U E | Kalagan kqe U | Central Aymara ayr U | Malayalam mal U |
| Tumak tmc U | Kankanaey kne U | Central Aymara ayr U | Tamil tam U E |
| Wandala mfi U | Keley-I Kallahan ify U | BARBACOAN: 2 | Telugu tel U E |
| ALGIC: 1 | Khehek tlx U | Awa-Cuaiquer kwi U | EAST BIRD'S HEAD: 1 |
| Central Ojibwa ojc U | Kilivila kij U | Guambiano gum U | Meyah mej U |
| ARUAUAN: 1 | Kinaray-A krj U | BASQUE: 1 | EAST BOUGAINVILLE: 1 |
| Paumari pad U | Kisar kje U | Basque eus U | Naasioi nas U |
| ARAWAKAN: 7 | Koronadal Blaan bpr U | CACUA-NUKAK: 1 | EASTERN SUDANIC: 19 |
| Asháninka cni U | Lampung Api ljp U | Cacua cbv U | Acoli ach U |
| Garifuna cab U | Lauje law U | CAHUAPANAN: 1 | Adhola adh U |
| Ignaciano ign U | Ledo Kaill lew U | Chayahuita cbt U | Alur alz U |
| Machiguenga mcb U | Luang lex U | CARIBAN: 3 | Bari bfa U |
| Nomatsiguenga not U | Lundayeh lnd U | Akawao ale U | Datoga tcc U |
| Parecis sab U | Ma'anyan mhy U | Galibi Carib car U | Kakwa keo U |
| Tereno ter U | Madurese mad U | Patamona pbc U | Karamojong kdj U |
| AUSTRO-ASIATIC: 4 | Mag-antsi Ayta sgb U | CENTRAL SUDANIC: 13 | Kumam kdi U |
| Eastern Bru bru U | Makasar mak U | Aringa luc U | Kupsabiny kpz U |
| Juang jun U | Malagasy mlg U | Avokaya avu U | Lango (Uganda) laj U |
| Khmer khm U | Malagasy mlg U | Bedjond bjv U | Luwo lwo U |
| Vietnamese vie U E | Malagasy mlg U | Gor gqr U | Mabaan mfz U |
| AUSTRONESIAN: 106 | Malay (macrolanguage) msa U | Gulay gvl U | Markweeta enb U |
| Murle mur U | Achuar-Shiwiar acu U | Poqomchi' poh U | Gogo gog U |
| Nuer nus U | Aguaruna agr U | Poqomchi' poh U | Gokana gkn U |
| Sabaot spy U | Huambisa hub U | Q'anjob'al kjb U | Gourmanchéma gux U |
| Shilluk shk U | Shuar jiv U | Tektiteko ttc U | Gwere gwr U |
| Southwestern Dinka dik U | KHOE-KWADI: 1 | Tz'utujil tzj U | Hanga hag U |
| Teso teu U | Southern Samo sbd U | Tzeltal tzh U | Haya hay U |
| ESKIMO-ALEUT: 1 | KOMAN: 1 | Tzeltal tzh U | Ifé ife U |
| Central Siberian Yupi ess U | Uduk udu U | Tzotzil tzo U | Ivbie North-Okpela-Ar atg U |
| GUAHBAN: 3 | KORDOFANIAN: 1 | Tzotzil tzo U | Izere izr U |
| Cuiba cui U | Moro mor U | Western Kanjobal knj U | Jola-Fonyi dyo U |
| Guahibo guh U | LOWER SEPIK-RAMU: 1 | Yucateco yua U | Jola-Kasa csk U |
| Guayabero guo U | Aruamu msy U | MISUMALPAN: 1 | Jukun Takum jbu U |
| GUAICURUAN: 1 | MACRO-GE: 1 | Miskito miq U | Kabièy kébp U |
| Toba tob U | Kayapó txu U | MIXE-ZOQUE: 3 | Kagulu kki U |
| HMONG-MIEN: 1 | MANDE: 13 | Coatlán Mixe mco U | Kako kkj U |
| Hmong Daw mwwU | Bambara bam U | Highland Populca poi U | Kasem xsm U |
| HUAVEAN: 1 | Bissa bib U | Quetzaltepec Mixe pxm U | Kasem xsm U |
| San Mateo Del Mar Hua huv U | Boko (Benin) bqc U | MONGOLIC: 2 | Kenyang ken U |
| HUITOTOAN: 3 | Busa bqp U | Halh Mongolian khk U | Kim kia U |
| Bora boa U | Dyula dyu U | Kalmyk xal U | Kim kia U |
| Minica Huitoto hto U | Dyula dyu U | NAKH-DAGHESTAN.: 2 | Koma kmy U |
| Murui Huitoto huu U | Kuranko knk U | Avaric ava U | Konkombaxon U |
| INDO-EUROPEAN: 40 | Loko lok U | Chechen che U | Kono (Sierra Leone) kno U |
| Albanian sqi U | Mandinka mnk U | NIGER-CONGO: 159 | Koonzime ozm U |
| Awadhi awa U | Mende (Sierra Leone) men U | Abidji abi U | Kouya kyf U |
| Bengali ben U E | Northern Bobo Madaré bbo U | Adele ade U | Kukele kez U |
| Bengali ben U E | Susu sus U | Adioukrout adj U | Kunda kdn U |
| Bengali ben U E | Xaasongaxango kao U | Akan aka U | Kuo xuu U |
| Caribbean Hindustani hns U | MASCOIAN: 1 | Akebu keu U | Kusaal kus U |
| Chhattisgarhi hne U | Enxet enx U | Akoose bss U | Kutep kub U |
| Dari prs U | MATACOAN: 1 | Anufo cko U | Kutu kdc U |
| English eng U | Maca mca U | Avatime avn U | Kwaataay cwt U |
| Fiji Hindi huf U | MAYAN: 42 | Bafut bfd U | Kwere cwe U |
| French fra U | Achi acr U | Bandial baj U | Lama (Togo) las U |
| French fra U | Aguacateco agu U | Bekwarra bkv U | Lelemie lef U |
| Hindi hin U E | Chol ctu U | Bete-Bendi btt U | Lobi lob U |
| Iranian Persian pes U | Chorti caa U | Biali beh U | Lokaa yaz U |
| Latin lat U | Chuj cac U | Bimoba bim U | Lukpa dop U |
| Magahi mag U | Chuj cac U | Bokobaru bus U | Lyèlelee U |
| Maithili mai U | Huastec hus U | Bomu bmq U | Machame jmc U |
| Malvi mup U | Ixil ixl U | Buamu box U | Mada (Nigeria) mda U |
| Marathi mar U E | Ixil ixl U | Buli (Ghana) bwu U | Makaa mcp U |
| Northern Kurdish kmr U E | Ixil ixl U | Bum bmv U | Makhwua vmw U |
| Oriya (macrolanguage) ori U | K'iche' quc U | Cameroon Mambila mcu U | Malawi Lomwe lon U |
| Ossetian oss U | K'iche' quc U | Central-Eastern Niger fuq U | Malba Birifor bfo U |
| Polish pol U E | K'iche' quc U | Cerma cme U | Mamara Senoufo myk U |
| Portuguese por U | K'iche' quc U | Cerma cme U | Mampruli maw U |
| Portuguese por U | K'iche' quc U | Chopi cce U | Mankanya knf U |
| Portuguese por U | K'iche' quc U | Chumburung nu U | Masaaba myx U |
| Portuguese por U | Kaqchikel cak U | Delntr U | Meta'mgo U |
| Romanian ron U E | Kaqchikel cak U | Denya anv U | Miyobe soy U |
| Russian rus U E | Kaqchikel cak U | Ditammari tbz U | Moba mfq U |
| Sinte Romani rmo U | Kaqchikel cak U | Djimini Senoufo dyi U | Moba mfq U |
| Spanish spa U E | Kaqchikel cak U | Duruma dug U | Mochi old U |
| Spanish spa U E | Kaqchikel cak U | Eastern Karaboro xrb U | Mossi mos U |
| Spanish spa U E | Kekchi kek U | Ekajuk eka U | Mossi mos U |
| Spanish spa U E | Kekchi kek U | Ewe ewe U | Mumuye mzm U |
| Spanish spa U E | Mam mam U | Ewe ewe U | Mundani mnf U |
| Swedish swe U E | Mam mam U | Farefare gur U | Mwan moa U |
| Swedish swe U E | Mam mam U | Farefare gur U | Mwani wmw U |
| Tajik tgk U E | Mam mam U | Fon fon U | Mündü muh U |
| Urdu urd U | Mopán Maya mop U | Gigyode acd U | Nafaanra nfr U |
| Vlax Romani rny U | Popti' jac U | Giryama nyf U | Nande nmb U |
| JIVAROAN: 4 | Popti' jac U | Gitonga toh U | Nateni ntm U |
+
+| Nawdm nmz U | Lealao Chinanteccle U | Lolopo ycl U | Guarayu gyr U | |
| Ndogo ndz U | Magdalena Peñasco Mix xtm U | Mandarin Chinese cmm U | Kayabi kzy U | |
| Ngangam gng U | Mezquital Otomi ote U | Maru mhx U | Paraguayan Guaranígug U | |
| Nigeria Mambila mzK U | Nopal Chatino cya U | Min Nan Chinese nan U | Urbú-Kaapor urb U | |
| Nilamba nim U | Ozumacin Chinantec chz U | Mro-Khimi Chin emr U | Western Bolivian Guar gnw U | |
| Ninzo nin U | Peñoles Mixtec mil U | Newari new U | TURKIC: 18 | |
| Nkonya nko U | Pinotepa Nacional Mix mio U | Pwo Northern Karen pww U | Bashkir bak U | |
| Noone nuh U | San Jerónimo Tecóatl maa U | Sherpa xsr U | Chuvash chv U | |
| Northern Dagara dgi U | San Jerónimo Tecóatl maa U | Sunwar suz U | Crimean Tatar crh U | |
| Ntcham bud U | San Juan Atzingo Popo poe U | Tedim Chin ctd U | Gagauz gag U | |
| Nyabwa nwB U | San Marcos Tlacoyalco pls U | Yue Chinese yue U | U | |
| Nyakyusa-Ngonde nyy U | San Pedro Amuzgos Amu azg U | Zyphé Chin zyp U | Gagauz gag U | |
| Nyankole nyn U | Santa María Zacatepec mza U | SULKA: 1 | Kara-Kalpak kaa U | |
| Nyaturu rim U | Sochiapam Chinantec cso U | Sulka sue U | Karachay-Balkar krc U | |
| Nyole nuj U | Southern Puebla Mixte mit U | TACANAN: 2 | Kazakh kaz U E | |
| Nyoro nyo U | Tepetotutla Chinantec cnt U | Ese Ejja ese U | Khakas kjh U | |
| Nzima nzi U | Tezoatlán Mixtec mxb U | Tacana tna U | U | |
| Obolo ann U | Usila Chinantec cuc U | TAI-KADAI: 4 | Kumyk kum U | |
| Oku oku U | Yosondúa Mixtec mpm U | Lao lao U E | Nogai nog U | |
| Paasaal sig U | PANOAN: 4 | Northern Thai nod U | North Azerbaijani azj U E | |
| Plapo Krumen ktj U | Cashinahua cbs U | Tai Dam blt U | Southern Altai alt U | |
| Pokomo pkb U | Panoan Katukina knt U | Thai tha U E | Tatar tat U | |
| Pular fuf U | Sharanahua mcd U | TARASCAN: 1 | Turkish tur U E | |
| Rigwe iri U | Shipibo-Conibo shp U | Purepecha tsz U | Turkish tur U E | |
| Rundi run U | PUINAVE: 1 | TICUNA: 1 | Turkish tur U E | |
| Saamia lsm U | Puinave pui U | Ticuna tca U | Tuvinian tvv U | |
| Sango sag U | PÁEZAN: 1 | TOL: 1 | Uighur uig U | |
| Sekpele lip U | Páez pbb U | Tol jic U | URALIC: 3 | |
| Selee snw U | QUECHUAN: 22 | TOR-ORYA: 1 | Finnish fin U | |
| Sena seh U | Ayacacho Quechua quy U | Orya ury U | Komi-Zyrian kpv U | |
| Shambala ksb U | Cajamarca Quechua qvc U | TOTONACAN: 4 | Udmurt udm U | |
| Sissala sld U | Cañar Highland Quichu qxr U | Coyutla Totonac toc U | URARINA: 1 | |
| Siwu akp U | Cusco Quechua quz U | Highland Totonac tos U | U | |
| Soga xog U | Huallaga Huánco Que cqu b | Pisaflores Tepehua ttp U | U | |
| South Fali fal U | Huamalies-Dos de Mayo qvh U | Tlachichilco Tepehua tpt U | URU-CHIPAYA: 1 | |
| Southern Birifor biv U | Huaylas Ancash Quechu qwh U | TRANS-NEW GUINEA: 12 | Chipaya cap U | |
| Southern Bobo Madaré bwcq U | Huaylla Wanca Quechua qvw U | Anjam boj U | UTO-AZTECAN: 15 | |
| Southern Dagaare dga U | Inga inb U | Awa (Papua New Guinea awb U | Central Huasteca Nahu nch U | |
| Southern Nuni nnn U | Lambayeque Quechua quf U | Ese mcq U | Eastern Huasteca Nahu nhe U | |
| Southwest Gbaya gso U | Margos-Yarowilca-Laur qvm U | Gwahatike dah U | El Nayar Cor crn U | |
| Supyire Senoufo spp U | Napo Lowland Quechua qvo U | Huli hui U | Guerrero Nahuatl ngu U | |
| Talinga-Bwisi tlj U | North Bolivian Quechu qul U | Ipili ipi U | Highland Puebla Nahu azz U | |
| Tampulma tpm U | North Junin Quechua qvn U | Kuman (Papua New Guin kue U | Isthmus-Mecayapan Nah nhx U | |
| Tharaka thk U | Northern Conchucos An qxn U | Kyaka kyc U | Isthmus-Mecayapan Nah nhx U | |
| Tikar tik U | Northern Pastaza Quic qyz U | Lower Grand Valley Da dni U | Mayo mfy U | |
| Timne tem U | Panao Huánco Quechua qxh U | Lower Grand Valley Da dni U | Northern Oaxaca Nahu nhy U | |
| Toura (Côte d'Ivoire) neb U | San Martín Quechua qvs U | Nalca nlc U | Northern Oaxaca Nahu nhy U | |
| Tsonga tso U | South Bolivian Quechu quh U | South Tairora omw U | Northern Puebla Nahu ncj U | |
| Tumulung Sisaala sil U | South Bolivian Quechu quh U | TUCANOAN: 11 | Santa Teresa Corc koc U | |
| Tuwuli bov U | Southern Pastaza Quec qup U | Desano des U | Sierra Negra Nahuatl nsu U | |
| Tyap keg U | Tena Lowland Quechua quw U | Guanano gvc U | Southeastern Puebla N npl U | |
| Vengo bay U | SINO-TIBETAN: 24 | Koreguaje coe U | Western Huasteca Nahu nhw U | |
| Vunjo vun U | Achang acn U | Macuna myy U | Zacatlán-Ahuacatlán-T nhi U | |
| West-Central Limba lia U | Akeu aeu U | Piratapuyo pir U | SANOMAM: 1 | |
| Yocoboué Dida gud U | Akha ahk U | Secoya sey U | Galela gbi U | |
| OTO-MANGUEAN: 27 | Bawm Chin bgr U | Siona snn U | WEST PAPUAN: 3 | |
| Atatláhuca Mixtec mib U | Eastern Tamang taj U | Siriano sri U | Tabaru tby U | |
| Ayutla Mixtec miy U | Falam Chin cfm U | Tucano tuo U | Tobelo tlb U | |
| Central Mazahua maz U | Hakka Chinese hak U | Tucano tuo U | YANOMAM: 1 | |
| Chichahuaxta Triqui trs U | Kachin kac U | Tuyuca tue U | Sanumá xsu U | |
| Diuxi-Tilantongo Mixtec xtd U | Khumi Chin cnk U | TUPIAN: 8 | ZAMUCOAN: 1 | |
| Jalapa De Diaz Mazate maj U | Kulung (Nepal) kle U | Aché guq U | U | |
| Jamiltepec Mixtec mxt U | Lahu lhu U | Eastern Bolivian Guar gui U | U | |
| Lalana Chinantec cnl U | Lashi lsi U | Guajajára gub U | Chamacoco ceg U | |
\ No newline at end of file
diff --git a/asurprisaldurationtradeoffacrossandwithintheworldslanguages/images.zip b/asurprisaldurationtradeoffacrossandwithintheworldslanguages/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..31421ea60bd69cf809c64bc7dbaae44ba819dd29
--- /dev/null
+++ b/asurprisaldurationtradeoffacrossandwithintheworldslanguages/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:586ffc9914da7b71b46cd1ba7f9ff5300354c19aaf053ab7b453d8f5ccfee11d
+size 1235583
diff --git a/asurprisaldurationtradeoffacrossandwithintheworldslanguages/layout.json b/asurprisaldurationtradeoffacrossandwithintheworldslanguages/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..f52f552b00a1bd468fc25a3a9bdfc65b9fff05b9
--- /dev/null
+++ b/asurprisaldurationtradeoffacrossandwithintheworldslanguages/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:edd83f0c566c342cf3114dfe25d6de12b0e1a1a7936f99e839178a822cccca1d
+size 403760
diff --git a/athoroughevaluationoftaskspecificpretrainingforsummarization/2394215d-6acf-4658-81ad-4d27f9a34573_content_list.json b/athoroughevaluationoftaskspecificpretrainingforsummarization/2394215d-6acf-4658-81ad-4d27f9a34573_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..6f74c141db274357b3865ff7385df9f2a3856ab8
--- /dev/null
+++ b/athoroughevaluationoftaskspecificpretrainingforsummarization/2394215d-6acf-4658-81ad-4d27f9a34573_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:77c3e675d0111504b3693dc5fc8a80e65500b9a399057af363923866ae7102ee
+size 40228
diff --git a/athoroughevaluationoftaskspecificpretrainingforsummarization/2394215d-6acf-4658-81ad-4d27f9a34573_model.json b/athoroughevaluationoftaskspecificpretrainingforsummarization/2394215d-6acf-4658-81ad-4d27f9a34573_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..19685f7e5d5f947837334c44e4c953774609ecc9
--- /dev/null
+++ b/athoroughevaluationoftaskspecificpretrainingforsummarization/2394215d-6acf-4658-81ad-4d27f9a34573_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c603cdb4afdb75dd6eb95e1dfa668f36645dfdd40746f513cc1c77f8932ce95d
+size 48305
diff --git a/athoroughevaluationoftaskspecificpretrainingforsummarization/2394215d-6acf-4658-81ad-4d27f9a34573_origin.pdf b/athoroughevaluationoftaskspecificpretrainingforsummarization/2394215d-6acf-4658-81ad-4d27f9a34573_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..d16cce8a2c185458b51fd80aeb8e2d3e7143e13c
--- /dev/null
+++ b/athoroughevaluationoftaskspecificpretrainingforsummarization/2394215d-6acf-4658-81ad-4d27f9a34573_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4aee0ecaa569e35f0d25e5a8c19b190c5ec8b3bd428b79635f8a6fcb24e23882
+size 237551
diff --git a/athoroughevaluationoftaskspecificpretrainingforsummarization/full.md b/athoroughevaluationoftaskspecificpretrainingforsummarization/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..f27ab6b0f245633e829b7bcaddd300923e345ad8
--- /dev/null
+++ b/athoroughevaluationoftaskspecificpretrainingforsummarization/full.md
@@ -0,0 +1,144 @@
+# A Thorough Evaluation of Task-Specific Pretraining for Summarization
+
+Sascha Rothe*
+
+Google
+
+rothe@google.com
+
+Joshua Maynez*
+
+Google
+
+joshuahm@google.com
+
+Shashi Narayan*
+
+Google
+
+shashinarayan@google.com
+
+# Abstract
+
+Task-agnostic pretraining objectives like masked language modeling or corrupted span prediction are applicable to a wide range of NLP downstream tasks (Raffel et al., 2019), but are outperformed on summarization by task-specific pretraining objectives like predicting extracted gap sentences (Zhang et al., 2020). We compare three summarization-specific pretraining objectives with the task-agnostic corrupted span prediction pretraining in a controlled study. We also extend our study to a low-resource and zero-shot setup, to understand how many training examples are needed in order to ablate the task-specific pretraining without quality loss. Our results show that task-agnostic pretraining is sufficient for most cases, which hopefully reduces the need for costly task-specific pretraining. We also report new state-of-the-art numbers for two summarization tasks using a T5 model with 11 billion parameters and an optimal beam search length penalty.
+
+# 1 Introduction
+
+Previous work has mostly used task-agnostic pretraining methods like corrupted span prediction (T5; Raffel et al., 2019), masked language modeling (BERT; Devlin et al., 2018), a denoising objective (BART; Lewis et al., 2019), or a vanilla language model (GPT; Radford et al., 2019). Intuitively, it makes sense to refine the pretraining towards a setup that more closely resembles the downstream task. Wang et al. (2020) demonstrate that integrating task-specific priors into BERT language model pretraining improves low-resource finetuning. Zhang et al. (2020) do this for summarization with PEGASUS, where important sentences are removed/masked from an input document and generated together as one output sequence from the remaining sentences, teaching summarization models to do better content selection, and Narayan et al. (2021) proposed a content
+
+| Dataset | System | ROUGE-1 / -2 / -L |
+| --- | --- | --- |
+| XSum | SOTA (Narayan et al., 2021) | 47.80 / 25.06 / 39.76 |
+| XSum | PEGASUS (Zhang et al., 2020) | 47.21 / 24.56 / 39.25 |
+| XSum | T5 base (ours; α = 0.9) | 42.96 / 20.38 / 35.10 |
+| XSum | T5 xxl (ours; α = 0.8) | 48.83 / 25.96 / 40.70 |
+| CNN/DMail | SOTA (Dou et al., 2020) | 45.94 / 22.32 / 42.48 |
+| CNN/DMail | PEGASUS (Zhang et al., 2020) | 44.17 / 21.47 / 41.11 |
+| CNN/DMail | T5 xxl (Raffel et al., 2019) | 43.52 / 21.55 / 40.69 |
+| CNN/DMail | T5 base (ours; α = 0.9) | 43.09 / 20.67 / 39.96 |
+| CNN/DMail | T5 xxl (ours; α = 0.8) | 45.32 / 22.60 / 42.17 |
+| SAMSum | SOTA (Rohde et al., 2021) | 53.01 / 28.27 / 48.84 |
+| SAMSum | T5 base (ours; α = 1.0) | 49.38 / 24.16 / 44.88 |
+| SAMSum | T5 xxl (ours; α = 0.9) | 53.10 / 28.73 / 48.94 |
+
+Table 1: Current state-of-the-art ROUGE-1 / -2 / -L scores for summarization datasets and our results in the T5 framework with optimal beam alpha. Raffel et al. (2019) only report numbers for CNN/DailyMail.
+
+planning pretraining objective with PEGASUS, by prepending the output sequence with the entity plan observed in it.
+
+PEGASUS achieved state-of-the-art ROUGE-1/-2/-L scores (Lin, 2004) on BBC XSum (Narayan et al., 2018) with 47.21 / 24.56 / 39.25 and on CNN/DailyMail with 44.17 / 21.47 / 41.11. These numbers could not be matched by Raffel et al. (2019), even when using a much larger model with up to 11 billion parameters. This seems to support the intuition that task-specific pretraining is important for the best performance. However, Raffel et al. (2019) used a beam search length penalty (beam alpha) of 0.6. We set the beam alpha parameter to the optimal value and report new state-of-the-art results on XSum and SAMSum (Table 1).
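
The beam alpha above is a length-normalization exponent applied during beam search. As a minimal sketch (assuming the common GNMT-style penalty; the exact formula used by the T5/PEGASUS decoders is not spelled out here), each hypothesis' log-probability is divided by a length penalty before ranking, so larger alphas favor longer outputs:

```python
# Hedged sketch of "beam alpha" length normalization (GNMT-style assumption,
# not a quote of the paper's implementation).
def length_penalty(length: int, alpha: float) -> float:
    # lp(Y) = ((5 + |Y|) / 6) ** alpha; alpha = 0 disables normalization.
    return ((5.0 + length) / 6.0) ** alpha

def rescore(log_prob: float, length: int, alpha: float) -> float:
    # Dividing a negative log-probability by a larger penalty makes the score
    # less negative, so higher alpha favors longer hypotheses.
    return log_prob / length_penalty(length, alpha)

# Tuning alpha (e.g. 0.6 vs the optimal 0.8-1.0 reported above) shifts this
# trade-off between summary length and model confidence.
example = rescore(-12.0, 30, 0.8)
```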
+
+Given these new results, we want to answer the question of whether task-specific pretraining objectives are still at an advantage. To avoid any influence of hyperparameters, pretraining datasets, tokenization or evaluation scripts, we reimplement all experiments in the same framework, namely the PEGASUS framework. To our surprise, we found that in a controlled comparison the task-agnostic pretraining method performs as well as the task-specific pretraining methods for large finetuning setups. We further extend our study to low-resource and zero-shot setups, to understand how many training examples are needed in order to ablate the task-specific pretraining without quality loss. Finally, we want to see if our findings also translate to other text generation tasks. We therefore pretrain a model on corrupted text and evaluate it on grammatical error correction.
+
+# 2 Pretraining Models
+
+We use a transformer architecture (Vaswani et al., 2017) with 12 hidden layers, a hidden size of 768, a filter size of 3072 and 12 attention heads, for a total of 223M parameters. All models are pretrained in the PEGASUS framework for 1.5 million steps on the C4 corpus (Raffel et al., 2019) with a batch size of 16, Adafactor (Shazeer and Stern, 2018), a learning rate of 0.01, and maximum input/output lengths of 512 and 256. Unless mentioned otherwise, we do not perform any hyperparameter tuning but use the best-performing hyperparameters found by Zhang et al. (2020). We do not explore a pretraining-plus-prefinetuning setup in this paper (Aghajanyan et al., 2021).
+
+We now briefly explain the task-agnostic objective of corrupted span prediction and two task-specific objectives: salient sentence selection for summarization and text corruption for grammatical error correction. Additionally, we experimented with the objectives of masking and predicting random and lead sentences.
+
+Corrupted Span Prediction (T5) This pretraining objective is based on a span-prediction task, an adaptation of the masked-language objective for autoregressive seq2seq models. As in BERT, we mask out $15\%$ of the input text. We allow masking of contiguous spans of lengths 1, 2, 3, 4 and 5 with probabilities 0.1, 0.2, 0.4, 0.2 and 0.1, respectively. An example of span prediction:
+
+Input: This is an [x] sentence [y] words.
+
+Target: [x] example [y] with eight
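
A minimal sketch of this corruption, using the span-length distribution above. The `[x]`/`[y]` sentinel naming follows the example (real T5 sentinels differ), and the non-overlap handling is a simplification of ours:

```python
import random

# Hedged sketch of T5-style corrupted span prediction: mask ~15% of tokens in
# contiguous spans of length 1-5 drawn with probabilities 0.1/0.2/0.4/0.2/0.1.
def corrupt_spans(tokens, mask_rate=0.15, rng=random):
    lengths, probs = [1, 2, 3, 4, 5], [0.1, 0.2, 0.4, 0.2, 0.1]
    budget = max(1, int(len(tokens) * mask_rate))
    masked = [False] * len(tokens)
    attempts = 0
    while budget > 0 and attempts < 100 * len(tokens):
        attempts += 1
        span = min(rng.choices(lengths, weights=probs)[0], budget)
        start = rng.randrange(len(tokens) - span + 1)
        if any(masked[start:start + span]):
            continue  # simple sketch: just retry overlapping placements
        for i in range(start, start + span):
            masked[i] = True
        budget -= span
    # Collapse each masked span into a sentinel; targets interleave sentinels
    # with the masked-out tokens, as in the Input/Target example above.
    inputs, targets, sid = [], [], 0
    i = 0
    while i < len(tokens):
        if masked[i]:
            sentinel = f"[{chr(ord('x') + sid)}]"  # [x], [y], ...
            sid += 1
            inputs.append(sentinel)
            targets.append(sentinel)
            while i < len(tokens) and masked[i]:
                targets.append(tokens[i])
                i += 1
        else:
            inputs.append(tokens[i])
            i += 1
    return inputs, targets
```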
+
+Mask Salient Sentence (PEGASUS) We follow Zhang et al. (2020) to select and mask whole sentences from documents. The concatenated gap-sentences can be seen as a pseudo-summary and serve as targets. To more closely approximate a summary, sentences that appear important/principal to the document are selected. As a proxy for importance, the ROUGE-1 F1 score between each sentence and the rest of the document is used.
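
A minimal sketch of this selection, assuming independent per-sentence scoring with a simple unigram ROUGE-1 F1 (PEGASUS defines several selection variants; the `[MASK]` placeholder token is our own choice):

```python
from collections import Counter

# Unigram ROUGE-1 F1 via clipped counts (a simplified stand-in for Lin, 2004).
def rouge1_f1(hyp_tokens, ref_tokens):
    hyp, ref = Counter(hyp_tokens), Counter(ref_tokens)
    overlap = sum((hyp & ref).values())
    if overlap == 0:
        return 0.0
    p = overlap / sum(hyp.values())
    r = overlap / sum(ref.values())
    return 2 * p * r / (p + r)

# Score each sentence against the rest of the document and mask the top-m
# sentences; their concatenation becomes the pseudo-summary target.
def select_gap_sentences(sentences, m=1):
    scores = []
    for i, sent in enumerate(sentences):
        rest = [tok for j, s in enumerate(sentences) if j != i for tok in s.split()]
        scores.append((rouge1_f1(sent.split(), rest), i))
    top = sorted(i for _, i in sorted(scores, reverse=True)[:m])
    inputs = [s if i not in top else "[MASK]" for i, s in enumerate(sentences)]
    pseudo_summary = " ".join(sentences[i] for i in top)
    return inputs, pseudo_summary
```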
+
+Mask Random Sentence (MRNDS) We also pretrain a model with randomly selected sentences as gap-sentences. This can be seen as a sentence-level version of the masked language model (Devlin et al., 2018), as a version of T5 that generates whole sentences, or as a simplification of PEGASUS that lacks the content selection aspect.
+
+Mask Lead Sentence (MLEADS) In this setup we pretrain a model with the first $m$ sentences of a document as gap-sentences. This is motivated by the fact that for some kinds of text, for example news, the most important information comes at the beginning of a paragraph. This is a natural setup for summarization, since lead sentences are known to be a strong baseline against which to compare summarization models.
+
+Text Corruption (TEXTCOR) Analogous to PEGASUS for summarization, we pretrain a task-specific model for grammatical error correction. To create pairs of broken and correct text snippets we corrupt each sentence using a combination of the following operations: a) drop tokens b) swap tokens c) insert tokens d) replace tokens e) drop characters f) swap characters g) insert characters h) lower-case a word i) upper-case the first character of a word. We limit ourselves to the aforementioned purely unsupervised corruption techniques and do not use more sophisticated methods, like replacing words with common misspellings as done by Náplava and Straka (2019).
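
A minimal sketch of such a corruption pipeline, implementing a representative subset of the operations above (the uniform operation sampling is an illustrative assumption; the paper does not specify probabilities):

```python
import random

# Hedged sketch of the TEXTCOR objective: corrupt a clean sentence to build a
# (broken, correct) pretraining pair. Only some of the nine listed operations
# are shown here.
def corrupt_sentence(sentence, rng=random):
    tokens = sentence.split()
    op = rng.choice(["drop_token", "swap_tokens", "insert_token",
                     "swap_chars", "lowercase_word"])
    i = rng.randrange(len(tokens))
    if op == "drop_token" and len(tokens) > 1:
        del tokens[i]
    elif op == "swap_tokens" and len(tokens) > 1:
        j = min(i, len(tokens) - 2)
        tokens[j], tokens[j + 1] = tokens[j + 1], tokens[j]
    elif op == "insert_token":
        tokens.insert(i, rng.choice(tokens))  # duplicate an existing token
    elif op == "swap_chars" and len(tokens[i]) > 1:
        k = rng.randrange(len(tokens[i]) - 1)
        w = tokens[i]
        tokens[i] = w[:k] + w[k + 1] + w[k] + w[k + 2:]
    elif op == "lowercase_word":
        tokens[i] = tokens[i].lower()
    return " ".join(tokens)

def make_pair(sentence, rng=random):
    # Source is the corrupted text; target is the original clean sentence.
    return corrupt_sentence(sentence, rng), sentence
```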
+
+# 3 Finetuning Experiments
+
+All our experiments are done in the PEGASUS framework. We validate that the numbers are roughly identical to a comparable setup in T5.$^2$ For this, the numbers in Table 1 labeled "T5 base (ours)" should match the numbers in Table 2 labeled "T5 Full". Both experiments use the same model size but are conducted in different frameworks.
+
+Datasets and Eval Metrics We measure performance on three commonly used summarization benchmarks, namely CNN/DailyMail (Hermann et al., 2015), BBC XSum (Narayan et al., 2018) and SAMSum (Gliwa et al., 2019), using ROUGE-1,
+
+| Model | 0 | 10 | 100 | 1000 | 10000 | Full |
+| --- | --- | --- | --- | --- | --- | --- |
+| **XSum** (Lead-1: 16.30 / 01.61 / 11.95) | | | | | | |
+| T5 | 13.57 / 03.50 / 11.10* | 26.49 / 08.15 / 20.92 | 34.87 / 13.08 / 27.50 | 37.07 / 15.22 / 29.87 | 39.91 / 17.61 / 32.46 | 44.34 / 22.01 / 36.65 |
+| MRNDS | 19.01 / 02.81 / 14.53 | 19.80 / 03.37 / 15.12 | 31.11 / 10.88 / 24.92 | 38.13 / 16.19 / 30.93 | 40.59 / 18.33 / 33.15 | 43.85 / 21.65 / 36.29 |
+| MLEADS | 19.35 / 02.60 / 14.76 | 24.14 / 06.32 / 18.99 | 34.38 / 13.05 / 27.85 | 38.00 / 16.09 / 30.89 | 39.80 / 17.63 / 32.30 | 43.38 / 21.26 / 35.81 |
+| PEGASUS | 19.04 / 03.05 / 13.76 | 23.45 / 06.05 / 17.90 | 33.77 / 12.95 / 27.16 | 38.26 / 16.46 / 31.02 | 41.06 / 18.65 / 33.54 | 44.15 / 21.87 / 36.58 |
+| **CNN/DailyMail** (Lead-3: 39.60 / 17.70 / 36.20) | | | | | | |
+| T5 | 13.84 / 04.37 / 12.53* | 24.00 / 09.13 / 22.08 | 31.88 / 13.99 / 29.55 | 35.90 / 16.58 / 33.31 | 39.84 / 18.75 / 36.99 | 43.59 / 21.42 / 40.56 |
+| MRNDS | 32.86 / 12.40 / 29.57 | 35.30 / 14.35 / 31.81 | 35.53 / 15.65 / 32.53 | 38.06 / 17.44 / 34.94 | 41.16 / 19.12 / 38.09 | 43.59 / 21.18 / 40.40 |
+| MLEADS | 39.68 / 17.82 / 36.07 | 39.68 / 17.82 / 36.07° | 39.68 / 17.82 / 36.07° | 39.68 / 17.82 / 36.07° | 40.45 / 18.79 / 37.48 | 43.18 / 20.83 / 40.07 |
+| PEGASUS | 34.11 / 13.35 / 29.86 | 33.75 / 13.76 / 29.70 | 34.37 / 15.05 / 30.97 | 37.09 / 17.39 / 34.09 | 40.41 / 19.12 / 37.43 | 43.53 / 21.15 / 40.35 |
+| **SAMSum** (Lead-5: 31.94 / 09.91 / 27.03) | | | | | | |
+| T5 | 04.00 / 00.65 / 03.75* | 33.76 / 11.30 / 29.43 | 41.28 / 16.51 / 37.04 | 46.18 / 21.44 / 41.59 | 50.16 / 26.36 / 45.83 | 50.96 / 27.02 / 46.56 |
+| MRNDS | 26.90 / 07.63 / 24.79 | 31.03 / 10.25 / 27.94 | 42.12 / 18.06 / 38.22 | 47.25 / 21.39 / 42.20 | 50.19 / 25.74 / 45.96 | 50.54 / 25.98 / 46.45 |
+| MLEADS | 29.86 / 08.59 / 26.43 | 33.75 / 10.97 / 29.83 | 39.02 / 15.10 / 34.89 | 45.18 / 20.11 / 40.10 | 49.15 / 24.65 / 44.58 | 49.84 / 25.72 / 45.56 |
+| PEGASUS | 22.78 / 05.80 / 20.60 | 29.76 / 09.83 / 26.61 | 38.94 / 15.62 / 34.77 | 46.45 / 21.15 / 41.29 | 50.15 / 25.98 / 45.80 | 50.21 / 26.34 / 46.18 |
+
+Table 2: ROUGE-1 / -2 / -L scores on summarization datasets. Results are shown on the full test sets when finetuning with only 10, 100, 1000 and 10000 training examples, and with the whole training set (Full). We also report zero-shot results (column 0). We report the Lead-1 baseline for BBC XSum from Narayan et al. (2018) and the Lead-3 baseline for CNN/DailyMail from Rothe et al. (2020). For SAMSum, we achieve the best lead scores when selecting the top 5 sentences of each input. Results in gray are worse than the lead sentence baseline. Best results in each block are bolded. Results marked with * are not comparable, see text. For results marked with °, the untrained checkpoint at step 0 performed best on the development set.
+
+-2 and -L as metrics. The datasets differ in their degree of abstraction and summary length. The summaries of CNN/DailyMail are more extractive in nature and have an average length of 3 sentences. The summaries of BBC XSum are single sentences and more abstractive. The SAMSum summaries consist of 2-3 meeting minutes. Finally, the CNN/DailyMail, BBC XSum and SAMSum datasets have $287\mathrm{k} / 13.4\mathrm{k} / 11.5\mathrm{k}$, $204\mathrm{k} / 11.3\mathrm{k} / 11.3\mathrm{k}$ and $14.7\mathrm{k} / 818 / 819$ training/development/test examples, respectively. We finetune our pretrained models on the full datasets and on subsampled versions with 10, 100, 1,000 and 10,000 examples.
+
+During finetuning, we use maximum input/output lengths of 1024/128 for CNN/DailyMail, 1024/64 for XSum and 512/128 for SAMSum. All models were finetuned with a batch size of 256. The best model was selected based on ROUGE-L performance on the full development set. During inference, all models were decoded with a beam alpha of 0.8 and a beam size of 5. The results shown in Table 2 are the average performance of 5 models trained on different samples, as low-resource setups are known to have high variance.
+
+Results We found that the performance of the span prediction objective is always better than or on par with the performance of the salient sentence prediction objective for all three datasets when using the whole training set. Linguistically, it might be more interesting to generate full sentences than spans, but empirically, we found no evidence that the mask salient sentence pretraining is better at content selection than the corrupted span pretraining for summarization. In fact, we found that constraining pretraining to task-specific information, such as putting the most important information at the beginning of a paragraph (MLEADS; CNN/DailyMail), makes it hard to generalize across datasets and leads to inferior performance compared to pretraining by generating random sentences (MRNDS).
+
+For low-resource setups, results varied somewhat depending on the task. For abstractive datasets such as XSum and SAMSum, T5 achieved better performance than PEGASUS with as little as 10 or 100 examples. With 1000 and 10000 examples, results from both models were on par for SAMSum, but PEGASUS performed better than T5 for XSum. For CNN/DailyMail, PEGASUS consistently outperformed T5 in all low-resource setups. On the other hand, CNN/DailyMail is not ideal for evaluating low-resource models due to the extractive nature of its summaries; one can perform well simply by selecting the first few sentences. The Lead baseline and MLEADS are on par and outperform the other methods, while MLEADS effectively uses no training data when 1000 examples or fewer are provided.
+
+Zero-Shot We also assess how well pretrained models perform out-of-the-box on different generation tasks (zero-shot). For this, we simply infer on the test sets using different pretrained checkpoints without any finetuning. The results in Table 2 are not surprising: the sentence-level pretraining of MRNDS, MLEADS and PEGASUS is better than T5 at producing well-formed summaries. It is also probably not fair to evaluate T5 on zero-shot summarization, as T5 models are pretrained to generate masked spans rather than full sentences.
+
+Figure 1: Comparison of how models adapt to target lengths from zero-shot to low-resource cases. We plot the average summary lengths for different models. We report results on XSum; similar patterns were found on CNN/DailyMail and SAMSum.
+
+Length comparisons It has been argued that ROUGE tends to prefer longer summaries, so we wanted to investigate (a) whether models leverage this phenomenon and (b) whether it is unfair to compare different pretraining methods trained on self-supervised targets with different length distributions. As depicted in Figure 1, in the zero-shot case we observe very different average lengths of the predicted summaries across models, with PEGASUS being closest to the target lengths. However, by 1000 training examples, all models start generating summaries of comparable lengths.
+
+# 4 Grammatical Error Correction
+
+We further investigate whether our findings translate to other generation tasks. Here, we focus on the task of grammatical error correction, but other important aspects of text generation may also benefit from task-specific pretraining and remain underexplored; e.g., improving evaluation (Sellam et al., 2020), factuality (Chen et al., 2020) or planning for grounded generation (Narayan et al., 2021).
+
+Datasets and Eval Metrics For grammatical error correction, we finetune our pretrained models on the FCE (Yannakoudakis et al., 2011) and W&I (Bryant et al., 2019) corpora. We evaluate on the standard benchmark of CoNLL-14, using CoNLL-13 as the development set. The reported numbers in
+
+| Model | 10 | 100 | 1000 | 10000 | all |
+| --- | --- | --- | --- | --- | --- |
+| T5 | 19.54 | 28.43 | 39.36 | 51.08 | 55.07 |
+| TEXTCOR | 21.71 | 30.86 | 49.40 | 55.94 | 59.67 |
+| MRNDS | 03.78 | 03.78 | 03.78 | 31.24 | 39.63 |
+
+Table 3: $\mathrm{F}_{0.5}$ scores on CoNLL-14 for the grammatical error correction task.
+
+Table 3 are $\mathrm{F}_{0.5}$ scores (Dahlmeier and Ng, 2012) computed by the $M^2$ scorer.$^3$
+
+Results As shown in Table 3, TEXTCOR outperforms T5 at all dataset sizes. The results also show that an unrelated task-specific pretraining objective hurts performance even when training on the full dataset. This is notable, as the MRNDS pretraining, for example, is not that far off from normal language model pretraining and should learn a reasonable amount about language and well-formed sentences.
+
+Zero Shot In contrast to summarization, no easy baseline exists for grammatical error correction. A simple copy baseline would give us high word overlap on metrics like BLEU or ROUGE, but on our main metric $\mathrm{F_{0.5}}$ it only gets a score of 4.24. Our pretrained TEXTCOR model achieves an $\mathrm{F_{0.5}}$ score of 18.64, with precision 40.94 and recall 5.87. The T5 model needs only 10 training examples to achieve the same $\mathrm{F_{0.5}}$ score (Table 3). We hypothesize that the zero-shot performance of the TEXTCOR pretraining could be greatly improved by tuning the hyperparameters of the text corruption to better match the distribution of the CoNLL dev and test sets. However, this would limit the scope of the pretrained model even further, as this distribution would not translate to other datasets or related tasks, like correcting OCR (optical character recognition) or ASR (automatic speech recognition) errors.
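
For reference, $\mathrm{F}_{0.5}$ weights precision twice as heavily as recall ($\beta = 0.5$); the zero-shot precision and recall reported above are consistent with the reported score:

```python
# F_beta as used by the M^2 scorer for GEC (Dahlmeier and Ng, 2012), with
# beta = 0.5 so that precision counts more than recall.
def f_beta(precision: float, recall: float, beta: float = 0.5) -> float:
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

score = f_beta(40.94, 5.87)  # ~18.65, matching the reported 18.64
```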
+
+# 5 Conclusion
+
+We evaluated several pretraining techniques on two different text generation tasks, summarization and grammatical error correction. Our finding is that, while pretraining is very important for summarization, we found no evidence that task-specific pretraining improves on common benchmarks for abstractive datasets, even in a low-resource setting. On extractive datasets, task-specific pretraining showed benefits, but the results are below a sentence selection baseline, questioning its practical usefulness. Given the trend towards larger neural network models with significant training costs, we recommend using a task-agnostic pretraining regime. Corrupted span prediction is currently our most successful candidate, with state-of-the-art results on two of the investigated summarization benchmarks. But we are curious whether even more flexible pretraining techniques will emerge. For grammatical error correction, task-specific pretraining showed superior performance, especially in the low-resource setting. We therefore believe that task-specific pretraining or prefinetuning can still be useful for important aspects of text generation.
+
+# Acknowledgments
+
+We thank the reviewers and the area chair for their feedback. We would like to thank Yao Zhao, Dipanjan Das, Mohammad Saleh, Samer Hassan, and Slav Petrov for useful discussions.
+
+# References
+
+Armen Aghajanyan, Anchit Gupta, Akshat Shrivastava, Xilun Chen, Luke Zettlemoyer, and Sonal Gupta. 2021. Muppet: Massive multi-task representations with pre-finetuning. CoRR, abs/2101.11038.
+Christopher Bryant, Mariano Felice, Øistein E. Andersen, and Ted Briscoe. 2019. The BEA-2019 shared task on grammatical error correction. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 52-75, Florence, Italy. Association for Computational Linguistics.
+Wenhu Chen, Yu Su, Xifeng Yan, and William Yang Wang. 2020. KGPT: Knowledge-grounded pretraining for data-to-text generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8635-8648, Online. Association for Computational Linguistics.
+Daniel Dahlmeier and Hwee Tou Ng. 2012. Better evaluation for grammatical error correction. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 568-572, Montreal, Canada. Association for Computational Linguistics.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
+Zi-Yi Dou, Pengfei Liu, Hiroaki Hayashi, Zhengbao Jiang, and Graham Neubig. 2020. Gsum: A general framework for guided neural abstractive summarization. arXiv preprint arXiv:2010.08014.
+
+Bogdan Gliwa, Iwona Mochol, Maciej Biesek, and Aleksander Wawer. 2019. Samsum corpus: A human-annotated dialogue dataset for abstractive summarization. arXiv preprint arXiv:1911.12237.
+Karl Moritz Hermann, Tomáš Kočisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 1, NIPS'15, page 1693-1701, Cambridge, MA, USA. MIT Press.
+Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension.
+Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.
+Jakub Náplava and Milan Straka. 2019. Grammatical error correction in low-resource scenarios. In Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019), pages 346-356, Hong Kong, China. Association for Computational Linguistics.
+Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Don't give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1797-1807, Brussels, Belgium. Association for Computational Linguistics.
+Shashi Narayan, Yao Zhao, Joshua Maynez, Goncalo Simões, Vitaly Nikolaev, and Ryan T. McDonald. 2021. Planning with learned entity prompts for abstractive summarization. CoRR, abs/2104.07606, To appear in TACL.
+Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.
+Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683.
+Tobias Rohde, Xiaoxia Wu, and Yinhan Liu. 2021. Hierarchical learning for generation with long source sequences. arXiv preprint arXiv:2104.07545.
+Sascha Rothe, Shashi Narayan, and Aliaksei Severyn. 2020. Leveraging pre-trained checkpoints for sequence generation tasks. Transactions of the Association for Computational Linguistics, 8:264-280.
+
+Thibault Sellam, Dipanjan Das, and Ankur Parikh. 2020. BLEURT: Learning robust metrics for text generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7881-7892, Online. Association for Computational Linguistics.
+Noam Shazeer and Mitchell Stern. 2018. Adafactor: Adaptive learning rates with sublinear memory cost. In International Conference on Machine Learning, pages 4596-4604. PMLR.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS.
+Rui Wang, Shijing Si, Guoyin Wang, Lei Zhang, Lawrence Carin, and Ricardo Henao. 2020. Integrating task specific information into pretrained language models for low resource fine tuning. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3181-3186, Online. Association for Computational Linguistics.
+Helen Yannakoudakis, Ted Briscoe, and Ben Medlock. 2011. A new dataset and method for automatically grading ESOL texts. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 180-189, Portland, Oregon, USA. Association for Computational Linguistics.
+Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter Liu. 2020. Pegasus: Pre-training with extracted gap-sentences for abstractive summarization. In International Conference on Machine Learning, pages 11328-11339. PMLR.
\ No newline at end of file
diff --git a/athoroughevaluationoftaskspecificpretrainingforsummarization/images.zip b/athoroughevaluationoftaskspecificpretrainingforsummarization/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..7c34489c5fcc09dd2de71d1d561e11ef8d0975e2
--- /dev/null
+++ b/athoroughevaluationoftaskspecificpretrainingforsummarization/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0ee28eb68707f799cf31288d21af31267b97342ef491f1f819ddaa71de752a05
+size 190785
diff --git a/athoroughevaluationoftaskspecificpretrainingforsummarization/layout.json b/athoroughevaluationoftaskspecificpretrainingforsummarization/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..86d540f1449e3017854d18dbbd804cc3b724c87e
--- /dev/null
+++ b/athoroughevaluationoftaskspecificpretrainingforsummarization/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3cc71e1e469ccc6a2c7b0d86bdb944092ed9b63d4ff8d9a2691da8b39c3b30c4
+size 154496
diff --git a/athreestagelearningframeworkforlowresourceknowledgegroundeddialoguegeneration/aa7daa77-a80a-4559-b110-954df756c581_content_list.json b/athreestagelearningframeworkforlowresourceknowledgegroundeddialoguegeneration/aa7daa77-a80a-4559-b110-954df756c581_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..a7479a02d46bcc7402498ed165fa87ccdb4dac30
--- /dev/null
+++ b/athreestagelearningframeworkforlowresourceknowledgegroundeddialoguegeneration/aa7daa77-a80a-4559-b110-954df756c581_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4e6075d70e73ad203f8e1e8b37b45806cb4d74d8e3812ab60570c1a5b4b78bd8
+size 77619
diff --git a/athreestagelearningframeworkforlowresourceknowledgegroundeddialoguegeneration/aa7daa77-a80a-4559-b110-954df756c581_model.json b/athreestagelearningframeworkforlowresourceknowledgegroundeddialoguegeneration/aa7daa77-a80a-4559-b110-954df756c581_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..21de6d0f9dd9c9c9ac1fe8ee4914066e709aed01
--- /dev/null
+++ b/athreestagelearningframeworkforlowresourceknowledgegroundeddialoguegeneration/aa7daa77-a80a-4559-b110-954df756c581_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f88914cb1b8cc16436f106693ca86b6360281a9efd197ce7974c9847ff673255
+size 93803
diff --git a/athreestagelearningframeworkforlowresourceknowledgegroundeddialoguegeneration/aa7daa77-a80a-4559-b110-954df756c581_origin.pdf b/athreestagelearningframeworkforlowresourceknowledgegroundeddialoguegeneration/aa7daa77-a80a-4559-b110-954df756c581_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..e63dbb00c0d63ab2ee37a6f12c98f38d02496ada
--- /dev/null
+++ b/athreestagelearningframeworkforlowresourceknowledgegroundeddialoguegeneration/aa7daa77-a80a-4559-b110-954df756c581_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ff90109523ff5341d0406a809bd7d9c9e1f952a56f00bb68f822602b3649251b
+size 439050
diff --git a/athreestagelearningframeworkforlowresourceknowledgegroundeddialoguegeneration/full.md b/athreestagelearningframeworkforlowresourceknowledgegroundeddialoguegeneration/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..6c6c9879e88e69bfceebe9a6b64beefbf7443c0f
--- /dev/null
+++ b/athreestagelearningframeworkforlowresourceknowledgegroundeddialoguegeneration/full.md
@@ -0,0 +1,310 @@
+# A Three-Stage Learning Framework for Low-Resource Knowledge-Grounded Dialogue Generation
+
+Shilei Liu, Xiaofeng Zhao, Bochao Li, Feiliang Ren*, Longhui Zhang, Shujuan Yin
+School of Computer Science and Engineering
+
+Key Laboratory of Medical Image Computing of Ministry of Education
+
+Northeastern University, Shenyang, 110169, China
+
+liusl@live.cn, renfeiliang@cse.neu.edu.cn
+
+# Abstract
+
+Neural conversation models have shown great potential for generating fluent and informative responses by introducing external background knowledge. Nevertheless, it is laborious to construct such knowledge-grounded dialogues, and existing models usually perform poorly when transferred to new domains with limited training samples. Therefore, building a knowledge-grounded dialogue system under the low-resource setting is still a crucial issue. In this paper, we propose a novel three-stage learning framework based on weakly supervised learning, which benefits from large-scale ungrounded dialogues and an unstructured knowledge base. To better cooperate with this framework, we devise a variant of the Transformer with a decoupled decoder, which facilitates the disentangled learning of response generation and knowledge incorporation. Evaluation results on two benchmarks indicate that our approach outperforms other state-of-the-art methods with less training data, and that it still performs well even in the zero-resource scenario.
+
+# 1 Introduction
+
+Neural dialogue systems have made rapid progress in recent years thanks to advances in sequence generation technology (Vinyals and Le, 2015; Vaswani et al., 2017). Although such neural models are able to reply with plausible responses with respect to the dialogue history, people still feel a clear gap when they converse with chatbots, compared with conversations between humans. To bridge this gap and generate fluent and informative responses, a number of approaches have been proposed that leverage external knowledge. Knowledge-grounded dialogue is the task of generating an informative response based on both the dialogue history and a collection of external knowledge (Dinan et al., 2019). The forms of knowledge are diverse, and in this work we focus only on knowledge in the form of unstructured documents.
+
+Generally, it is difficult to construct large-scale conversations that are naturally grounded on documents for learning a response generation model (Zhao et al., 2020a), and most previous methods (Lian et al., 2019; Li et al., 2019; Kim et al., 2020; Dinan et al., 2019) perform poorly when transferred to a new domain with limited training samples. There are therefore growing appeals for low-resource dialogue response generation, which aims to leverage past experience to improve performance with limited labeled training examples from the target corpus.
+
+To address this issue, we propose to absorb useful information from other easily accessible heterogeneous datasets to enhance the performance of a knowledge-grounded dialogue model under the low-resource setting. Based on this idea, we propose a novel Three-Stage Learning Framework (TSLF). TSLF divides the parameters of a model into dialogue-related and knowledge-integration-related groups. In the first stage, we use supervised learning to pre-train the dialogue-related parameters on general dialogues (e.g., online forum comments), and perform domain-adaptive pre-training (Gururangan et al., 2020) to initialize the knowledge-related parameters on an unlabeled knowledge base (e.g., items in Wikipedia). In the second stage, inspired by distant supervision in relation extraction (Mintz et al., 2009), we match a set of pseudo-knowledge to each ungrounded dialogue to construct a lower-quality knowledge-grounded dialogue dataset, and further co-pretrain the above two groups of parameters on this dataset. In the third stage, the trained model is fine-tuned on the target low-resource dataset. The flow of TSLF is shown in Figure 1.
+
+Figure 1: Our three-stage learning framework (TSLF).
+
+In order to better cooperate with the disentangled learning mechanism in TSLF, we devise the Knowledge-Aware Transformer (KAT), a variant of the vanilla Transformer (Vaswani et al., 2017) whose parameters are decoupled so as to facilitate the separate learning of dialogue generation and knowledge incorporation. As shown in Figure 2, besides the dialogue history, KAT also accepts a set of knowledge as additional input. KAT has a knowledge-aware decoder that obtains information from the dialogue context and the background documents through cross-attention and integrates them through a controller.
+
+We conduct experiments on two knowledge-grounded dialogue generation benchmarks: Wizard-of-Wikipedia (Dinan et al., 2019) and CMU_DOG (Zhou et al., 2018). Evaluation results in terms of both automatic metrics and human judgment indicate that, using only about $1/4$ of the training data on Wizard ($1/16$ on CMU_DOG), our approach outperforms competitive baselines that are learned from the full crowdsourced training corpora. Even without using any training data from the target dataset, our method still performs well.
+
+The contributions of this work are summarized as follows: (1) We propose a novel three-stage learning framework that leverages weakly supervised learning to help build a low-resource knowledge-grounded dialogue generation model; (2) We devise the Knowledge-Aware Transformer, a knowledge-grounded neural conversation model with a novel dynamic knowledge selection mechanism, which fully exploits external knowledge to generate fluent and informative dialogue responses; (3) Our KAT-TSLF achieves surprisingly strong performance under full-data, low-resource, and even zero-resource scenarios.
+
+The source code is available at https://github.com/neukg/KAT-TSLF.
+
+# 2 Approach
+
+Low-resource knowledge-grounded dialogue generation is a task that requires a method to learn from experience $E$, which consists of direct experience $E_{d}$ containing limited context-knowledge-response triples and indirect experience $E_{i}$, to improve performance in response generation as measured by an evaluation metric $P$. The direct experience $E_{d}$ refers to the training samples of the target corpus $\mathcal{D}_l = \{(U_i, \mathcal{K}_i, Y_i)\}_{i=1}^{m_1}$ (where $U_i$ is a dialogue history, $Y_i$ is the response, and $\mathcal{K}_i = \{K_j\}_{j=1}^s$ is the set of external knowledge documents of the $i$-th sample), which is under the low-resource setting. In this work, we consider $E_i$ to be a large-scale ungrounded dialogue dataset $\mathcal{D}_d = \{(U_i, Y_i)\}_{i=1}^{m_2}$, a knowledge base $\mathcal{D}_k = \{K_i\}_{i=1}^{m_3}$ ($m_2, m_3 \gg m_1$), and a pretrained language model, all of which are easy to obtain. In the following, we first introduce our KAT, and then show how to train it from coarse to fine under our TSLF.
+
+# 2.1 Knowledge-Aware Transformer
+
+KAT accepts $U$ and $\mathcal{K} = \{K_i\}_{i=1}^s$ as inputs, and generates a response $\hat{Y}$. It consists of three components: a dialogue context encoder (DE) to encode $U$, a knowledge encoder (KE) to encode $\mathcal{K}$, and a decoder to incorporate the dialogue history, dynamically select knowledge, and generate the response. The architecture of KAT is shown in Figure 2.
+
+# 2.1.1 Encoder
+
+We define DE as a Transformer encoder whose output is represented as $\mathbf{U} \in \mathbb{R}^{n \times d}$, where $n$ is the sequence length and $d$ is the hidden state dimension. Similarly, KE is defined as another Transformer encoder, and it encodes each document individually. KE is followed by a concatenation operation that concatenates all document representations: $\mathbf{K} = [\mathbf{K}_1; \dots; \mathbf{K}_s] \in \mathbb{R}^{sz \times d}$, where $\mathbf{K}_i \in \mathbb{R}^{z \times d}$ is the output of KE for the $i$-th document and $z$ is the sequence length of each document. $\mathbf{K}$ and $\mathbf{U}$ are used as inputs to the decoder.
+
+Figure 2: The architecture of our KAT.
+
+# 2.1.2 Knowledge-Aware Decoder
+
+Generally, not all knowledge in $\mathcal{K}$ contributes to the generation of the response, so the model should have the ability to select knowledge. Different from Dinan et al. (2019), Lian et al. (2019), and Kim et al. (2020), who perform knowledge selection in the encoding phase (or in a pipeline), we leave it to the decoding phase. Based on the Transformer decoder, we propose a cross-attention-based decoder that can select knowledge dynamically and generate informative responses.
+
+Knowledge Integration Block (KIB) As shown in the right part of Figure 2, we add a new block after the dialogue history attention block in each Transformer decoder layer. It takes the output of the previous block as the query, and the memory from $\mathbf{K}$ as the key and value. The output of this block is obtained by the multi-head attention mechanism (Vaswani et al., 2017). During decoding, KIB can dynamically select different knowledge according to the dialogue context and the tokens that have been generated up to the current time step.
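
As a minimal sketch of this idea (our own simplification, not the paper's implementation, which uses multi-head attention inside each decoder layer), a single-head version of the KIB cross-attention over the concatenated knowledge memory could look like:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def kib_attention(queries, knowledge_memory):
    """Single-head sketch of the Knowledge Integration Block.

    queries: (T, d) decoder states from the previous block.
    knowledge_memory: (s*z, d) concatenation K of all encoded documents,
    used as both key and value.
    """
    d = queries.shape[-1]
    # Scaled dot-product attention: each decoding step re-weights the
    # knowledge tokens, so knowledge selection is dynamic per time step.
    weights = softmax(queries @ knowledge_memory.T / np.sqrt(d))
    return weights @ knowledge_memory, weights
```

Because the attention weights are recomputed at every decoding step, the block can shift its focus to different documents as the response unfolds.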
+
+Controller To control the contributions of knowledge and context in each layer, we add a gate after the knowledge integration block. Denoting $\mathbf{h}_k$ as the output of KIB and $\mathbf{h}_c$ as the residual from the previous block, the output of the controller can be expressed as
+
+$$
+\mathrm{CT}\left(\mathbf{h}_k, \mathbf{h}_c\right) = \beta \cdot \mathrm{LN}\left(\mathbf{h}_k\right) + (1 - \beta) \cdot \mathbf{h}_c \tag{1}
+$$
+
+$$
+\beta = \sigma\left(\mathbf{w} \cdot \left[\mathbf{h}_k; \mathbf{h}_c\right]\right)
+$$
+
+where $\mathbf{w} \in \mathbb{R}^{2d}$ is a learnable parameter and $\sigma$ denotes the sigmoid function.
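
Equation (1) can be sketched in a few lines of numpy (a simplification with, as assumptions on our part, a parameter-free layer normalization and one scalar gate per position):

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Parameter-free layer normalization over the hidden dimension.
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def controller(h_k, h_c, w):
    """Gated fusion of the KIB output h_k and the residual h_c, Eq. (1).

    h_k, h_c: (T, d) hidden states; w: (2d,) learnable gate vector.
    """
    # beta = sigmoid(w . [h_k; h_c]), one gate value per position.
    beta = 1.0 / (1.0 + np.exp(-np.concatenate([h_k, h_c], axis=-1) @ w))
    beta = beta[:, None]
    return beta * layer_norm(h_k) + (1.0 - beta) * h_c
```

With $\mathbf{w} = \mathbf{0}$ the gate is $0.5$ and the two streams are mixed equally; training moves the gate toward knowledge or context as needed.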
+
+# 2.2 Three-Stage Learning Framework
+
+For further discussion, we denote by $\theta_{d}$, $\theta_{k}$, and $\theta_{a}$ the learnable parameters of the green, yellow, and pink parts in Figure 2, respectively. We can observe that $\theta_{d}$ is related to context encoding and response generation, $\theta_{k}$ is related to knowledge representation and integration, and these two parts are disentangled. In order to benefit from a wealth of heterogeneous corpora, we propose a three-stage learning framework. In TSLF, we first initialize $\theta_{d}$ and $\theta_{k}$ in a decoupled scheme by training on ungrounded dialogues and unstructured knowledge documents respectively, then co-optimize them together with $\theta_{a}$ by weakly supervised learning, and finally transfer KAT to the target low-resource dataset. An illustration of TSLF is shown in Figure 1.
+
+# 2.2.1 Stage I
+
+We choose the state-of-the-art Transformer-based encoder-decoder model BART (Lewis et al., 2020) as the backbone, pre-training it on $\mathcal{D}_d$ with the dialogue response generation task:
+
+$$
+\mathcal{L}_d\left(\theta_d\right) = -\sum_{(U, Y) \in \mathcal{D}_d} \sum_t \log p\left(y_t \mid y_{<t}, U\right) \tag{2}
+$$
+
+Besides, inspired by Gururangan et al. (2020), we conduct domain-adaptive pre-training on unlabeled knowledge documents to improve the knowledge representation ability. Specifically, $15\%$ of the tokens in a text $K$ are replaced with $\langle \text{mask} \rangle$ or noise words, and another Transformer tries to rebuild the original text:
+
+$$
+\mathcal{L}_k\left(\theta_k^+\right) = -\sum_{K \in \mathcal{D}_k} \sum_t \log p\left(k_t \mid k_{<t}, \hat{K}\right) \tag{3}
+$$
+
+where $\hat{K}$ is the corrupted $K$. We disentangle the encoder and the cross-attention block in each decoder layer from this Transformer $(\theta_{k}^{+})$ and initialize $\theta_{k}$ with them.
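
A hedged sketch of the corruption step that produces $\hat{K}$ (the exact masking scheme, e.g. token-level vs. span-level masking, is our assumption here):

```python
import random

def corrupt(tokens, noise_vocab, mask_token="<mask>", rate=0.15, seed=0):
    """Replace ~15% of tokens with <mask> or a random noise word,
    producing the corrupted text K-hat for denoising pre-training."""
    rng = random.Random(seed)
    corrupted = []
    for tok in tokens:
        if rng.random() < rate:
            # Half of the selected tokens become <mask>, half noise words.
            corrupted.append(mask_token if rng.random() < 0.5
                             else rng.choice(noise_vocab))
        else:
            corrupted.append(tok)
    return corrupted
```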
+
+# Algorithm 1 Construction of $\mathcal{D}_p$
+
+Input: Ungrounded dialogues $\mathcal{D}_d$ , documents $\mathcal{D}_k$ , threshold $\gamma$ and number of negative samples $o$ ;
+
+# Output: $\mathcal{D}_p$
+
+1: Initialize $\mathcal{D}_p = \emptyset$ ;
+2: for $(U,Y)$ in $\mathcal{D}_d$ do
+3: $K, \text{score} = \mathcal{I}(Y, \mathcal{D}_k)$ ;
+4: if score $> \gamma$ then
+5: $\mathcal{K} = \{K\}$ ;
+6: for $i$ in $\{1,\dots,o\}$ do
+7: Sample $K^{\prime}$ from $\mathcal{D}_k - \mathcal{K}$ randomly;
+8: $\mathcal{K} \gets \mathcal{K} \cup \{K'\}$ ;
+9: end for
+10: $\mathcal{D}_p \gets \mathcal{D}_p \cup \{(U,\mathcal{K},Y)\}$ ;
+11: end if
+12: end for
+13: return $\mathcal{D}_p$
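
Assuming a generic `retrieve(query, documents)` function that returns a (document, score) pair (standing in for $\mathcal{I}$), Algorithm 1 can be sketched as:

```python
import random

def build_pseudo_grounded(dialogues, documents, retrieve, gamma, o, seed=0):
    """Algorithm 1: attach one retrieved pseudo-knowledge document and
    o random negative documents to each qualifying (context, response) pair."""
    rng = random.Random(seed)
    d_p = []
    for context, response in dialogues:
        doc, score = retrieve(response, documents)   # K, score = I(Y, D_k)
        if score > gamma:                            # drop low-quality pairs
            knowledge = [doc]
            negatives = [k for k in documents if k != doc]
            # Random distractors mimic the noisy knowledge pools of the
            # target datasets, where most candidate documents are irrelevant.
            knowledge += rng.sample(negatives, min(o, len(negatives)))
            d_p.append((context, knowledge, response))
    return d_p
```

The retriever is pluggable; any lexical scorer (TF-IDF, BM25) fits the `retrieve` signature above.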
+
+# 2.2.2 Stage II
+
+In stage I, $\theta_{d}$ and $\theta_{k}$ are trained separately, so the connection between knowledge and dialogue has not yet been established. If KAT were fine-tuned directly on the low-resource dataset $\mathcal{D}_l$, this could cause inconsistency problems, so we add a warm-up process.
+
+Intuitively, responses from humans carry clues to the relevance of the knowledge candidates (Zhao et al., 2020b), so a knowledge document that promotes the flow of dialogue usually has high textual similarity with the response. Based on this assumption, we construct a set of pseudo-knowledge for some dialogues in $\mathcal{D}_d$ to form a new weak-supervision dataset $\mathcal{D}_p$ according to Algorithm 1.
+
+$\mathcal{I}(\text{query}, \text{documents})$ retrieves the document with the highest similarity to the query (e.g., by TF-IDF or BM25) together with its score. Context-response pairs of low quality are removed. In knowledge-grounded dialogue corpora, only a few documents in the knowledge pool are valuable, and the others are noise. The negative samples are designed to simulate this situation and make the distribution of knowledge in $\mathcal{D}_p$ closer to that of the target dataset.
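
For illustration, a minimal BM25 scorer over pre-tokenized documents (a sketch only; our experiments use the Apache Lucene implementation, see Sec. 3.1):

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.2, b=0.75):
    """Score each tokenized document in `docs` against the tokenized
    `query` with the classic BM25 formula."""
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n
    df = Counter()
    for d in docs:
        df.update(set(d))                      # document frequency per term
    scores = []
    for d in docs:
        tf = Counter(d)
        score = 0.0
        for term in set(query):
            if term not in tf:
                continue
            idf = math.log(1 + (n - df[term] + 0.5) / (df[term] + 0.5))
            norm = tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(d) / avgdl))
            score += idf * norm
        scores.append(score)
    return scores
```

The argmax of these scores and its value correspond to the $(K, \text{score})$ pair returned by $\mathcal{I}$ in Algorithm 1.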
+
+We perform weakly supervised learning on $\mathcal{D}_p$ to warm up KAT:
+
+$$
+\mathcal{L}\left(\theta_d, \theta_k, \theta_a\right) = -\sum_{(U, \mathcal{K}, Y) \in \mathcal{D}_p} \log p(Y \mid \mathcal{K}, U) \tag{4}
+$$
+
+# 2.2.3 Stage III
+
+After warming up on $\mathcal{D}_p$ , KAT will be fine-tuned on the target low-resource dataset:
+
+$$
+\mathcal{L}\left(\theta_d, \theta_k, \theta_a\right) = -\sum_{(U, \mathcal{K}, Y) \in \mathcal{D}_l} \log p(Y \mid \mathcal{K}, U) \tag{5}
+$$
+
+If not fine-tuned, KAT can also be directly applied to zero-resource response generation.
+
+# 3 Experiments
+
+# 3.1 Datasets and Evaluation Methods
+
+We conduct extensive experiments on two public English knowledge-grounded datasets: Wizard-of-Wikipedia (Dinan et al., 2019) and CMU_DOG (Zhou et al., 2018). Wizard-of-Wikipedia is a chit-chat dataset between two agents whose roles are not symmetric: one plays a knowledgeable expert (which we refer to as the wizard) while the other is a curious learner (the apprentice). Each wizard turn is associated with $\sim 60$ sentences retrieved from Wikipedia, each containing $\sim 30$ words, and most of them are noise. The test set is split into two subsets, test seen and test unseen; the former contains topics that overlap with the training set while the latter does not. CMU_DOG also contains conversations between two workers who know the background documents and try to discuss their content in depth. Different from Wizard-of-Wikipedia, which spans multiple topics, CMU_DOG mainly focuses on film reviews.
+
+The Reddit Conversation Corpus (RedditCC) is a large-scale open-domain dialogue corpus cleaned by Dziri et al. (2018), which consists of $\sim 15\mathrm{M}$ samples for training and $\sim 0.8\mathrm{M}$ samples for validation. Following Zhao et al. (2020a) and Li et al. (2020), we merge the training and validation data of RedditCC as $\mathcal{D}_d$. Besides, we split $\sim 0.5\mathrm{M}$ Wikipedia articles provided by ParlAI (Miller et al., 2017) into $\sim 6.6\mathrm{M}$ sentences as $\mathcal{D}_k$. The information retrieval function $\mathcal{I}$ mentioned in Sec. 2.2.2 is implemented with Apache Lucene and the BM25 algorithm, and the size of $\mathcal{D}_p$ is $\sim 0.1\mathrm{M}$. $\gamma$ and $o$ are set to 16.4 and 39, respectively.
+
+Following the common practice in evaluating open-domain dialogue generation, we choose perplexity (PPL), corpus-level BLEU (Papineni et al., 2002), sentence-level ROUGE (Lin, 2004), and corpus-level DISTINCT (Li et al., 2016) as metrics. A response with higher BLEU and ROUGE is closer to the ground truth, and a response with higher DISTINCT has a larger vocabulary and thus expresses more information. BLEU is computed with the NLTK library (Bird, 2006) and ROUGE is calculated with the code published by Kim et al. (2020).
+
+| Models | PPL | BLEU-1 | BLEU-2 | BLEU-3 | BLEU-4 | R-1 | R-2 | DIST-1 | DIST-2 |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| ITDD (Li et al., 2019) | 17.8 | 15.8 | 7.1 | 4.0 | 2.5 | 16.2 | - | - | - |
+| BARTcat | 19.7 | 23.1 | 11.4 | 6.7 | 4.3 | 19.3 | 5.1 | 7.1 | 29.9 |
+| BARTskt (Kim et al., 2020) | 20.3 | 23.2 | 11.9 | 7.6 | 4.4 | 19.4 | 5.4 | 6.8 | 30.3 |
+| DRD (Zhao et al., 2020a) | 23.0 | 21.8 | 11.5 | 7.5 | 5.5 | 18.0 | - | - | - |
+| ZRKGC $\dagger$ (Li et al., 2020) | 40.4 | 22.2 | 7.3 | 2.8 | 1.8 | 18.6 | 2.4 | 5.4 | 22.5 |
+| KAT (Full Data) | 14.5 | 25.5 | 13.9 | 9.0 | 6.6 | 21.6 | 7.5 | 9.3 | 37.0 |
+| KAT-TSLF (Full Data) | 14.4 | 25.5 | 13.9 | 9.1 | 6.7 | 21.7 | 7.6 | 9.5 | 38.3 |
+| KAT-TSLF (1/4 Data) | 17.6 | 23.3 | 12.2 | 7.7 | 5.5 | 20.3 | 6.8 | 9.9 | 39.1 |
+| KAT-TSLF (1/8 Data) | 18.8 | 22.5 | 11.5 | 7.1 | 4.9 | 19.8 | 6.3 | 9.9 | 39.5 |
+| KAT-TSLF (Zero Data) $\dagger$ | 100+ | 19.5 | 8.1 | 4.0 | 2.2 | 14.7 | 3.0 | 7.5 | 33.9 |
+
+Table 1: Evaluation results on Wizard test seen. $\dagger$ marks the zero-resource setting. The results of ITDD and DRD are copied from Zhao et al. (2020a), and DRD is under full data. KAT-TSLF (1/4 Data) outperforms BART $_{cat}$ and BART $_{skt}$ significantly on all metrics except BLEU-1 (t-test with $p$ -value $< 0.01$; the same holds for the table below).
+
+| Models | PPL | BLEU-1 | BLEU-2 | BLEU-3 | BLEU-4 | ROUGE-1 | ROUGE-2 | DIST-1 | DIST-2 |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| ITDD | 44.8 | 13.4 | 4.7 | 2.1 | 1.1 | 11.4 | - | - | - |
+| BARTcat | 24.5 | 23.2 | 11.0 | 6.3 | 4.1 | 18.9 | 4.5 | 5.3 | 22.2 |
+| BARTskt | 22.3 | 23.4 | 10.9 | 6.8 | 4.6 | 19.0 | 4.7 | 5.2 | 24.5 |
+| DRD | 25.6 | 20.7 | 10.1 | 6.2 | 4.3 | 16.5 | - | - | - |
+| ZRKGC $\dagger$ | 41.5 | 21.8 | 7.1 | 2.7 | 1.1 | 18.5 | 2.4 | 3.4 | 15.6 |
+| KAT (Full Data) | 15.8 | 24.4 | 12.5 | 7.8 | 6.6 | 20.5 | 6.4 | 10.1 | 39.1 |
+| KAT-TSLF (Full Data) | 15.8 | 24.1 | 12.9 | 8.3 | 6.0 | 20.7 | 7.2 | 6.7 | 26.0 |
+| KAT-TSLF (1/4 Data) | 18.4 | 23.1 | 11.9 | 7.5 | 5.2 | 19.9 | 6.4 | 6.6 | 25.1 |
+| KAT-TSLF (1/8 Data) | 20.1 | 22.3 | 11.3 | 7.0 | 4.8 | 19.0 | 5.9 | 6.6 | 25.3 |
+| KAT-TSLF (Zero Data) $\dagger$ | 100+ | 19.6 | 8.6 | 4.7 | 2.7 | 14.9 | 3.0 | 5.7 | 26.4 |
+
+Table 2: Evaluation results on Wizard-of-Wikipedia test unseen.
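
As a concrete illustration of the diversity metric (our own sketch, following the definition of Li et al., 2016), corpus-level DISTINCT-$n$ is the ratio of unique $n$-grams to total $n$-grams over all generated responses:

```python
def distinct_n(responses, n):
    """Corpus-level DISTINCT-n: unique n-grams / total n-grams."""
    ngrams = []
    for response in responses:
        tokens = response.split()
        ngrams += [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return len(set(ngrams)) / max(len(ngrams), 1)
```

For example, the two responses "a b a b" and "a b" contain six unigram tokens but only two distinct unigrams, so DISTINCT-1 is 1/3.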
+
+Besides the quantitative evaluation, we also recruit three human annotators to qualitatively analyze response quality. For each dataset, we randomly sample 100 examples, each containing the conversation history, the response, and the external knowledge set (for Wizard-of-Wikipedia, we provide only the ground-truth knowledge). The annotators judge the quality of the responses from three aspects, namely context coherence, language fluency, and knowledge relevance, and assign a score in $\{0,1,2\}$ to each response for each aspect. Each response thus receives three scores per aspect, and the agreement among the annotators is measured via Fleiss' kappa (Fleiss, 1971).
+
+# 3.2 Baselines
+
+We compare our approach with the following baselines: (1) ITDD: a Transformer-based architecture that incrementally represents multi-turn dialogues and knowledge, and conducts response decoding in two passes (Li et al., 2019); (2) BART $_{cat}$ : a simple BART-based model that takes the concatenation of the dialogue context and all knowledge as the input of BART for response generation. BART sets a constraint on the maximum number of tokens it can handle, and we directly truncate text that exceeds the length limit; (3) BART $_{skt}$ : SKT is a variational model that introduces BERT on the basis of Lian et al. (2019) and considers the knowledge selection history in multi-turn dialogue (Kim et al., 2020). We feed the knowledge candidate selected by SKT to BART for response generation. It should be noted that training SKT requires human labels indicating the ground-truth knowledge, which are crucial to the performance of the model. For a fair comparison, we use $\mathcal{I}$ to reselect the knowledge label; (4) DRD: another low-resource dialogue model, which devises a disentangled response decoder with a copy mechanism (See et al., 2017) and uses a two-stage framework to learn it (Zhao et al., 2020a). DRD is not open source, so we cannot make a very detailed comparison with it; (5) ZRKGC: a double latent variable model that achieves the state-of-the-art performance in zero-resource knowledge-grounded dialogue generation (Li et al., 2020). ZRKGC is based on UNILM (Dong et al., 2019) with 110M parameters, whose performance is close to BART, so we do not replace the backbone of ZRKGC.
+
+| Models | PPL | BLEU-1 | BLEU-2 | BLEU-3 | BLEU-4 | ROUGE-1 | ROUGE-2 | DIST-1 | DIST-2 |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| ITDD | 26.0 | 9.5 | 3.6 | 1.7 | 0.9 | 10.4 | - | - | - |
+| BARTcat | 36.4 | 17.0 | 8.6 | 5.3 | 3.4 | 13.6 | 3.1 | 1.5 | 7.3 |
+| BARTskt | 40.1 | 16.2 | 8.3 | 5.1 | 3.1 | 12.7 | 2.6 | 1.2 | 7.3 |
+| DRD | 54.4 | 15.0 | 5.7 | 2.5 | 1.2 | 10.7 | - | - | - |
+| ZRKGC $\dagger$ | 53.5 | 15.1 | 4.2 | 1.2 | 0.4 | 12.5 | 0.7 | 1.2 | 8.1 |
+| KAT (Full Data) | 22.2 | 19.4 | 10.5 | 6.9 | 4.7 | 14.4 | 3.3 | 1.8 | 8.9 |
+| KAT-TSLF (Full Data) | 21.7 | 20.4 | 10.6 | 6.7 | 4.4 | 15.1 | 3.7 | 2.0 | 11.1 |
+| KAT-TSLF (1/8 Data) | 25.7 | 19.1 | 10.1 | 6.5 | 4.4 | 13.9 | 3.2 | 1.9 | 10.5 |
+| KAT-TSLF (1/16 Data) | 28.1 | 18.5 | 9.8 | 6.3 | 4.2 | 13.4 | 2.9 | 1.8 | 9.9 |
+| KAT-TSLF (Zero Data) $\dagger$ | 100+ | 12.8 | 4.7 | 2.4 | 1.4 | 7.9 | 1.0 | 2.6 | 15.7 |
+
+Table 3: Evaluation results on CMU_DoG. KAT-TSLF (1/16 Data) outperforms BARTcat and BARTskt significantly on all metrics except ROUGE-1 and ROUGE-2 (t-test with $p$ -value $< 0.01$).
+
+Figure 3: Comparison with DRD in the low-resource setting. DRD does not provide results when the training data is less than 1/16 (1/8 on CMU_DoG). To save space, we merge the Wizard seen and unseen results into one subfigure.
+
+# 3.3 Implementation Details
+
+The knowledge pool of the target dataset is usually very large (e.g., $\sim 60$ sentences in Wizard), so to reduce the time overhead, following Kim et al. (2020), we keep only the first 40 sentences. We use the base version of BART with 139M parameters, and the number of parameters of KAT is 196M. The batch sizes in stages I, II, and III are 2048, 128, and 16, respectively. The maximum sequence lengths of the source and target are 256 and 64, respectively. All models are optimized with AdamW (Loshchilov and Hutter, 2017) with a learning rate of $5\mathrm{e}{-5}$ for 3 epochs. We employ beam search in response decoding (with the number of beams from 1 to 3), implemented with the library of Wolf et al. (2020).
+
+# 3.4 Evaluation Results
+
+Tables 1, 2, and 3 report the evaluation results on automatic metrics, and we make the following observations: (1) In the full-data scenario, KAT achieves state-of-the-art performance without using any additional corpora, which means that KAT itself is an excellent dialogue model. Besides, additional resources are unnecessary when training data is abundant, so TSLF has little effect in this setting; (2) KAT-TSLF achieves performance comparable to $\mathrm{BART}_{\mathrm{cat}/\mathrm{skt}}$ even though these baselines leverage all of the training data, while our model is learned with only 1/4 of the training data on Wizard (1/16 on CMU_DOG). We compare the low-resource performance with DRD, and the results are shown in Figure 3. For a fair comparison, we removed the pre-trained language model and reduced the number of model parameters. We can see that KAT-TSLF outperforms DRD (especially on CMU_DOG). The comparison with $\mathrm{BART}_{\mathrm{cat}}$ is supplemented in Figure 4; (3) Although our TSLF mainly targets low-resource scenarios, under the zero-resource setting (i.e., without stage III) the performance of KAT-TSLF also surpasses ZRKGC on most evaluation metrics; (4) Responses generated by KAT have higher DIST- $n$, which means that KAT can better gather information from multiple pieces of knowledge and generate more diverse texts.
+
+Table 4 reports the human evaluation results. We observe that responses from our KAT-TSLF are more fluent and more contextually coherent than those from $\mathrm{BART}_{\mathrm{skt}}$ and ZRKGC. Compared with our low-resource model, SKT has stronger knowledge relevance in the full-data case, thanks to its well-designed knowledge selection module.
+
+| Models | Seen CC | Seen LF | Seen KR | Seen Kappa | Unseen CC | Unseen LF | Unseen KR | Unseen Kappa | CMU CC | CMU LF | CMU Kappa |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| BARTskt | 1.78 | 1.80 | 1.34 | 0.61 | 1.72 | 1.74 | 1.36 | 0.64 | 1.70 | 1.72 | 0.65 |
+| ZRKGC $\dagger$ | 1.72 | 1.75 | 1.12 | 0.63 | 1.69 | 1.70 | 1.16 | 0.63 | 1.67 | 1.69 | 0.63 |
+| Ours (1/8 Data) | 1.81 | 1.82 | 1.35 | 0.63 | 1.79 | 1.78 | 1.35 | 0.66 | 1.74 | 1.75 | 0.69 |
+| Ours (Zero Data) | 1.76 | 1.78 | 1.14 | 0.64 | 1.70 | 1.72 | 1.24 | 0.64 | 1.69 | 1.71 | 0.66 |
+
+Table 4: Human evaluation results on Wizard-of-Wikipedia ("Seen" and "Unseen" columns) and CMU_DOG. CC, LF, and KR mark context coherence, language fluency, and knowledge relevance, respectively. In the zero-resource setting, our KAT-TSLF outperforms ZRKGC. Besides, our model surpasses BARTskt (full data) on most metrics with only 1/8 of the training data.
+
+# 3.5 Ablation Study
+
+We conduct ablation experiments on Wizard and CMU_DOG, and the results are shown in Figure 4.
+
+To verify the effect of TSLF, we first removed stage I, stage II, and stages I+II, respectively. Inserting a new module into an already well-trained large-scale pre-trained language model causes inconsistency problems that require a lot of data to reconcile, so after removing stage II or stages I+II, the low-resource performance of our KAT dropped sharply. Although the quality of the automatically constructed warm-up dataset $\mathcal{D}_p$ is lower than that of the target dataset $\mathcal{D}_l$, it still helps to establish the connection between the knowledge representation component and the dialogue component. Besides, when we skip pre-training $\theta_k$ on unlabeled documents, the results drop slightly, which demonstrates that it is still helpful to tailor a pretrained model to the domain of the target task. In addition, replacing negative sampling with top- $k$ retrieval increases the inconsistency with the knowledge distribution of the target dataset, leading to performance degradation. Moreover, the controller also affects the generalization of the model: it helps KAT quickly adapt to new domains by adjusting the proportions of knowledge and context in the response. To improve generalization with limited training data, some works (Chen and Shuai, 2021; Zhao et al., 2020a) fix most of the parameters during fine-tuning. We also tried freezing the knowledge encoder and the context encoder in stage III or stages II+III, and the results show that performance does not improve, indicating that with the help of stage II, our model hardly falls into overfitting.
+
+To verify the effect of our TSLF on other models, we combine $\mathrm{BART}_{\mathrm{cat}}$ with TSLF. Since the parameters of BART are tightly coupled, we can only apply stage II to it. Experimental results show that its performance is improved significantly under the low-resource setting.
+
+# 3.6 Discussions
+
+Case Study Table 5 shows a case from Wizard, from which we can see that the response from our model with zero data not only smoothly captures the ground-truth knowledge (highlighted in blue), but also expands the topic with proper pieces of other knowledge (highlighted in yellow). ZRKGC generated sentences that are inconsistent with the facts. Although $\mathrm{BART}_{\mathrm{skt}}$ chose the correct knowledge, its narrative is too straightforward and exhibits repetition. We show more cases in the supplementary material.
+
+Comparison with DRD If we ignore the details, DRD is actually a special case of our method that skips stage II. During pre-training, DRD completely separates the dialogue-related components from the knowledge-representation-related components, which makes it difficult to effectively promote the integration of dialogue and knowledge with only a small number of samples during fine-tuning. So when the training data is extremely small, DRD can hardly work. Besides, to prevent overfitting, DRD has to limit the number of parameters of the knowledge integration component and fix the other parameters when fine-tuning, which limits the performance of the model. In addition, its complex model structure makes it difficult for DRD to use pre-trained language models.
+
+KAT v.s. BART $_{cat}$ BART (like most other pre-trained language models) has a limit on the maximum number of input tokens, so useful knowledge is likely to be truncated. For example, there are about 60 external documents per sample in Wizard, and about 40 of them will be truncated. In theory, KAT can accept an unlimited number of knowledge documents, and this should be one of the reasons why KAT's performance is better than that of BART $_{cat}$ .
+
+
+Figure 4: Ablation experiments on Wizard-of-Wikipedia and CMU_DoG. The number of beams is set to 1 for all models. It is recommended to view the figure after zooming in; the closer a curve is to the upper right, the better the result.
+
+
+
+
+
+
+
+When we reduce the maximum number of knowledge documents that KAT can handle (a hyperparameter) to 15, its performance is close to that of $\mathrm{BART}_{\mathrm{cat}}$ .
+
+| | |
+| --- | --- |
+| Dial. Hist. | A: Yea it was a great movie. The Last of the Mohicans was released in 1992. B: I didn't realize it's been out that long! What is it about? |
+| GT Kno. | The Last of the Mohicans is a 1992 American epic historical drama, set in 1757 during the French and Indian War. |
+| Ref. | Well The Last of the Mohicans is an epic historical drama. It was set in 1757 during the Indian and French war. |
+| BARTskt | It's about the French and Indian War. It's about the French and Indian War. |
+| ZRKGC | It's a classic movie. The Last of My Mohicans was released in 2016, and is still out on Netflix. |
+| Ours (Zero Data) | It's a series of short stories set in 1757 during the French and Indian War in the Adirondack mountains of Virginia. |
+| Ours (1/16 Data) | It is about a group of people who fight to keep their independence from the French and Indian War. |
+
+Table 5: A case from the test seen split of Wizard-of-Wikipedia. This dialogue contains a total of 40 external knowledge sentences, one of which is marked as ground-truth (GT).
+
+# 4 Related Work
+
+Open-domain end-to-end dialogue response generation is inspired by the success of applying neural sequence-to-sequence models to machine translation (Sutskever et al., 2014; Bahdanau et al., 2015; Vaswani et al., 2017). Recently, in order to generate fluent, coherent, and informative responses, many approaches have been proposed that introduce external background documents (Ghazvininejad et al., 2018; Yavuz et al., 2019; Li et al., 2019; Lin et al., 2020). Besides documents (Dinan et al., 2019; Zhou et al., 2018), there are many other forms of knowledge, such as images (Huber et al., 2018) and triples in knowledge graphs (Wu et al., 2019; Tuan et al., 2019).
+
+Dinan et al. (2019) propose to divide knowledge-grounded dialogue into two steps: knowledge selection and dialogue generation. PostKS (Lian et al., 2019), SKT (Kim et al., 2020), PIPM (Chen et al., 2020) and SKT-KG (Zhan et al., 2021) use the prior and posterior distributions of knowledge to improve the accuracy of knowledge selection. Zhao et al. (2020b) devise a reinforcement learning method to train a knowledge selector without ground-truth knowledge labels. DeepCopy (Yavuz et al., 2019), ITDD (Li et al., 2019) and KIC (Lin et al., 2020) improve the structure of the decoder so that it can better integrate knowledge. Since knowledge-grounded dialogue corpora need to be constructed through crowdsourcing, datasets such as Wizard-of-Wikipedia (Dinan et al., 2019) are relatively small. Zhao et al. (2020a) and Li et al. (2020) propose to conduct knowledge-grounded conversation under the low-resource and zero-resource settings, respectively. We do not compare with Lin et al. (2020) or Zhao et al. (2020b) since they did not release their entire source code.
+
+Our three-stage learning framework is inspired by Zhao et al. (2020a), which uses ungrounded dialogues and unstructured documents to train a knowledge-grounded dialogue model that can work in low-resource situations. In addition, the design of stage II is inspired by the distant supervision technique in the relation extraction task (Mintz et al., 2009). The idea of KAT is also encouraged by the disentangled decoder (Raghu et al., 2019) and recent breakthroughs in Transformer variants (Li et al., 2019; Hashemi et al., 2020; Izacard and Grave, 2020).
+
+# 5 Conclusion
+
+We study knowledge-grounded dialogue generation under a low-resource setting by proposing a three-stage learning framework and a knowledge-aware Transformer. Evaluation results on two benchmarks indicate that our model achieves state-of-the-art performance with less training data. Besides, KAT-TSLF exhibits good generalization ability in the zero-resource scenario.
+
+# Acknowledgments
+
+This work is supported by the National Natural Science Foundation of China (No.U1708261 and No. 61572120), Shenyang Medical Imaging Processing Engineering Technology Research Center (17-134-8-00), the Fundamental Research Funds for the Central Universities (No. N181602013 and No.N2016006), Ten Thousand Talent Program (No.ZX20200035), and Liaoning Distinguished Professor (No.XLYC1902057).
+
+# Broader Impact
+
+Incorporating knowledge into dialogue systems has been a pursuit of researchers in this field for many years. Such systems will certainly make AI dialogue more natural, and the technology will be all the more welcome when it does not require a large amount of manually annotated data. More importantly, knowledge-grounded dialogue systems can fundamentally change the experience of human-machine dialogue, because the system can evolve as its external knowledge base is updated. One day it may be possible for people to obtain effective information through simple conversations. However, every coin has two sides. In addition to the well-known problems caused by large pre-training datasets for end-to-end dialogue models, special knowledge bases, which may be deliberately tailored, can also be used to bias the generated dialogues, just as search engines inadvertently spread biased content created by someone. To prevent this technology from being abused, we look forward to more research effort on detecting fake/biased/offensive content. At the same time, we recommend that developers choose content carefully when building a knowledge base for a dialogue system. Good external knowledge can adjust the behavior of the dialogue model during response generation and help the model overcome biases hidden in large-scale social media datasets.
+
+# References
+
+Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
+Steven Bird. 2006. NLTK: the natural language toolkit. In ACL 2006, 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference, Sydney, Australia, 17-21 July 2006. The Association for Computer Linguistics.
+Xiuyi Chen, Fandong Meng, Peng Li, Feilong Chen, Shuang Xu, Bo Xu, and Jie Zhou. 2020. Bridging the Gap between Prior and Posterior Knowledge Selection for Knowledge-Grounded Dialogue Generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3426-3437.
+Yi-Syuan Chen and Hong-Han Shuai. 2021. Meta-transfer learning for low-resource abstractive summarization. CoRR, abs/2102.09397.
+Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2019. Wizard of Wikipedia: Knowledge-Powered Conversational Agents. In International Conference on Learning Representations.
+Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understanding and generation. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 13042-13054.
+Nouha Dziri, Ehsan Kamalloo, Kory Wallace Mathewson, and Osmar R. Zaiane. 2018. Augmenting neural response generation with context-aware topical attention. CoRR, abs/1811.01063.
+Joseph Fleiss. 1971. Measuring nominal scale agreement among many raters. Psychological Bulletin, 76:378-382.
+Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, Bill Dolan, Jianfeng Gao, Wen-tau Yih, and Michel Galley. 2018. A Knowledge-Grounded Neural Conversation Model. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 5110-5117.
+
+Suchin Gururangan, Ana Marasovic, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 8342-8360. Association for Computational Linguistics.
+Helia Hashemi, Hamed Zamani, and W. Bruce Croft. 2020. Guided transformer: Leveraging multiple external sources for representation learning in conversational search. In Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval, SIGIR 2020, Virtual Event, China, July 25-30, 2020, pages 1131-1140. ACM.
+Bernd Huber, Daniel J. McDuff, Chris Brockett, Michel Galley, and Bill Dolan. 2018. Emotional dialogue generation using image-grounded language models. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI 2018, Montreal, QC, Canada, April 21-26, 2018, page 277. ACM.
+Gautier Izacard and Edouard Grave. 2020. Leveraging passage retrieval with generative models for open domain question answering. CoRR, abs/2007.01282.
+Byeongchang Kim, Jaewoo Ahn, and Gunhee Kim. 2020. Sequential Latent Knowledge Selection for Knowledge-Grounded Dialogue. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020.
+Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising Sequence-to-Sequence Pretraining for Natural Language Generation, Translation, and Comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880.
+Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In *NAACL HLT* 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego California, USA, June 12-17, 2016, pages 110-119. The Association for Computational Linguistics.
+Linxiao Li, Can Xu, Wei Wu, Yufan Zhao, Xueliang Zhao, and Chongyang Tao. 2020. Zero-Resource Knowledge-Grounded Dialogue Generation. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
+
+Zekang Li, Cheng Niu, Fandong Meng, Yang Feng, Qian Li, and Jie Zhou. 2019. Incremental transformer with deliberation decoder for document grounded conversations. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Papers, pages 12-21. Association for Computational Linguistics.
+Rongzhong Lian, Min Xie, Fan Wang, Jinhua Peng, and Hua Wu. 2019. Learning to Select Knowledge for Response Generation in Dialog Systems. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 10-16, 2019, pages 5081-5087.
+Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81. Association for Computational Linguistics.
+Xiexiong Lin, Weiyu Jian, Jianshan He, Taifeng Wang, and Wei Chu. 2020. Generating Informative Conversational Response using Recurrent Knowledge-Interaction and Knowledge-Copy. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 41-52.
+Ilya Loshchilov and Frank Hutter. 2017. Fixing weight decay regularization in adam. CoRR, abs/1711.05101.
+Alexander H. Miller, Will Feng, Dhruv Batra, Antoine Bordes, Adam Fisch, Jiasen Lu, Devi Parikh, and Jason Weston. 2017. Parlai: A dialog research software platform. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017 - System Demonstrations, pages 79-84. Association for Computational Linguistics.
+Mike Mintz, Steven Bills, Rion Snow, and Daniel Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In ACL 2009, Proceedings of the 47th Annual Meeting of the Association for Computational Linguistics and the 4th International Joint Conference on Natural Language Processing of the AFNLP, 2-7 August 2009, Singapore, pages 1003-1011. The Association for Computer Linguistics.
+Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, July 6-12, 2002, Philadelphia, PA, USA, pages 311-318. ACL.
+Dinesh Raghu, Nikhil Gupta, and Mausam. 2019. Disentangling language and knowledge in task-oriented dialogs. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 1239-1255. Association for Computational Linguistics.
+Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get To The Point: Summarization with Pointer-Generator Networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073-1083.
+Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 3104-3112.
+Yi-Lin Tuan, Yun-Nung Chen, and Hung-yi Lee. 2019. Dykgchat: Benchmarking dialogue generation grounding on dynamic knowledge graphs. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 1855-1865. Association for Computational Linguistics.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pages 5998-6008.
+Oriol Vinyals and Quoc V. Le. 2015. A neural conversational model. CoRR, abs/1506.05869.
+Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, EMNLP 2020 - Demos, Online, November 16-20, 2020, pages 38-45.
+Wenquan Wu, Zhen Guo, Xiangyang Zhou, Hua Wu, Xiyuan Zhang, Rongzhong Lian, and Haifeng Wang. 2019. Proactive human-machine conversation with explicit conversation goal. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 3794-3804. Association for Computational Linguistics.
+Semih Yavuz, Abhinav Rastogi, Guan-Lin Chao, and Dilek Hakkani-Tur. 2019. DeepCopy: Grounded Response Generation with Hierarchical Pointer Networks. In Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue, SIGdial 2019, Stockholm, Sweden, September 11-13, 2019, pages 122-132.
+Haolan Zhan, Hainan Zhang, Hongshen Chen, Zhuoye Ding, Yongjun Bao, and Yanyan Lan. 2021. Augmenting Knowledge-grounded Conversations with Sequential Knowledge Transition. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5621-5630, Online. Association for Computational Linguistics.
+Xueliang Zhao, Wei Wu, Chongyang Tao, Can Xu, Dongyan Zhao, and Rui Yan. 2020a. Low-Resource Knowledge-Grounded Dialogue Generation. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
+Xueliang Zhao, Wei Wu, Can Xu, Chongyang Tao, Dongyan Zhao, and Rui Yan. 2020b. Knowledge-Grounded Dialogue Generation with Pre-trained Language Models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3377-3390. Association for Computational Linguistics.
+Kangyan Zhou, Shrimai Prabhumoye, and Alan W. Black. 2018. A dataset for document grounded conversations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 708-713. Association for Computational Linguistics.
\ No newline at end of file
diff --git a/athreestagelearningframeworkforlowresourceknowledgegroundeddialoguegeneration/images.zip b/athreestagelearningframeworkforlowresourceknowledgegroundeddialoguegeneration/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..79322535467043c4b36ceb306fba2dc6d735221f
--- /dev/null
+++ b/athreestagelearningframeworkforlowresourceknowledgegroundeddialoguegeneration/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a7f35f52ea8517212685f0ffec29051d59ad222665320538b58e8743ff5d82bc
+size 523246
diff --git a/athreestagelearningframeworkforlowresourceknowledgegroundeddialoguegeneration/layout.json b/athreestagelearningframeworkforlowresourceknowledgegroundeddialoguegeneration/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..d93561b9fc80b0250b6ea2639c84652f89d45f89
--- /dev/null
+++ b/athreestagelearningframeworkforlowresourceknowledgegroundeddialoguegeneration/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:24a813b7bac490ebfc2ec924b10ccfe62dec3219a3c553069e832e32faef8f05
+size 392069
diff --git a/aunifiedencodingofstructuresintransitionsystems/c87f83c7-d9eb-4ecf-94f1-50ca36b95cbf_content_list.json b/aunifiedencodingofstructuresintransitionsystems/c87f83c7-d9eb-4ecf-94f1-50ca36b95cbf_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..490d511550888b24f4170cf6338b20d51b99b9ab
--- /dev/null
+++ b/aunifiedencodingofstructuresintransitionsystems/c87f83c7-d9eb-4ecf-94f1-50ca36b95cbf_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1777b2c092944eea04718e4ef1d7f5360d83293368ce8433f2e3a1f864b58907
+size 103568
diff --git a/aunifiedencodingofstructuresintransitionsystems/c87f83c7-d9eb-4ecf-94f1-50ca36b95cbf_model.json b/aunifiedencodingofstructuresintransitionsystems/c87f83c7-d9eb-4ecf-94f1-50ca36b95cbf_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..0cc52246867486abd7309f9bc754526e5853d0f4
--- /dev/null
+++ b/aunifiedencodingofstructuresintransitionsystems/c87f83c7-d9eb-4ecf-94f1-50ca36b95cbf_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:887ced315ae619891e3683fbb0c6c859bde056244af89b5334ba5c585d924616
+size 119475
diff --git a/aunifiedencodingofstructuresintransitionsystems/c87f83c7-d9eb-4ecf-94f1-50ca36b95cbf_origin.pdf b/aunifiedencodingofstructuresintransitionsystems/c87f83c7-d9eb-4ecf-94f1-50ca36b95cbf_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..4f97270753debfa34257a3503282ea171df7b2cc
--- /dev/null
+++ b/aunifiedencodingofstructuresintransitionsystems/c87f83c7-d9eb-4ecf-94f1-50ca36b95cbf_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f4fbcdbb2fb7523c8b305920e470c3936d2365ee0931a70c0cc50b12262d8f2f
+size 575805
diff --git a/aunifiedencodingofstructuresintransitionsystems/full.md b/aunifiedencodingofstructuresintransitionsystems/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..5bc9d97b8aff3f8d1059a86f0874b42b9216f787
--- /dev/null
+++ b/aunifiedencodingofstructuresintransitionsystems/full.md
@@ -0,0 +1,482 @@
+# A Unified Encoding of Structures in Transition Systems
+
+# Tao Ji*, Yong Jiang, Tao Wang, Zhongqiang Huang, Fei Huang, Yuanbin Wu, Xiaoling Wang
+
+School of Computer Science and Technology,
+
+East China Normal University
+
+DAMO Academy, Alibaba Group
+
+{taoji@stu, ybwu@cs, xlwang@cs}.ecnu.edu.cn
+
+{yongjiang.jy,leeo.wangt,z.huang,f.huang}@alibaba-inc.com
+
+# Abstract
+
+Transition systems usually contain various dynamic structures (e.g., stacks, buffers). An ideal transition-based model should encode these structures completely and efficiently. Previous works relying on templates or neural network structures either encode only partial structure information or suffer from low computational efficiency. In this paper, we propose a novel attention-based encoder that unifies the representation of all structures in a transition system. Specifically, we separate two views of items on structures, namely the structure-invariant view and the structure-dependent view. With the help of a parallel-friendly attention network, we are able to encode transition states with $\mathcal{O}(1)$ additional complexity (with respect to basic feature extractors). Experiments on the PTB and UD show that our proposed method significantly improves test speed, achieves the best results among transition-based models, and is comparable to state-of-the-art methods.
+
+# 1 Introduction
+
+Transition systems have been successfully applied in many fields of NLP, especially parsing (dependency parsing (Nivre, 2008), constituent parsing (Watanabe and Sumita, 2015), and semantic parsing (Yin and Neubig, 2018)). Basically, a transition system takes a series of actions which attach or detach some items (e.g., sentence words, intermediate outputs) to or from some structures (e.g., stacks, buffers, partial trees). Given a set of action series, a classifier is trained to predict the next action given a current configuration of structures in the transition system. The performances of the final system strongly depend on how well the classifier encodes those transition system configurations.
+
+Ideally, a good configuration encoder should encode transition system structures completely and
+
+
+Figure 1: An overview of our transition-based parser.
+
+efficiently. However, challenges appear when we try to have our cake and eat it too. For example, traditional template-based methods (Chen and Manning, 2014) are fast, but encode only partial information of structures (e.g., a few top items on stacks and buffers). Structure-based networks (e.g., StackRNN (Dyer et al., 2015)) rely on carefully designed network architectures to get a full encoding of structures (actually, they still miss some off-structure information; see our discussion in Section 4.2), but they are usually slow (e.g., not easy to batch). Furthermore, different structures are updated in different ways (stacks are last-in-first-out, buffers are first-in-first-out), so it also takes effort to design different encoders and ways of fusing them.
+
+In this work, we aim to provide a unified encoder for different transition system structures. Instead of inspecting each structure individually, to unify the encoding we inspect the items in each structure, which are the ultimate targets of any structure encoder. One key observation is that every item has two views, namely a structure-invariant view, which is unchanged when the item is placed on different structures, and a structure-dependent view, which reflects which part of which structure the item occupies. For example, when a word $w$ (item) is on the buffer (structure), its structure-invariant view could contain its lexical form and part-of-speech tag, while its structure-dependent view indicates that $w$ is now sitting on the buffer and its distance to the buffer head is $p$ . When $w$ is detached from the buffer and attached to the stack, its structure-dependent view will switch to "sitting on the stack" while its structure-invariant view stays unchanged. A unified
+
+structure encoder thus suffices to uniformly encode both views.
+
+For the structure-invariant view, we share it among different structures, so it is automatically unified. For the structure-dependent view, we propose a simple yet powerful encoder. It assigns each structure a set of indicating vectors (structure indicators), where each indicator specifies a certain part of that structure. For example, we use indicator vectors to express "on top of the stack", "the second position of the buffer", and "index of head words in partial trees". To encode an item, we only need to concatenate its structure-invariant encoding with the indicators corresponding to its position in that structure.
+
+Regarding completeness and efficiency, we find that with structure indicators it is relatively easy to encode a structure completely: one only needs to decompose the structure into identifiable subparts. In fact, we can use them to track some parts of structures which are not revealed in previous work (e.g., words that have been popped off the stack). The encoder runs in the same manner as template-based models, so decoding efficiency is guaranteed. We also note that using structure indicators is different from existing ways of including structure information in neural network models (Shaw et al., 2018; Wang et al., 2019; Shiv and Quirk, 2019): it encodes dynamic structures (which change as the transition system runs) rather than static structures (e.g., fixed parse trees).
+
+We can easily implement the unified structure encoding with existing multi-head attention networks (MHA, (Vaswani et al., 2017)). It is also easy to fuse encodings of different structures with multi-layer MHA. We conduct experiments on the English Penn Treebank 3.0 and Universal Dependencies v2.2, showing that the unified structure encoder helps us achieve a state-of-the-art transition-based parser (even competitive with the best graph-based parsers), while retaining fast training and testing speed.
+
+# 2 Transition Systems
+
+We briefly review transition-based dependency parsing. Given a sentence $x = \mathrm{root}_0, w_1, \dots, w_n$ (root $_0$ is a synthetic word) and a relation set $\mathbb{R}$ , we denote a dependency tree for $x$ to be $\{(i,j,r)\}$ , where $(i,j,r)$ represents a dependency relation $r \in \mathbb{R}$ between $w_i$ (head) and $w_j$ (dependent).
+
+A transition system is a sound quadruple $S =$
+
+Gold tree over root $_0$ He $_1$ has $_2$ good $_3$ control $_4$ : (2, 1, nsubj), (4, 3, amod), (2, 4, dobj), (0, 2, root).
+
+| t | a | stack | buffer | new arc |
+| --- | --- | --- | --- | --- |
+| 0 | | [ ] | [1, ..., 4, 0] | ∅ |
+| 1 | sh | [1] | [2, 3, 4, 0] | |
+| 2 | la | [ ] | [2, 3, 4, 0] | ∪(2, 1, nsubj) |
+| 3 | sh | [2] | [3, 4, 0] | |
+| 4 | sh | [2, 3] | [4, 0] | |
+| 5 | la | [2] | [4, 0] | ∪(4, 3, amod) |
+| 6 | sh | [2, 4] | [0] | |
+| 7 | ra | [2] | [0] | ∪(2, 4, dobj) |
+| 8 | la | [ ] | [0] | ∪(0, 2, root) |
+
+Figure 2: A running example of the Arc-hybrid transition-based parsing. The above gold tree is constructed after performing 8 correct actions. We will use the grey row as an example for the structural indicator.
+
+$(\mathbb{C}, \mathbb{A}, c_x, \mathbb{C}_t)$ , where $\mathbb{C}$ is the set of configurations. $\mathbb{A}$ is the set of actions, $c_x$ is an initialization function mapping $x$ to a unique initial configuration, and $\mathbb{C}_t \subseteq \mathbb{C}$ is a set of terminal configurations. Given a configuration $c \in \mathbb{C}$ , a transition-based parser aims to predict a correct action $a \in \mathbb{A}$ and move to a new configuration. We specifically describe the arc-hybrid system (Kuhlmann et al., 2011). In this system, each configuration $c = (\sigma | i, j | \beta, \mathbb{T})$ aggregates information from three structures, namely a stack $(\sigma, \beta, w_i)$ , a buffer $(\beta, w_j)$ , and a partial tree $(\mathbb{T})$ . The actions of arc-hybrid are formalized as (a running example is shown in Figure 2):
+
+$$
+(\sigma , i | \beta , \mathbb {T}) \vdash (\sigma | i, \beta , \mathbb {T}) \quad (\text {s h})
+$$
+
+$$
+(\sigma | i, j | \beta , \mathbb {T}) \vdash (\sigma , j | \beta , \mathbb {T} \cup (j, i, r)) \quad (\mathrm {l a} _ {r})
+$$
+
+$$
+(\sigma | i | j, \beta , \mathbb {T}) \vdash (\sigma | i, \beta , \mathbb {T} \cup (i, j, r)) \quad (\operatorname {r a} _ {r})
+$$
+
+There are three actions: sh moves the front item of the buffer ( $w_{i}$ ) to the top of the stack. $\mathrm{la}_r$ removes the top item of the stack $w_{i}$ , attaches it as a dependent of $w_{j}$ with label $r$ , and adds a left arc $(j,i,r)$ to the partial tree. $\mathrm{ra}_r$ removes the top of the stack $w_{j}$ , attaches it as a dependent of $w_{i}$ with label $r$ , and adds a right arc $(i,j,r)$ to the partial tree. We note that all actions are actually attaching or detaching items to or from structures, where an item could be a word (in the stack and buffer) or an edge (in the partial tree).
+
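The three transitions above can be sketched as plain list operations. The snippet below is a minimal illustration under assumed names (`Config`, `shift`, `left_arc`, `right_arc`), not the authors' implementation; words are integer indices and arcs are (head, dependent, relation) triples, as in the paper.

```python
from typing import List, Tuple

Arc = Tuple[int, int, str]  # (head, dependent, relation)

class Config:
    """An arc-hybrid configuration (stack, buffer, arcs)."""
    def __init__(self, buffer: List[int]):
        self.stack: List[int] = []
        self.buffer: List[int] = list(buffer)
        self.arcs: List[Arc] = []

def shift(c: Config) -> None:
    # sh: (sigma, i|beta, T) => (sigma|i, beta, T)
    c.stack.append(c.buffer.pop(0))

def left_arc(c: Config, r: str) -> None:
    # la_r: (sigma|i, j|beta, T) => (sigma, j|beta, T + (j, i, r))
    i, j = c.stack.pop(), c.buffer[0]
    c.arcs.append((j, i, r))

def right_arc(c: Config, r: str) -> None:
    # ra_r: (sigma|i|j, beta, T) => (sigma|i, beta, T + (i, j, r))
    j = c.stack.pop()
    i = c.stack[-1]
    c.arcs.append((i, j, r))
```

Replaying the eight actions of Figure 2 on the initial buffer [1, 2, 3, 4, 0] reproduces the gold arcs (2, 1, nsubj), (4, 3, amod), (2, 4, dobj) and (0, 2, root).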
+Note that besides the structures in the configurations, we can also incorporate other structures to help learn the action predictor. For example, we can consider the history action list, which contains all previous actions in sequential order. In this case, an item in the action list is an action label.
+
+# 3 Two Views of an Item
+
+We can see that, while its "content" remains the same, an item may appear in different structures of a transition system's configurations. To uniformly encode an item (and thus the structures containing it), we can decouple the encoding of content and structure, then combine them in a unified way. This simple approach also allows us to design unified structure encoders, which makes the whole transition-system model concise and efficient.
+
+The structure-invariant view typically captures the lexical (shallow) form of an item. For example, in the arc-hybrid system, items in stacks and buffers have words as their structure-invariant view, and items in the action list have actions as theirs. This view is shared when an item moves from one structure to another, and we only need to encode it once (e.g., no matter whether a word appears in the stack or the buffer, its structure-invariant representation is identical). We describe how to encode this view in Section 4.3.
+
+The more interesting problem is how to characterize the structure-dependent view. We would like to have a unified strategy to represent those structures. Our major tool is structure indicators, which are basically a set of vectors bound to each structure. Take the stack as an example: we use vectors to indicate "the top of the stack" (we name this vector $1_{\sigma}$ ) and "the second item from the stack top" (named $2_{\sigma}$ ). Vector $0_{\sigma}$ indicates items that have not yet been on the stack. Different from previous work, we can also represent "the previous stack top that has been popped" (a vector named $-1_{\sigma}$ ) and "the stack top before that" (named $-2_{\sigma}$ ). That is, for different parts of the structure, we employ vectors to indicate them.
+
+Similarly, for the buffer we have another set of structure indicators $\{1_{\beta}, -1_{\beta}, 2_{\beta}, -2_{\beta}, \cdots\}$ , where a positive number indicates a position in the buffer and a negative number indicates the number of time steps since the item was removed from the buffer. For the partial tree with dependency relations, we decompose it into two structures, the tree arcs $(\mathbb{T}_{arc})$ and the dependency relations $(\mathbb{T}_{rel})$ . A set of $\mathbb{T}_{arc}$ indicators $\{0_{\mathbb{T}_{arc}}, 1_{\mathbb{T}_{arc}}, -1_{\mathbb{T}_{arc}}, \cdots\}$ indicates the offset from an item to its head word. A set of $\mathbb{T}_{rel}$ indicators $\{0_{\mathbb{T}_{rel}}, 1_{\mathbb{T}_{rel}}, 2_{\mathbb{T}_{rel}}, \cdots, |\mathbb{R}|_{\mathbb{T}_{rel}}\}$
+
+| | root $_0$ | He $_1$ | has $_2$ | good $_3$ | control $_4$ |
+| --- | --- | --- | --- | --- | --- |
+| $\sigma$ | 0 | -1 | 2 | 1 | 0 |
+| $\beta$ | 2 | -4 | -2 | -1 | 1 |
+| $\mathbb{T}_{arc}$ | 0 | 1 | 0 | 0 | 0 |
+| $\mathbb{T}_{rel}$ | 0 | 1 | 0 | 0 | 0 |
+
+| | sh | la $_{nsubj}$ | sh | sh |
+| --- | --- | --- | --- | --- |
+| $\alpha$ | 4 | 3 | 2 | 1 |
+
+Figure 3: An instance of structure indicators after the 4th step in Figure 2. Grey rows (the word and action rows) indicate structure-invariant parts, which are shared across $\sigma$ , $\beta$ , $\mathbb{T}_{arc}$ and $\mathbb{T}_{rel}$ ; the other rows indicate structure-dependent parts. For simplicity, we express the relation $nsubj$ by the vector $1_{\mathbb{T}_{rel}}$ .
+
+indicates the IDs of dependency relations. Vectors $0_{\mathbb{T}_{arc}}$ and $0_{\mathbb{T}_{rel}}$ indicate that the corresponding dependency edge is not yet in the partial tree. For the action list, we have a set of structure indicators $\{1_a, 2_a, \dots\}$ , where each vector indicates a position in the list.
+
+In Figure 3, we show the two-views of an instance from the 4th step in Figure 2. We can observe that the five different structures mentioned above have a unified form now.
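As one concrete illustration, the $\sigma$ row of Figure 3 can be recomputed from the stack contents and the pop history. The function below is a hypothetical sketch (not part of the paper): positive values count positions from the stack top, negative values count how long ago an item was popped, and 0 means the item has not yet been on the stack.

```python
def stack_indicators(n_words, stack, popped):
    """Compute sigma indicator indices for each word.

    stack:  word indices currently on the stack (bottom to top)
    popped: word indices in the order they were popped (earliest first)
    """
    ind = [0] * n_words                       # 0 = not yet on the stack
    for depth, w in enumerate(reversed(stack), start=1):
        ind[w] = depth                        # 1 = top, 2 = second from top, ...
    for age, w in enumerate(reversed(popped), start=1):
        ind[w] = -age                         # -1 = most recently popped, ...
    return ind
```

After the 4th step in Figure 2 the stack is [2, 3] and word 1 has been popped, so `stack_indicators(5, [2, 3], [1])` yields [0, -1, 2, 1, 0], matching the $\sigma$ row of Figure 3.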
+
+# 4 The Unified Structure Encoder
+
+# 4.1 Encoding with USE
+
+When a transition system is running at moment $t$ , the parser needs to capture as much information as possible about the current configuration to determine the correct action. The key is to encode the configuration, which contains many structures, concisely and efficiently. We propose a unified structure encoder (USE) based on multi-head self-attention networks (Vaswani et al., 2017). Each head extracts a feature vector for one structure (e.g., $o_{\sigma}$ for the stack).
+
+A common USE function maps a query and a set of key-value pairs to an output. The query vector $\mathbf{q}$ represents the current time step and data structure. The key-value pairs represent the two views of the structure. The output vector $\mathbf{o}$ is calculated as a weighted sum of the values, where the weight assigned to each value is computed by a scaled $\left(\frac{1}{\sqrt{d_k}}\right)$ dot product of the query with the corresponding key. In practice, we pack the keys and values into matrices $K$ and $V$, then compute the output as:
+
+$$
+\boldsymbol {o} = \operatorname {U S E} (\boldsymbol {q}, K, V) = \operatorname {s o f t m a x} \left(\frac {\boldsymbol {q} \cdot K ^ {\top}}{\sqrt {d _ {k}}}\right) \cdot V \tag {1}
+$$
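Equation 1 is the standard scaled dot-product attention and can be sketched in a few lines of numpy (toy dimensions and random data, not the actual parser code):

```python
import numpy as np

def use_attention(q, K, V):
    """Scaled dot-product attention (Eq. 1): o = softmax(q . K^T / sqrt(d_k)) . V."""
    d_k = K.shape[-1]
    scores = q @ K.T / np.sqrt(d_k)          # similarity of the query to each key
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()
    return weights @ V                       # weighted sum of the value vectors

# Toy configuration: 5 structure items (root + 4 words), d_k = 8.
rng = np.random.default_rng(0)
q = rng.normal(size=8)
K = rng.normal(size=(5, 8))
V = rng.normal(size=(5, 8))
o = use_attention(q, K, V)                   # one feature vector, e.g. o_sigma
```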
+
+It is universal across different structures. Taking the stack $\sigma$ as an example, we calculate the feature vector $\pmb{o}_{\sigma,t}$ by assigning $\pmb{q}_{\sigma,t}$, $K_{\sigma,t}$, and $V_{\sigma,t}$:
+
+$$
+\begin{array}{l} \boldsymbol {q} _ {\sigma , t} = W _ {\sigma} ^ {Q} \cdot (\boldsymbol {m} _ {t} \oplus \boldsymbol {m} _ {\sigma}) \\ K _ {\sigma , t} = W _ {\sigma} ^ {K} \cdot \left(X + S _ {\sigma , t} ^ {K}\right) \tag {2} \\ V _ {\sigma , t} = W _ {\sigma} ^ {V} \cdot (X + S _ {\sigma , t} ^ {V}). \\ \end{array}
+$$
+
+where $m_{t}$ and $m_{\sigma}$ are the marker embeddings of time step $t$ and data structure $\sigma$; $W_{\sigma}^{Q}$, $W_{\sigma}^{K}$, $W_{\sigma}^{V}$ are parameter matrices for linear transformation; $X$ is the word embedding matrix. We describe $X$ and $A$ in detail later. $S_{\sigma,t}^{K}$ and $S_{\sigma,t}^{V}$ are the embedding matrices of the structural indicators. Taking $\sigma$ in Figure 3 as an example, $S_{\sigma,t}^{K} = [0_{\sigma}, -1_{\sigma}, 2_{\sigma}, 1_{\sigma}, 0_{\sigma}]^{K}$ and $S_{\sigma,t}^{V} = [0_{\sigma}, -1_{\sigma}, 2_{\sigma}, 1_{\sigma}, 0_{\sigma}]^{V}$. Following Shaw et al. (2018), we use these two sets of structural embeddings for the key-value pairs and add them to $X$ to combine the information.
+
+When the system comes to the next moment $t + 1$, we use $\pmb{m}_{t + 1}$, $S_{\sigma ,t + 1}^{K}$ and $S_{\sigma ,t + 1}^{V}$ for the updated configuration. For the other four structures, we calculate their feature vectors $\pmb{o}_{\beta ,t}$, $\pmb{o}_{\alpha ,t}$, $\pmb{o}_{\mathbb{T}_{arc},t}$ and $\pmb{o}_{\mathbb{T}_{rel},t}$ by assigning the corresponding $\pmb{q}$, $K$ and $V$, respectively.³
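Building the query, keys, and values of Equation 2 for the stack view of Figure 3 can be sketched as follows (a numpy toy; the marker sizes, the embedding-table layout, and the id offset are illustrative assumptions, not the paper's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8
X = rng.normal(size=(5, d))     # shared word vectors for root0 He1 has2 good3 control4

# Stack indicators after the 4th step (the sigma row of Figure 3).
stack_ids = np.array([0, -1, 2, 1, 0])
offset = 4                                   # shift ids -4..4 into table rows 0..8
S_K = rng.normal(size=(2 * offset + 1, d))   # key-side indicator embeddings
S_V = rng.normal(size=(2 * offset + 1, d))   # value-side indicator embeddings

W_Q = rng.normal(size=(d, d))
W_K = rng.normal(size=(d, d))
W_V = rng.normal(size=(d, d))
m_t = rng.normal(size=d // 2)                # time-step marker embedding
m_sigma = rng.normal(size=d // 2)            # data-structure marker embedding

q_sigma = W_Q @ np.concatenate([m_t, m_sigma])   # q = W^Q (m_t (+) m_sigma)
K_sigma = (X + S_K[stack_ids + offset]) @ W_K.T  # K = W^K (X + S^K)
V_sigma = (X + S_V[stack_ids + offset]) @ W_V.T  # V = W^V (X + S^V)
```

At the next step, only the indicator lookup (`stack_ids`) and the time-step marker change; `X` is reused unchanged.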
+
+# 4.2 Fusion of Structure Encodings
+
+After obtaining the feature vector of each structure, the encoder incorporates all of them into the configuration representation $\pmb{c}_t$. Here, we simply use a multi-layer perceptron (MLP):
+
+$$
+\boldsymbol {c} _ {t} = \mathrm {M L P} \big (\boldsymbol {o} _ {\sigma , t} \oplus \boldsymbol {o} _ {\beta , t} \oplus \boldsymbol {o} _ {\alpha , t} \oplus \boldsymbol {o} _ {\mathbb {T} _ {a r c}, t} \oplus \boldsymbol {o} _ {\mathbb {T} _ {r e l}, t} \big).
+$$
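The fusion step can be sketched as a plain MLP over the concatenated head outputs (a numpy toy with random weights; the hidden size and ReLU activation are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
d = 8
# One output vector per structural head: stack, buffer, action list, arcs, labels.
o_vecs = [rng.normal(size=d) for _ in range(5)]

def mlp(x, W1, b1, W2, b2):
    h = np.maximum(0.0, W1 @ x + b1)  # single ReLU hidden layer
    return W2 @ h + b2

W1, b1 = rng.normal(size=(d, 5 * d)), np.zeros(d)
W2, b2 = rng.normal(size=(d, d)), np.zeros(d)
c_t = mlp(np.concatenate(o_vecs), W1, b1, W2, b2)  # configuration representation
```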
+
+Besides, to enhance the interaction among structures, we stack $L$ USE layers and add the previous layer's configuration vector $\pmb{c}_t^{(l-1)}$ ($1 < l \leq L$) when computing the query vector $\pmb{q}_{*,t}^{(l)}$ ($*$ stands for any structure):
+
+$$
+\boldsymbol {q} _ {*, t} ^ {(l)} = W _ {*} ^ {Q (l)} \cdot \left(\boldsymbol {m} _ {t} ^ {(l)} \oplus \boldsymbol {m} _ {*} ^ {(l)} \oplus \boldsymbol {c} _ {t} ^ {(l - 1)}\right)
+$$
+
+Since $\pmb{c}_t^{(l-1)}$ contains the complete structural information, the $l$-th layer's USE module can interact with other structures and output a more informative representation $\pmb{o}_*^{(l)} = \mathrm{USE}(\pmb{q}_*^{(l)}, K_*^{(l)}, V_*^{(l)})$. Then, we obtain a higher-layer configuration representation by combining these output vectors:
+
+$$
+\boldsymbol {c} _ {t} ^ {(l)} = \mathrm {M L P} (\boldsymbol {o} _ {\sigma , t} ^ {(l)} \oplus \boldsymbol {o} _ {\beta , t} ^ {(l)} \oplus \boldsymbol {o} _ {\alpha , t} ^ {(l)} \oplus \boldsymbol {o} _ {\mathbb {T} _ {a r c}, t} ^ {(l)} \oplus \boldsymbol {o} _ {\mathbb {T} _ {r e l}, t} ^ {(l)}).
+$$
+
+|  | $\sigma$ (in) | $\sigma$ (out) | $\beta$ (in) | $\beta$ (out) | $\alpha$ | $\mathbb{T}$ | GPU |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| Top-$k$ | ▲ | - | ▲ | - | - | ▲ | ◎ |
+| $\sigma$-LSTM | ★ | - | ★ | - | ★ | - | ⊗ |
+| Binary | ▲ | ▲ | ▲ | ▲ | - | - | ◎ |
+| USE | ★ | ★ | ★ | ★ | ★ | ★ | ◎ |
+
+Figure 4: Structural information coverage and GPU-friendliness of different feature extractors. ★ indicates complete extraction, ▲ indicates partial extraction, ◎ indicates GPU-parallel friendly, and ⊗ indicates unfriendly.
+
+We set different layers with different parameters (preliminary experiments suggest that shared parameters perform worse). To support deeper networks, residual connections and layer normalization (Ba et al., 2016) are employed on the MLP and USE modules. Finally, we use the last layer's $\boldsymbol{c}_t^{(L)}$ to classify actions.
+
+Basically, we need at least 5 attention heads to extract the full structures (each head corresponds to one structure). Vaswani et al. (2017) noted that a multi-head attention layer has a constant number $(\mathcal{O}(1))$ of sequentially executed operations, which makes efficient GPU-based computing possible. In training, the USE calculations at different moments are independent of each other, so we can pack them into the batch dimension to obtain $\mathcal{O}(1)$ training complexity. Hence, USE can uniformly and efficiently extract full structure features.
+
+Comparison with Previous Encoders We divide previous work into three encoding methods: top-$k$, stack-LSTM, and binary vector. Top-$k$ methods (Chen and Manning, 2014; Weiss et al., 2015) capture the conjunction of only a few ($1 \sim 3$) in-structure items, extracting only partial structural information. Since the feature template is fixed, they are easy to batchify. Stack-LSTM methods (Dyer et al., 2015; Ballesteros et al., 2016) can efficiently represent all in-structure items via the PUSH($\cdot$) and POP($\cdot$) functions, but they lose the information of the outside parts and of the subtree, which cannot be treated as a stack. Besides, Che et al. (2019) point out that their batch computation is very inefficient. Binary vector methods (Zhang et al., 2017) use two binary vectors to model whether each element is in $\sigma$ or $\beta$. They can efficiently encode some outside parts of the stack and buffer but lose the inside position information.
+
+We compare existing work with our USE encoder in terms of structure feature coverage and GPU-computing friendliness (Figure 4). Overall, USE does not lose any structural information and is more efficient than previous feature extraction schemes.
+
+# 4.3 Encoding the Structure-invariant View
+
+Given a sentence $s = \omega_0, \ldots, \omega_n$ , we learn a lexical vector $\pmb{x}_i$ for each word $\omega_i$ , and pack them into matrix $X$ for Equation 1. The vector $\pmb{x}_i$ is composed of three parts: the word embedding $e(\omega_i)$ , the part-of-speech (POS) tag embedding $e(g_i)$ , and the character-level representation vector $\mathrm{CharCNN}(\omega_i)$ .
+
+$$
+\boldsymbol {x} _ {i} = \boldsymbol {e} (\omega_ {i}) \oplus \boldsymbol {e} (g _ {i}) \oplus \operatorname {C h a r C N N} (\omega_ {i}). \tag {3}
+$$
+
+We simply initialize all embedding matrices randomly. The $\mathrm{CharCNN}(\omega_i)$ vector is obtained by feeding $\omega_{i}$ into a character-level convolutional neural network (Zhang et al., 2015). To encode more sentence context, $e(\omega_{i})$ can be obtained from a trainable bidirectional long short-term memory (BiLSTM), a Transformer encoder, or a pre-trained network like BERT.
+
+Given an action list $\alpha = a_0, \ldots, a_m$, we learn a structure-invariant vector $\pmb{a}_i$ for each action $a_i$. Because the action space is only $2|\mathbb{R}| + 1$ $(< 10^2)$, we directly obtain $\pmb{a}_i = e(a_i)$ by an action embedding.
+
+Since all decoding steps share the same structure-invariant representations, they are computed only once. In the experiments, we discuss all the mentioned encoding schemes.
+
+# 5 The Action Classifier
+
+The action set $\mathbb{A}$ is first divided into three main types, sh, la and ra, and then into $|\mathbb{R}|$ dependency labels for the la and ra actions. Thus we perform a two-stage process with a 3-class and an $|\mathbb{R}|$-class classification. It effectively reduces the classification space compared with a one-stage process. For example, a sh action is chosen among 3 classes in the two-stage process but among $2|\mathbb{R}| + 1$ classes in the one-stage process.
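The reduction in decision-space size can be illustrated with a quick count (the label-set size 40 is only illustrative, not the actual $|\mathbb{R}|$ of any treebank here):

```python
# Decision-space sizes as a function of the label-set size |R|.
def one_stage_space(num_labels):
    return 2 * num_labels + 1        # sh plus la_r and ra_r for every label r

def two_stage_spaces(num_labels):
    return 3, num_labels             # action-type classifier, then label classifier

print(one_stage_space(40))           # 81 candidates in one flat decision
print(two_stage_spaces(40))          # (3, 40): two much smaller decisions
```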
+
+For the action-type classification, based on $\pmb{c}_t^{(L)}$, we follow Kiperwasser and Goldberg (2016) and score the three actions with an MLP,
+
+$$
+\operatorname {S c o r e} \left(t, \left[ \begin{array}{c} \text {s h} \\ \text {l a} \\ \text {r a} \end{array} \right]\right) = \operatorname {M L P} \left(\boldsymbol {c} _ {t} ^ {(L)}\right) \left[ \begin{array}{c} \text {s h} \\ \text {l a} \\ \text {r a} \end{array} \right]. \tag {4}
+$$
+
+To classify the dependency label $r$ between words $i$ and $j$, based on the lexical representations $\boldsymbol{x}_i$, $\boldsymbol{x}_j$ and $\boldsymbol{c}_t^{(L)}$, we follow Dozat and Manning (2017), who use a biaffine score function to predict the label's probability:
+
+$$
+\boldsymbol {z} = \boldsymbol {x} _ {i} ^ {\top} \boldsymbol {W} _ {1} \boldsymbol {x} _ {j} + \left(\boldsymbol {x} _ {i} \oplus \boldsymbol {x} _ {j} \oplus \boldsymbol {c} _ {t} ^ {(L)}\right) ^ {\top} W _ {2} + \boldsymbol {b}
+$$
+
+$$
+P (r | i, j) = \operatorname {S o f t m a x} (z) [ r ]. \tag {5}
+$$
+
+where $W_{1}$ is a 3-dimensional parameter tensor, $W_{2}$ is a parameter matrix, and $\pmb{b}$ is a parameter vector. A slight difference is that we introduce $c_t^{(L)}$ to model the prior probability of each label under the current configuration.
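A toy numpy sketch of the biaffine label scorer above (all dimensions and parameter shapes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
d, dc, R = 8, 6, 4                       # toy sizes: lexical dim, config dim, |R|
x_i, x_j = rng.normal(size=d), rng.normal(size=d)
c_t = rng.normal(size=dc)                # configuration representation c_t^(L)

W1 = rng.normal(size=(R, d, d))          # bilinear term: one d x d slice per label
W2 = rng.normal(size=(2 * d + dc, R))    # linear term over the concatenation
b = np.zeros(R)

# z_r = x_i^T W1_r x_j + (x_i (+) x_j (+) c_t)^T W2 + b
z = np.einsum('i,rij,j->r', x_i, W1, x_j) + np.concatenate([x_i, x_j, c_t]) @ W2 + b
p = np.exp(z - z.max())
p /= p.sum()                             # P(r | i, j) via softmax over the R labels
```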
+
+# 5.1 Training Details
+
+We have two training objectives: the first scores the correct action higher than incorrect actions, and the second maximizes the probability of the correct dependency label. For the correct action $\alpha^{*}$, aiming to maximize the margin between its score and the score of the highest-scoring incorrect action $\hat{\alpha}$, we use the hinge loss:
+
+$$
+\mathcal {L} _ {\alpha} = \frac {1}{2 n} \sum_ {t = 1} ^ {2 n} \max \Big (0, \, 1 - \operatorname {S c o r e} (t, \alpha ^ {*}) + \max _ {\hat {\alpha} \neq \alpha ^ {*}} \operatorname {S c o r e} (t, \hat {\alpha}) \Big).
+$$
+
+For the correct dependency label $r^*$, aiming to maximize its probability, we use the cross-entropy loss:
+
+$$
+\mathcal {L} _ {r} = \frac {1}{n} \sum_ {(i, j, r ^ {*}) \in \mathbb {T}} - \log P (r ^ {*} | i, j).
+$$
+
+The final objective is to minimize a weighted combination of the two: $\mathcal{L} = \lambda_1\mathcal{L}_\alpha + \lambda_2\mathcal{L}_r$.
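The two losses and their weighted combination can be sketched for a single step as follows (the scores, probabilities, and weights are illustrative values, not trained quantities):

```python
import numpy as np

def action_hinge(scores, gold):
    """Margin loss for one step: push the gold score above the best wrong score + 1."""
    wrong = np.delete(scores, gold)
    return max(0.0, 1.0 - scores[gold] + wrong.max())

def label_nll(probs, gold):
    """Cross-entropy for the correct dependency label."""
    return -np.log(probs[gold])

scores = np.array([2.0, 0.5, 1.8])   # e.g. scores for sh, la, ra at one step
probs = np.array([0.7, 0.2, 0.1])    # label distribution from the biaffine scorer
lam1 = lam2 = 0.5                    # combination weights (illustrative)
loss = lam1 * action_hinge(scores, gold=0) + lam2 * label_nll(probs, gold=0)
```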
+
+We follow Kiperwasser and Goldberg (2016), who use error-exploration training with dynamic-oracle and aggressive strategies. A parser that always takes the correct action during training suffers from error propagation during testing. To expose the parser to wrong actions, the dynamic oracle well-defines the "correct" actions even if the current configuration cannot lead to the gold tree. The aggressive exploration forces taking a wrong action with probability $p_{agg} = 0.1$.
+
+# 6 Interpret the Features
+
+Since a transition system contains various structures, a natural question is which parts of the structures are important for the parser. To show the importance of a variable, a standard approach is to use partial derivatives multiplied by the variable's value (Denil et al., 2015). Hence, the importance score $(\triangle)$ of a structural indicator is the dot product between the objective function's gradient and the indicator embedding:
+
+$$
+\bigtriangleup (\sigma , i) = \left(\nabla_ {\boldsymbol {s} _ {\sigma , i} ^ {K}} \mathcal {L}\right) ^ {\top} \cdot \boldsymbol {s} _ {\sigma , i} ^ {K} + \left(\nabla_ {\boldsymbol {s} _ {\sigma , i} ^ {V}} \mathcal {L}\right) ^ {\top} \cdot \boldsymbol {s} _ {\sigma , i} ^ {V}
+$$
+
+Concretely, $\triangle (\sigma ,i)$ shows the relevance between stack indicator $i$ and the decision of our parser. The importance at indicator $i$ of any other structure can be derived similarly. We can further accumulate the relevance of multiple items: for example, the stack inside relevance is $\triangle_{in}(\sigma) = \sum_{i > 0}\triangle (\sigma ,i)$ and the stack outside relevance is $\triangle_{out}(\sigma) = \sum_{i\leq 0}\triangle (\sigma ,i)$. In the experiments, we explain the importance of each structure part via the $\triangle$ score.
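The gradient-times-input relevance and its accumulation can be sketched as follows (the gradients are random stand-ins here; in the parser they come from back-propagating $\mathcal{L}$):

```python
import numpy as np

def relevance(grad_s_K, s_K, grad_s_V, s_V):
    """Gradient-times-input importance of one structural indicator (Sec. 6)."""
    return grad_s_K @ s_K + grad_s_V @ s_V

rng = np.random.default_rng(4)
d = 8
positions = [-2, -1, 0, 1, 2]        # a few stack indicators, inside and outside
emb_K = {i: rng.normal(size=d) for i in positions}
emb_V = {i: rng.normal(size=d) for i in positions}
grad_K = {i: rng.normal(size=d) for i in positions}   # stand-in loss gradients
grad_V = {i: rng.normal(size=d) for i in positions}

scores = {i: relevance(grad_K[i], emb_K[i], grad_V[i], emb_V[i]) for i in positions}
inside = sum(scores[i] for i in positions if i > 0)   # stack-inside relevance
outside = sum(scores[i] for i in positions if i <= 0) # stack-outside relevance
```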
+
+# 7 Experiments
+
+Data We conduct experiments and analyses on two main datasets covering 12 languages: the English Penn Treebank (PTB 3.0) with Stanford dependencies, and the Universal Dependencies (UD 2.2) (Nivre et al., 2018) treebanks used in the CoNLL 2018 shared task (Zeman et al., 2018). The statistics of the datasets are in Appendix C. For PTB, we use the standard train/dev/test splits and external POS tags obtained by the Stanford tagger (accuracy $\approx 97.3\%$). Following Ji et al. (2019), we select 12 languages from UD and use the CoNLL shared task's official train/dev/test splits, where the POS tags were assigned by UDPipe (Straka et al., 2016).
+
+Evaluation We mainly report unlabeled (UAS) and labeled attachment scores (LAS). For evaluation on PTB, five punctuation symbols (`` '' : , .) are excluded; on UD, we use the official evaluation script.
+
+Hyper-parameters For the structure-invariant part, we directly adopt most parameter settings of Ji et al. (2019) and Zhang et al. (2020), including the pre-trained embeddings, BiLSTM, and CharCNN. For the structure-dependent part, we use a total of 8 structural heads: two each for the stack, buffer and action list, one for the subtree's edges, and one for the edges' labels. Our preliminary experiments show that stacking 6 USE layers yields the best results. The objective weights $\lambda_1$ and $\lambda_2$ are both set to 0.5. We trained our parser for up to 1k
+
+| Parser | Type | UAS | LAS |
+| --- | --- | --- | --- |
+| Chen and Manning (2014) | T | 91.8 | 89.6 |
+| Weiss et al. (2015) | T | 94.26 | 91.42 |
+| Andor et al. (2016) | T | 94.61 | 92.79 |
+| Dyer et al. (2015) | T | 93.1 | 90.9 |
+| Ballesteros et al. (2016) | T | 93.56 | 92.41 |
+| Kiperwasser and Goldberg (2016) | T | 93.1 | 91.0 |
+| Zhang et al. (2017) | T | 93.71 | 91.60 |
+| Mohammadshahi and Henderson (2020) | T | 93.07 | 91.08 |
+| Ma et al. (2018) | T | 95.87 | 94.19 |
+| Yuan et al. (2019) | T | 94.60 | 94.02 |
+| Dozat and Manning (2017) | G | 95.74 | 94.08 |
+| Li et al. (2019) | G | 95.93 | 94.19 |
+| Ji et al. (2019) | G | 95.97 | 94.31 |
+| Zhang et al. (2020) | G | 96.14 | 94.49 |
+| *Our USE Parser* |  |  |  |
+| arc-hybrid | T | 95.99 | 94.28 |
+| arc-standard | T | 95.95 | 94.26 |
+| arc-eager | T | 95.93 | 94.23 |
+
+Table 1: Results on the English PTB dataset. "T" represents transition-based parsers, and "G" represents graph-based parsers. We report the average over 5 runs.
+
+iterations, stopping early if the peak performance on dev did not increase for 100 epochs. The details of the hyper-parameters chosen in the default settings are summarized in Appendix D.
+
+# 7.1 Main Results
+
+First, we compare our method with previous work (Table 1). The first part contains transition-based models. We particularly compare with the two strong baselines in the blue cell, where Ma et al. (2018) decode parse trees in a depth-first manner with a stack-pointer network, and Yuan et al. (2019) decode transition sequences in both the forward and backward directions via multi-task learning. In a fair comparison, all three of our unified structure encoding (USE) parsers achieve significant improvements on PTB. This demonstrates the benefit of the complete structural information provided by our unified encoding.
+
+Second, we compare with strong graph-based parsers. The second part of Table 1 contains two first-order parsers and two high-order parsers (in the red cell). Our USE parsers beat the first-order methods but underperform the high-order methods, which capture high-order features with graph neural networks and TreeCRF. However, speed experiments show that USE is about 2 times faster than they are. It is our future work to bridge the performance gap by using the bi-directional transition system (Yuan et al., 2019) and stronger decoding
+
+
+Figure 5: Analysis of the allocation of structural heads. The red line is the performance of the basic setup.
+
+methods (Andor et al., 2016).
+
+Third, we compare the results of the three USE parsers with different transition systems (third part of Table 1). We can see that the arc-hybrid system is more expressive than the arc-eager and arc-standard systems. Shi et al. (2017) demonstrate that arc-eager is more expressive on a minimal feature set, but our results do not support this on a full feature set. The reason may be that, when the feature set is full, the arc-eager system has one more action (REDUCE) than arc-standard in the first stage of classification.
+
+Head Allocation Here we discuss the allocation of structural heads. Our basic idea is to assign one head to one structure, which means five heads in total. We perform two sets of ablation experiments based on this basic setup: decreasing or increasing one head for each structure respectively (Figure 5). Decreasing one head means that the corresponding structure is not visible to the parser. Losing the information of the stack or buffer severely hurts performance; comparatively, losing the information of the action list or subtree hurts performance only slightly. This suggests that the stack and buffer are more important in the arc-hybrid transition system, and we should pay more attention to them. Increasing one head shows the improvement from giving the corresponding structure double attention. We observe obvious performance gains on the stack, buffer, and action list, which means that augmenting their information is helpful. Considering the performance gain and the computational cost of adding heads, we finally use a total of 8 structural heads, with the parser doubly attending to the stack, buffer, and action list.
+
+Lexical Representation We analyze the different lexical word representations from Section 4.3 (Table 2). The first part reports the use of context
+
+| Lexical Encoder | Dev UAS | Dev LAS | Test UAS | Test LAS |
+| --- | --- | --- | --- | --- |
+| Glove | 95.72 | 93.79 | 95.71 | 94.05 |
+| + BiLSTM | 95.81 | 93.87 | 95.93 | 94.21 |
+| + Xformer | 95.84 | 93.93 | 95.99 | 94.28 |
+| BERT | 95.90 | 93.97 | 96.21 | 94.56 |
+| + finetune | 95.97 | 94.02 | 96.28 | 94.60 |
+| M&H20 | 95.78 | 93.74 | 96.11 | 94.33 |
+
+Table 2: Lexical encoder comparison on PTB. M&H20: Mohammadshahi and Henderson (2020).
+
+| Parser | Type | Speed (sent/s) |
+| --- | --- | --- |
+| Ma et al. (2018) | T | 183 |
+| Dozat and Manning (2017) | G | 496 |
+| Ji et al. (2019) | G‡ | 403 |
+| Zhang et al. (2020) | G‡ | 466 |
+| Our arc-hybrid parser | T | 918 |
+
+Table 3: Parsing speed comparison on PTB test set. The $\ddagger$ indicates high-order graph-based parsers.
+
+independent Glove embeddings (Pennington et al., 2014) in the arc-hybrid system. We learn context via a BiLSTM or Transformer encoder. The results show that encoding context further improves performance, and that the Transformer encoder is better than the BiLSTM. The second part reports the use of contextual BERT networks (Devlin et al., 2019). Introducing BERT, and in particular fine-tuning it, significantly increases performance. Compared with Mohammadshahi and Henderson (2020), our parser performs better because it encodes the full structure rather than only the top-$k$ in-structure items.
+
+Parsing Speed Table 3 compares the parsing speed of different parsers on the PTB test set. For a fair comparison, we run all parsers as Python implementations on the same machine with an Intel Xeon E5-2650v4 CPU and a GeForce GTX 1080Ti GPU. The USE parser parses about 918 sentences per second, over 5 times faster than the strongest transition-based parser (Ma et al., 2018). This result shows the efficiency of the attention mechanism. Compared to the three graph-based parsers, ours is nearly 2 times faster. This is because a transition-based parser decodes linearly and does not require complex decoding algorithms such as minimum spanning tree or TreeCRF. Considering parsing performance and speed together, our proposed parser can meet the requirements of a real-time system.
+
+|  | bg | ca | cs | de | en | es | fr | it | nl | no | ro | ru | Avg. |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Ma18 | 89.31 | 90.55 | 89.62 | 77.75 | 82.32 | 90.28 | 85.83 | 90.75 | 87.57 | 89.82 | 85.34 | 92.06 | 87.60 |
+| Zhang20 | 89.72 | 91.27 | 90.94 | 78.26 | 82.88 | 90.79 | 86.33 | 91.02 | 87.92 | 90.17 | 85.71 | 92.49 | 88.13 |
+| arc-hybrid | 89.81 | 90.91 | 90.68 | 78.48 | 82.52 | 90.27 | 85.98 | 90.83 | 87.96 | 89.91 | 85.88 | 92.36 | 87.97 |
+
+Table 4: LAS on UD2.2 test datasets. Ma18: Ma et al. (2018); Zhang20: Zhang et al. (2020). We report the average over 3 runs.
+
+
+Figure 6: Analysis of the importance score $(\triangle)$ for different structure parts.
+
+Interpretability Figure 6 visualizes the importance score $(\triangle)$ of each structure part in the arc-hybrid transition system. Consistent with the findings in Figure 5, the stack and buffer achieve higher importance scores; however, the interpretability method reaches this conclusion without retraining the parser. Furthermore, we observe that the outside information of the stack and buffer is more important than the subtree structure, which suggests that a transition parser should encode it.
+
+UD Treebanks Table 4 compares our USE parser with two baselines on the UD datasets. We adopt the non-projective arc-hybrid system to handle the ubiquitous non-projective trees (de Lhoneux et al., 2017). As the transition-based baseline, the parser of Ma et al. (2018) was re-run under the same hyper-parameter settings as ours. Our USE parser outperforms this baseline on all 12 languages, with an average improvement of 0.37 LAS, again showing the power of the complete transition-system encoder. Compared with the strongest graph-based baseline (Zhang20), our parser performs better on 4 treebanks: bg, de, nl, and ro. These four treebanks are relatively smaller than the others, probably indicating that our parser is more suitable for low-resource languages. Overall, there is still a 0.15 average LAS gap with the graph-based baseline, and further improving the USE transition-based parser is our future work.
+
+# 8 Related Work
+
+We have already surveyed related transition-system encoders in Section 4.2. Here we present several powerful transition-based parsers. Ma et al. (2018) decode a parse tree step by step in a depth-first traversal order; since a stack is usually used to maintain the depth-first search, they use a stack-pointer network for decoding. Note that their work is not based on any transition system. Yuan et al. (2019) propose a bidirectional decoding method for a stack-LSTM transition-based parser, performing joint decoding with a left-to-right parser and a right-to-left parser. Mohammadshahi and Henderson (2020) propose a Graph2Graph framework that enhances expressiveness by treating multiple structures as multiple sentences and using a Transformer encoder (BERT) to encode the top-$k$ words. These works focus on improving the decoding approach or the representation learning of structure-invariant parts, but still follow the traditional encoders. Our work focuses on proposing a new encoder with both information completeness and computational efficiency.
+
+There have been several attempts to combine attention networks with structures. To better represent the sequential structure, Shaw et al. (2018) introduce relative positions between words in attention networks instead of concatenating absolute positions in the input. Wang et al. (2019) define relative positions on parse trees to encode each word pair's tree distance, and also feed these positional embeddings to attention networks. These two works encode a static structure, while we encode a dynamically changing transition system. Shiv and Quirk (2019) extend the Transformer's sinusoidal position function to the tree structure. Similar to us, their decoder dynamically computes new position encodings when generating a tree structure, but their structural embeddings are computed by a fixed sinusoidal function, while ours are learnable. These works encode only one structure, while we encode multiple structures from a transition system.
+
+# 9 Conclusion
+
+We presented a comprehensive and efficient encoder for transition systems. We separate each structure into a structure-invariant part and a structure-dependent part, which allows us to dynamically encode the complete structure while retaining efficiency in training and testing. Experiments show that the proposed parser achieves new state-of-the-art transition-based results.
+
+# Acknowledgments
+
+The authors wish to thank the reviewers for their helpful comments and suggestions, thank Peng Li and Zhengyi Lei for their comments on writing. This research is funded by the NSFC (62076097) and the 2020 East China Normal University Future Scientists and Outstanding Scholars Incubation Programme (WLKXJ2020). The corresponding authors are Tao Ji, Yuanbin Wu and Xiaoling Wang.
+
+# References
+
+Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, and Michael Collins. 2016. Globally normalized transition-based neural networks. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers.
+Lei Jimmy Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. 2016. Layer normalization. CoRR, abs/1607.06450.
+Miguel Ballesteros, Yoav Goldberg, Chris Dyer, and Noah A. Smith. 2016. Training with exploration improves a greedy stack LSTM parser. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 2005-2010.
+Wanxiang Che, Longxu Dou, Yang Xu, Yuxuan Wang, Yijia Liu, and Ting Liu. 2019. HIT-SCIR at MRP 2019: A unified pipeline for meaning representation parsing via efficient training and effective encoding. In Proceedings of the Shared Task on Cross-Framework Meaning Representation Parsing at the 2019 Conference on Natural Language Learning, CoNLL 2019, Hong Kong, November 3, 2019, pages 76-85. Association for Computational Linguistics.
+Danqi Chen and Christopher D. Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing,
+
+EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 740-750.
+Miryam de Lhoneux, Sara Stymne, and Joakim Nivre. 2017. Arc-hybrid non-projective dependency parsing with a static-dynamic oracle. In Proceedings of the 15th International Conference on Parsing Technologies, pages 99-104, Pisa, Italy. Association for Computational Linguistics.
+Misha Denil, Alban Demiraj, and Nando de Freitas. 2015. Extraction of salient sentences from labelled documents. CoRR, abs/1412.6815v2.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171-4186. Association for Computational Linguistics.
+Timothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural dependency parsing. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.
+Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transition-based dependency parsing with stack long short-term memory. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL 2015, July 26-31, 2015, Beijing, China, Volume 1: Long Papers, pages 334-343.
+Tao Ji, Yuanbin Wu, and Man Lan. 2019. Graph-based dependency parsing with graph neural networks. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2475-2485, Florence, Italy. Association for Computational Linguistics.
+Eliyahu Kiperwasser and Yoav Goldberg. 2016. Simple and accurate dependency parsing using bidirectional LSTM feature representations. TACL, 4:313-327.
+Marco Kuhlmann, Carlos Gomez-Rodriguez, and Giorgio Satta. 2011. Dynamic programming algorithms for transition-based dependency parsers. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 673-682, Portland, Oregon, USA. Association for Computational Linguistics.
+
+Ying Li, Zhenghua Li, Min Zhang, Rui Wang, Sheng Li, and Luo Si. 2019. Self-attentive biaffine dependency parsing. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 10-16, 2019, pages 5067-5073. ijcai.org.
+Xuezhe Ma, Zecong Hu, Jingzhou Liu, Nanyun Peng, Graham Neubig, and Eduard Hovy. 2018. Stack-pointer networks for dependency parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1403-1414, Melbourne, Australia. Association for Computational Linguistics.
+Alireza Mohammadshahi and James Henderson. 2020. Graph-to-graph transformer for transition-based dependency parsing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, EMNLP 2020, Online Event, 16-20 November 2020, pages 3278-3289. Association for Computational Linguistics.
+Joakim Nivre. 2003. An efficient algorithm for projective dependency parsing. In Proceedings of the Eighth International Conference on Parsing Technologies, pages 149-160, Nancy, France.
+Joakim Nivre. 2004. Incrementality in deterministic dependency parsing. In Proceedings of the Workshop on Incremental Parsing: Bringing Engineering and Cognition Together, pages 50-57, Barcelona, Spain. Association for Computational Linguistics.
+Joakim Nivre. 2008. Algorithms for deterministic incremental dependency parsing. Comput. Linguistics, 34(4):513-553.
+Joakim Nivre et al. 2018. Universal Dependencies 2.2. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics, Charles University, Prague, http://hdl.handle.net/11234/1-1983xxx.
+Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.
+Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 464-468, New Orleans, Louisiana. Association for Computational Linguistics.
+Tianze Shi, Liang Huang, and Lillian Lee. 2017. Fast(er) exact decoding and global training for transition-based dependency parsing via a minimal feature set. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 12-23, Copenhagen, Denmark. Association for Computational Linguistics.
+
+Vighnesh Leonardo Shiv and Chris Quirk. 2019. Novel positional encodings to enable tree-based transformers. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 12058-12068.
+Milan Straka, Jan Hajic, and Jana Straková. 2016. UDPipe: trainable pipeline for processing CoNLL-U files performing tokenization, morphological analysis, POS tagging and parsing. In Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC 2016), Portorož, Slovenia. European Language Resources Association.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998-6008.
+Xing Wang, Zhaopeng Tu, Longyue Wang, and Shuming Shi. 2019. Self-attention with structural position representations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 1403-1409. Association for Computational Linguistics.
+Taro Watanabe and Eiichiro Sumita. 2015. Transition-based neural constituent parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1169-1179, Beijing, China. Association for Computational Linguistics.
+David Weiss, Chris Alberti, Michael Collins, and Slav Petrov. 2015. Structured training for neural network transition-based parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL 2015, July 26-31, 2015, Beijing, China, Volume 1: Long Papers, pages 323-333.
+Pengcheng Yin and Graham Neubig. 2018. TRANX: A transition-based neural abstract syntax parser for semantic parsing and code generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP 2018: System Demonstrations, Brussels, Belgium, October 31 - November 4, 2018, pages 7-12. Association for Computational Linguistics.
+Yunzhe Yuan, Yong Jiang, and Kewei Tu. 2019. Bidirectional transition-based dependency parsing. In
+
+The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 7434-7441. AAAI Press.
+Daniel Zeman, Jan Hajic, Martin Popel, Martin Potthast, Milan Straka, Filip Ginter, Joakim Nivre, and Slav Petrov. 2018. CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 1-20, Brussels, Belgium. Association for Computational Linguistics.
+Xiang Zhang, Junbo Jake Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 649-657.
+Yu Zhang, Zhenghua Li, and Min Zhang. 2020. Efficient second-order TreeCRF for neural dependency parsing. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3295-3305, Online. Association for Computational Linguistics.
+Zhirui Zhang, Shujie Liu, Mu Li, Ming Zhou, and Enhong Chen. 2017. Stack-based multi-layer attention for transition-based dependency parsing. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 1677-1682. Association for Computational Linguistics.
+
+# Supplementary Material for A Unified Encoding of Structures in Transition Systems
+
+# A Other Transition Systems
+
+Similar to Section 2, the arc-standard system from Nivre (2004) is formally defined as:
+
+$$
+\left([\mathrm{root}_0],\ [1, \dots, n],\ \varnothing\right) \quad (c_s)
+$$
+
+$$
+\left([\mathrm{root}_0],\ [\,],\ \mathbb{T}_t\right) \quad (c_t)
+$$
+
+$$
+(\sigma,\ i|\beta,\ \mathbb{T}) \vdash (\sigma|i,\ \beta,\ \mathbb{T}) \quad (\mathrm{sh})
+$$
+
+$$
+(\sigma|i|j,\ \beta,\ \mathbb{T}) \vdash (\sigma|j,\ \beta,\ \mathbb{T} \cup (j, i, r)) \quad (\mathrm{la}_r)
+$$
+
+$$
+(\sigma|i|j,\ \beta,\ \mathbb{T}) \vdash (\sigma|i,\ \beta,\ \mathbb{T} \cup (i, j, r)) \quad (\mathrm{ra}_r)
+$$
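As an illustration, the arc-standard transitions can be written as a small state machine. The sketch below parses the example sentence "He has good control" (1 He, 2 has, 3 good, 4 control); the function names and the particular action sequence are ours, for illustration only:

```python
# Minimal arc-standard transition system (Nivre, 2004): a configuration is
# (stack, buffer, arcs); la_r / ra_r attach the top two stack items.

def initial(n):
    # ([root_0], [1..n], {}) -- word indices are 1-based, 0 is the root
    return ([0], list(range(1, n + 1)), set())

def shift(cfg):
    stack, buf, arcs = cfg
    return (stack + [buf[0]], buf[1:], arcs)

def left_arc(cfg, r):
    stack, buf, arcs = cfg          # (sigma|i|j) => (sigma|j), add (j, i, r)
    i, j = stack[-2], stack[-1]
    return (stack[:-2] + [j], buf, arcs | {(j, i, r)})

def right_arc(cfg, r):
    stack, buf, arcs = cfg          # (sigma|i|j) => (sigma|i), add (i, j, r)
    i, j = stack[-2], stack[-1]
    return (stack[:-1], buf, arcs | {(i, j, r)})

def is_terminal(cfg):
    stack, buf, _ = cfg
    return stack == [0] and not buf

# One gold derivation for "He has good control":
cfg = initial(4)
for act in [shift, shift,
            lambda c: left_arc(c, "nsubj"),   # He <- has
            shift, shift,
            lambda c: left_arc(c, "amod"),    # good <- control
            lambda c: right_arc(c, "dobj"),   # has -> control
            lambda c: right_arc(c, "root")]:  # root_0 -> has
    cfg = act(cfg)
```

After the loop the configuration is terminal and the arc set contains exactly the four gold dependencies.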
+
+The arc-eager system from Nivre (2003) is formally defined as:
+
+$$
+\left([\,],\ [1, \dots, n],\ \varnothing\right) \quad (c_s)
+$$
+
+$$
+(\sigma,\ [\,],\ \mathbb{T}_t) \quad (c_t)
+$$
+
+$$
+(\sigma,\ i|\beta,\ \mathbb{T}) \vdash (\sigma|i,\ \beta,\ \mathbb{T}) \quad (\mathrm{sh})
+$$
+
+$$
+(\sigma|i,\ \beta,\ \mathbb{T}) \vdash (\sigma,\ \beta,\ \mathbb{T}) \quad (\mathrm{rd})
+$$
+
+$$
+(\sigma|i,\ j|\beta,\ \mathbb{T}) \vdash (\sigma,\ j|\beta,\ \mathbb{T} \cup (j, i, r)) \quad (\mathrm{la}_r)
+$$
+
+$$
+(\sigma|i,\ j|\beta,\ \mathbb{T}) \vdash (\sigma|i|j,\ \beta,\ \mathbb{T} \cup (i, j, r)) \quad (\mathrm{ra}_r)
+$$
+
+The non-projective arc-hybrid system from de Lhoneux et al. (2017) for UD treebanks is formally defined as:
+
+$$
+\left([\,],\ [1, \dots, n, \mathrm{root}_0],\ \varnothing\right) \quad (c_s)
+$$
+
+$$
+\left([\,],\ [\mathrm{root}_0],\ \mathbb{T}_t\right) \quad (c_t)
+$$
+
+$$
+(\sigma,\ i|\beta,\ \mathbb{T}) \vdash (\sigma|i,\ \beta,\ \mathbb{T}) \quad (\mathrm{sh})
+$$
+
+$$
+(\sigma|i,\ j|\beta,\ \mathbb{T}) \vdash (\sigma,\ j|i|\beta,\ \mathbb{T}) \quad (\mathrm{swap})
+$$
+
+$$
+(\sigma|i,\ j|\beta,\ \mathbb{T}) \vdash (\sigma,\ j|\beta,\ \mathbb{T} \cup (j, i, r)) \quad (\mathrm{la}_r)
+$$
+
+$$
+(\sigma|i|j,\ \beta,\ \mathbb{T}) \vdash (\sigma|i,\ \beta,\ \mathbb{T} \cup (i, j, r)) \quad (\mathrm{ra}_r)
+$$
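Compared to arc-standard, this system keeps $\mathrm{root}_0$ at the end of the buffer, attaches left dependents to the front of the buffer, and adds swap for non-projectivity. A minimal sketch (names ours, for illustration):

```python
# Non-projective arc-hybrid (de Lhoneux et al., 2017): root_0 sits at the END
# of the buffer; swap moves the stack top back into the buffer to reorder.

def initial(n):
    return ([], list(range(1, n + 1)) + [0], set())  # ([], [1..n, root_0], {})

def shift(cfg):
    s, b, t = cfg
    return (s + [b[0]], b[1:], t)

def swap(cfg):
    s, b, t = cfg                    # (sigma|i, j|beta) => (sigma, j|i|beta)
    return (s[:-1], [b[0], s[-1]] + b[1:], t)

def left_arc(cfg, r):
    s, b, t = cfg                    # head is the front of the buffer
    return (s[:-1], b, t | {(b[0], s[-1], r)})

def right_arc(cfg, r):
    s, b, t = cfg                    # head is the second stack item
    return (s[:-1], b, t | {(s[-2], s[-1], r)})

# Two-word demo ("He sleeps": 1 He, 2 sleeps):
cfg = initial(2)
for act in [shift, lambda c: left_arc(c, "nsubj"),
            shift, lambda c: left_arc(c, "root")]:
    cfg = act(cfg)
# Terminal configuration: ([], [root_0], T_t)
```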
+
+# B USE for Other Structures
+
+Here we give the formal definitions ( $\pmb{q}_{*,t}$ , $K_{*,t}$ , $V_{*,t}$ in Equation 1) for the buffer $\beta$ , the action list $\alpha$ , the subtree arcs $\mathbb{T}_{(\mathrm{arc})}$ , and the subtree relations $\mathbb{T}_{(\mathrm{rel})}$ , which are omitted in the main text.
+
+$$
+\boldsymbol{q}_{\beta,t} = W_{\beta}^{Q} \cdot (\boldsymbol{m}_{t} \oplus \boldsymbol{m}_{\beta})
+$$
+
+$$
+K_{\beta,t} = W_{\beta}^{K} \cdot \left(X + S_{\beta,t}^{K}\right) \tag{6}
+$$
+
+$$
+V_{\beta,t} = W_{\beta}^{V} \cdot \left(X + S_{\beta,t}^{V}\right).
+$$
+
+$$
+\boldsymbol{q}_{\alpha,t} = W_{\alpha}^{Q} \cdot (\boldsymbol{m}_{t} \oplus \boldsymbol{m}_{\alpha})
+$$
+
+$$
+K_{\alpha,t} = W_{\alpha}^{K} \cdot \left(A + S_{\alpha,t}^{K}\right) \tag{7}
+$$
+
+$$
+V_{\alpha,t} = W_{\alpha}^{V} \cdot \left(A + S_{\alpha,t}^{V}\right).
+$$
+
+$$
+\boldsymbol{q}_{\mathbb{T}_{(\mathrm{arc})},t} = W_{\mathbb{T}_{(\mathrm{arc})}}^{Q} \cdot (\boldsymbol{m}_{t} \oplus \boldsymbol{m}_{\mathbb{T}_{(\mathrm{arc})}})
+$$
+
+$$
+K_{\mathbb{T}_{(\mathrm{arc})},t} = W_{\mathbb{T}_{(\mathrm{arc})}}^{K} \cdot \left(X + S_{\mathbb{T}_{(\mathrm{arc})},t}^{K}\right) \tag{8}
+$$
+
+$$
+V_{\mathbb{T}_{(\mathrm{arc})},t} = W_{\mathbb{T}_{(\mathrm{arc})}}^{V} \cdot \left(X + S_{\mathbb{T}_{(\mathrm{arc})},t}^{V}\right).
+$$
+
+$$
+\boldsymbol{q}_{\mathbb{T}_{(\mathrm{rel})},t} = W_{\mathbb{T}_{(\mathrm{rel})}}^{Q} \cdot (\boldsymbol{m}_{t} \oplus \boldsymbol{m}_{\mathbb{T}_{(\mathrm{rel})}})
+$$
+
+$$
+K_{\mathbb{T}_{(\mathrm{rel})},t} = W_{\mathbb{T}_{(\mathrm{rel})}}^{K} \cdot \left(X + S_{\mathbb{T}_{(\mathrm{rel})},t}^{K}\right) \tag{9}
+$$
+
+$$
+V_{\mathbb{T}_{(\mathrm{rel})},t} = W_{\mathbb{T}_{(\mathrm{rel})}}^{V} \cdot \left(X + S_{\mathbb{T}_{(\mathrm{rel})},t}^{V}\right).
+$$
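Each of Equations 6-9 instantiates the same attention template: the query mixes the step state $\boldsymbol{m}_t$ with a structure embedding, while keys and values add step-dependent structural position embeddings $S$ to the content matrix. A NumPy sketch of one such structure-specific head (all sizes, random weights, and the softmax details are illustrative assumptions, not the paper's exact configuration):

```python
import numpy as np

d, T = 8, 5                     # model size and sequence length (illustrative)
rng = np.random.default_rng(0)

def use_attention(m_t, m_s, X, S_K, S_V, Wq, Wk, Wv):
    """One structure-specific head: query from (m_t (+) m_struct),
    keys/values from content X plus structural position embeddings S."""
    q = Wq @ np.concatenate([m_t, m_s])   # q_{*,t}
    K = (X + S_K) @ Wk.T                  # K_{*,t}
    V = (X + S_V) @ Wv.T                  # V_{*,t}
    s = K @ q / np.sqrt(d)
    a = np.exp(s - s.max())
    a /= a.sum()                          # softmax over the T slots
    return a @ V                          # attended structure summary

m_t = rng.normal(size=d)                  # current step state
m_b = rng.normal(size=d)                  # structure embedding (e.g. buffer)
X = rng.normal(size=(T, d))               # content (token representations)
S_K = rng.normal(size=(T, d))             # structural positions at step t
S_V = rng.normal(size=(T, d))
Wq = rng.normal(size=(d, 2 * d))
Wk = rng.normal(size=(d, d))
Wv = rng.normal(size=(d, d))
out = use_attention(m_t, m_b, X, S_K, S_V, Wq, Wk, Wv)
```

Swapping in the action matrix $A$ for $X$ yields Equation 7; the subtree variants differ only in their $S$ embeddings.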
+
+# C Details of Datasets
+
+The statistics (number of sentences) of the English Penn Treebank (PTB) and Universal Dependency (UD) treebanks are summarized in Table 5 and Table 6 respectively.
+
+| | #train | #dev | #test |
+| --- | --- | --- | --- |
+| PTB | 39832 | 1700 | 2416 |
+
+Table 5: Statistics of the PTB dataset we used.
+
+| Treebanks | #train | #dev | #test |
+| --- | --- | --- | --- |
+| Bulgarian | 8907 | 1115 | 1116 |
+| Catalan | 13123 | 1709 | 1846 |
+| Czech | 102993 | 11311 | 12203 |
+| Dutch | 18310 | 1518 | 1396 |
+| English | 12543 | 2002 | 2077 |
+| French | 14554 | 1478 | 416 |
+| German | 13841 | 799 | 977 |
+| Italian | 12838 | 564 | 482 |
+| Norwegian | 29870 | 4300 | 3450 |
+| Romanian | 8043 | 752 | 729 |
+| Russian | 48814 | 6584 | 6491 |
+| Spanish | 28492 | 4300 | 2174 |
+
+Table 6: Statistics of the UD dataset we used.
+
+# D Hyper-parameters
+
+Table 7 lists the hyper-parameters we used in the default settings.
+
+[Figure 7 is a table of the structure-dependent encodings (Stack, Buffer, Subtree[arc], Subtree[rel], and Action) at every step $t = 0, \dots, 7$ for the running example "root He has good control"; the full grid is not reproduced here.]
+
+Figure 7: The structure-dependent information for all steps of the example. For simplicity, we use the numbers 0, 1, 2, 3 to denote the none, nsubj, amod, and dobj relations in "Subtree[rel]", respectively.
+
+| Layer | Hyper-parameter | Value |
+| --- | --- | --- |
+| Input | word, POS tag, GloVe | 100 |
+| | BERT | 768 |
+| | dropout | 0.33 |
+| CharCNN | kernel | [1, 2, 3, 5] |
+| | hidden size | 25 |
+| | dropout | 0.33 |
+| BiLSTM | #layer | 6 |
+| | hidden size | 400 |
+| | dropout | 0.33 |
+| Xformer | #layer | 6 |
+| | model size | 200 |
+| | #head | 8 |
+| | FeedForward size | 800 |
+| | dropout | 0.2 |
+| USE | #layer | 6 |
+| | output size | 256 |
+| | #head | 8 |
+| | MLP size | 800 |
+| | dropout | 0.2 |
+| Trainer | optimizer | Adam |
+| | learning rate | 0.002 |
+| | (β1, β2) | (0.9, 0.9) |
+
+Table 7: Hyper-parameters for experiments.
+
+# E Structure-Dependent Information for All Steps
+
+In Figure 7, we list the structure-dependent information for all steps of the example.
\ No newline at end of file
diff --git a/aunifiedencodingofstructuresintransitionsystems/images.zip b/aunifiedencodingofstructuresintransitionsystems/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..4452a162d384d134fb8ba891c1d961b8307a8434
--- /dev/null
+++ b/aunifiedencodingofstructuresintransitionsystems/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:781db71d3927a6f59913ee6eaa012bd34e4e085479f456fdb3e9911dd9e869ad
+size 667183
diff --git a/aunifiedencodingofstructuresintransitionsystems/layout.json b/aunifiedencodingofstructuresintransitionsystems/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..b51d3b372e9834314bdb10be585050645429dd3f
--- /dev/null
+++ b/aunifiedencodingofstructuresintransitionsystems/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ce010aea0c8087f1bad31a4091bae19d7fedd4d92b1238686f663993fbf656bc
+size 510733
diff --git a/aunifiedspeakeradaptationapproachforasr/dc687cdd-fd4c-41ea-96f3-43002e5fbc24_content_list.json b/aunifiedspeakeradaptationapproachforasr/dc687cdd-fd4c-41ea-96f3-43002e5fbc24_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..8f107c57e4aaf78cdf86c5f91688e88977ac91cb
--- /dev/null
+++ b/aunifiedspeakeradaptationapproachforasr/dc687cdd-fd4c-41ea-96f3-43002e5fbc24_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4a50d40aaa8db3eb1de3e1feef4db6b19f571d8a822244d8a6bd22a589205a51
+size 77982
diff --git a/aunifiedspeakeradaptationapproachforasr/dc687cdd-fd4c-41ea-96f3-43002e5fbc24_model.json b/aunifiedspeakeradaptationapproachforasr/dc687cdd-fd4c-41ea-96f3-43002e5fbc24_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..c870e4ed41d07597223718a7a5662a664a56e4a9
--- /dev/null
+++ b/aunifiedspeakeradaptationapproachforasr/dc687cdd-fd4c-41ea-96f3-43002e5fbc24_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e415b1c93ec9085b22c7c8185932ebd2be2974bfec589ac49a1999b43533967d
+size 94281
diff --git a/aunifiedspeakeradaptationapproachforasr/dc687cdd-fd4c-41ea-96f3-43002e5fbc24_origin.pdf b/aunifiedspeakeradaptationapproachforasr/dc687cdd-fd4c-41ea-96f3-43002e5fbc24_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..0fa4caa4a4ae88f9e6c12d52d0a1c22061ef0af5
--- /dev/null
+++ b/aunifiedspeakeradaptationapproachforasr/dc687cdd-fd4c-41ea-96f3-43002e5fbc24_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2794d7c8f666feb985e4abbd637e9417ce0d5f8420645ec82938de7f87521e1a
+size 2106284
diff --git a/aunifiedspeakeradaptationapproachforasr/full.md b/aunifiedspeakeradaptationapproachforasr/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..28bcf29fb2296b3583634e121504bb6ae5b4684d
--- /dev/null
+++ b/aunifiedspeakeradaptationapproachforasr/full.md
@@ -0,0 +1,355 @@
+# A Unified Speaker Adaptation Approach for ASR
+
+Yingzhu Zhao $^{1,2*}$ , Chongjia Ni $^{2}$ , Cheung-Chi Leung $^{2}$ , Shafiq Joty $^{1}$ , Eng Siong Chng $^{1}$ , Bin Ma $^{2}$
+
+$^{1}$ Nanyang Technological University, Singapore
+
+$^{2}$ Machine Intelligence Technology, Alibaba Group
+
+{srjoty, aseschng} @ntu.edu.sg
+
+{yingzhu.zhao, ni.chongjia, cc.leung, b.ma} @alibaba-inc.com
+
+# Abstract
+
+Transformer models have been applied to automatic speech recognition (ASR) successfully and yield state-of-the-art results. However, their performance is still affected by speaker mismatch between training and test data. Further finetuning a trained model with target speaker data is the most natural adaptation approach, but it takes substantial compute and may cause catastrophic forgetting for existing speakers. In this work, we propose a unified speaker adaptation approach consisting of feature adaptation and model adaptation. For feature adaptation, we employ a speaker-aware persistent memory model which generalizes better to unseen test speakers by making use of speaker i-vectors to form a persistent memory. For model adaptation, we use a novel gradual pruning method to adapt to target speakers without changing the model architecture, which, to the best of our knowledge, has never been explored in ASR. Specifically, we gradually prune less contributing parameters on the model encoder to a certain sparsity level, and use the pruned parameters for adaptation, while freezing the unpruned parameters to keep the original model performance. We conduct experiments on the Librispeech dataset. Our proposed approach brings a relative $2.74 - 6.52\%$ word error rate (WER) reduction on general speaker adaptation. On target speaker adaptation, our method outperforms the baseline with up to $20.58\%$ relative WER reduction, and surpasses the finetuning method by up to a relative $2.54\%$ . Besides, with extremely low-resource adaptation data (e.g., 1 utterance), our method improves the WER by a relative $6.53\%$ with only a few epochs of training.
+
+# 1 Introduction
+
+End-to-end models have yielded state-of-the-art performance on automatic speech recognition (ASR) in the past decade, such as the connectionist temporal classification (CTC) model (Miao et al., 2015; Graves, 2012), attention-based encoder-decoder models (Zhang et al., 2017), the recurrent neural network transducer (RNN-T) (Graves, 2012), the transformer model (Dong et al., 2018) and the conformer model (Gulati et al., 2020). However, model performance deteriorates due to speaker mismatch between training and test data. Given the target speaker, finetuning the trained model could alleviate the speaker mismatch problem to some extent, but finetuning the entire model requires large amounts of compute to be effective, and it could in turn bring catastrophic forgetting (McCloskey and Cohen, 1989) to the existing speakers.
+
+Currently, there are two lines of studies to address the speaker mismatch problem in neural network based models. One category is working on the acoustic features, i.e., either by normalizing acoustic features to be speaker-independent (Seide et al., 2011; Tomashenko and Esteve, 2018; Ochiai et al., 2018) or by introducing additional speaker-related knowledge (e.g., i-vector) to adapt the acoustic model (Saon et al., 2013; Senior and Lopez-Moreno, 2014; Pan et al., 2018; Fan et al., 2019). A summary vector of each utterance can be trained to replace speaker i-vector (Vesely et al., 2016). To adapt to acoustic variability, Kim et al. (2017) add shifting and scaling parameters in the layer-normalization layer.
+
+The other category belongs to model adaptation, i.e., training the speaker-dependent model from speaker-independent model parameters with extra adaptation data. To avoid overfitting, techniques such as L2 regularization (Liao, 2013), Kullback-Leibler divergence (Yu et al., 2013) and adversarial multitask learning (Meng et al., 2019) have been used. Because finetuning the entire model is computationally expensive, Yao et al. (2012); Siniscalchi et al. (2013); Samarakoon and Sim (2016a) only adapt specific layers or a subset of parameters. In particular, Swietojanski et al. (2016); Samarakoon and Sim (2016b); Xie et al. (2019) reparameterize each hidden unit with a speaker-dependent amplitude function in fully-connected or convolutional neural network layers. However, it is difficult to determine which model parameters to adapt for the target speaker, and choosing certain sub-layer(s) intuitively may not be optimal.
+
+In this work, we propose a unified speaker adaptation model by making use of both feature adaptation and model adaptation. For feature adaptation, we propose the speaker-aware persistent memory model to generalize better to unseen test speakers. In particular, speaker i-vectors from the training data are sampled and concatenated to speech utterances in each encoder layer, and the speaker knowledge is learnt through attention computation with speaker i-vectors. Our method learns utterance level speaker knowledge, which is more effective than learning time step dependent speaker knowledge (Fan et al., 2019) since it is more robust to various variability factors along an utterance.
+
+For model adaptation, we explore gradual pruning (Zhu and Gupta, 2018), which, to the best of our knowledge, is studied here for speaker adaptation for the first time. We gradually prune less contributing parameters on the model encoder, and then use the pruned parameters for target speaker adaptation while freezing the unpruned parameters to retain the model performance on general speaker data. In this way, our model can adapt to target speakers very fast by updating only a small percentage $(10\%)$ of encoder parameters, and it does not change the model architecture. Freezing the unpruned parameters alleviates the catastrophic forgetting problem as well.
+
+Our proposed approach brings relative 2.74- $6.52\%$ WER reduction on general speaker adaptation. On target speaker adaptation, our method outperforms the baseline with up to $20.58\%$ relative WER reduction, and surpasses the finetuning method by up to relative $2.54\%$ .
+
+# 2 Background
+
+# 2.1 Speech Transformer
+
+Speech transformer (Dong et al., 2018) is an extension of the transformer model (Vaswani et al., 2017) for ASR. We briefly introduce the speech transformer model here. For a speech input sequence, speech transformer first applies two convolution layers with stride two to reduce the hidden representation length. A sinusoidal positional encoding is added to encode position information. Both the encoder and decoder in the speech transformer model use multi-head attention networks. An attention network has three inputs, key, query and value, which are distinct transformations of an input sequence. The multi-head attention network is computed by concatenating $h$ single attention heads:
+
+$$
+\operatorname{Attention}(Q, K, V) = \operatorname{softmax}\left(\frac{QK^{T}}{\sqrt{d_{k}}}\right)V \tag{1}
+$$
+
+$$
+\operatorname{MultiHead}(Q, K, V) = \operatorname{Concat}\left(hd_{1}, \dots, hd_{h}\right)W^{O} \tag{2}
+$$
+
+$$
+hd_{i} = \operatorname{Attention}\left(QW_{i}^{Q}, KW_{i}^{K}, VW_{i}^{V}\right) \tag{3}
+$$
+
+where $h$ is the head number, $W_{i}^{Q}\in \mathbb{R}^{d_{model}\times d_{q}}$ , $W_{i}^{K}\in \mathbb{R}^{d_{model}\times d_{k}}$ , $W_{i}^{V}\in \mathbb{R}^{d_{model}\times d_{v}}$ , $W^{O}\in \mathbb{R}^{hd_{v}\times d_{model}}$ , we set $d_{k} = d_{q} = d_{v} = d_{model} / h$ .
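Equations 1-3 can be sketched directly in NumPy (sizes and random weights are illustrative; a real implementation would also batch inputs and apply masking):

```python
import numpy as np

d_model, h, t = 16, 4, 6         # illustrative model size, heads, length
d_k = d_model // h
rng = np.random.default_rng(0)

def attention(Q, K, V):          # Eq. 1: scaled dot-product attention
    s = Q @ K.T / np.sqrt(d_k)
    w = np.exp(s - s.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)   # row-wise softmax over keys
    return w @ V

def multi_head(X, Ws, W_O):      # Eqs. 2-3; self-attention: Q = K = V = X
    heads = [attention(X @ Wq, X @ Wk, X @ Wv) for Wq, Wk, Wv in Ws]
    return np.concatenate(heads, axis=-1) @ W_O

X = rng.normal(size=(t, d_model))
Ws = [tuple(rng.normal(scale=0.1, size=(d_model, d_k)) for _ in range(3))
      for _ in range(h)]          # h heads, each with (W_Q, W_K, W_V)
W_O = rng.normal(scale=0.1, size=(h * d_k, d_model))
Y = multi_head(X, Ws, W_O)        # shape (t, d_model)
```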
+
+Multi-head attention can learn input representations in different subspaces simultaneously. For the encoder, the three inputs all come from the speech input, so the attention network is called a self-attention network. For the decoder, the text input first goes through a self-attention network. To maintain autoregression in the decoder, a mask is applied to future tokens. To incorporate information from the speech input, in the next attention network the key and value vectors come from the encoder and the query vector comes from the decoder, so this attention network is called a cross-attention network. Layer normalization and residual connections are applied before and after each multi-head attention network. Afterwards, there is a position-wise feedforward network with rectified linear unit (ReLU) activation:
+
+$$
+\operatorname{FFN}(x) = \max\left(0, xW_{1} + b_{1}\right)W_{2} + b_{2} \tag{4}
+$$
+
+where $W_{1}\in \mathbb{R}^{d_{model}\times d_{ff}}$ , $W_{2}\in \mathbb{R}^{d_{ff}\times d_{model}}$ , and the biases $b_{1}\in \mathbb{R}^{d_{ff}}$ , $b_{2}\in \mathbb{R}^{d_{model}}$ .
+
+Self-attention network and position-wise feedforward network form an encoder layer. There is an additional cross-attention network in a decoder layer. There are $N_{e}$ encoder layers and $N_{d}$ decoder layers in total.
+
+# 2.2 Speaker Adaptation
+
+Speaker adaptation arises due to speaker mismatch between training and test data. It aims to adapt the model to a target speaker, and is a critical component in HMM-based models (Keith and Matthias, 2005; Furui, 1980; Gauvain and Lee, 1994; Kuhn et al., 2000). For neural network based models, many approaches have been developed as well, as discussed briefly in Section 1.
+
+Adapting an ASR model is a challenging task, given that an ASR model is large and complex with a great number of parameters to update. Finetuning the entire model takes significant computational resources to reach optimal performance, and it potentially causes the catastrophic forgetting problem (McCloskey and Cohen, 1989; Kirkpatrick et al., 2017): when model parameters trained for existing speakers are adapted for a target speaker, knowledge learnt previously is lost.
+
+# 2.3 I-vector
+
+An i-vector is a low-dimensional vector that is dependent on the speaker and channel. Its dimension is fixed regardless of how long the utterance is. It is extracted with a data-driven approach, mapping the frames of an utterance to a low-dimensional vector space using a factor analysis technique (Dehak et al., 2011a). The resulting system is based on Support Vector Machines or directly uses the cosine distance value as the final decision score (Dehak et al., 2011b). The i-vector was initially invented for audio classification and identification, but recently it has been used for speaker adaptation as well (Saon et al., 2013; Dehak et al., 2011b; Karafiat et al., 2011).
+
+# 3 Proposed Method
+
+We propose an efficient speaker adaptation model by making use of both feature adaptation and model adaptation. For feature adaptation, we embed speaker knowledge, represented by a number of fixed speaker i-vectors, into each input utterance (Zhao et al., 2020). This aims to capture speaker information through attention computation between each utterance and the speaker i-vectors. For model adaptation, we employ an effective method that adapts to the target speaker quickly without sacrificing performance on existing speakers. In particular, we prune the model gradually and finetune a small subset of parameters to be speaker-specific.
+
+# 3.1 Speaker-Aware Persistent Memory for Feature Adaptation
+
+Figure 1: Speaker-aware persistent memory model. $M_{k}$ and $M_{v}$ from speaker i-vectors are concatenated to the key and value vectors.
+
+The speaker-aware persistent memory model learns speaker knowledge from i-vectors. We first randomly sample $N$ speaker i-vectors $m_{1},\ldots ,m_{N}\in \mathbb{R}^{d_{k}}$ , which form the speaker space (Gales, 1998; Yu and Gales, 2006). We assume that linear combinations of the speaker space are enough to cover the speaker information space, i.e., any unknown speaker not seen in the training data can be represented approximately by the sampled i-vectors from the training data. We name the learned transformations of the speaker space the persistent memory vectors $M_{k}$ and $M_{v}$ :
+
+$$
+M_{k} = \operatorname{Concat}\left([U_{k}m_{1}, \dots, U_{k}m_{N}]\right) \in \mathbb{R}^{N \times d_{k}} \tag{5}
+$$
+
+$$
+M_{v} = \operatorname{Concat}\left([U_{v}m_{1}, \dots, U_{v}m_{N}]\right) \in \mathbb{R}^{N \times d_{k}} \tag{6}
+$$
+
+where $U_{k}\in \mathbb{R}^{d_{k}\times d_{k}}$ , $U_{v}\in \mathbb{R}^{d_{k}\times d_{k}}$ . Only $U_{k}$ and $U_{v}$ matrices are learnable while sampled i-vectors are fixed in this method.
+
+With the persistent memory vectors, we concatenate them respectively to the input vectors of self-attention network $X = [x_{1},\dots,x_{t}]$ to be the new key and value vectors. Attention network thus captures speaker-specific knowledge through attention computation between each utterance and persistent memory vectors as Eq. 9:
+
+$$
+K _ {m} = \left[ k _ {1}, \dots , k _ {t + N} \right] = \left(\left[ W _ {k} x _ {1}, \dots , W _ {k} x _ {t} \right], M _ {k}\right) \tag {7}
+$$
+
+$$
+V _ {m} = \left[ v _ {1}, \dots , v _ {t + N} \right] = \left(\left[ W _ {v} x _ {1}, \dots , W _ {v} x _ {t} \right], M _ {v}\right) \tag {8}
+$$
+
+$$
+\operatorname{Attention}(Q, K_{m}, V_{m}) = \operatorname{softmax}\left(\frac{QK_{m}^{T}}{\sqrt{d_{k}}}\right)V_{m} \tag{9}
+$$
+
+Since $M_{k}$ and $M_{v}$ are shared across all layers, they form the persistent memory. Given that persistent memory is meant for capturing speaker knowledge, we name it as speaker-aware persistent memory. The overall framework of speaker-aware persistent memory model is shown in Figure 1.
+
+Since our method aims at learning any speaker information from the speaker space, it effectively addresses the problem of having unseen speakers in the test data. Furthermore, using static i-vectors saves the effort to compute the i-vectors of all speakers in the training data. Besides, the attention computation (Eq. 9) with persistent memory vectors is taken over the entire utterance $x_{1}$ to $x_{t}$ , so each input speech time step takes part in extracting speaker information. This holistic consideration is more effective than Fan et al. (2019) who compute time step dependent speaker representations, which may be more susceptible to various variability factors along an utterance such as speaking rhythm.
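Equations 5-9 can be sketched in NumPy as follows (sizes and random initializations are illustrative; in the model, only $U_k$ and $U_v$ are learned while the sampled i-vectors stay fixed):

```python
import numpy as np

d_k, t, N = 8, 5, 3                       # illustrative sizes
rng = np.random.default_rng(0)

X = rng.normal(size=(t, d_k))             # utterance hidden states x_1..x_t
iv = rng.normal(size=(N, d_k))            # N fixed, sampled speaker i-vectors
U_k = rng.normal(size=(d_k, d_k))         # learnable transforms (Eqs. 5-6)
U_v = rng.normal(size=(d_k, d_k))
W_q = rng.normal(size=(d_k, d_k))
W_k = rng.normal(size=(d_k, d_k))
W_v = rng.normal(size=(d_k, d_k))

M_k = iv @ U_k.T                          # Eq. 5: persistent memory (keys)
M_v = iv @ U_v.T                          # Eq. 6: persistent memory (values)
K_m = np.concatenate([X @ W_k.T, M_k])    # Eq. 7: shape (t + N, d_k)
V_m = np.concatenate([X @ W_v.T, M_v])    # Eq. 8
Q = X @ W_q.T
s = Q @ K_m.T / np.sqrt(d_k)              # Eq. 9: every time step also
w = np.exp(s - s.max(axis=-1, keepdims=True))
w /= w.sum(axis=-1, keepdims=True)        # attends over the speaker memory
out = w @ V_m                             # shape (t, d_k)
```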
+
+# 3.2 Gradual Pruning for Model Adaptation
+
+The speaker-aware persistent memory method discussed above is for general speaker adaptation without knowing target speaker profile. If the target speaker data is available, finetuning the trained model with target speaker data could result in catastrophic forgetting problem as detailed in Section 2.2. To address this, we take advantage of an effective approach to adapt to target speaker in a fast manner, and retain the model performance on the general speaker data at the same time.
+
+Given an input speech $y = \{y_{1},\dots,y_{n}\}$ and output text $z = \{z_{1},\dots,z_{m}\}$ , ASR models the conditional probability of the output text over the input speech as follows:
+
+$$
+P (z | y; \theta) = \prod_ {i = 1} ^ {m} P \left(z _ {i} | y, z _ {< i}; \theta\right) \tag {10}
+$$
+
+where $\theta$ represents the model parameters. Given a training dataset $D = \{y_{D}^{j},z_{D}^{j}\}_{j = 1}^{L}$ , $\theta$ is trained to maximize the following log-likelihood objective:
+
+$$
+\mathcal{J}(\theta) = \underset{\theta}{\operatorname{argmax}} \sum_{j = 1}^{L} \log P\left(z_{D}^{j} \mid y_{D}^{j}; \theta\right) \tag{11}
+$$
+
+Figure 2: WER results on Librispeech test data with different pruning rates.
+
+Given the target speaker dataset $D_{t} = \{y_{D_{t}}^{j},z_{D_{t}}^{j}\}_{j = 1}^{L_{t}}$ , directly finetuning the trained model means continuing to train the model to maximize the log-likelihood:
+
+$$
+\mathcal{J}\left(\theta_{D_{t}}\right) = \underset{\theta_{D_{t}}}{\operatorname{argmax}} \sum_{j = 1}^{L_{t}} \log P\left(z_{D_{t}}^{j} \mid y_{D_{t}}^{j}; \theta_{D_{t}}\right) \tag{12}
+$$
+
+where $\theta_{D_t}$ is initialized with the trained parameters $\theta$ in Eq. 11.
+
+As shown in many recent studies (Frankle and Carbin, 2019; Zhu and Gupta, 2018; Liu et al., 2019), not all parameters in a neural network model contribute to the training objective. Pruning the redundant parameters leads to negligible performance degradation (Li et al., 2017; Han et al., 2015) or may even outperform the original model (Zhu and Gupta, 2018) due to better generalization.
+
+Our experiments in Figure 2 show that up to $50\%$ of encoder parameters can be pruned in ASR with negligible performance degradation. Therefore, we first prune the model gradually with the training data to a predetermined sparsity level by zeroing out low-magnitude parameters every 10k training steps, i.e., ultimately retaining only a certain percentage of high-magnitude unpruned parameters $\theta_{UP}$ . This is to unearth the sub-network whose performance matches that of the original model.
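A gradual magnitude-pruning loop can be sketched as follows; we assume the cubic ramp-up of Zhu and Gupta (2018) with zero initial sparsity and per-matrix magnitude thresholding, both illustrative choices rather than the paper's exact recipe:

```python
import numpy as np

def sparsity_at(step, final_sparsity, total_steps):
    """Cubic schedule (Zhu & Gupta, 2018, with initial sparsity 0):
    sparsity ramps quickly at first, then flattens toward the target."""
    frac = min(step / total_steps, 1.0)
    return final_sparsity * (1.0 - (1.0 - frac) ** 3)

def prune_mask(W, sparsity):
    """Zero out the lowest-magnitude fraction of W; 1 marks a kept weight."""
    k = int(sparsity * W.size)
    if k == 0:
        return np.ones_like(W)
    thresh = np.partition(np.abs(W).ravel(), k - 1)[k - 1]
    return (np.abs(W) > thresh).astype(W.dtype)

# Toy "encoder weight matrix", re-pruned every 10k steps up to 50% sparsity:
W = np.random.default_rng(0).normal(size=(20, 20))
for step in range(0, 50_001, 10_000):
    W = W * prune_mask(W, sparsity_at(step, 0.5, 50_000))
```

Already-zeroed weights always fall below the next threshold, so the pruned set only grows from one re-pruning to the next.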
+
+Different from Liang et al. (2021), who prune a well-trained model, we train and prune concurrently, as shown in Figure 3(b) (after a warm-up training phase), to reduce the total number of training steps and thus save computational resources. Because speech and speaker information is learnt through the encoder of an end-to-end ASR model, we only prune encoder parameters, including the embedding network, self-attention networks and feedforward networks in all encoder layers.
+
+Figure 3: Illustration of gradual pruning method for speaker adaptation. After network parameters are initialized (a), we train and prune the model at the same time for $n$ epochs (b), and lastly finetune the pruned parameters with target speaker data (c). Light gray connections in (c) mean corresponding parameters are frozen, while blue ones indicate parameters are finetuned, which are the parameters pruned at the earlier stage (b).
+
+Afterwards, we keep the informative sub-network untouched by freezing the unpruned parameters $\theta_{UP}$ to retain performance on existing speakers (light gray connections in Figure 3(c)), and only finetune the pruned free parameters $\theta_{P}$ for target speaker adaptation (blue connections in Figure 3(c)). The training objective is:
+
+$$
+\mathcal{J}\left(\theta_{P}\right) = \underset{\theta_{P}}{\operatorname{argmax}} \sum_{j=1}^{L_t} \log P\left(z_{D_t}^{j} \mid y_{D_t}^{j}; \theta_{UP}, \theta_{P}\right) \tag{13}
+$$
+
+where $\theta_{UP}$ is frozen and $\theta_P$ is updated. Since the informative sub-network is already capable of performing the ASR task very well, we believe further finetuning the free parameters with target speaker data adds value to the speaker-specific model.
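One optimizer step of Eq. 13 can be sketched as a masked update: gradients are applied only where pruning freed parameters, while $\theta_{UP}$ stays frozen. This is a numpy illustration with invented values, not the actual training code:

```python
import numpy as np

def adapt_step(theta, grad, keep_mask, lr=0.1):
    """One SGD step of the adaptation objective: only the pruned 'free'
    parameters theta_P (keep_mask == 0) are updated; the retained
    sub-network theta_UP (keep_mask == 1) stays frozen."""
    free = (keep_mask == 0)          # positions freed by pruning
    return theta - lr * grad * free

theta = np.array([0.9, 0.0, 0.0, -0.7])  # pruned slots are currently zero
keep  = np.array([1, 0, 0, 1])           # 1 = retained/frozen, 0 = trainable
grad  = np.array([0.5, -1.0, 2.0, 0.5])
theta_new = adapt_step(theta, grad, keep)
```

Only the two pruned slots move; the frozen values 0.9 and -0.7 are preserved, which is what protects performance on existing speakers.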
+
+Our method does not change the model architecture, unlike approaches that attach an additional adapter module (Ding et al., 2020). Besides, we only need to finetune a small number of parameters compared to finetuning the entire model. Fixing the informative sub-network lets our model retain past knowledge without catastrophic forgetting. It also prevents the model, to some extent, from easily overfitting on low-resource target speaker data.
+
+# 4 Experiments
+
+In this section, we present our experiments using the proposed speaker-aware persistent memory model and the gradual pruning method.
+
+# 4.1 Datasets
+
+We conduct experiments on the open-source LibriSpeech dataset (Panayotov et al., 2015) to confirm the effectiveness of the proposed model. LibriSpeech consists of $16\mathrm{kHz}$ read English speech from audiobooks. We use the given train/development/test splits. Test_clean data is clean speech, while Test_other data contains noisy speech. See Appendix A.1 for the statistics of the LibriSpeech data used in our experiments.
+
+# 4.2 Training Setup
+
+We use PyTorch and the ESPnet toolkit (Watanabe et al., 2018) for our experiments, and train the model for 100 epochs ($n = 100$ in Figure 3(b)). We use the best set of hyperparameters tested by Watanabe et al. (2018) for the transformer model without further tuning, and pre-process the data following the ESPnet toolkit. The total number of model parameters is 31 million. Input features are 80-dimensional filterbanks with pitch, computed per frame with a $25\mathrm{ms}$ window shifted every $10\mathrm{ms}$. The acoustic features are mean and variance normalized. We exclude utterances longer than 3000 frames or 400 characters to keep memory manageable. For joint decoding, the coefficients are 0.3 for CTC and 0.7 for attention. The convolutional frontend before the transformer encoder consists of two 2D convolutional
+
+| Model | Test_clean | Test_other |
| --- | --- | --- |
| End-to-end (E2E) (Lüscher et al., 2019) | 14.7 | 40.8 |
| E2E with augmented data (Bérard et al., 2018) | 15.1 | - |
| Local prior matching (Hsu et al., 2020) | 14.85 | 39.95 |
| LAS (Irie et al., 2019) | 12.9 | 35.5 |
| Self-training (Kahn et al., 2020) | 8.06 | 30.44 |
| Baseline | 9.2 | 21.9 |
+
+neural network layers with filter size (3,2) and stride 2, each followed by a ReLU activation. The attention dimension $d_{model}$ is 256, and the feedforward network hidden dimension $d_{ff}$ is 2048. In the transformer structure, the number of attention heads $h$ is 4, with $d_k = d_q = d_v = 64$ for each head; the number of encoder layers $N_e$ is 12; the number of decoder layers $N_d$ is 6; the initial learning rate is 5.0; and the encoder and decoder dropout rate is 0.1. The input samples are shuffled randomly and trained with batch size 12. We use the unigram sub-word algorithm with the vocabulary size capped at 5000. For i-vector generation, we follow the SRE08 recipe in the Kaldi toolkit (Povey et al., 2011) on the training data. The extracted i-vectors have dimension 100, and are transformed to the same dimension as the speech vectors for concatenation. Our baseline model is competitive with the other models listed in Table 1.
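For concreteness, the hyperparameters above can be gathered into one configuration sketch. The key names below are illustrative (loosely ESPnet-style), not the toolkit's exact options:

```python
# Illustrative training configuration collected from the text above;
# key names are our own shorthand, not actual ESPnet config keys.
config = {
    "adim": 256,         # attention dimension d_model
    "aheads": 4,         # attention heads; d_k = d_q = d_v = 64 each
    "eunits": 2048,      # feed-forward hidden dimension d_ff
    "elayers": 12,       # encoder layers N_e
    "dlayers": 6,        # decoder layers N_d
    "dropout": 0.1,      # encoder and decoder dropout
    "lr": 5.0,           # initial learning-rate value
    "batch_size": 12,
    "ctc_weight": 0.3,   # joint decoding: 0.3 CTC + 0.7 attention
    "vocab_size": 5000,  # unigram sub-word units
    "ivector_dim": 100,  # raw i-vector dimension before projection
    "epochs": 100,
}
head_dim = config["adim"] // config["aheads"]  # matches d_k = 64 above
```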
+
+# 4.3 Experimental Results
+
+# 4.3.1 Adaptation for General Speakers
+
+We first test adaptation for general speakers without knowing the target speaker profile. The speaker-aware persistent memory model introduced in Section 3.1 achieves this objective. Here we omit the hyperparameter tuning part and directly use the best hyperparameters tested by Zhao et al. (2020), including the number of speaker i-vectors in the speaker space and the number of layers applied with speaker-aware persistent memory. We randomly sample 64 speaker i-vectors and apply them on all the encoder layers in the speech transformer. 64 i-vectors were tested to be a good choice to provide diverse speaker information (Zhao et al., 2020), and applying them on all encoder layers helps capture speaker knowledge from both low-level phonetic features and high-level global information. Table 2 shows that our method brings a $2.74 - 6.52\%$ relative improvement over the baseline, and surpasses Fan et al. (2019), who also use speaker i-vectors.
+
+Table 1: WER results of speech recognition models on LibriSpeech 100h.
+
+| Model | Test_clean | Test_other |
| --- | --- | --- |
| Baseline | 9.2 | 21.9 |
| You et al. (2019) | 8.9 | 21.6 |
| Fan et al. (2019) | 8.9 | 21.4 |
| Ours | 8.6 | 21.3 |
+
+Table 2: State-of-the-art results of different speaker adaptation algorithms on LibriSpeech test data.
+
+Furthermore, we also compare our model with the first persistent memory model used in ASR (You et al., 2019), in which persistent memory vectors are randomly initialized and meant to capture general knowledge. Unlike theirs, our model is designed to address the speaker mismatch issue. Our method achieves the best results.
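The speaker-aware persistent memory idea can be illustrated with a minimal numpy sketch in which projected speaker i-vectors are appended to the self-attention keys and values, following the general design of Zhao et al. (2020); all names, shapes and the single-head simplification are our own assumptions, not the paper's implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def speaker_aware_attention(Q, K, V, M):
    """Single-head attention whose keys/values are augmented with a
    persistent speaker memory M (n_spk x d), so every speech frame can
    also attend to the sampled speaker vectors."""
    d = Q.shape[-1]
    K_aug = np.concatenate([K, M], axis=0)
    V_aug = np.concatenate([V, M], axis=0)
    scores = Q @ K_aug.T / np.sqrt(d)
    return softmax(scores) @ V_aug

rng = np.random.default_rng(0)
T, d, n_spk = 5, 8, 64                 # frames, model dim, sampled i-vectors
Q = K = V = rng.standard_normal((T, d))
M = rng.standard_normal((n_spk, d))    # i-vectors projected to model dim
out = speaker_aware_attention(Q, K, V, M)
```

The output keeps the frame-level shape while mixing in speaker information from the 64 memory slots.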
+
+# 4.3.2 Adaptation for Target Speaker
+
+If the target speaker profile is known beforehand, the gradual pruning method discussed in Section 3.2 can adapt the model to the target speaker. Directly finetuning the entire model takes high computation resources, since all model parameters are updated, and can overfit easily if the amount of target speaker data is limited. We are interested in the performance of the gradual pruning method especially on low-resource data, as well as in how much it alleviates the catastrophic forgetting problem. Therefore, we randomly choose a speaker from the LibriSpeech Test_other data as the target speaker, and select only 10 of that speaker's utterances as training data; the remaining utterances serve as test data. We repeat this four times and report the average performance to assess the generalizability of the proposed approach. The average baseline WER over the four speakers is 20.5, slightly smaller than the average WER of Test_other speakers (21.9), so further improving the target speaker performance is a bit more challenging. The pruning rate is set to $10\%$ here. We compare the performance of 1) Finetune: directly finetuning the entire model as in Eq. 12, 2) I-vec: the speaker-aware persistent memory method that adds i-vectors, 3) Pruning: gradual pruning, 4) Pruning+I-vec: combining the proposed feature adaptation and model adaptation methods.
+
+Figure 4: WER results of target speaker (a) and non-target speakers (b). Finetune: directly finetuning the entire model as Eq. 12. I-vec: speaker-aware persistent memory method proposed in Section 3.1 by adding i-vectors. Pruning: gradual pruning proposed in Section 3.2. Pruning+I-vec: combining the proposed feature adaptation and model adaptation methods. The dotted lines are second order polynomial trendlines.
+
+For results on the target speaker in Figure 4(a), finetuning works better than the baseline. Adding i-vectors has the highest WER initially, and its performance is worse than simply finetuning the trained model after 20 epochs. We believe the speaker-aware persistent memory method works better for general speaker adaptation, given that the sampled i-vectors form a speaker space meant to capture any speaker's knowledge; it is not designed to adapt to specific speakers. Using the gradual pruning method alone yields lower WER than finetuning at the initial stage, but surprisingly it overfits more than the finetuning method after 20 epochs. More detailed analysis is needed and we leave it to future work. Lastly, we combine the feature adaptation and model adaptation methods, which achieves our best result. It outperforms the baseline with up to a $20.58\%$ relative WER reduction, and surpasses the finetuning method by up to a relative $2.54\%$ . We see that the proposed feature adaptation and model adaptation methods complement each other, as the combined model surpasses each individual one.
+
+We also analyze the performance on the remaining non-target speaker data to see if catastrophic forgetting happens. From Figure 4(b), all target speaker adapted models perform slightly worse than the baseline, which is expected. Combining feature adaptation and model adaptation alleviates the catastrophic forgetting problem effectively: it generally outperforms finetuning in Figure 4(b).
+
+# 5 Analysis
+
+In this section, we revisit our approach to reveal more details and explore the effectiveness of the gradual pruning method in combination with the speaker-aware persistent memory model.
+
+# 5.1 Pruning Rate
+
+We first test different pruning rates on the encoder. Results are shown in Figure 5. A lower pruning rate keeps more parameters for the general speaker data, but leaves less learning capacity for the target speaker; it is more suitable for simple adaptation tasks. A higher pruning rate generates a sparser network and is more flexible for speaker adaptation, but it retains fewer original model parameters and thus forgets more of the general speaker data. As seen from Figure 5, pruning $10\%$ of encoder parameters achieves the best result.
+
+# 5.2 Gradual Pruning vs One-time Pruning
+
+We use the gradual pruning method (Zhu and Gupta, 2018) to prune towards the target sparsity every 10k training steps. For comparison, we also test one-time pruning at the initial/middle/final stage of the overall training. We train for 100 epochs, and initial/middle/final stage pruning is done at epoch 0/50/100 respectively. Gradual pruning and one-time pruning reach the same sparsity level after training. Here we use either gradual or one-time pruning at different stages during training, and show the best results after finetuning for 15 epochs. Table 3 shows that gradual pruning
+
+
+Figure 5: WER results of target speaker with different pruning rates. The dotted lines are the second order polynomial trendlines.
+
+| Model | Target Speaker |
| --- | --- |
| Baseline | 19.9 |
| One-time pruning at initial stage | 16.2 |
| One-time pruning at middle stage | 17.6 |
| One-time pruning at final stage | 16.0 |
| Gradual pruning | 15.9 |
+works better than one-time pruning, be it at the initial, middle or final stage of training. Compared with one-time pruning, gradual pruning can learn and prune at the same time. In particular, gradual pruning follows a train-prune cycle, and is capable of iteratively learning the unpruned parameters after less contributing parameters are pruned. For one-time pruning, pruning at an earlier stage has the advantage of letting the model learn the unpruned parameters based on the pruned ones during the remaining training, but it risks pruning important parameters since the model is not yet well learnt; the converse holds for pruning late. Hence, gradual pruning works best.
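Zhu and Gupta (2018) ramp sparsity with a cubic schedule; the sketch below assumes that schedule with illustrative step sizes, since the text only states that pruning happens every 10k steps:

```python
def sparsity_at(step, s_final, s_init=0.0, t0=0, n=10, dt=10_000):
    """Cubic sparsity ramp (Zhu & Gupta, 2018): sparsity rises quickly
    at first, then levels off as it approaches the target s_final."""
    t_end = t0 + n * dt
    if step < t0:
        return s_init
    if step >= t_end:
        return s_final
    frac = (step - t0) / (n * dt)
    return s_final + (s_init - s_final) * (1.0 - frac) ** 3

# e.g. ramp to 10% sparsity over 10 pruning events spaced 10k steps apart
halfway = sparsity_at(50_000, s_final=0.10)   # already 8.75% pruned
```

The early steep rise is what lets the model re-learn around pruned weights while plenty of training remains, matching the train-prune cycle described above.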
+
+# 5.3 Extremely Low-resource Adaptation Data
+
+Lastly, we examine extremely low-resource adaptation data scenarios. We reduce the amount of adaptation data and compare the performance with the baseline, where no adaptation is performed. The characteristics of the selected adaptation data are listed in Table 4.
+
+Table 3: WER results of gradual pruning versus one-time pruning. We train for 100 epochs, and initial/middle/final stage pruning is done at epoch 0/50/100 respectively.
+
+| Utterance(s) | 1 | 5 | 10 |
| --- | --- | --- | --- |
| Total No. of Words | 32 | 104 | 174 |
| Total Duration (s) | 15.00 | 40.00 | 67.17 |
+
+Table 4: Characteristics of utterances selected as the extremely low-resource adaptation data.
+
+Figure 6: WER results of target speaker with different amounts of adaptation data. The dotted lines are second order polynomial trendlines.
+
+From Figure 6, when the amount of adaptation data is reduced from 10 utterances to 5, the results at the initial training stage are similar to those with 10 utterances, and can outperform the baseline by up to a relative $18.59\%$ . With less adaptation data, the model overfits much faster, especially with only 1 utterance for adaptation. However, even with only 1 utterance, the model can surpass the baseline by up to a relative $6.53\%$ with only 5 epochs of training. Therefore, even with extremely low-resource adaptation data such as a single utterance, our method achieves fast and effective adaptation.
+
+# 6 Conclusion
+
+In this paper, we have proposed a unified speaker adaptation approach consisting of feature adaptation and model adaptation. The speaker-aware persistent memory model makes use of speaker i-vectors to adapt at the feature level, and the gradual pruning approach retrieves a subset of model parameters for adaptation at the model level. Gradual pruning is found to be better than one-time pruning because it can iteratively learn based on the pruned parameters. It also alleviates the catastrophic forgetting problem by retaining a sub-network whose performance matches the original network. We find that our proposed method is effective in both general speaker adaptation and specific target speaker adaptation. In particular, our method brings a relative $2.74 - 6.52\%$ WER reduction on general speaker adaptation, and outperforms the baseline with up to a $20.58\%$ relative WER reduction on target speaker adaptation. Even with extremely low-resource adaptation data, our method brings a $6.53\%$ relative improvement with only a few training epochs. In the future, we are interested in the overfitting issue with low-resource data, as well as multi-speaker adaptation with our method.
+
+# References
+
+Alexandre Bérard, Laurent Besacier, Ali Can Kocabiyikoglu, and Olivier Pietquin. 2018. End-to-end automatic speech translation of audiobooks. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE.
+Najim Dehak, Pedro A. Torres-Carrasquillo, Douglas Reynolds, and Reda Dehak. 2011a. Language recognition via i-vectors and dimensionality reduction. In INTERSPEECH 2011 - $12^{th}$ Annual Conference of the International Speech Communication Association, pages 857-860.
+Najim Dehak, Patrick J Kenny, Reda Dehak, Pierre Dumouchel, and Pierre Ouellet. 2011b. Front-end factor analysis for speaker verification. IEEE Transactions on Audio, Speech, and Language Processing, 19(4):788-798.
+Fenglin Ding, Wu Guo, Lirong Dai, and Jun Du. 2020. Attention-based gated scaling adaptive acoustic model for CTC-based speech recognition. In 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
+Linhao Dong, Shuang Xu, and Bo Xu. 2018. Speech-transformer: A no-recurrence sequence-to-sequence model for speech recognition. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5884-5888. IEEE.
+Zhiyun Fan, Jie Li, Shiyu Zhou, and Bo Xu. 2019. Speaker-aware speech-transformer. In 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pages 222-229.
+Jonathan Frankle and Michael Carbin. 2019. The lottery ticket hypothesis: Finding sparse, trainable neural networks. In 2019 International Conference on Learning Representations (ICLR).
+S. Furui. 1980. A training procedure for isolated word recognition systems. IEEE Transactions on Acoustics, Speech, and Signal Processing, 28(2):129-136.
+Mark J. F. Gales. 1998. Cluster adaptive training for speech recognition. Int. Conf. Speech Language Processing, 5:1783-1786.
+J.-L. Gauvain and Chin-Hui Lee. 1994. Maximum a posteriori estimation for multivariate gaussian mixture observations of markov chains. IEEE Transactions on Audio, Speech, and Language Processing, 2(2):291-298.
+
+Alex Graves. 2012. Sequence transduction with recurrent neural networks. In International Conference of Machine Learning (ICML), pages 235-242.
+Anmol Gulati, James Qin, Chung-Cheng Chiu, Niki Parmar, Yu Zhang, Jiahui Yu, Wei Han, Shibo Wang, Zhengdong Zhang, Yonghui Wu, and Ruoming Pang. 2020. Conformer: Convolution-augmented transformer for speech recognition. In *INTERSPEECH* 2020 - 21st Annual Conference of the International Speech Communication Association, pages 5036-5040.
+Song Han, Jeff Pool, John Tran, and William J. Dally. 2015. Learning both weights and connections for efficient neural networks. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems.
+Wei-Ning Hsu, Ann Lee, Gabriel Synnaeve, and Awni Hannun. 2020. Self-supervised speech recognition via local prior matching. arXiv preprint arXiv:2002.10336.
+Kazuki Irie, Rohit Prabhavalkar, Anjuli Kannan, Antoine Bruguier, David Rybach, and Patrick Nguyen. 2019. On the choice of modeling unit for sequence-to-sequence speech recognition. In *INTERSPEECH* 2019 - 20th Annual Conference of the International Speech Communication Association.
+Jacob Kahn, Ann Lee, and Awni Hannun. 2020. Self-training for end-to-end speech recognition. In 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE.
+Martin Karafiat, Lukas Burget, Pavel Matejka, Ondrej Glembek, and Jan Cernocky. 2011. iVector-based discriminative adaptation for automatic speech recognition. In 2011 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pages 152-157.
+Johnson Keith and Sjerps Matthias. 2005. Speaker normalization in speech perception. In The Handbook of Speech Perception, pages 363-389. Wiley Online Library.
+Taesup Kim, Inchul Song, and Yoshua Bengio. 2017. Dynamic layer normalization for adaptive neural acoustic modeling in speech recognition. In INTER-SPEECH 2017 - $18^{\text{th}}$ Annual Conference of the International Speech Communication Association.
+James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, Demis Hassabis, Claudia Clopath, Dharshan Kumaran, and Raia Hadsell. 2017. Overcoming catastrophic forgetting in neural networks. In Proceedings of the national academy of sciences, pages 3521-3526.
+Roland Kuhn, Jean-Claude Junqua, Patrick Nguyen, and Nancy Niedzielski. 2000. Rapid speaker adaptation in eigenvoice space. IEEE Transactions on
+
+Audio, Speech, and Language Processing, 8(6):695-707.
+Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, and Hans Peter Graf. 2017. Pruning filters for efficient convnets. In 2017 International Conference on Learning Representations (ICLR).
+Jianze Liang, Chengqi Zhao, Mingxuan Wang, Xipeng Qiu, and Lei Li. 2021. Finding sparse structures for domain specific neural machine translation. In The Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021.
+Hank Liao. 2013. Speaker adaptation of context dependent deep neural networks. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7947-7951.
+Zhuang Liu, Mingjie Sun, Tinghui Zhou, Gao Huang, and Trevor Darrell. 2019. Rethinking the value of network pruning. In 2019 International Conference on Learning Representations (ICLR).
+Christoph Lüscher, Eugen Beck, Kazuki Irie, Markus Kitza, Wilfried Michel, Albert Zeyer, Ralf Schlüter, and Hermann Ney. 2019. RWTH ASR systems for LibriSpeech: Hybrid vs attention. In *INTERSPEECH* 2019 - 20th Annual Conference of the International Speech Communication Association, pages 231-235.
+Michael McCloskey and Neal J. Cohen. 1989. Catastrophic interference in connectionist networks: The sequential learning problem. Psychology of Learning and Motivation, 24:109-165.
+Zhong Meng, Jinyu Li, and Yifan Gong. 2019. Adversarial speaker adaptation. In 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5721-5725.
+Yajie Miao, Mohammad Gowayyed, and Florian Metze. 2015. EESEN: End-to-end speech recognition using deep rnn models and wfst-based decoding. In 2015 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pages 167-174.
+Tsubasa Ochiai, Shinji Watanabe, Shigeru Katagiri, Takaaki Hori, and John Hershey. 2018. Speaker adaptation for multichannel end-to-end speech recognition. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6707-6711. IEEE.
+Jia Pan, Diyuan Liu, Genshun Wan, Jun Du, Qingfeng Liu, and Zhongfu Ye. 2018. Online speaker adaptation for LVCSR based on attention mechanism. 2018 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), pages 183-186.
+Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. 2015. Librispeech: An ASR corpus based on public domain audio books. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE.
+
+Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek, Nagendra Goel, Mirko Hannemann, Petr Motlicek, Yanmin Qian, Petr Schwarz, Jan Silovsky, Georg Stemmer, and Karel Vesely. 2011. The kaldi speech recognition toolkit. In 2011 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU).
+Lahiru Samarakoon and Khe Chai Sim. 2016a. Factorized hidden layer adaptation for deep neural network based acoustic modeling. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 24(12):2241-2250.
+Lahiru Samarakoon and Khe Chai Sim. 2016b. Subspace LHUC for fast adaptation of deep neural network acoustic models. In INTERSPEECH 2016 - $17^{th}$ Annual Conference of the International Speech Communication Association, pages 1593-1597.
+George Saon, Hagen Soltau, David Nahamoo, and Michael Picheny. 2013. Speaker adaptation of neural network acoustic models using i-vectors. In 2013 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU).
+Frank Seide, Gang Li, Xie Chen, and Dong Yu. 2011. Feature engineering in context-dependent deep neural networks for conversational speech transcription. In 2011 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU).
+Andrew Senior and Ignacio Lopez-Moreno. 2014. Improving DNN speaker independence with i-vector inputs. In 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
+Sabato Marco Siniscalchi, Jinyu Li, and Chin-Hui Lee. 2013. Hermitian polynomial for speaker adaptation of connectionist speech recognition systems. IEEE Transactions on Audio, Speech, and Language Processing, 21(10):2152-2161.
+Pawel Swietojanski, Jinyu Li, and Steve Renals. 2016. Learning hidden unit contributions for unsupervised acoustic model adaptation. IEEE/ACM Transactions on Audio, Speech and Language Processing, 24(8):1450-1463.
+Natalia Tomashenko and Yannick Esteve. 2018. Evaluation of feature-space speaker adaptation for end-to-end acoustic models. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018).
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008.
+Karel Vesely, Shinji Watanabe, Katerina Zmolikova, Martin Karafiat, Lukas Burget, and Jan Honza Cernocky. 2016. Sequence summarizing neural network for speaker adaptation. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5315-5319. IEEE.
+
+Shinji Watanabe, Takaaki Hori, Shigeki Karita, Tomoki Hayashi, Jiro Nishitoba, Yuya Unno, Nelson Enrique Yalta Soplin, Jahn Heymann, Matthew Wiesner, Nanxin Chen, Adithya Renduchintala, and Tsubasa Ochiai. 2018. Espnet: End-to-end speech processing toolkit. In *INTERSPEECH* 2018 - 19th Annual Conference of the International Speech Communication Association, pages 2207-2211.
+
+Xurong Xie, Xunying Liu, Tan Lee, Shoukang Hu, and Lan Wang. 2019. BLHUC: Bayesian learning of hidden unit contributions for deep neural network speaker adaptation. In 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5711-5715.
+
+Kaisheng Yao, Dong Yu, Frank Seide, Hang Su, Li Deng, and Yifan Gong. 2012. Adaptation of context-dependent deep neural networks for automatic speech recognition. In 2012 IEEE Spoken Language Technology Workshop (SLT). IEEE.
+
+Zhao You, Dan Su, Jie Chen, Chao Weng, and Dong Yu. 2019. DFSMN-SAN with persistent memory model for automatic speech recognition. arXiv preprint arXiv:1910.13282.
+
+Dong Yu, Kaisheng Yao, Hang Su, Gang Li, and Frank Seide. 2013. KL-divergence regularized deep neural network adaptation for improved large vocabulary speech recognition. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7893-7897.
+
+Kai Yu and Mark J. F. Gales. 2006. Discriminative cluster adaptive training. IEEE Transactions on Audio, Speech, and Language Processing, 14(5):1694-1703.
+
+Yu Zhang, William Chan, and Navdeep Jaitly. 2017. Very deep convolutional networks for end-to-end speech recognition. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4845-4849. IEEE.
+
+Yingzhu Zhao, Chongjia Ni, Cheung-Chi Leung, Shafiq Joty, Eng Siong Chng, and Bin Ma. 2020. Speech transformer with speaker aware persistent memory. In INTERSPEECH 2020 - $21^{st}$ Annual Conference of the International Speech Communication Association, pages 1261-1265.
+
+Michael Zhu and Suyog Gupta. 2018. To prune, or not to prune: exploring the efficacy of pruning for model compression. In 2018 International Conference on Learning Representations (ICLR).
+
+# A Appendix
+
+# A.1 Details of Librispeech dataset
+
+We use the open-source LibriSpeech dataset for all our experiments, which is downloadable from https://www.openslr.org/12. Table 5 shows the statistics of the dataset.
+
+| LibriSpeech | |
| --- | --- |
| Training set | 100h (251 speakers) |
| Dev_clean set | 5.4h (20 males, 20 females) |
| Dev_other set | 5.3h (17 males, 16 females) |
| Test_clean set | 5.4h (20 males, 20 females) |
| Test_other set | 5.1h (16 males, 17 females) |
+
+Table 5: Statistics of Librispeech dataset used for experiments.
+
+| Algorithm | Training | Adaptation |
| --- | --- | --- |
| Baseline | 2d13h | - |
| Finetune | 2d13h | 2min |
| I-vec | 2d12h | 2min |
| Pruning | 2d5h | 2min |
| Pruning+I-vec | 2d5h | 2min |
+
+Table 6: Average runtime.
+
+# A.2 Average Runtime
+
+In Table 6, we list the average runtime using one V100 GPU for 1) Baseline, 2) Finetune: directly finetuning the trained baseline model, 3) I-vec: the speaker-aware persistent memory method that adds i-vectors, 4) Pruning: gradual pruning, 5) Pruning+I-vec: combining the proposed feature adaptation and model adaptation methods. During training, all models are trained on the given 100h LibriSpeech training data, while during adaptation, all models are trained on 10 utterances of adaptation data, except for the baseline, where no adaptation is performed.
+
+# A.3 Evaluation Metrics
+
+We evaluate model performance by word error rate (WER), which is computed as follows:
+
+$$
+\mathrm{WER} = \frac{S + D + I}{N_{r}} = \frac{S + D + I}{S + D + C} \tag{14}
+$$
+
+where $S$ is the number of substitutions, $D$ is the number of deletions, $I$ is the number of insertions, $C$ is the number of correct words, and $N_{r} = S + D + C$ is the number of words in the reference.
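Eq. 14 corresponds to the usual Levenshtein alignment between reference and hypothesis word sequences; a minimal self-contained sketch (our own illustration):

```python
def wer(reference, hypothesis):
    """Word error rate via Levenshtein alignment of word sequences."""
    r, h = reference.split(), hypothesis.split()
    # d[i][j]: minimum edits turning the first i reference words
    # into the first j hypothesis words
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i                       # i deletions
    for j in range(len(h) + 1):
        d[0][j] = j                       # j insertions
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(r)][len(h)] / len(r)

score = wer("the cat sat down", "the cat sit")  # 1 sub + 1 del over 4 words
```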
+
+# A.4 Computing Infrastructure
+
+We conduct our experiments on NVIDIA V100 GPU and Intel(R) Xeon(R) Platinum 8163 32-core CPU @ 2.50GHz.
\ No newline at end of file
diff --git a/aunifiedspeakeradaptationapproachforasr/images.zip b/aunifiedspeakeradaptationapproachforasr/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..f974a49d24d52cd021ee8c0530c52157282191f3
--- /dev/null
+++ b/aunifiedspeakeradaptationapproachforasr/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a69685c6f1bb69434b00006a2a85783a96119c7528e7559c6248b0ab300d38e3
+size 430124
diff --git a/aunifiedspeakeradaptationapproachforasr/layout.json b/aunifiedspeakeradaptationapproachforasr/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..cbd7e65abc6c677b7d4cce41664ccd70d63c1aa2
--- /dev/null
+++ b/aunifiedspeakeradaptationapproachforasr/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3c23b0fcab890a62d4bbf29cf728bf036e92d09f46f264accd282121b7e4578e
+size 379056
diff --git a/autosummautomaticmodelcreationfortextsummarization/37002276-fa77-4c91-9ec7-fb7951ad0251_content_list.json b/autosummautomaticmodelcreationfortextsummarization/37002276-fa77-4c91-9ec7-fb7951ad0251_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..2bee6573f9e423be291c82024bfe28ecc52888b6
--- /dev/null
+++ b/autosummautomaticmodelcreationfortextsummarization/37002276-fa77-4c91-9ec7-fb7951ad0251_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2a33c1178aa556982aab3554f0d3c35e21ce30c4159e366da163867c750db269
+size 74158
diff --git a/autosummautomaticmodelcreationfortextsummarization/37002276-fa77-4c91-9ec7-fb7951ad0251_model.json b/autosummautomaticmodelcreationfortextsummarization/37002276-fa77-4c91-9ec7-fb7951ad0251_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..a2693c902afc5a1c44c28eea3acc507883433f27
--- /dev/null
+++ b/autosummautomaticmodelcreationfortextsummarization/37002276-fa77-4c91-9ec7-fb7951ad0251_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1d0535eef7ac287c676540525b41f94195598d3e54b85c8d9d56ce23f5f645a9
+size 91003
diff --git a/autosummautomaticmodelcreationfortextsummarization/37002276-fa77-4c91-9ec7-fb7951ad0251_origin.pdf b/autosummautomaticmodelcreationfortextsummarization/37002276-fa77-4c91-9ec7-fb7951ad0251_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..5853e13df3d16a1e625947be9fcff5da7677df02
--- /dev/null
+++ b/autosummautomaticmodelcreationfortextsummarization/37002276-fa77-4c91-9ec7-fb7951ad0251_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4ca813785b7070cfe894b92f08b289debea32763d02560a814f5f74bffffea64
+size 3084876
diff --git a/autosummautomaticmodelcreationfortextsummarization/full.md b/autosummautomaticmodelcreationfortextsummarization/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..baa3d64dad7c6b887fde9e8e698fee08ec7330f5
--- /dev/null
+++ b/autosummautomaticmodelcreationfortextsummarization/full.md
@@ -0,0 +1,329 @@
+# AUTOSUMM: Automatic Model Creation for Text Summarization
+
+Sharmila Reddy Nangi $^{1*}$ , Atharv Tyagi $^{2*}$ , Jay Mundra $^{2*}$ , Sagnik Mukherjee $^{2*}$ , Snehal Raj $^{2*}$ , Aparna Garimella $^{3}$ , Niyati Chhaya $^{3}$
+
+$^{1}$ Stanford University, USA, $^{2}$ Indian Institute of Technology Kanpur, India, $^{3}$ Adobe Research, India
+
+$^{1}$ srnangi@stanford.edu, $^{2}$ {atharv,jaym,sagnikm,snehal}@iitk.ac.in, $^{3}$ {garimell, nchhaya}@adobe.com
+
+# Abstract
+
+Recent efforts to develop deep learning models for text generation tasks such as extractive and abstractive summarization have resulted in state-of-the-art performances on various datasets. However, obtaining the best model configuration for a given dataset requires an extensive knowledge of deep learning specifics like model architecture, tuning parameters etc., and is often extremely challenging for a non-expert. In this paper, we propose methods to automatically create deep learning models for the tasks of extractive and abstractive text summarization. Based on the recent advances in Automated Machine Learning and the success of large language models such as BERT and GPT-2 in encoding knowledge, we use a combination of Neural Architecture Search (NAS) and Knowledge Distillation (KD) techniques to perform model search and compression using the vast knowledge provided by these language models to develop smaller, customized models for any given dataset. We present extensive empirical results to illustrate the effectiveness of our model creation methods in terms of inference time and model size, while achieving near state-of-the-art performances in terms of accuracy across a range of datasets.
+
+# 1 Introduction
+
+Machine learning algorithms, particularly deep learning techniques, have simplified several computationally expensive tasks. However, training and optimizing these models for different tasks demands experienced engineering resources and expertise, making it difficult for non-experts. Automated Machine Learning is a strategy to automate this model-creation pipeline, including automated generation of the model itself.
+
+In the case of Natural Language Processing and text analysis, the advent of large language models such as BERT (Devlin et al., 2019), GPT-2 (Radford et al., 2019), and more recently GPT-3 (Brown et al., 2020) has created resources that can be exploited to build robust models for several downstream NLP tasks. However, the need for ML expertise creates a bottleneck. Further, these deep learning models have millions of parameters and need fairly large datasets and computational resources for training.
+
+We focus on providing algorithms for the auto-generation of ML models for complex NLP tasks such as extraction and generation, making them accessible to non-experts. Our proposed approaches draw on the knowledge available in large pre-trained models to auto-generate new, smaller models customized to a given dataset. Specifically, our major contributions are as follows.
+
+(1) We propose a method to create machine learning models that are efficient and customized to a given dataset for the tasks of extractive and abstractive summarization, using a combination of neural architecture search and task-specific knowledge distillation from large language models.
+(2) Additionally, we propose an alternate method for summarization model generation using Transformer distillation, which is superior in terms of performance and resource utilisation.
+(3) We conduct extensive experiments and present results illustrating the effectiveness of the proposed methods for extractive and abstractive summarization on a range of datasets, and compare our models in terms of model creation efficiency, model size, inference time, and performance, with the state-of-the-art models.
+
+To the best of our knowledge, this is the first effort towards automatically building customized and compressed models for text generation tasks, specifically summarization.
+
+# 2 Related Work
+
+Neural Architecture Search (NAS) is a trending area in AutoML that automates model creation by searching for efficient model architectures without human expertise. A typical NAS problem involves identifying the search space, employing a search strategy to find the best task-specific model architecture, and training the resulting model from scratch. Most NAS experiments have been conducted on images (Real et al., 2017, 2018; Suganuma et al., 2018) using neuro-evolutionary and genetic algorithms, which are computationally very expensive and time consuming. Recently, gradient-based methods such as DARTS (Liu et al., 2019), SNAS (Xie et al., 2019), and that of Dong and Yang (2019) have been proposed to speed up the search. However, explorations of NAS for language-related tasks remain limited. Wang et al. (2020) propose TextNAS, with a search space designed for better text representations; they use the simple and efficient ENAS (Pham et al., 2018), guided by reinforcement learning, for model generation. However, these models mainly focus on text understanding and do not directly extend to generation tasks such as summarization.
+
+Text Summarization: The neural attention model (Rush et al., 2015) marked the beginning of deep neural architectures for text summarization. Seq2seq variants with convolutional encoders (Chopra et al., 2016; Narayan et al., 2018b), hierarchical attention-based RNN encoders (Nallapati et al., 2017), and pointer-generator networks (See et al., 2017) have been used for both extractive and abstractive tasks. With the recent advent of multi-head attention (Vaswani et al., 2017), transformer-based models such as PEGASUS (Zhang et al., 2019) and BERT-Summ (Liu and Lapata, 2019a) were proposed, with pre-training objectives tailored for summarization. While these methods give good results, they demand substantial human expertise and computational overhead for design and deployment.
+
+Knowledge Distillation & Model Compression: These techniques aim to take advantage of the immense knowledge in pre-trained models. TinyBERT (Jiao et al., 2019) presents a BERT compression and distillation approach for text classification and natural language inference. AdaBERT (Chen et al., 2020a) presents a differentiable NAS algorithm that leverages a BERT model through knowledge distillation for classification and NLI tasks. Chen et al. (2020b) transfer BERT knowledge to an encoder-decoder model for text generation. However, all these approaches are limited to their specific tasks and are not directly extensible to generation-based tasks.
+
+# 3 Methodology
+
+Figure 1 shows the overview of our model creation framework. The input is the dataset and task specifications (summary type, size) and the output is a custom trained summarization model, which can be further used to create text summaries. In this paper, we generate models for both extractive and abstractive summarization tasks, with the former being a binary classification task to extract summary sentences from the input, while the latter aims to generate summaries containing novel words and phrases that may not be present in the input text.
+
+
+Figure 1: Overview of the proposed approach.
+
+Our proposed approaches distill knowledge from a language-model-based teacher network to generate an encoder-decoder-based child model. We present two algorithms that aid in the auto-creation of different types of resulting 'child' models: (1) a model with convolutional and recurrent units, and (2) a mini-transformer-based model. The first is achieved by our approach AUTOSUMM-CREATE and the second by AUTOSUMM-DISTILL, detailed as follows.
+
+# 3.1 AutoSumm-Create
+
+Figure 2 illustrates the AUTOSUMM-CREATE method. Here, we combine knowledge distillation with neural architecture search to auto-create an encoder-decoder-based summarization model. The stages in this method include:
+
+1. Task-specific knowledge distillation: We leverage knowledge from a transformer-based BERT model (teacher) fine-tuned for extractive and abstractive summarization (Liu and Lapata, 2019b) on the given task-specific (summarization) dataset. The predictions from the teacher model are used for distillation, i.e., the sentence classification scores
+
+
+Figure 2: AUTOSUMM-CREATE.
+
+for extractive summarization, and the probability distributions over the vocabulary for abstractive summarization, are used to augment the ground truth. A knowledge distillation loss $(L_{KD})$ is included to perform an informed search over the child models, ensuring that they mimic the performance of the teacher.
+
+In extractive summarization, $L_{KD}$ is the MSE loss between the soft labels from the augmented data and the scores predicted by the child model.
+
+$$
+L_{KD} = \sum_{i=1}^{n} \left( y_i^{teacher} - y_i^{pred} \right)^2 \tag{1}
+$$
+
+In abstractive summarization, $L_{KD}$ is calculated at each time step $t$ using the soft labels $P_{teacher}(y_t)$ from the teacher model and the predicted distribution $P_{pred}(y_t)$ from the child model over the vocabulary $V$ as follows:
+
+$$
+L_{KD} = \sum_{t} \sum_{w \in V} P_{teacher}(y_t = w \mid y_{1:t-1}, X) \cdot \log\left(P_{pred}(y_t = w \mid y_{1:t-1}, X)\right) \tag{2}
+$$
+
+2. Neural Architecture Search: The augmented dataset, along with a small labelled custom dataset, is used to train the NAS module, which searches for the combination of cells that results in the child model best suited to the summarization task. In our approach, we use NAS to search the encoder space while using a predefined (task-specific) decoder. The key components of this module are:
+
+Search space: Following (Wang et al., 2020), we define a macro search space such that the model can be represented by a directed acyclic graph (DAG), with nodes representing layers from the search space and edges representing the direction of information flow. The search space has 4 key cell types: CNN (kernel sizes 1, 3, 5, 7), RNN (bidirectional GRU), pooling layers (avg. pool and max. pool with stride 1 and uniform padding), and multi-head self-attention (8 heads, no positional embeddings). We constrain the search space by (1) defining the number of skip connections allowed, (2) limiting the maximum number of layers in the child architecture, $l$ (in our case $l \in \{1, 5, 10, 18, 20\}$), and (3) defining the cells allowed in the new architecture. These constraints define the exhaustive list of possibilities for the NAS algorithm.
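+To make the search space concrete, the following toy sketch samples a candidate child architecture under constraints of this kind (the cell names and the uniform sampling scheme are our own illustrative stand-ins; ENAS itself samples from a learned controller):
+
+```python
+import random
+
+# Illustrative cell vocabulary: CNNs with kernel sizes 1/3/5/7, a
+# bidirectional GRU, avg/max pooling, and 8-head self-attention
+CELL_CHOICES = ["cnn1", "cnn3", "cnn5", "cnn7", "bigru",
+                "avgpool", "maxpool", "attn8"]
+
+def sample_architecture(max_layers=10, max_skips=2, seed=0):
+    rng = random.Random(seed)
+    n_layers = rng.randint(1, max_layers)     # constraint (2): layer budget
+    layers = [rng.choice(CELL_CHOICES)        # constraint (3): allowed cells
+              for _ in range(n_layers)]
+    skips = []
+    for _ in range(max_skips):                # constraint (1): skip budget
+        if n_layers > 1:
+            # each skip feeds an earlier layer's output to a later layer
+            i, j = sorted(rng.sample(range(n_layers), 2))
+            skips.append((i, j))
+    return {"layers": layers, "skips": skips}
+```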
+
+Search algorithm: We implement ENAS (Pham et al., 2018), a reinforcement learning (RL) based algorithm used in several NAS implementations (Zoph and Le, 2017). It consists of an RNN controller network that samples a model architecture from the search space, and an RL reward that nudges this controller towards generating an optimal architecture.
+
+Pre-defined Model Specifications: As stated earlier, we auto-create the encoder layers of the model but predefine the task-specific decoder. For extractive summarization, the decoder is a scorer function with sigmoid activation, which takes in the text representations learnt by the encoder and scores each sentence in the range (0, 1). The sentences with the highest scores are chosen as the final summary, based on the specified summary size. For abstractive summarization, a recurrent neural network is used as the decoder; its input is the text representation from the encoder, and its output is a summary generated auto-regressively, decoding one word at each time step.
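+A minimal sketch of the selection step performed by the pre-defined extractive decoder, assuming per-sentence logits from the encoder (the names here are illustrative):
+
+```python
+import numpy as np
+
+def extractive_decode(sentence_logits, summary_size=3):
+    # Sigmoid scorer over encoder sentence representations; keep the
+    # top-scoring sentences and return their indices in document order
+    scores = 1.0 / (1.0 + np.exp(-np.asarray(sentence_logits, dtype=float)))
+    top = np.argsort(-scores)[:summary_size]
+    return sorted(int(i) for i in top)
+```
+
+For example, with logits `[2.0, -1.0, 0.5, 3.0]` and a summary size of 2, sentences 0 and 3 are selected.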
+
+Loss: The architectures are trained with a cross-entropy loss at the sentence level for extractive and at the vocabulary level for abstractive summarization, as follows:
+
+$$
+Ext(L_{CE}) = \sum_{i=1}^{n} p_{gt}(y_i) \cdot \log\left(y_i^{child}\right) \tag{3}
+$$
+
+$$
+Abs(L_{CE}) = \sum_{t} \sum_{w \in V} P_{gt}(y_t = w) \cdot \log\left(P_{pred}(y_t = w \mid y_{1:t-1}, X)\right) \tag{4}
+$$
+
+Final Loss: The final end-to-end loss associated with this framework is computed as the weighted sum of $L_{KD}$ and $L_{CE}$ in the NAS module:
+
+$$
+L_{total} = \alpha \cdot L_{CE} + (1 - \alpha) \cdot L_{KD} \tag{5}
+$$
+
+RL Reward: A reward based on the performance of the child model is sent back to the RNN controller, whose policy gradients are updated through the REINFORCE (Williams, 1992) algorithm. The reward $R$ is defined as $1 - Loss_{valid}$, normalized over the batch size.
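+The controller update can be illustrated with a toy REINFORCE step, in which a single softmax policy over cell types stands in for the RNN controller (the baseline and learning rate below are illustrative, not values from the paper):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+
+CELLS = ["cnn", "rnn", "pool", "attn"]
+theta = np.zeros(len(CELLS))  # toy controller logits
+
+def policy():
+    # softmax over logits (shifted for numerical stability)
+    p = np.exp(theta - theta.max())
+    return p / p.sum()
+
+def reinforce_step(reward, baseline=0.5, lr=0.1):
+    # REINFORCE: theta += lr * (R - b) * grad log pi(a);
+    # for a softmax policy, grad log pi(a) = onehot(a) - p
+    global theta
+    p = policy()
+    a = rng.choice(len(CELLS), p=p)   # sample a cell type
+    grad_log_pi = -p
+    grad_log_pi[a] += 1.0
+    theta = theta + lr * (reward - baseline) * grad_log_pi
+    return a
+```
+
+With a positive advantage (e.g., reward $R = 1 - Loss_{valid} = 1.0$ against a 0.5 baseline), the sampled action's probability increases after the update.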
+
+Re-training: The newly generated model is trained on the user-provided training data, optimizing the total loss $(L_{total})$. This trained model can then generate summaries for any given test sample.
+
+# 3.2 AutoSumm-Distill
+
+In this approach, the structure of the child is defined as a mini-transformer (4 layers). A knowledge distillation technique called transformer distillation (Jiao et al., 2019) is used to create a general mini-transformer (4 layers) from a large transformer model (12 layers). The knowledge is then distilled from a task-specific fine-tuned BERT ('teacher') model into the general mini-transformer. Figure 3 illustrates the workflow of this method. It differs from AUTOSUMM-CREATE in the child model architecture and in the use of two transformer teacher models. The key stages of this method are detailed below.
+
+
+Figure 3: AUTOSUMM-DISTILL.
+
+Knowledge distillation: There are two forms of knowledge distillation in this method (1 and 3 in Fig. 3). Here we detail the distillation from a task-specific transformer teacher (we use BERT-Summ (Liu and Lapata, 2019b)) to the general mini-transformer, which forms the encoder of the final child model. The decoder is pre-defined based on the task, as in AUTOSUMM-CREATE. A transformer model has several types of layers, including multi-headed attention, embedding layers, and hidden layers; the intuition behind this distillation is to teach each layer in the child transformer to mimic the corresponding layer in the teacher transformer. This is implemented by introducing a separate loss for each layer type.
+
+Attention-based distillation builds on the intuition that the attention layers in BERT capture linguistic information such as syntax and coreference. Specifically, the student aims to learn the multi-headed attention matrices from the teacher. This loss is given by $L_{attn} = \frac{1}{h}\sum_{i=1}^{h} MSE(A_i^S, A_i^T)$, where $h$ is the number of attention heads, $A_i \in \mathbb{R}^{l \times l}$ is the attention matrix corresponding to the $i$-th head of the teacher (T) or the student (S), $l$ is the input text length, and $MSE(\cdot)$ is the mean squared error loss.
+
+Hidden-state distillation distills knowledge from the output of the transformer hidden layer, with $L_{hidn} = MSE(H^{S}W_{h}, H^{T})$, where $H^{S}$ and $H^{T}$ refer to the hidden states of the student and teacher models, and $W_{h}$ is a learnable linear transformation.
+
+Embedding-layer distillation: Formulated as $L_{embd} = MSE(E^{S}W_{e}, E^{T})$, where $E^{S}$ and $E^{T}$ are the embeddings of the student and teacher networks, respectively, and $W_{e}$ plays a role similar to $W_{h}$. Combining these distillation objectives with the general distillation already performed to compress the transformer into the general mini-transformer, the final loss is the unified distillation loss over the corresponding layers of the teacher and the student. As a reminder, this step auto-learns the task-specific encoder for extractive and abstractive summarization.
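+The three layer-wise losses can be sketched directly from their definitions (an illustrative NumPy sketch; in practice the attention matrices and hidden states come from the transformer forward pass, and $W_h$, $W_e$ are trained jointly with the student):
+
+```python
+import numpy as np
+
+def _mse(a, b):
+    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
+    return float(np.mean((a - b) ** 2))
+
+def attention_loss(A_student, A_teacher):
+    # L_attn = (1/h) * sum_i MSE(A_i^S, A_i^T) over the h attention
+    # heads; each A_i is an (l x l) attention matrix, l = input length
+    return float(np.mean([_mse(s, t)
+                          for s, t in zip(A_student, A_teacher)]))
+
+def hidden_loss(H_student, W_h, H_teacher):
+    # L_hidn = MSE(H^S W_h, H^T); W_h projects the student's hidden
+    # width to the teacher's hidden width
+    return _mse(np.asarray(H_student, dtype=float)
+                @ np.asarray(W_h, dtype=float), H_teacher)
+
+def embedding_loss(E_student, W_e, E_teacher):
+    # L_embd = MSE(E^S W_e, E^T), with W_e in the same role as W_h
+    return _mse(np.asarray(E_student, dtype=float)
+                @ np.asarray(W_e, dtype=float), E_teacher)
+```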
+
+Pre-defined Model Specifications: For extractive summarization, we define a single transformer layer on top of the newly created encoder, with a classification layer as the decoder. For abstractive summarization, the decoder is a 6-layer transformer.
+
+Training and Re-training:
+
+General distillation & Fine-tuning: The above model is trained in a phased manner. The first distillation is done from a large transformer (BERT) to the general mini-transformer. In parallel, a large BERT model is fine-tuned for the specified tasks. Neither of these steps needs to be repeated for every new user dataset or every run of the model: the fine-tuned model and the general mini-transformer can be created once per task and once per large benchmark dataset.
+
+Task-specific Distillation: This process of teaching the student model from a fine-tuned teacher model is repeated each time a new user dataset is given to the system. Once trained, this is coupled with the specific decoder.
+
+Re-training: Once the final child model, i.e., the mini-transformer encoder and corresponding decoder, is created, the complete model is trained on the input user dataset. The final model is returned to the user, along with test summaries for any given text input.
+
+# 4 Experiments
+
+We evaluate our proposed framework by performing experiments that test the performance of the newly created models against benchmark summarization datasets on both extractive and abstractive tasks.
+
+# 4.1 Datasets
+
+Table 1 summarizes the train/val/test split of all the datasets. The CNN / Daily Mail dataset (Hermann et al., 2015) contains online news articles (781 tokens on average) paired with multi-sentence summaries (3.75 sentences or 56 tokens on average).
+
+The New York Times (NYT) Annotated Corpus contains the full text and metadata of NYT articles from 1987 to 2007. Following (Durrett et al., 2016), we extracted the articles from 2000-2007 and kept those whose abstractive summaries contain more than 50 words. The articles were split based on publication date, with articles published from January 1, 2007 onward forming the test set.
+
+X-Sum (Narayan et al., 2018a) dataset is collected from online BBC articles, with short one sentence summaries. The Gigaword (Rush et al., 2015) dataset contains 4M examples from news articles for sentence summarization / headline generation task. The summaries are very short with 9 tokens per summary. The Contract dataset (Manor and Li, 2019) is a dataset compiled from two websites dedicated to explaining unilateral contracts in plain English: TL;DRLegal $^{1}$ and TOS;DR $^{2}$ . It is a small dataset with 500 samples.
+
+| Dataset | #train | #valid | #test |
+| --- | --- | --- | --- |
+| CNN/DM | 287,113 | 13,368 | 11,490 |
+| NYT | 106,826 | 6,000 | 6,258 |
+| XSum | 204,045 | 11,332 | 11,334 |
+| Gigaword | 300,000 | 10,000 | 2,000 |
+| Contract | 400 | 23 | 23 |
+
+Table 1: Dataset details
+
+# 4.2 Models
+
+The models generated through AUTOSUMM-CREATE for extractive and abstractive summarization are CHILD-EXT and CHILD-ABS, respectively; the -KD suffix denotes child model variants trained with knowledge distillation (KD). The fine-tuned models from AUTOSUMM-DISTILL are FT-TINYBERT-EXT and FT-TINYBERT-ABS. We compare the performance of our models against BERT-Summ (Liu and Lapata, 2019a), as it provides a general framework for both extractive and abstractive summarization and was shown to give state-of-the-art performance. These baseline models are FT-BERT-EXT and FT-BERT-ABS.
+
+# 4.3 Implementation Details
+
+For all our experiments, we use the existing splits if available, otherwise we split the data according to the statistics in Table 1 and keep them constant across all the experiments.
+
+In our AUTOSUMM-CREATE experiments, we perform a 20-layer neural architecture search for the encoder. The decoders are task-specific and predefined, as explained in the previous section. We use GloVe word embeddings for the input to the generated model. We set the batch size to 128, the max input length to 64, the hidden unit dimension of each layer to 32, and the dropout ratio to 0.5, and apply L2 regularization. We use the Adam optimizer with cosine-annealed learning rate decay. The KD proportion parameter $\alpha$ is varied in the NAS module. We also perform experiments with varying layer sizes, discussed in later sections.
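+As an illustration of the learning-rate schedule, cosine annealing decays the rate from a maximum to a minimum over training (the constants below are illustrative, not the exact values used in our experiments):
+
+```python
+import math
+
+def cosine_annealed_lr(step, total_steps, lr_max=1e-3, lr_min=0.0):
+    # Cosine-annealed learning-rate decay: lr follows half a cosine
+    # wave from lr_max (step 0) down to lr_min (final step)
+    t = min(step, total_steps) / total_steps
+    return lr_min + 0.5 * (lr_max - lr_min) * (1.0 + math.cos(math.pi * t))
+```
+
+The rate starts at `lr_max`, passes through the midpoint of the range halfway through training, and decays to `lr_min` at the end.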
+
+# 4.4 Evaluation metrics
+
+Summarization quality is evaluated using the F1 measure of the ROUGE score (Lin, 2004), calculated between the generated and ground-truth summaries. We report unigram and bigram overlap (ROUGE-1 and ROUGE-2) to assess informativeness, and the longest common sub-sequence (ROUGE-L) to assess fluency. Additional metrics, namely the number of parameters, disk space, and inference time, are used to compare the computational efficiency of the models.
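+A simplified sketch of the ROUGE-N F1 computation (actual evaluations use the official ROUGE toolkit, which additionally applies stemming and other preprocessing):
+
+```python
+from collections import Counter
+
+def rouge_n_f1(candidate, reference, n=1):
+    # ROUGE-N F1: harmonic mean of n-gram precision and recall
+    # between a candidate summary and a reference summary
+    def ngrams(text):
+        toks = text.lower().split()
+        return Counter(tuple(toks[i:i + n])
+                       for i in range(len(toks) - n + 1))
+    c, r = ngrams(candidate), ngrams(reference)
+    overlap = sum((c & r).values())   # clipped n-gram matches
+    if overlap == 0:
+        return 0.0
+    precision = overlap / sum(c.values())
+    recall = overlap / sum(r.values())
+    return 2 * precision * recall / (precision + recall)
+```
+
+For instance, with `n=2`, scoring the candidate "a b c d" against the reference "a b" gives precision 1/3, recall 1, and F1 0.5.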
+
+# 5 Results and discussion
+
+Extractive Summarization: Table 2 shows results comparing the performance of our generated
+
+
+Figure 4: Efficiency comparison for the extractive summarization models on the CNN/DM dataset: (a) number of parameters $(\downarrow)$, (b) disk space $(\downarrow)$, (c) inference time $(\downarrow)$
+
+| MODEL | CNN/DM R-1 | CNN/DM R-2 | CNN/DM R-L | NYT R-1 | NYT R-2 | NYT R-L |
+| --- | --- | --- | --- | --- | --- | --- |
+| FT-BERT-EXT | 43.58 | 20.69 | 28.08 | 45.68 | 26.20 | 33.11 |
+| FT-TINYBERT-EXT | 42.28 | 19.53 | 26.76 | 43.94 | 24.61 | 32.06 |
+| CHILD-EXT | 39.10 | 14.68 | 20.78 | 45.68 | 26.38 | 35.04 |
+| CHILD-EXT-KD | 41.08 | 18.73 | 26.72 | 45.89 | 26.60 | 35.20 |
+
+models and the baseline for extractive summarization across different datasets. The ROUGE scores show that the summaries produced by the auto-generated models from our framework are close to the state-of-the-art BERT baseline, while our models gain significantly in computational efficiency. Figure 4 illustrates this by comparing the models trained on the CNN/DM dataset along three axes: the number of parameters (in millions), the disk space for storing the model (in MB), and the inference time (in milliseconds) needed to summarize an input sample. These graphs show that the generated models, while performing comparably to FT-BERT, significantly reduce disk space usage and the number of parameters. The models generated by AUTOSUMM-CREATE lose some ground in inference time because their architectures contain RNNs and lack the parallel computation available to BERT models; however, our FT-TINYBERT-EXT model overcomes this and significantly reduces the inference time.
+
+Abstractive Summarization: Table 3 compares the performance of our abstractive summarization models on the Gigaword dataset, curated for extreme summarization. Notably, our transformer-distillation model FT-TINYBERT-ABS beats FT-BERT-ABS by a large margin across R-1, R-2, and R-L.
+
+Table 2: A comparison of the generated models for Extractive summarization on CNN/DM and NYT; FT-BERT-EXT is used as a baseline to compare against
+
+| Model | R-1 | R-2 | R-L |
+| --- | --- | --- | --- |
+| FT-BERT-ABS | 21.03 | 6.04 | 19.34 |
+| FT-TINYBERT-ABS | 32.36 | 13.80 | 29.93 |
+| CHILD-ABS | 25.04 | 8.14 | 23.13 |
+| SOTA (Song et al., 2020) | 39.08 | 20.47 | 36.69 |
+
+Table 3: Abstractive Summarization on Gigaword
+
+| MODELS | R1 | R2 | RL |
+| --- | --- | --- | --- |
+| Baseline Lead-K | 24.38 | 7.52 | 17.63 |
+| CHILD-ABS | 40.04 | 23.63 | 35.21 |
+
+Table 4: Abstractive Summarization on Contracts
+
+The other dataset for extreme summarization is the Contract dataset. Table 4 shows the performance of our generated CHILD-ABS model on this dataset. We compare our results against the best reported Lead-K scores from Cohan et al. (2018). Note that the limited size of the dataset was a bottleneck for training the FT-TINYBERT-ABS model.
+
+Model Architectures: The AUTOSUMM-CREATE approach generates a new encoder architecture from scratch for a desired task and dataset, so it is interesting to examine the distribution of cells in the generated models. Table 5 shows the distribution of cell types in the generated models with a 10-layer encoder architecture, for the extractive (CNN/DM) and abstractive (Gigaword) tasks. We observe that pooling and attention layers are sparse in the extractive models but are major contributors to the abstractive architecture. Most recent models use the transformer's multi-head attention to obtain good results in language generation; a similar pattern is observed in the models generated through our AUTOSUMM framework.
+
+| Cell type | Extractive w/o KD | Extractive with KD | Abstractive w/o KD | Abstractive with KD |
+| --- | --- | --- | --- | --- |
+| CNN (1,3,5,7) | 0,0,0,2 | 0,0,0,6 | 1,1,0,0 | 1,0,0,2 |
+| RNN | 8 | 4 | 1 | 0 |
+| Avg-pool, Max-pool | 0,0 | 0,0 | 2,0 | 1,0 |
+| Attention | 0 | 0 | 5 | 6 |
+
+# 5.1 Ablation Study
+
+Variation in Layer Size: We analyze the performance of our framework across varying sizes of the target model, i.e., varying the number of layers to be generated by the RNN controller. We experiment with CHILD-EXT models. Figure 5 illustrates the results for extractive summarization on the CNN/DM dataset. We observe that CNN and RNN layers are the major constituents of these architectures: RNN cells are preferred when the architecture is restricted to fewer layers (e.g., 2, 5, 6), but as the layer budget increases, convolutional layers with the larger kernel size (7) are preferred. Table 6 reports the performance of these models. While the 15-layer model gives the best performance, performance does not drop much as the number of layers varies. Hence, a smaller model, with fewer layers and in turn fewer parameters, can be efficiently generated for extractive summarization through our approach.
+
+
+Figure 5: Cell distribution across varying layer size
+
+Table 7 and Figure 6 present the results when the same experiment is conducted on the XSUM and Contract datasets with layer sizes 5, 10, 12, and 15. While the trend of preferring RNNs for fewer layers and CNNs for more layers continues, the larger generated architectures also make use of pooling and attention layers.
+
+Table 5: Distribution of cell types for Extractive (CNN/DM) and Abstractive (Gigaword) models
+
+| Layer Size | R-1 | R-2 | R-L |
+| --- | --- | --- | --- |
+| 2 | 40.8 | 18.5 | 26.55 |
+| 5 | 41.03 | 18.67 | 26.71 |
+| 10 | 41.08 | 18.73 | 26.7 |
+| 12 | 40.98 | 18.65 | 26.68 |
+| 15 | 41.1 | 18.78 | 26.84 |
+| 20 | 41.1 | 18.7 | 26.73 |
+
+Table 6: CHILD-EXT-KD model performance with layer-size variation on CNN/DM
+
+| Layer Size | XSUM R-1 | XSUM R-2 | XSUM R-L | Contract R-1 | Contract R-2 | Contract R-L |
+| --- | --- | --- | --- | --- | --- | --- |
+| 5 | 20.7 | 3.69 | 13.52 | 21.15 | 6.14 | 15.96 |
+| 10 | 20.84 | 3.64 | 13.68 | 21.2 | 6.26 | 15.99 |
+| 12 | 20.73 | 3.59 | 13.63 | 21.23 | 6.12 | 15.9 |
+
+Table 7: Varying layer size on CHILD-EXT-KD for XSUM and Contract
+
+Figure 6: Layer distribution for (a) XSUM and (b) Contract
+
+Cross-Dataset Experiments: Table 8 shows experiments with CHILD-EXT-KD-X models trained on one dataset (X) and tested on another. The visualizations of these results in Figure 7 also show that architectures trained on one dataset can generate summaries for a different dataset without significant loss in performance, establishing the generalizability of the proposed approach and making it usable by non-experts in real-world applications.
+
+
+Figure 7: Cross-dataset experiments: (a) test on CNN/DM, (b) test on Contract
+
+Training data variation: To reiterate, in our AUTOSUMM-CREATE framework, we generate an architecture with cells from a given search space, and re-train the generated architecture with the
+
+| CHILD-EXT-KD | CNN/DM (R1, R2, RL) | XSUM (R1, R2, RL) | Contract (R1, R2, RL) |
+| --- | --- | --- | --- |
+| CHILD-EXT-KD-CNNDM | 41.06, 18.91, 27.04 | 16.78, 1.83, 12.35 | 24.07, 6.42, 17.71 |
+| CHILD-EXT-KD-XSUM | 18.05, 2.7, 13.52 | 20.84, 3.64, 13.68 | 24.07, 6.42, 17.71 |
+| CHILD-EXT-KD-Contract | 40.32, 17.84, 25.57 | 18.52, 2.43, 11.93 | 21.22, 6.07, 15.95 |
+
+Table 8: Cross-Dataset experiments on CHILD-EXT-KD models
+
+| Model | Size 1 (R1, R2, RL) | Size 3 (R1, R2, RL) | Size 5 (R1, R2, RL) |
+| --- | --- | --- | --- |
+| CHILD-EXT-KD | 33.05, 14.86, 24.1 | 41.08, 18.73, 26.72 | 37.31, 17.43, 24.05 |
+| CHILD-EXT | 30.20, 12.04, 21.38 | 40.00, 18.18, 25.91 | 38.02, 18.03, 24.86 |
+| FT-BERT-EXT | 33.84, 15.72, 24.88 | 43.58, 20.69, 28.08 | 39.85, 19.26, 24.93 |
+| FT-TINYBERT-EXT | 32.65, 14.95, 24.07 | 42.29, 19.53, 26.76 | 38.48, 18.04, 23.87 |
+
+Table 9: Performance comparison with different decoding sizes
+
+| Contract | |
+| --- | --- |
+| Input | Third party vendors including Google use cookies to serve ads based on a user's prior visits to our website or other websites . google's use of advertising cookies enables it and its partners to serve ads to you based on you visit to our sites and or other sites on the internet . |
+| Reference | This service allows tracking via third party cookies for purposes including targeted advertising . |
+| CHILD-ABS | This service allows tracking via third party cookies |
+
+| Gigaword | |
+| --- | --- |
+| Input | Tens of thousands of demonstrators marched through brussels sunday , calling for EU action to defend jobs , amid widespread anger at a decision by French carmaker Renault to close a factory here . |
+| Reference | Tens of thousands march through brussels to defend jobs |
+| FT-BERT-ABS | Tens of thousands march in brussels against renault 's closures plant closures closure |
+| CHILD-ABS | Thousands march in brussels as eu protest over |
+
+Table 10: Examples of Abstractive summaries on Contract and Gigaword
+
+
+Figure 8: Performance vs Training data variation
+
+training data and knowledge distillation. To measure the amount of data required for this re-training, we performed an experiment varying the size of the training data used for this purpose. Figure 8 illustrates the result, where we vary the amount of training data (from $0\%$ to $100\%$ of the total available data) used to re-train an architecture searched for extractive summarization on the CNN/DM dataset. Here, $0\%$ refers to a randomly initialized model that has not been re-trained. The ROUGE scores (R-1, R-2, R-L) reported for all these model variations are calculated on the test set. While it is intuitive that more training data results in better performance, we note that even with $10\%$ of the data, decent performance is achieved. We believe this observation supports the hypothesis that, using the proposed framework, we can build usable models with limited training data.
+
+Decoding Size variation: Table 9 reports ROUGE scores for different decoding summary sizes (1, 3, and 5 sentences) on the CNN/DM extractive summarization task. While a summary size of 3 sentences yields the best results (partly due to the nature of the training data), we observe that the proposed framework allows generating shorter or longer summaries without significant loss in performance, again establishing its generalizability.
+
+Table 10 shows qualitative examples of output summaries from a couple of the newly generated models. Through the above experiments, we establish that models created with the proposed NAS- and transformer-distillation-based frameworks achieve near state-of-the-art performance for both extractive and abstractive summarization. We further establish the generalizability of these models through various experiments, while also showing their efficacy when learning with limited data.
+
+# 6 Conclusions
+
+We present a framework for the auto-generation of ML models for extractive and generative summarization tasks by leveraging knowledge distillation, NAS, and transformer-distillation techniques. The proposed approach successfully creates new model architectures that are more efficient in terms of inference time and space, while achieving near state-of-the-art accuracy across datasets for extractive and abstractive summarization. We believe our work helps lay the foundation for democratizing the use of deep learning for NLP applications among non-experts.
+
+# References
+
+Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners.
+Daoyuan Chen, Yaliang Li, Minghui Qiu, Zhen Wang, Bofang Li, Bolin Ding, Hongbo Deng, Jun Huang, Wei Lin, and Jingren Zhou. 2020a. Adabert: Task-adaptive bert compression with differentiable neural architecture search. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI-20, pages 2463-2469. International Joint Conferences on Artificial Intelligence Organization. Main track.
+Yen-Chun Chen, Zhe Gan, Yu Cheng, J. Liu, and Jingjing Liu. 2020b. Distilling knowledge learned in BERT for text generation. In ACL.
+Sumit Chopra, Michael Auli, and Alexander M. Rush. 2016. Abstractive sentence summarization with attentive recurrent neural networks. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 93-98, San Diego, California. Association for Computational Linguistics.
+Arman Cohan, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Seokhwan Kim, Walter Chang, and Nazli Goharian. 2018. A discourse-aware attention model for abstractive summarization of long documents. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 615-621,
+
+New Orleans, Louisiana. Association for Computational Linguistics.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
+Xuanyi Dong and Yi Yang. 2019. Searching for a robust neural architecture in four GPU hours. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1761-1770.
+Greg Durrett, Taylor Berg-Kirkpatrick, and Dan Klein. 2016. Learning-based single-document summarization with compression and anaphoricity constraints. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1998-2008, Berlin, Germany. Association for Computational Linguistics.
+K. Hermann, Tomás Kocisky, Edward Grefenstette, Lasse Espeholt, W. Kay, Mustafa Suleyman, and P. Blunsom. 2015. Teaching machines to read and comprehend. In NIPS.
+Xiaoqi Jiao, Y. Yin, L. Shang, Xin Jiang, X. Chen, Linlin Li, F. Wang, and Qun Liu. 2019. Tinybert: Distilling bert for natural language understanding. ArXiv, abs/1909.10351.
+Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.
+Hanxiao Liu, K. Simonyan, and Yiming Yang. 2019. Darts: Differentiable architecture search. ArXiv, abs/1806.09055.
+Yang Liu and Mirella Lapata. 2019a. Text summarization with pretrained encoders. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3730-3740, Hong Kong, China. Association for Computational Linguistics.
+Yang Liu and Mirella Lapata. 2019b. Text summarization with pretrained encoders. arXiv preprint arXiv:1908.08345.
+Laura Manor and Junyi Jessy Li. 2019. Plain English summarization of contracts. In Proceedings of the Natural Legal Language Processing Workshop 2019, pages 1-11, Minneapolis, Minnesota. Association for Computational Linguistics.
+
+Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. Summarrunner: A recurrent neural network based sequence model for extractive summarization of documents. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, AAAI'17, page 3075-3081. AAAI Press.
+Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018a. Don't give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1797-1807, Brussels, Belgium. Association for Computational Linguistics.
+Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018b. Ranking sentences for extractive summarization with reinforcement learning. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1747-1759, New Orleans, Louisiana. Association for Computational Linguistics.
+Hieu Pham, Melody Guan, Barret Zoph, Quoc Le, and Jeff Dean. 2018. Efficient neural architecture search via parameters sharing. Volume 80 of Proceedings of Machine Learning Research, pages 4095-4104, Stockholmsmässan, Stockholm, Sweden. PMLR.
+Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
+Esteban Real, Alok Aggarwal, Yanping Huang, and Quoc V. Le. 2018. Regularized evolution for image classifier architecture search.
+Esteban Real, Sherry Moore, Andrew Selle, Saurabh Saxena, Yutaka Leon Suematsu, Jie Tan, Quoc V. Le, and Alexey Kurakin. 2017. Large-scale evolution of image classifiers. volume 70 of Proceedings of Machine Learning Research, pages 2902-2911, International Convention Centre, Sydney, Australia. PMLR.
+Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 379-389, Lisbon, Portugal. Association for Computational Linguistics.
+Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073-1083, Vancouver, Canada. Association for Computational Linguistics.
+Kaiqiang Song, Bingqing Wang, Zhe Feng, Ren Liu, and Fei Liu. 2020. Controlling the amount of verbatim copying in abstractive summarization. In The
+
+Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 8902-8909. AAAI Press.
+Masanori Suganuma, Shinichi Shirakawa, and Tomoharu Nagao. 2018. A genetic programming approach to designing convolutional neural network architectures. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI-18, pages 5369-5373. International Joint Conferences on Artificial Intelligence Organization.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998-6008. Curran Associates, Inc.
+Yujing Wang, Yaming Yang, Yi-Ren Chen, Jing Bai, Ce Zhang, Guinan Su, Xiaoyu Kou, Y. Tong, M. Yang, and L. Zhou. 2020. Textnas: A neural architecture search space tailored for text representation. In AAAI.
+Ronald J. Williams. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Mach. Learn., 8(3-4):229-256.
+S. Xie, H. Zheng, Chunxiao Liu, and L. Lin. 2019. Snas: Stochastic neural architecture search. ArXiv, abs/1812.09926.
+Jingqing Zhang, Y. Zhao, Mohammad Saleh, and Peter J. Liu. 2019. Pegasus: Pre-training with extracted gap-sentences for abstractive summarization. ArXiv, abs/1912.08777.
+Barret Zoph and Quoc V. Le. 2017. Neural architecture search with reinforcement learning.
+
+# A Additional Results
+
+KD Proportion variation: Figure 9 illustrates how performance varies as we change the KD proportion of the loss while training models through AUTOSUMM-CREATE. Note that models trained with KD perform better than models without KD. However, it can be observed that performance saturates after a certain point, and further increasing the KD proportion may not improve it.
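The KD proportion weights the distillation term of the training loss against the task loss. The paper does not spell out the exact loss formulation, so the sketch below is an illustrative assumption of one common way to blend a hard-label cross-entropy term with a soft-label KD term, with `kd_prop` playing the role of the proportion swept in Figure 9:

```python
import numpy as np

def softmax(z, t=1.0):
    z = z / t
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, labels, kd_prop=0.5, t=2.0):
    """Blend the hard-label task loss with a soft-label KD term.
    `kd_prop` is the knowledge-distillation proportion; `t` is the
    distillation temperature. (Hypothetical formulation for illustration.)"""
    # cross entropy against the gold labels
    p = softmax(student_logits)
    ce = -np.log(p[np.arange(len(labels)), labels]).mean()
    # cross entropy against the teacher's temperature-softened outputs
    kd = -(softmax(teacher_logits, t) * np.log(softmax(student_logits, t))).sum(-1).mean()
    return (1.0 - kd_prop) * ce + kd_prop * kd
```

With `kd_prop=0` this reduces to ordinary cross entropy; the saturation observed in Figure 9 suggests moderate values of the proportion already capture most of the benefit.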
+
+Model Architecture: Figure 10 shows an example of a model created with the NAS module in AUTOSUMM-CREATE, visualized through TensorBoard.
+
+
+Figure 9: Variation of performance with the increase in KD proportion on the CNN/DM dataset. Panels: (a) Rouge-1, (b) Rouge-2, (c) Rouge-L.
+
+
+Figure 10: Model created with the NAS module in AUTOSUMM-CREATE, as visualized through TensorBoard
\ No newline at end of file
diff --git a/autosummautomaticmodelcreationfortextsummarization/images.zip b/autosummautomaticmodelcreationfortextsummarization/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..fbce12921476ecd4b6900af68ca6198ac855f769
--- /dev/null
+++ b/autosummautomaticmodelcreationfortextsummarization/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ba145d16ee97f3c49448b126eba6d7fa304b07d1bb58a2e12b35204a2e3718d1
+size 539377
diff --git a/autosummautomaticmodelcreationfortextsummarization/layout.json b/autosummautomaticmodelcreationfortextsummarization/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..50130ed1143fb1f35ad6d3a2553f9120e343f533
--- /dev/null
+++ b/autosummautomaticmodelcreationfortextsummarization/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9ceca28898dcbc2b9a4c35400527739c97ad00264c9d94b1592aa5676bf91773
+size 346675
diff --git a/avocadostrategyforadaptingvocabularytodownstreamdomain/a8db4752-ac6d-4341-8b04-7a7e335664bc_content_list.json b/avocadostrategyforadaptingvocabularytodownstreamdomain/a8db4752-ac6d-4341-8b04-7a7e335664bc_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..4ec866a1fb16817b869d0ec2f282395ee3476058
--- /dev/null
+++ b/avocadostrategyforadaptingvocabularytodownstreamdomain/a8db4752-ac6d-4341-8b04-7a7e335664bc_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5b4422cb73264b8fb76b219982e2b965b7c1f40e4ae8af322cb717f58c08c800
+size 65067
diff --git a/avocadostrategyforadaptingvocabularytodownstreamdomain/a8db4752-ac6d-4341-8b04-7a7e335664bc_model.json b/avocadostrategyforadaptingvocabularytodownstreamdomain/a8db4752-ac6d-4341-8b04-7a7e335664bc_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..0987b46de3402681234e32379ab793afbfb233ba
--- /dev/null
+++ b/avocadostrategyforadaptingvocabularytodownstreamdomain/a8db4752-ac6d-4341-8b04-7a7e335664bc_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:86507d43470939354c211bc17ad58bded0a39082ed90c524e645cb8e0d301b7e
+size 76381
diff --git a/avocadostrategyforadaptingvocabularytodownstreamdomain/a8db4752-ac6d-4341-8b04-7a7e335664bc_origin.pdf b/avocadostrategyforadaptingvocabularytodownstreamdomain/a8db4752-ac6d-4341-8b04-7a7e335664bc_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..e2d1fa7c75853e851180f7674777add23b1b0f2a
--- /dev/null
+++ b/avocadostrategyforadaptingvocabularytodownstreamdomain/a8db4752-ac6d-4341-8b04-7a7e335664bc_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:590ef7335ae2e3c8f8417a1b50dfc77b67ae5b5988ea04e2927b20c5724c3486
+size 1352915
diff --git a/avocadostrategyforadaptingvocabularytodownstreamdomain/full.md b/avocadostrategyforadaptingvocabularytodownstreamdomain/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..2dcecd9f23cbd69dd54d444bcece11e09bcd32d6
--- /dev/null
+++ b/avocadostrategyforadaptingvocabularytodownstreamdomain/full.md
@@ -0,0 +1,277 @@
+# AVocaDo: Strategy for Adapting Vocabulary to Downstream Domain
+
+Jimin Hong*, Taehee Kim*, Hyesu Lim* and Jaegul Choo
+
+Korea Advanced Institute of Science and Technology (KAIST)
+
+{jimmyh, taeheekim, hyesulim, jchoo}@kaist.ac.kr
+
+# Abstract
+
+During the fine-tuning phase of transfer learning, the pretrained vocabulary remains unchanged while the model parameters are updated. The vocabulary generated from the pretraining data is suboptimal for downstream data when a domain discrepancy exists. We propose to treat the vocabulary as an optimizable parameter, updating it by expanding it with domain-specific words based on a tokenization statistic. Furthermore, we prevent the embeddings of the added words from overfitting to the downstream data by utilizing knowledge learned by the pretrained language model through a regularization term. Our method achieves consistent performance improvements on diverse domains (i.e., biomedical, computer science, news, and reviews).
+
+# 1 Introduction
+
+A language model (LM) is pretrained with a large corpus in a general domain and then is fine-tuned to perform various downstream tasks, such as text classification, named entity recognition, and question answering. However, fine-tuning the LM is challenging when the downstream domain is significantly different from the pretrained domain, requiring domain adaptation to improve the downstream performance [Gururangan et al., 2020, Lee et al., 2020, Beltagy et al., 2019].
+
+Prior approaches conducted additional training with a large domain-specific corpus in between pretraining and fine-tuning. In these approaches, the pretrained vocabulary remains unchanged, although the model is being adapted to a downstream domain, such as biomedicine or politics.
+
+We argue that the vocabulary should also be adapted during the fine-tuning process towards downstream data. Recent studies (e.g., SciBERT [Beltagy et al., 2019]) showed that using an
+
+
+(a) Fine-tuning only model parameters
+
+
+(b) Fine-tuning both model parameters & vocabulary (AVocaDo)
+
+| Domain Word | Tokenized with (A) | Tokenized with (B) |
+| bluetooth | blue ##tooth | bluetooth |
+| corticosterone | co ##rti ##cos ##ter ##one | cor ##tico ##sterone |
+| disrespectful | di ##sr ##es ##pe ##ct ##ful | disrespect ##ful |
+
+(c) Examples of tokenization
+
+Figure 1: Overview of AVocaDo. AVocaDo updates the vocabulary (b) in addition to fine-tuning the model parameters as done by previous approaches (a). Fine-tuning the vocabulary benefits the tokenization of domain-specific words (c).
+
+optimized vocabulary for a particular downstream domain is more effective than using the vocabulary generated in the pretraining stage. However, these approaches require a large domain-specific corpus in addition to the downstream data in order to construct an optimized vocabulary for the downstream domain.
+
+We propose to Adapt the Vocabulary to the downstream Domain (AVocaDo), which updates the pretrained vocabulary by expanding it with words from the downstream data, without requiring an additional domain-specific corpus. The relative importance of words is considered in determining the size of the added vocabulary. As shown in Figure 1-(c), domain-specific words are often tokenized in an undesirable manner for the corresponding domain. For example, in the reviews domain, "bluetooth" refers to a short-range wireless technology standard, but when the word is tokenized into "blue" and "tooth", the combined meaning of the subwords is entirely different from the intended meaning of "bluetooth". Furthermore, we propose a regularization term that prevents the embeddings of added words from overfitting to the downstream data, since downstream data is relatively small compared to the pretraining data.
+
+The experimental results show that our proposed method improves the overall performance in a wide variety of domains, including biomedicine, computer science, news, and reviews. Moreover, the advantage of the domain adapted vocabulary over the original pretrained vocabulary is shown in qualitative results.
+
+# 2 Related Work
+
+As transfer learning has shown promising results in natural language processing (NLP), recent work leveraged the knowledge learned from the pretrained model, such as BERT [Devlin et al., 2018] in various domains.
+
+SciBERT [Beltagy et al., 2019] trains a language model with a large domain-specific corpus from scratch, showing that the vocabulary constructed from the domain-specific corpus contributes to improving performance. Lee et al. [2020] and Gururangan et al. [2020] conducted additional training on a pretrained LM with a large domain-specific corpus before fine-tuning. On the other hand, exBERT [Tai et al., 2020] extended the pretrained model with a new vocabulary to adapt to the biomedical domain. Similarly, Poerner et al. [2020] and Sato et al. [2020] proposed to expand the vocabulary and leverage an external domain-specific corpus to train new embedding layers.
+
+In contrast, AVocaDo requires only the downstream dataset for domain adaptation. Furthermore, our method selects a subset of the domain-specific vocabulary considering the relative importance of words.
+
+# 3 Methods
+
+In AVocaDo, we generate a domain-specific vocabulary from the downstream corpus. A subset of the generated vocabulary is merged with the original pretrained vocabulary; the size of this subset is controlled by the fragment score. Afterwards, we apply a regularization term during fine-tuning to prevent the embeddings of the added words from overfitting to the downstream data.
+
+# Algorithm 1 Adapting Vocab in AVocaDo
+
+1: Input: pretrained vocab $V_{\mathcal{P}}$ ; corpus C; domain-specific vocab $V_{\mathcal{D}}$ .
+2: Output: adapted vocab $V_{\mathcal{A}}$ .
+3: Initialize hyperparameters $\alpha, \beta, \gamma$ .
+4: $V_{\mathcal{A}} \gets V_{\mathcal{P}} \cup \{V_{\mathcal{D}_i}\}_{i=0}^{\alpha}$
+5: while $f_{\mathbf{C}}(V_{\mathcal{A}}) > \gamma$ do
+6: $V_{\mathcal{A}} \gets V_{\mathcal{A}} \cup \{V_{\mathcal{D}_i}\}_{i = \alpha}^{\alpha + \beta}$
+7: $\alpha \gets \alpha +\beta$
+8: end while
+9: return $V_{\mathcal{A}}$
+
+# 3.1 Adapting Vocabulary
+
+In this section, we describe the procedure of adapting the vocabulary to the downstream domain, given in Algorithm 1. First, the domain-specific vocabulary set $V_{\mathcal{D}}$ is constructed from the downstream corpus $\mathbf{C}$ given a vocabulary size $N_{\mathcal{D}}$ and a tokenizing algorithm. The adapted vocabulary set $V_{\mathcal{A}}$ is constructed by merging a subset of $V_{\mathcal{D}}$, of size $n_{\mathcal{D}}$, with the original pretrained vocabulary set $V_{\mathcal{P}}$, of size $N_{\mathcal{P}}$. In other words, $N_{\mathcal{A}}$, the size of $V_{\mathcal{A}}$, equals the sum of the merged vocabulary sets, i.e., $N_{\mathcal{A}} = n_{\mathcal{D}} + N_{\mathcal{P}}$. Note that $n_{\mathcal{D}} < N_{\mathcal{D}}$, because adding too many words introduces infrequent subwords that can cause the rare word problem [Luong et al., 2015, Schick and Schütze, 2020].
+
+The subset of $V_{\mathcal{D}}$ , that is added to $V_{\mathcal{P}}$ , is determined by the fragment score $f_{\mathbf{C}}(V) \in \mathbb{R}$ , which we introduce as a new metric that measures the relative number of subwords tokenized by a vocabulary $V$ from a single word in corpus $\mathbf{C}$ , i.e.,
+
+$$
+f_{\mathbf{C}}(V) = \frac{\text{the number of subwords tokenized by } V}{\text{the number of words in } \mathbf{C}}. \tag{1}
+$$
+
+Motivated by Rust et al. [2020], we add vocabulary until $f_{\mathbf{C}}(V_{\mathcal{A}})$ no longer exceeds a certain threshold $\gamma$, a hyperparameter that thus acts as a lower bound on $f_{\mathbf{C}}(V_{\mathcal{A}})$. Decreasing this lower bound leads $V_{\mathcal{A}}$ to tokenize $\mathbf{C}$ less finely (into fewer, longer subwords), whereas increasing it leads $V_{\mathcal{A}}$ to tokenize $\mathbf{C}$ more finely.
+
+We sought to consider the importance of subwords when adding entries from $V_{\mathcal{D}}$. We simply select the subset of $V_{\mathcal{D}}$ following the order of merges used in the byte pair encoding algorithm [Sennrich et al., 2015]. The number of vocabulary entries added initially and in each subsequent iteration is given by the hyperparameters $\alpha$ and $\beta$, respectively.
+
+In summary, as frequent subword pairs from $V_{\mathcal{D}}$ are added to $V_{\mathcal{A}}$, $f_{\mathbf{C}}(V_{\mathcal{A}})$ decreases.
+
+
+Figure 2: Fine-tuning with regularization. An identical sentence "... the bluetooth function in my car ...", sampled from AMAZON, is tokenized with the pretrained vocabulary (left) and with the adapted vocabulary (right). The domain-specific word "bluetooth" is tokenized in two ways, highlighted in green and yellow respectively. The model is fine-tuned with regularization on the $l$-th layer, highlighted as the brown box, to prevent the embeddings of added words (e.g., bluetooth) from overfitting to the downstream dataset.
+
+The objective of adding $V_{\mathcal{D}}$ to $V_{\mathcal{A}}$ is to decrease the $f_{\mathbf{C}}(V_{\mathcal{A}})$ , but we make sure that $f_{\mathbf{C}}(V_{\mathcal{A}})$ does not become too small, i.e., lower than the threshold $\gamma$ . Therefore, we continue to add $V_{\mathcal{D}}$ to $V_{\mathcal{A}}$ if $f_{\mathbf{C}}(V_{\mathcal{A}})$ is higher than $\gamma$ , and terminate the merging step otherwise.
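The adaptation loop above can be sketched in plain Python. A toy greedy longest-match tokenizer stands in for the actual WordPiece/BPE tokenizer, and the hyperparameter values are illustrative:

```python
def tokenize(word, vocab):
    """Greedily split `word` into the longest subwords found in `vocab`
    (toy stand-in for a real subword tokenizer)."""
    pieces, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in vocab:
                pieces.append(word[i:j])
                i = j
                break
        else:  # unknown character: emit it as its own piece
            pieces.append(word[i])
            i += 1
    return pieces

def fragment_score(vocab, corpus_words):
    """Eq. (1): average number of subwords produced per word of the corpus."""
    n_subwords = sum(len(tokenize(w, vocab)) for w in corpus_words)
    return n_subwords / len(corpus_words)

def adapt_vocab(pretrained, domain, corpus_words, alpha=2, beta=1, gamma=1.5):
    """Algorithm 1: merge domain subwords (in BPE merge order) into the
    pretrained vocabulary until the fragment score drops to `gamma`."""
    adapted = set(pretrained) | set(domain[:alpha])
    while fragment_score(adapted, corpus_words) > gamma and alpha < len(domain):
        adapted |= set(domain[alpha:alpha + beta])
        alpha += beta
    return adapted
```

On a toy corpus where "bluetooth" is frequent, the loop keeps merging until the whole word enters the vocabulary and the fragment score falls below the threshold.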
+
+# 3.2 Fine-tuning with Regularization
+
+The embeddings of words in the subset of $V_{\mathcal{D}}$ which is merged with $V_{\mathcal{P}}$ to construct the adapted vocabulary $V_{\mathcal{A}}$ are trained only with downstream data during fine-tuning. Since the size of downstream data is much smaller than that of the pretraining corpus, the embeddings trained only with the downstream data possibly suffer from overfitting. To prevent the potential overfitting, we leverage the pretrained contextual representation learned from a large corpus.
+
+In contrastive learning [Chen et al., 2020], a pair of instances is encouraged to learn representations in relation to the similarity of the instances. We apply this contrastive learning framework as a regularization in fine-tuning. As described in Figure 2, an identical sentence is tokenized in two ways: one with the pretrained vocabulary $V_{\mathcal{P}}$ and the other with the adapted vocabulary $V_{\mathcal{A}}$ . A minibatch consists of $B$ input sentences $\mathbf{x} = \{x_1, \ldots, x_B\}$ . Each input $x_i$ is tokenized with two types of vocabularies, and their $l$ -th layer encoder outputs are denoted as $h_{\mathcal{P},i}^{(l)}$ and $h_{\mathcal{A},i}^{(l)}$ . Note that they are encoded with a single encoder given the identical input sentence, but with different tokenizations. $h_{\mathcal{P},i}^{(l)}$ and $h_{\mathcal{A},j}^{(l)}$ are considered as a positive pair when $i = j$ , and as a negative pair when $i \neq j$ . The positive pair $h_{\mathcal{P},i}^{(l)}$ and $h_{\mathcal{A},i}^{(l)}$ are trained to maximize the agreement by
+
+the regularization term $\mathcal{L}_{\mathrm{reg}}$ i.e.,
+
+$$
+\mathcal{L}_{\mathrm{reg}}(\mathbf{h}_{\mathcal{A}}^{(l)}, \mathbf{h}_{\mathcal{P}}^{(l)}) = -\frac{1}{B} \sum_{i=1}^{B} \log \frac{\exp\left(\operatorname{sim}\left(h_{\mathcal{A},i}^{(l)}, h_{\mathcal{P},i}^{(l)}\right) / \tau\right)}{\sum_{j=1}^{B} \exp\left(\operatorname{sim}\left(h_{\mathcal{A},i}^{(l)}, h_{\mathcal{P},j}^{(l)}\right) / \tau\right)}, \tag{2}
+$$
+
+where $\tau$ is a softmax temperature, $B$ is the batch size, $\mathbf{h}_{\mathcal{A}}^{(l)} = \{h_{\mathcal{A},1}^{(l)},\dots ,h_{\mathcal{A},B}^{(l)}\}$ and $\mathbf{h}_{\mathcal{P}}^{(l)} = \{h_{\mathcal{P},1}^{(l)},\dots ,h_{\mathcal{P},B}^{(l)}\}$. The cosine similarity function is used for $\operatorname{sim}(\cdot)$. $\mathbf{h}_{\mathcal{A}}^{(l)}$ is prevented from overfitting by pulling it toward its positive sample.
+
+The model is trained to perform the target task with the regularization term $\mathcal{L}_{reg}$ . The output of the encoder with $V_{A}$ is supervised by the label of downstream data with cross entropy loss $\mathcal{L}_{CE}$ . The total loss $\mathcal{L}$ for domain adaptive fine-tuning is formalized as
+
+$$
+\mathcal{L} = \mathcal{L}_{CE} + \lambda \mathcal{L}_{reg},
+$$
+
+$$
+\mathcal{L}_{CE} = -\frac{1}{B} \sum_{i=1}^{B} \sum_{c=1}^{C} t_{i,c} \log f(s_{i})_{c}, \tag{3}
+$$
+
+where $f$ is the softmax function, $C$ is the total number of classes, $s_i$ is the logit vector of the $i$-th example, $t_{i,c}$ is the target label indicator, and $B$ is the batch size. In our implementation, we set $\lambda$ to 1.0 for all experiments.
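The regularizer in Eq. (2) can be sketched compactly in NumPy. The encoder and layer index are abstracted away: `h_a` and `h_p` are the $l$-th-layer representations of the same minibatch under the adapted and pretrained tokenizations, with positives on the batch diagonal (cosine similarity is assumed, as in the paper):

```python
import numpy as np

def contrastive_reg(h_a, h_p, tau=0.1):
    """Eq. (2): InfoNCE-style loss between adapted-vocab encodings h_a
    and pretrained-vocab encodings h_p of the same minibatch.
    Row i of each matrix encodes sentence i; (i, i) is the positive pair."""
    h_a = h_a / np.linalg.norm(h_a, axis=1, keepdims=True)
    h_p = h_p / np.linalg.norm(h_p, axis=1, keepdims=True)
    sim = (h_a @ h_p.T) / tau                      # B x B cosine similarities
    log_softmax = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_softmax))          # average over positive pairs

def total_loss(ce_loss, h_a, h_p, lam=1.0, tau=0.1):
    """Eq. (3): task loss plus the weighted regularizer (lambda = 1.0 in the paper)."""
    return ce_loss + lam * contrastive_reg(h_a, h_p, tau)
```

The loss is near zero when each sentence's two encodings agree and grows when they are closer to other sentences in the batch, which is exactly the pressure that keeps the new embeddings anchored to the pretrained representation.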
+
+# 4 Experimental Settings
+
+Datasets We conducted experiments on four domains that differ significantly from the pretraining domain: biomedical (BIOMED) papers, computer science (CS) papers, NEWS, and Amazon reviews (REVIEWS). The CHEMPROT [Kringelum et al.], ACL-ARC [Jurgens et al.], HYPERPARTISAN [Kiesel et al., 2019], and AMAZON [McAuley et al., 2015] datasets are used for the respective domains. The target task for each dataset is text classification. Appendix C describes more details.
+
+| Domain | Dataset | BERTbase | BERTAVocaDo | SciBERT | SciBERTAVocaDo | BioBERT | BioBERTAVocaDo |
+| BIOMED | CHEMPROT | 79.38 | 81.07(+1.69) | 82.16 | 82.71(+0.55) | 83.58 | 84.42(+0.84) |
+| CS | ACL-ARC | 56.82 | 67.28(+10.46) | 66.89 | 75.02(+8.13) | - | - |
+| NEWS | HYPERPARTISAN | 84.51 | 89.31(+4.80) | - | - | - | - |
+| REVIEWS | AMAZON | 55.50 | 68.51(+13.01) | - | - | - | - |
+
+Evaluation Protocol We report the macro-$F_{1}$ score for ACL-ARC, HYPERPARTISAN, and AMAZON and the micro-$F_{1}$ score for CHEMPROT, following previous work [Lee et al., 2020, Beltagy et al., 2019]. The score is averaged over five random seeds.
+
+# 5 Results
+
+# 5.1 Quantitative Results
+
+BERTbase [Devlin et al., 2018], SciBERT [Beltagy et al., 2019], and BioBERT [Lee et al., 2020] are chosen as the pretrained LMs for our experiments. Each model is fine-tuned in two ways: one with pretrained vocabulary and the other with adapted vocabulary.
+
+SciBERT is pretrained with a scientific corpus while BERT is pretrained with a general-domain corpus (e.g., Wikipedia), so SciBERT can be fine-tuned only on BIOMED and CS. BioBERT underwent additional training with a biomedical corpus, so it can be fine-tuned only on BIOMED.
+
+As described in Table 1, fine-tuning with AVocaDo significantly improved the performance of the downstream task in all domains. Note that the performance improved despite the low-resource environment, where the dataset size is smaller than 5,000 as described in Appendix C (CHEMPROT, ACL-ARC, and HYPERPARTISAN). In the BIOMED domain, applying AVocaDo improved the overall performance across various pretrained language models. This improvement shows that utilizing the domain-specific vocabulary has additional benefits on the downstream domain. In CS, AVocaDo outperforms BERT$_{\text{base}}$ and SciBERT, with performance improvements of 10.46 over BERT$_{\text{base}}$ and 8.13 over SciBERT. In NEWS and REVIEWS, our strategy significantly improved performance: by 4.80 in NEWS and 13.01 in REVIEWS.
+
+Table 1: Comparisons with baselines in four different domains. Pretrained LMs (i.e., BERTbase, SciBERT, and BioBERT) are fine-tuned in two ways: with the pretrained vocabulary (no subscript) and with the adapted vocabulary (subscript AVocaDo). The performance improvement is shown inside the parentheses with +. The reported value is the averaged $F_{1}$ score (micro-$F_{1}$ for CHEMPROT and macro-$F_{1}$ for the others) over five random seeds. Invalid comparisons are represented as -.
+
+| Domain | Domain Word | Pretrained Vocab \( V_{\mathcal{P}} \) | Adapted Vocab \( V_{\mathcal{A}} \) |
+| BIOMED | glucuronidation | g, lu, cu, ron, ida, tion | glucuron, ida, tion |
+| | sulfhydration | sul, f, hy, dra, tion | sulf, hydr, ation |
+| CS | nlp* | nl, p | nlp |
+| | syntactic | syn, ta, ctic | syntactic |
+| NEWS | tweet | t, wee, t | tweet |
+| | disrespectful | di, sr, es, pe, ct, ful | disrespect, ful |
+| REVIEWS | otterbox | otter, box | otterbox |
+| | thunderbolt | thunder, bolt | thunderbolt |
+
+Table 2: Qualitative results. Carefully selected tokenization examples from $V_{\mathcal{P}}$ and $V_{\mathcal{A}}$. * indicates that the word is capitalized in the original sentence.
+
+# 5.2 Qualitative Results
+
+To analyze the effectiveness of the adapted vocabulary $V_{A}$ , we show the sampled words from each domain that are tokenized with two types of vocabulary in Table 2.
+
+The adapted vocabulary $V_{\mathcal{A}}$ tokenizes a domain-specific word into subwords that are informative in the target domain. For example, "sulfhydration" is tokenized as "sul, f, hy, dra, tion" with $V_{\mathcal{P}}$ and as "sulf, hydr, ation" with $V_{\mathcal{A}}$. "sulf" and "hydr" imply "sulfur" and "water" respectively, which are frequently used in the BIOMED domain.
+
+Furthermore, $V_{\mathcal{A}}$ preserves the semantic of a domain-specific word by keeping it as a whole word, where the subwords tokenized with $V_{\mathcal{P}}$ have completely different semantics from its original meaning. For instance, "otterbox" is an electronics accessory company in the REVIEWS domain. However, with $V_{\mathcal{P}}$ , it is split into "otter" and "box", where the "otter" is a carnivorous mammal and "box" is a type of container. Randomly sampled tokenization examples from $V_{\mathcal{P}}$ and $V_{\mathcal{A}}$ are presented in Appendix Table 8.
+
+# 5.3 Ablation Studies
+
+The effectiveness of each component in AVocaDo, i.e., vocabulary adaptation and contrastive regularization, is shown in this section. As described in
+
+| Model | CHEMPROT | ACL-ARC | HYPERPARTISAN | AMAZON |
+| AVocaDo | 81.07 | 67.28 | 89.31 | 68.51 |
+| w/o Lreg | 78.45(-2.62) | 64.00(-3.28) | 87.84(-1.47) | 61.23(-7.28) |
+| BERTbase | 79.35(-1.72) | 56.82(-10.46) | 84.51(-4.80) | 55.50(-13.01) |
+
+Table 3, vocabulary adaptation improves performance on three datasets (i.e., ACL-ARC, HYPERPARTISAN, and AMAZON) even in the absence of the regularization term.
+
+# 5.4 Size of Added Vocabulary
+
+The size of the added vocabulary $n_{\mathcal{D}}$ is automatically determined by the fragment score of the adapted vocabulary $V_{\mathcal{A}}$, as described in Algorithm 1. To analyze how $n_{\mathcal{D}}$ affects performance, we compare downstream-task performance when manually setting $n_{\mathcal{D}}$ to 500, 1000, 2000, and 3000 without using the fragment score, as shown in Table 4. The automatically determined $n_{\mathcal{D}}$ is 1600, 700, 2850, and 1300 for the respective datasets. Except for the AMAZON dataset, determining $n_{\mathcal{D}}$ by the fragment score yields the best performance.
+
+# 6 Conclusion
+
+In this paper, we demonstrate that the pretrained vocabulary should be updated toward the downstream domain during fine-tuning. We propose a fine-tuning strategy called AVocaDo that adapts the vocabulary to the downstream domain by expanding it based on a tokenization statistic and by regularizing the newly added words. Our approach shows consistent performance improvements across diverse domains and various pretrained language models. AVocaDo is applicable to a wide range of NLP tasks in diverse domains without requirements such as massive computing resources or a large domain-specific corpus.
+
+# Acknowledgment
+
+This work was supported by an Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2019-0-00075, Artificial Intelligence Graduate School Program (KAIST)), and by Samsung Advanced Institute of Technology, Samsung Electronics Co., Ltd. We thank the anonymous reviewers for their helpful feedback and discussions. We also thank Seong-Su Bae for his insight and helpful opinions.
+
+Table 3: Ablation study. w/o $\mathcal{L}_{reg}$ denotes that the model is fine-tuned with the adapted vocabulary but without the regularization loss. $\mathrm{BERT}_{\mathrm{base}}$ denotes that the model is fine-tuned without applying AVocaDo. The performance difference is shown inside the parentheses.
+
+| Dataset | 500 | 1000 | 2000 | 3000 | AVocaDo |
+| CHEMPROT | 80.36 | 80.43 | 80.24 | 79.89 | 81.06 |
+| ACL-ARC | 65.24 | 64.43 | 66.08 | 65.37 | 67.28 |
+| HYPERPARTISAN | 84.70 | 85.03 | 80.49 | 84.85 | 89.31 |
+| AMAZON | 68.57 | 68.31 | 68.85 | 67.89 | 68.51 |
+
+Table 4: Analysis on the size of the added vocabulary $n_{\mathcal{D}}$, manually set (500, 1000, 2000, and 3000) or automatically determined (AVocaDo).
+
+# References
+
+Iz Beltagy, Kyle Lo, and Arman Cohan. SciBERT: A pretrained language model for scientific text. In Proc. the Annual Meeting of the Association for Computational Linguistics (ACL), 2019.
+Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In Proc. the International Conference on Machine Learning (ICML), 2020.
+Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. Electra: Pre-training text encoders as discriminators rather than generators. In Proc. the International Conference on Learning Representations (ICLR), 2020.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proc. of The Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 2018.
+Suchin Gururangan, Ana Marasovic, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A Smith. Don't stop pretraining: Adapt language models to domains and tasks. In Proc. the Annual Meeting of the Association for Computational Linguistics (ACL), 2020.
+David Jurgens, Srijan Kumar, Raine Hoover, Dan McFarland, and Dan Jurafsky. Measuring the evolution of a scientific field through citation frames. Transactions of the Association for Computational Linguistics (TACL), 2018.
+Johannes Kiesel, Maria Mestre, Rishabh Shukla, Emmanuel Vincent, Payam Adineh, David Corney, Benno Stein, and Martin Potthast. SemEval-2019 task 4: Hyperpartisan news detection. In Proceedings of the 13th International Workshop on Semantic Evaluation, 2019.
+
+Jens Kringelum, Sonny Kim Kjaerulff, Soren Brunak, Ole Lund, Tudor I. Oprea, and Olivier Taboureau. ChemProt-3.0: a global chemical biology diseases mapping. Database, 2016.
+Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 2020.
+Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019.
+Minh-Thang Luong, Ilya Sutskever, Quoc V Le, Oriol Vinyals, and Wojciech Zaremba. Addressing the rare word problem in neural machine translation. In Proc. the Annual Meeting of the Association for Computational Linguistics (ACL), 2015.
+Julian McAuley, Christopher Targett, Qinfeng Shi, and Anton Van Den Hengel. Image-based recommendations on styles and substitutes. In Proc. of Special Interest Group on Information Retrieval (SIGIR), 2015.
+Nina Poerner, Ulli Waltinger, and Hinrich Schütze. Inexpensive domain adaptation of pretrained language models: case studies on biomedical ner and COVID-19 qa. In Proc. of the Conference on Empirical Methods in Natural Language Processing (EMNLP): Findings, 2020.
+Phillip Rust, Jonas Pfeiffer, Ivan Vulic, Sebastian Ruder, and Iryna Gurevych. How good is your tokenizer? on the monolingual performance of multilingual language models. In Proc. the Annual Meeting of the Association for Computational Linguistics (ACL), 2021.
+Shoetsu Sato, Jin Sakuma, Naoki Yoshinaga, Masashi Toyoda, and Masaru Kitsuregawa. Vocabulary adaptation for domain adaptation in neural machine translation. In Proc. of the Conference on Empirical Methods in Natural Language Processing (EMNLP): Findings, 2020.
+Timo Schick and Hinrich Schütze. Rare words: A major problem for contextualized embeddings and how to fix it by attentive mimicking. In Proceedings of the AAAI Conference on Artificial Intelligence, 2020.
+Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. In Proc. the Annual Meeting of the Association for Computational Linguistics (ACL), 2015.
+Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. In Proc. the International Conference on Learning Representations (ICLR), 2016.
+
+Wen Tai, HT Kung, Xin Luna Dong, Marcus Comiter, and Chang-Fu Kuo. exBERT: Extending pre-trained models with domain-specific vocabulary under constrained training resources. In Proc. of the Conference on Empirical Methods in Natural Language Processing (EMNLP): Findings, 2020.
+
+# Appendix
+
+# A Details on Fragment Score
+
+Fragment score is a measure of the fineness of tokenization. We observed that the pretrained vocabulary set $V_{\mathcal{P}}$ tokenizes domain-specific words (i.e., words that appear frequently in a downstream corpus but not in the pretraining corpus) into a larger number of subwords than non-domain-specific words (Figure 3). These finely tokenized subwords are not semantically informative enough.
+
+Inspired by these observations, we construct a new vocabulary $V_{\mathcal{A}}$ that tokenizes the domain-specific words less finely than $V_{\mathcal{P}}$, i.e., $V_{\mathcal{A}}$ such that $f_{\mathbf{C}}(V_{\mathcal{A}}) < f_{\mathbf{C}}(V_{\mathcal{P}})$. This is why we chose the fragment score of the newly constructed vocabulary set $V_{\mathcal{A}}$ as a metric for selecting a subset of the domain-specific vocabulary $V_{\mathcal{D}}$.
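As a rough illustration (not the paper's exact implementation), the fragment score of a vocabulary over a corpus can be read as the average number of subwords produced per word under greedy longest-match-first (WordPiece-style) tokenization. The helper names and the toy vocabularies below are hypothetical:

```python
def wordpiece_tokenize(word, vocab):
    """Greedy longest-match-first subword tokenization, WordPiece-style."""
    pieces, start = [], 0
    while start < len(word):
        end, match = len(word), None
        while start < end:
            piece = word[start:end] if start == 0 else "##" + word[start:end]
            if piece in vocab:
                match = piece
                break
            end -= 1
        if match is None:  # no subword covers this position
            return ["[UNK]"]
        pieces.append(match)
        start = end
    return pieces

def fragment_score(vocab, words):
    """Average number of subwords per word: a smaller value means coarser tokenization."""
    return sum(len(wordpiece_tokenize(w, vocab)) for w in words) / len(words)

v_p = {"blue", "##tooth", "head", "##set"}   # toy pretrained vocabulary
v_a = v_p | {"bluetooth", "headset"}         # toy adapted vocabulary
words = ["bluetooth", "headset"]
assert fragment_score(v_a, words) < fragment_score(v_p, words)
```

Adding whole domain-specific words to the vocabulary lowers the fragment score, which is exactly the direction $f_{\mathbf{C}}(V_{\mathcal{A}}) < f_{\mathbf{C}}(V_{\mathcal{P}})$ selects for.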
+
+# B Different Aspects of the Vocabularies
+
+Figure 3 shows the relative number of tokenized subwords from a single word in four domains, where the publicly available vocabulary in BERT [Devlin et al., 2018] is denoted as $V_{\mathcal{P}}$ and the domain-adapted vocabularies are denoted as $V_{\mathcal{A}}$. WikiText [Stephen et al., 2016] represents the general domain, which is similar to the corpus used for pretraining BERT, while the others are chosen as downstream domains. The red and orange bars indicate the average number of subwords tokenized with the pretrained vocabulary and the adapted vocabulary, respectively. We observe that AVocaDo mitigates the domain gap.
+
+# C Implementation Details
+
+# C.1 Downstream Datasets
+
+| Domain | Dataset | Task (# of Classes) | Train | Dev. | Test |
+| --- | --- | --- | --- | --- | --- |
+| BIOMED | CHEMPROT | relation (13) | 4169 | 2427 | 3469 |
+| CS | ACL-ARC | citation intent (6) | 1688 | 114 | 139 |
+| NEWS | HYPERPARTISAN | partisanship (2) | 515 | 65 | 65 |
+| REVIEWS | AMAZON | helpfulness (2) | 115251 | 5000 | 25000 |
+
+We used four datasets in various domains for classification. As shown in Table 5, the size of the training data varies from about 500 to about 115,000 examples, and the number of classes for each dataset varies from 2 to 13.
+
+# C.2 Experimental Settings
+
+In all experiments, we trained the networks on a single RTX 3090 GPU with 24GB of memory. We implemented all models in PyTorch using the Transformers library from Huggingface. All baselines are reproduced as described in previous works [Gururangan et al., 2020, Tai et al., 2020, Lee et al., 2020, Beltagy et al., 2019]. In our experiments, the performance on the HYPERPARTISAN dataset tends to have high variance across random seeds since the dataset is extremely small. To produce reliable results on this dataset, we discard and resample seeds.
+
+The embeddings of newly added words in AVocaDo are initialized as the mean of the BERT embeddings of their subword components. For instance, if the word "bluetooth" is tokenized into ["blue", "##tooth"] with $V_{\mathcal{P}}$ and into "bluetooth" with $V_{\mathcal{A}}$, we initialize the embedding of "bluetooth" with the average of the two subword embeddings.
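A minimal sketch of this initialization, using plain Python lists in place of real BERT embedding tensors (the vocabularies, vectors, and the helper name are illustrative, not the paper's code):

```python
def init_new_embedding(subword_embeddings, subwords):
    """Initialize a new word's embedding as the element-wise mean of its
    subword embeddings under the pretrained vocabulary."""
    vectors = [subword_embeddings[s] for s in subwords]
    return [sum(dim_vals) / len(dim_vals) for dim_vals in zip(*vectors)]

# Toy 2-dimensional embeddings for the subwords of "bluetooth" under V_P.
emb = {"blue": [1.0, 3.0], "##tooth": [3.0, 1.0]}
emb["bluetooth"] = init_new_embedding(emb, ["blue", "##tooth"])
assert emb["bluetooth"] == [2.0, 2.0]
```

In practice this averaging would be done over rows of the model's embedding matrix after resizing it to accommodate the added tokens.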
+
+# C.3 Hyperparameters
+
+Table 5: Datasets used in experiments. Sources: CHEMPROT [Kringelum et al.], ACL-ARC [Jurgens et al.], HYPERPARTISAN [Kiesel et al., 2019], and AMAZON [McAuley et al., 2015].
+
+| Hyperparameter | Value |
+| --- | --- |
+| lower bound of fragment score $\gamma$ | 3 |
+| number of added vocabulary (initial) $\alpha$ | 500 |
+| number of added vocabulary $\beta$ | 50 |
+| batch size $B$ | 16 |
+| learning rate | 1e-5, 2e-5, 5e-5 |
+| number of epochs | 10 |
+| temperature $\tau$ | from 1.5 to 3.5 |
+| domain vocabulary size $N_{\mathcal{D}}$ | 10,000 |
+
+Table 6: Hyperparameters used in experiments. We conduct grid search for finding the best hyperparameter settings.
+
+As shown in Table 6, we followed the hyperparameter settings in previous work [Lee et al., 2020, Beltagy et al., 2019, Gururangan et al., 2020]. To search for the values of the learning rate and the temperature $\tau$, we use grid search.
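The grid search can be sketched as follows. Here `evaluate` is a hypothetical stand-in for a full fine-tuning run returning a dev-set score, and the concrete temperature grid points inside the 1.5 to 3.5 range are assumed, since Table 6 only gives the interval:

```python
from itertools import product

learning_rates = [1e-5, 2e-5, 5e-5]
temperatures = [1.5, 2.0, 2.5, 3.0, 3.5]  # assumed grid within [1.5, 3.5]

def evaluate(lr, tau):
    """Placeholder for fine-tuning with (lr, tau) and returning dev accuracy.
    The dummy objective below just makes the example deterministic."""
    return -abs(lr - 2e-5) - abs(tau - 2.5)

# Exhaustively try every (learning rate, temperature) pair and keep the best.
best_lr, best_tau = max(product(learning_rates, temperatures),
                        key=lambda cfg: evaluate(*cfg))
```

Each grid point costs one full fine-tuning run, which is affordable here because AVocaDo skips adaptive pretraining.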
+
+# D Qualitative Results
+
+For each downstream dataset, we randomly sampled ten words that are differently tokenized by the pretrained vocabulary $V_{\mathcal{P}}$ and by the adapted vocabulary $V_{\mathcal{A}}$. As shown in Table 8, subwords tokenized by $V_{\mathcal{A}}$ are more informative in the target domain because they preserve the semantics of a domain-specific word.
+
+Figure 3: The analysis of the pretrained and adapted vocabularies on WikiText and downstream domains. $\mathcal{P}$ and $\mathcal{A}$ denote the pretrained vocabulary and the adapted vocabulary, respectively. AVocaDo mitigates the domain gap in terms of the average number of subwords tokenized from a single word.
+
+# E Comparison with Previous Works
+
+| Model | Adaptive Pretraining | Domain-specific Corpus |
+| --- | --- | --- |
+| AVocaDo | × | downstream corpus only |
+| SciBERT | ✓ | 3.17 billion words |
+| BioBERT | ✓ | 18.0 billion words |
+| exBERT | ✓ | 0.9 billion words |
+| Gururangan et al. [2020] | ✓ | 7.55 billion words |
+
+Table 7: Comparison with previous works. The adaptive pretraining phase and the size of biomedical domain corpus used for domain adaptation in previous works. No additional training resource is needed in AVocaDo.
+
+AVocaDo does not require an additional domain-specific corpus. As shown in Table 7, all other baseline models require an adaptive pretraining stage on a domain-specific corpus before fine-tuning. In general, the corpus used for adaptive pretraining is large compared to the downstream dataset, so most methodologies that rely on adaptive pretraining demand large training resources.
+
+# F Other Baselines
+
+We perform additional experiments with other baseline models. In this experiment, we set as baselines exBERT [Tai et al., 2020], which expands the pretrained vocabulary from the original $\mathrm{BERT}_{\mathrm{base}}$ vocabulary, and $\mathrm{SciBERT}_{\mathrm{SCIVOCAB}}$ [Beltagy et al., 2019], which constructs a customized vocabulary based on large science and biomedical corpora. Table 9 shows the overall performance on the BIOMED and CS domains. We outperform exBERT in the BIOMED domain. In comparison with $\mathrm{SciBERT}_{\mathrm{SCIVOCAB}}$, AVocaDo shows competitive performance.
+
+# G Other Pretrained Language Models
+
+To demonstrate the performance of AVocaDo on other pretrained language models, we additionally conducted experiments on RoBERTa [Liu et al., 2019] and ELECTRA [Clark et al., 2020]. Table 10 shows the overall performance on the four downstream domains. RoBERTa and ELECTRA with AVocaDo show improvements across the various domains, except for the NEWS and BIOMED domains, respectively.
+
+| Domain | Word | Pretrained Vocab $V_{\mathcal{P}}$ | Adapted Vocab $V_{\mathcal{A}}$ |
+| --- | --- | --- | --- |
+| BIOMED | epidermal | ep, ##ider, ##mal | epidermal |
+| | cetuximab | ce, ##tu, ##xi, ##ma, ##b | ce, ##tu, ##xi, ##ma, ##b |
+| | lumiracoxib | lu, ##mir, ##aco, ##xi, ##b | lum, ##irac, ##oxib |
+| | peroxidation | per, ##ox, ##ida, ##tion | perox, ##ida, ##tion |
+| | reductase | red, ##uc, ##tase | reductase |
+| | dihydrotestosterone | di, ##hy, ##dro, ##test, ##ost, ##erone | dihydro, ##test, ##osterone |
+| | pparalpha | pp, ##ara, ##lp, ##ha | ppar, ##alpha |
+| | sulfhydration | sul, ##f, ##hy, ##dra, ##tion | sulf, ##hydr, ##ation |
+| | glucuronidation | g, ##lu, ##cu, ##ron, ##ida, ##tion | glucuron, ##ida, ##tion |
+| | proliferating | pro, ##life, ##rating | prolifer, ##ating |
+| CS | annotation | ann, ##ota, ##tions | annotation |
+| | unsupervised | un, ##su, ##per, ##vis, ##ed | unsupervised |
+| | entails | en, ##tails | entail, ##s |
+| | sgd | sg, ##d | sgd |
+| | parser | par, ##ser | parser |
+| | nlp | nl, ##p | nlp |
+| | summarization | sum, ##mar, ##ization | summarization |
+| | syntactic | syn, ##ta, ##ctic | syntactic |
+| | coreference | core, ##ference | coreference |
+| | ner | ne, ##r | ner |
+| NEWS | manafort | mana, ##fort | manafort |
+| | disrespectful | di, ##sr, ##es, ##pe, ##ct, ##ful | disrespect, ##ful |
+| | tweet | t, ##wee, ##t | tweet |
+| | divisive | di, ##vis, ##ive | div, ##isi, ##ve |
+| | recaptcha | rec, ##ap, ##tch, ##a | recaptcha |
+| | brexit | br, ##ex, ##it | brexit |
+| | irreplaceable | ir, ##re, ##pl, ##ace, ##able | ir, ##re, ##place, ##able |
+| | supremacists | su, ##pre, ##mac, ##ists | supremacists |
+| | politicize | pol, ##itic, ##ize | politic, ##ize |
+| | GOP | go, ##p | GOP |
+| REVIEWS | telestial | tel, ##est, ##ial | tele, ##sti, ##al |
+| | rechargesminutes | rec, ##har, ##ge, ##min, ##ute, ##s | recharge, ##min, ##utes |
+| | verizon | ve, ##riz, ##on | verizon |
+| | thunderbolt | thunder, ##bolt | thunderbolt |
+| | bluetooth | blue, ##tooth | bluetooth |
+| | otterbox | otter, ##box | otterbox |
+| | headset | heads, ##et | headset |
+| | kickstand | kicks, ##tan, ##d | kickstand |
+| | detachable | det, ##ach, ##able | detach, ##able |
+| | htc | h, ##tc | htc |
+
+Table 8: Randomly sampled words that are differently tokenized by $V_{\mathcal{P}}$ and $V_{\mathcal{A}}$ .
+
+| Domain | Dataset | $\mathrm{BERT}_{\mathrm{base}}$ | $\mathrm{BERT}_{\mathrm{AVocaDo}}$ | exBERT | $\mathrm{SciBERT}_{\mathrm{BASEVOCAB}}$† | $\mathrm{SciBERT}_{\mathrm{AVocaDo}}$ |
+| --- | --- | --- | --- | --- | --- | --- |
+| BIOMED | CHEMPROT | 79.38 | 81.07 | 74.63 | 83.64 | 82.71 |
+| CS | ACL-ARC | 56.82 | 67.28 | - | 70.98 | 75.02 |
+
+Table 9: Comparisons with other baselines. The symbol $\dagger$ indicates the performance reported by Beltagy et al. [2019].
+
+| Domain | Dataset | RoBERTa\(_{\text{base}}\)† | RoBERTa\(_{\text{AVocaDo}}\) | ELECTRA\(_{\text{base}}\) | ELECTRA\(_{\text{AVocaDo}}\) |
+| --- | --- | --- | --- | --- | --- |
+| BIOMED | CHEMPROT | 81.9 | 82.8 (+0.9) | 74.3 | 73.4 (-0.9) |
+| CS | ACL-ARC | 63.0 | 67.3 (+4.3) | 57.1 | 59.3 (+2.2) |
+| NEWS | HYPERPARTISAN | 86.6 | 84.5 (-2.1) | 70.6 | 77.8 (+7.2) |
+| REVIEWS | AMAZON | 65.1 | 70.8 (+5.7) | 66.2 | 69.9 (+3.7) |
+
+Table 10: Experiments on other pretrained language models. The pretrained language models (i.e., $\mathrm{RoBERTa}_{\mathrm{base}}$ and $\mathrm{ELECTRA}_{\mathrm{base}}$) are fine-tuned with or without AVocaDo. The performance difference is shown in parentheses. The symbol † indicates the performance reported by Gururangan et al. [2020].
\ No newline at end of file
diff --git a/avocadostrategyforadaptingvocabularytodownstreamdomain/images.zip b/avocadostrategyforadaptingvocabularytodownstreamdomain/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..d170bcc2526e84926ecc2376799ebba88502bfe9
--- /dev/null
+++ b/avocadostrategyforadaptingvocabularytodownstreamdomain/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ca2d48e0f7ae03e87cf80caba93658b16996cf071d70c4b59fc952a3cd1a7136
+size 634474
diff --git a/avocadostrategyforadaptingvocabularytodownstreamdomain/layout.json b/avocadostrategyforadaptingvocabularytodownstreamdomain/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..8ca6f2497555ada9ad7ae8e8226ce82fd7311420
--- /dev/null
+++ b/avocadostrategyforadaptingvocabularytodownstreamdomain/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:75c7e7978927c4a24ed2e824fa00fd0703465a07f70eb11d109bfe7e7d6128e3
+size 363144